294 Chapter 8 Evaluation
We measure only the performance of mobile agents and do not compare it to the client-server approach. Kalong provides only a framework for attacking the performance bottleneck of mobile agents, so a comparison with client-server approaches would not be meaningful at this early stage. We are currently working on sophisticated migration strategies to address this problem.
We measure only migration times and do not assess the performance of an entire agent system; the agents used in the experiments place no load on the agencies they visit.
We measure only the time for a single mobile agent, so we have no data to predict how Kalong's performance changes when many agents migrate in parallel.
We had only a few network nodes available, especially in the wide-area network, so we could not study how migration times increase in real-world settings.
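The single-agent migration time discussed above is dominated by serialization, network transfer, and deserialization of the agent's state. As a rough illustration only (the `DummyAgent` class and method names below are hypothetical and not part of Kalong's actual API), the serialization and deserialization components of one migration could be timed in Java like this:

```java
import java.io.*;

// Hypothetical stand-in for a mobile agent's serializable state;
// Kalong's real agent classes are not shown here.
class DummyAgent implements Serializable {
    private static final long serialVersionUID = 1L;
    byte[] payload = new byte[64 * 1024]; // 64 KB of agent code/data state
}

public class MigrationTimer {
    // Serialize the agent, as the sending agency would before transmission.
    static byte[] serialize(Serializable agent) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(agent);
        }
        return bos.toByteArray();
    }

    // Deserialize the byte stream, as the receiving agency would.
    static Object deserialize(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        DummyAgent agent = new DummyAgent();

        long t0 = System.nanoTime();
        byte[] wire = serialize(agent);    // serialization phase
        long t1 = System.nanoTime();
        Object copy = deserialize(wire);   // deserialization phase
        long t2 = System.nanoTime();       // (network transfer omitted)

        System.out.printf("serialized %d bytes%n", wire.length);
        System.out.printf("serialize:   %.3f ms%n", (t1 - t0) / 1e6);
        System.out.printf("deserialize: %.3f ms%n", (t2 - t1) / 1e6);
    }
}
```

In a real experiment the transfer phase between the two timing points would of course dominate over a wide-area network, which is exactly the component the measurements in this chapter focus on.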
8.1 Related Work
8.1.1 Performance Evaluation of Existing Mobile Agent Toolkits
As far as we know, only two toolkits have ever been examined with regard to migration performance. First, Gray [1997b] presents performance evaluations in his thesis on the AgentTCL toolkit (later renamed D'Agents). AgentTCL provides basic functions for a flexible and secure mobile agent toolkit. Gray's results show long migration delays, caused by the slow TCL script interpreter and the migration protocol.
Second, the Tacoma toolkit was evaluated by Johansen et al.
Tacoma is also a non-Java-based mobile agent toolkit. The authors report the migration time of one agent from its current server to a remote server, including the time for serializing and deserializing the agent, creating and initializing it, and sending an acknowledgement message.
We are not aware of any broad analysis of performance aspects of a Java-
based mobile agent toolkit. For a discussion of the scalability of the Jade
toolkit, see Korba and Song.