
8.1.2 Performance Comparison of Mobile Agent Toolkits
Some work has been done to compare existing mobile agent toolkits. Dikaiakos and Samaras [2000] define micro-benchmarks to assess a mobile agent toolkit (e.g., one to capture the overhead of local agent creation, or one to capture the overhead of point-to-point messaging). Silva et al. [2000] compare eight mobile agent toolkits using twelve experiments. Their results show the influence of several factors (e.g., the number of agent servers to visit on one tour, the agent's size, and class caching) on the performance of mobile agents. In our opinion, different mobile agent toolkits cannot be compared without taking some fundamental design issues of each system into account. Unfortunately, Silva et al. did not consider the different security strategies, the different migration and transmission strategies, and other differences in each toolkit's implementation.
8.2 Methodology
8.2.1 Experiments and Measurements
We conducted eight different experiments; for each experiment, the migration time for a specific mobile agent in a specific environment was measured. Each experiment consisted of several measurements, for which the same agent was started several times. Agents used in different measurements varied (e.g., in code size or in the number of servers to be visited).
To conduct these experiments, we developed a simple mobile agent
toolkit. The main function of this agency is to start agents, to measure the
migration time for each agent, to compute statistical information (mean
value and confidence interval) for a measurement, and to generate a file
that contains all the results of the experiment.
In each experiment we distinguish two roles for the computers involved. The computer on which all agents are started is the master; all computers that are only visited by the agents are called clients.
For each experiment the Java virtual machine must be restarted. When the agency is started, it is parameterized with the name of the experiment to start. It then starts all the measurements sequentially. As already stated, the only information we are interested in is the time an agent needs for a migration.
To measure the time for a single migration of a mobile agent, we have
to consider the period of time from the initiation of the migration process
(go-statement) to the point when the agent is restarted at the destination
server. Because a distributed system lacks a global clock, we cannot simply compare time stamps originating from different computer systems. Therefore, we always consider at least two migrations: the first one to the destination server and the second one back to the origin; we call this a ping-pong migration. The printed times are thus never those for a single migration but always for a complete round trip, which usually involves only two computers but in some cases includes as many as seven. As a consequence, the measured migration times consist not only of the pure network transmission time but also of the time for serializing the agent at the sender agency and deserializing it at the receiver agency for each migration. We also consider the time necessary to link the agent's code, which involves verifying and preparing class code. According to our measurements, serializing an agent takes less than 2 milliseconds (ms); deserializing the agent's state and linking the agent's code takes on average between 1 and 5 ms and is linear with respect to state size and class size.
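A sketch of how such a round trip can be timed with purely local clocks follows; the names Agent, go, and waitForReturn are illustrative stand-ins for the migration call and for blocking until the agent is back, not the toolkit's actual API.

// Sketch of timing one ping-pong migration. Both timestamps are taken
// on the master, so no global clock is needed. Agent, go, and
// waitForReturn are illustrative names.
long measurePingPong(Agent agent, String clientHost) {
    long start = System.currentTimeMillis();  // just before the go-statement
    agent.go(clientHost);                     // master -> client; the agent
                                              // immediately migrates back
    waitForReturn(agent);                     // blocks until the agent is
                                              // restarted on the master
    long roundTrip = System.currentTimeMillis() - start;
    return roundTrip;  // includes serialization, linking, and transmission
}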
Each agent migration is repeated between 200 and 1000 times; we report only mean values and the 95% confidence interval. The longest 5% of the values were dropped, because we want to disregard times lengthened by the Java garbage-collector task.¹ To illustrate our results we always used line graphs, although in some experiments box charts would have been the correct diagramming technique, because intermediate values cannot be interpolated. However, in our opinion, line graphs make our results more obvious to the reader.
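The statistics step can be sketched as follows. Note that the 95% interval shown here assumes approximately normally distributed values (hence the factor 1.96); this is our assumption, since the formula is not given above.

import java.util.Collections;
import java.util.List;

// Sketch of the statistics step: drop the longest 5% of the measured
// times (garbage-collector outliers), then report the mean and a 95%
// confidence interval. The normal approximation (factor 1.96) is an
// assumption on our part.
class Statistics {

    // Returns { mean, halfWidth }; the interval is mean +/- halfWidth.
    static double[] trimmedMeanAndInterval(List<Long> times) {
        Collections.sort(times);
        int keep = (int) Math.floor(times.size() * 0.95);  // drop longest 5%
        List<Long> kept = times.subList(0, keep);

        double mean = kept.stream().mapToLong(Long::longValue).average().orElse(0.0);
        double variance = kept.stream()
                              .mapToDouble(t -> (t - mean) * (t - mean))
                              .sum() / (kept.size() - 1);
        double halfWidth = 1.96 * Math.sqrt(variance / kept.size());
        return new double[] { mean, halfWidth };
    }
}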
8.2.2 Programming Agents for the Measurements
The common behavior of the agents used in the experiments is defined in
class BaseAgent in package examples.agent, which states that an agent
executes the itinerary given in a configuration file and then migrates back to
its home agency.
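The following sketch shows what such a base class might look like. The hooks onArrival and doWork, and the way the itinerary is stored, are our guesses at the structure, not the book's actual source code.

package examples.agent;

import java.util.List;

// Sketch of the common agent behavior: visit every agency named in the
// itinerary (read from a configuration file) and then migrate back to
// the home agency. onArrival, doWork, and go are illustrative names.
public abstract class BaseAgent {

    protected List<String> itinerary;   // agencies to visit, from the config file
    protected String homeAgency;        // the agency where the agent was started
    private int next = 0;               // index of the next destination
    private boolean returning = false;  // whether the homeward migration has begun

    // Called by the agency each time the agent is restarted after a migration.
    public void onArrival() {
        doWork();  // hook for measurement-specific behavior
        if (next < itinerary.size()) {
            go(itinerary.get(next++));  // continue the tour
        } else if (!returning) {
            returning = true;
            go(homeAgency);             // tour finished: migrate back home
        }
        // otherwise the agent is back home and the measurement ends
    }

    // Subclasses override this to add the functions needed by a measurement.
    protected void doWork() { }

    // Initiates a migration to the given agency (the go-statement).
    protected abstract void go(String destination);
}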
In general there is a single agent class for each measurement. This class extends class BaseAgent and defines special functions as necessary for the concrete measurement (e.g., sending data items back to the agent's home agency).
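For example, an agent for a measurement that sends data items back home might look like the following; SendDataAgent and sendHome are made-up names for illustration only.

package examples.agent;

// Made-up example of a per-measurement agent class: it extends
// BaseAgent and sends a data item back to the home agency from every
// agency it visits. SendDataAgent and sendHome are illustrative names.
public class SendDataAgent extends BaseAgent {

    private final byte[] data = new byte[1024];  // payload size varies per measurement

    @Override
    protected void doWork() {
        sendHome(data);  // the special function of this measurement
    }

    private void sendHome(byte[] payload) {
        // ... transmit the payload to homeAgency via the toolkit's messaging ...
    }

    @Override
    protected void go(String destination) {
        // ... initiate the migration via the toolkit ...
    }
}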
1. The Java garbage collector is started whenever there is not enough memory to create new objects.
Freeing memory takes between 300 and 900 ms in our experiments.