You view the results in the line graph and the bar graph. The Time menu filters the raw execution time, and the Results menu selects the type of information that the graphs display.
The Time menu filters raw execution time for the line and bar graphs. The following mutually exclusive items are available from the menu:
When optimizing code or testing a new algorithm, you typically view execution time. When reducing the amount of garbage, you typically view scavenge or global garbage collection time. Raw time may show spikes where garbage collection occurs.
The Results menu selects the comparison metric that the bar graph displays. The following mutually exclusive items are available from the menu:
The bars show the mean time for each benchmark. The standard deviation as a percentage of the mean is a good measure of the stability of a benchmark. Values below 5% indicate a stable benchmark.
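For illustration only, the following Python sketch (not part of the Stats tool) computes the standard deviation as a percentage of the mean for a set of hypothetical benchmark timings and applies the 5% stability threshold; the timing values are assumed sample data:

```python
from statistics import mean, stdev

def stability_percent(timings):
    """Standard deviation expressed as a percentage of the mean."""
    return stdev(timings) / mean(timings) * 100

# Assumed raw execution times (ms) for repeated runs of one benchmark.
runs = [102.0, 99.5, 101.2, 100.8, 98.9]
pct = stability_percent(runs)
print(f"std dev = {pct:.1f}% of mean "
      f"({'stable' if pct < 5 else 'unstable'} benchmark)")
```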
The bars show the percentage of time spent collecting garbage for each benchmark. This option is useful when reducing the amount of garbage created by the application code. Usually, values below 5% are acceptable. Improving the algorithm or optimizing the code almost always has a greater effect than reducing garbage.
The formula for the percentage improvement is (A-B)/A, where A is the baseline and B is the candidate for improvement. A positive value shows improvement, a negative value shows degradation, and zero shows no change.
The formula for the difference is A-B, where A is the baseline and B is the candidate for improvement.
The formula for the ratio is A/B, where A is the baseline and B is the candidate for improvement. A value less than one shows degradation, a value greater than one shows improvement, and a value of one shows no change.
The percentage improvement and the benchmark ratio present the same information in different ways. An improvement of 50% in benchmark B over benchmark A means that B runs in half the time, that is, B is two times faster than A.
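As a minimal sketch of the three comparison metrics, assuming a hypothetical baseline of 200 ms and a candidate of 100 ms (the function name and values are illustrative, not output from the tool):

```python
def compare(a, b):
    """Compare baseline time a against candidate time b using the
    three metrics above: percentage improvement, difference, ratio."""
    return {
        "improvement %": (a - b) / a * 100,
        "difference": a - b,
        "ratio": a / b,
    }

print(compare(200.0, 100.0))
# {'improvement %': 50.0, 'difference': 100.0, 'ratio': 2.0}
# A 50% improvement corresponds to a ratio of 2: the candidate is
# two times faster than the baseline.
```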
The Results menu also stores, loads, and clears benchmarks. The following menu items are available:
The Stats tool does not save sample [S] or trace [T] information in the benchmark file. To load a benchmark file, the image must contain the classes and methods that were used to create the benchmarks.