The Performance Profile viewer displays the execution time of every method executed by the application under analysis, allowing the user to uncover potential bottlenecks. The methods requiring the most time are displayed graphically in a pie chart: up to six functions are shown, each individually responsible for more than 5% of total execution time. Below the chart is a sortable list of every method, with its timing measurements displayed.
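The pie-chart selection rule described above can be sketched as a simple filter. This is only an illustration of the rule, not the viewer's actual implementation, and the function names and timing values below are hypothetical:

```python
def pie_chart_slices(times, threshold=0.05, max_slices=6):
    """Pick up to six functions, each responsible for more than
    5% of total execution time, largest first."""
    total = sum(times.values())
    eligible = [(name, t) for name, t in times.items()
                if t / total > threshold]
    eligible.sort(key=lambda item: item[1], reverse=True)
    return eligible[:max_slices]

# Hypothetical per-function times in milliseconds
times = {"checkLog": 6500, "readMsg": 450, "readString": 400,
         "parseHdr": 900, "initLog": 30}
print(pie_chart_slices(times))
# readString (4.8% of the total) falls below the 5% threshold
# and is therefore excluded from the chart.
```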
To view the Performance Profile report:
Select the Performance Profile tab.
Notice that the function checkLog() was responsible for roughly 75% to 85% of the time spent processing information in the UMTS base station. The table, where times are listed in milliseconds, shows that this function's average execution time was between 6 and 7 seconds (it will vary somewhat depending on your machine) and that it has no descendants - i.e. it never calls and then awaits the return of other functions or methods, which explains why its Function time matches its F+D (function plus descendants) time. Is this to be expected? If you wished, you could click the function name in the table to jump to that function and see whether its execution time can be reduced.
Each column can be used to sort the table - simply click the column heading.
Click the column heading entitled F+D Time.
Interestingly, though checkLog() clearly uses the largest amount of execution time, it is not the "slowest" method once descendants are considered. That distinction goes to readMsg(): though quick by itself, its execution time including descendants is the longest of all. A quick investigation of the readMsg() function reveals that it calls - and awaits the return of - readString(), which explains why the execution time of readMsg() is longer than that of readString().
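The distinction between Function time and F+D time can be made concrete with a small timing sketch. The bookkeeping below is a minimal illustration, not the tool's instrumentation, and the sleep durations stand in for real work:

```python
import time

_stack = []   # active call frames: [function name, time spent in callees]
times = {}    # name -> {"self": Function time, "inclusive": F+D time}

def timed(fn):
    """Record both self (Function) and inclusive (F+D) time per function."""
    def wrapper(*args, **kwargs):
        _stack.append([fn.__name__, 0.0])
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        name, child_time = _stack.pop()
        rec = times.setdefault(name, {"self": 0.0, "inclusive": 0.0})
        rec["inclusive"] += elapsed            # F+D: includes descendants
        rec["self"] += elapsed - child_time    # Function: descendants excluded
        if _stack:                             # credit our time to the caller's
            _stack[-1][1] += elapsed           # descendant total
        return result
    return wrapper

@timed
def checkLog():      # a leaf: no descendants, so self == inclusive
    time.sleep(0.05)

@timed
def readString():
    time.sleep(0.02)

@timed
def readMsg():       # a caller: its F+D time includes readString's time
    readString()

checkLog()
readMsg()
```

Running this shows the pattern from the report: for the leaf checkLog(), Function time equals F+D time, while readMsg() has a small Function time but an F+D time larger than readString()'s, because it waits for readString() to return.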
Of course, since this is a multi-threaded application, it is possible for one function to reveal itself as the slowest performer while, overall, the monitored application is typically busy doing other things. This would explain why the runtime tracing diagram does not indicate monopolization of UMTS base station execution following a call to the checkLog() method (have a look; search for *checkLog* using the Find button from the toolbar), and thus why performance profiling is such a valuable supplement to code optimization.
As with the memory profiling feature, notice how easy it was to gather this information. Performance profiling can now also be part of your regression test suite.