Uncovering unexpected behavior

After running the testHash program, click Function List in the Control Panel to display a list of functions called in the testHash program.

Since you are interested in the compute-bound functions of the testHash program, select View > Restrict functions > Compute-bound functions only. Quantify displays only the functions that made no operating system calls.

You would expect hashIndex to be high on the list, since it computes the hash keys, but finding strcmp there is a surprise.

The strcmp function is generally efficient, so perhaps it was simply called a very large number of times. Select View > Display data > Number of function calls to sort the compute-bound functions by the number of times each was called, from any caller.

Why should strcmp be called so many times over such a small test dataset?

The function detail suggests long buckets

Double-click strcmp to open the Function Detail window.


Double-click getHash to inspect its function detail.

The time spent in the getHash code varies between a minimum of 44 and a maximum of 937 cycles. This wide variation is presumably because getHash had to traverse hash-table buckets of different sizes in its scanning loop.

The strcmp function is called 10 to 12 times for each call to getHash, making the scanning loop and its calls to strcmp the major contributors to getHash's accumulated time.

To confirm this, you can look at the annotated source code for getHash.

Annotated source confirms excessive calls

Click Show Annotated Source in the Function Detail window to open the Annotated Source window.

The annotated source for getHash shows the function+descendants time distributed over the source lines, with each line's time scaled as a percentage of the function's overall function+descendants time.

To find out how much of getHash's time is spent in the loop that calls strcmp (exclusive of the time in the strcmp function itself), select View > Annotations > Function time (% of function).

The Function time (% of function) view shows that over 90 percent of getHash's function time was spent in the scanning loop.

Saving the baseline data

Now that you have identified the performance bottleneck, save the collected data so it can serve as a baseline against which to measure performance changes. Select File > Save collected data to save the binary data, in case you want to rerun Quantify on this same dataset later. Then select File > Export data as to save the collected data in export format for comparison with subsequent runs of the program.

After saving the data, select File > Exit to exit Quantify.