I have run the CacheBench benchmark. Each test performs repeated accesses to data items over varying vector lengths. Timings are taken for each vector length over a number of iterations. The product of iterations and vector length gives the total amount of data accessed in bytes, and dividing this total by the total time gives a bandwidth figure in megabytes per second, where a megabyte is defined as 1024^2, i.e. 1048576, bytes. In addition to this figure, the average access time in nanoseconds per data item is computed and reported.
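
For concreteness, this is a minimal sketch of that calculation (the function and variable names are illustrative, not taken from the CacheBench source):

    MB = 1024 * 1024  # a megabyte, defined here as 1048576 bytes

    def bandwidth_mb_per_sec(iterations, vector_length_bytes, total_time_sec):
        # product of iterations and vector length = total data accessed, in bytes
        total_bytes = iterations * vector_length_bytes
        # divide by total time to get bytes/sec, then scale to MB/sec
        return total_bytes / total_time_sec / MB

    def avg_access_time_ns(total_time_sec, total_accesses):
        # average access time per data item, in nanoseconds
        return total_time_sec * 1e9 / total_accesses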

But I got the results in the following form: why is there a sudden decrease in the time as the vector size increases?

output:

C Size      Nanosec
256         11715.798191
336         11818.694309
424         11812.024314
512         11819.551479
...         ...
4194309     9133.330071

I need the results in the following form. How do I get this result?

The output looks like this:

Read Cache Test

C Size    Nanosec    MB/sec     % Change
-------   -------    -------    --------
4096      7.396      515.753    1.000
6144      7.594      502.350    1.018
8192      7.731      493.442    1.018
12288     17.578     217.015    2.274
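
In case it helps to see the arithmetic, here is a rough post-processing sketch that derives the MB/sec and % Change columns from the C Size and Nanosec values. It assumes each data item is 4 bytes and that % Change is the ratio of a row's Nanosec value to the previous row's; both assumptions are only inferred from the sample rows above (e.g. 4 bytes / 7.396 ns ≈ 515.75 MB/s and 17.578 / 7.731 ≈ 2.274), not taken from the CacheBench source:

    ITEM_SIZE = 4      # assumed bytes per data item (inferred from the sample rows)
    MB = 1024 * 1024   # 1048576 bytes

    # (C Size, Nanosec) pairs copied from the sample output above
    rows = [(4096, 7.396), (6144, 7.594), (8192, 7.731), (12288, 17.578)]

    print(f"{'C Size':<10}{'Nanosec':<11}{'MB/sec':<11}{'% Change'}")
    prev_ns = None
    for size, ns in rows:
        mb_per_sec = ITEM_SIZE / (ns * 1e-9) / MB          # bytes/sec scaled to MB/sec
        change = 1.0 if prev_ns is None else ns / prev_ns  # ratio to the previous row
        print(f"{size:<10}{ns:<11.3f}{mb_per_sec:<11.3f}{change:.3f}")
        prev_ns = ns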
