Channel: Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library

Optimizing matrix multiplication algorithm on Intel Xeon Gold (DevCloud)


Hi,

 

I am working on Case #03357624 - Benchmarking algorithms on Intel Xeon Gold (DevCloud):

https://communities.intel.com/thread/124090

 

Summary:

The concern is the time overhead observed while running the compiled mmatest1.c, attached to the article: Performance of Classic Matrix Multiplication Algorithm on Intel® Xeon Phi™ Processor System | Intel® Software

 

Observation:

The first iteration of the loop takes a very long time, and the second iteration also takes comparatively longer. The remaining iterations take similar times.

I ran the code with a loop count of 16 and a matrix size of 256, and got the following timing for each iteration:

        MKL:

        MKL  - Completed 1 in: 0.2302730 seconds

        MKL  - Completed 2 in: 0.0001534 seconds

        MKL  - Completed 3 in: 0.0001267 seconds

        MKL  - Completed 4 in: 0.0001275 seconds

        ..................

        MKL  - Completed 15 in: 0.0001280 seconds

        MKL  - Completed 16 in: 0.0001347 seconds

 

        CMMA:

        CMMA - Completed 1 in: 0.0504993 seconds

        CMMA - Completed 2 in: 0.0003169 seconds

        CMMA - Completed 3 in: 0.0001666 seconds

        CMMA - Completed 4 in: 0.0001687 seconds

        ................

        CMMA - Completed 15 in: 0.0001638 seconds

        CMMA - Completed 16 in: 0.0001636 seconds

 

The time taken by the first iteration should be due to warm-up (the initial loading of data into the caches, population of the Translation Lookaside Buffer (TLB), and so on).

 

=> I need advice and confirmation on the following questions and the answers I have formed from my understanding:

1) Should the first result (the time taken by the first iteration of the loop) be included in the time estimate while benchmarking?

Ans I have) No, it should be excluded. 

Further Q) Why does the second iteration take more time than the following ones? Should it also be excluded from benchmarking? How many initial iterations should we exclude from the time estimate?

 

2) Is the overhead primarily due to cache misses or to warm-up time?

Ans I have) It's due to warm-up time. If we use large matrices, cache misses will also come into effect.

Further Q) According to the user, it's due to cache misses. How can a cache miss matter initially, when the cache holds no data yet? Isn't warm-up the right term instead? (My understanding is that warm-up largely consists of compulsory "cold" cache misses, TLB misses, and page faults, so the two descriptions may overlap.)

 

3) If it is indeed cache misses, how can he work on that? He thought memory is always laid out in row-major format, and thus cache misses would be avoided as long as he accessed it in that same order.

Ans I have) That's correct; we should access the data in row-major order. The data layout in memory and the data access pattern should be kept the same. Possible solutions (if the matrix is large) are:

a) Transpose matrix B so that it, too, is accessed in row-major order.

b) Use the loop blocking optimization technique (LBOT) with a block size comparable to the virtual page size.
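To make (a) and (b) concrete, here is a sketch in C of both approaches for square N×N single-precision matrices. The function names and the block size BS are illustrative assumptions, not taken from the attached code; BS would normally be tuned to the cache (or, as above, the virtual page size):

```c
#include <string.h>

#define N 256
#define BS 64   /* illustrative block size; tune to cache/page size */

/* (a) Transpose B first, so the innermost loop reads both
 * operands contiguously in row-major order. */
void matmul_bt(float A[N][N], float B[N][N], float C[N][N])
{
    static float BT[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            BT[j][i] = B[i][j];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            float sum = 0.0f;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * BT[j][k];  /* both accesses row-major */
            C[i][j] = sum;
        }
}

/* (b) Loop blocking: work on BS x BS tiles so that each tile of
 * A, B, and C stays resident in cache while it is reused. */
void matmul_blocked(float A[N][N], float B[N][N], float C[N][N])
{
    memset(C, 0, sizeof(float) * N * N);
    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int k = kk; k < kk + BS; k++) {
                        float aik = A[i][k];
                        for (int j = jj; j < jj + BS; j++)
                            C[i][j] += aik * B[k][j];
                    }
}
```

Both variants compute the same product as the naive loop; they only change the memory access pattern.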

 

4) How can cblas_sgemm() be debugged, and where can its source code be found so that it can be stepped through with gdb?

 

Please advise.

Thanks and regards,

Rishabh Kumar Jain

