C++ performance analysis of tiled matrix multiplication with valgrind

Tags: c++ performance algorithm optimization

I am trying to figure out how to implement loop tiling correctly. My code is based on http://people.freebsd.org/~lstewart/articles/cpumemory.pdf. In theory I should get a performance boost from using tiled matrix multiplication, but I don't necessarily. I will also show the results from valgrind's cachegrind, which I find quite interesting.

I have commented out the different approaches.

// cpp program, matrix multiplication
// returns the elapsed time of the loop iterations measured by omp_get_wtime()

#include <iostream>
#include <algorithm>            // std::min
#include <omp.h>

int main(int argc, char *argv[])
{

    // matrix dimensions
    const int row = 1000;
    const int col = 1000;

    // matrix stored as an array of size 1000*1000
    // temp will be b transposed, recommendation from the article mentioned above
    // res is of double precision, I ran into errors displaying the data when using a different data type
    int *a = new int[row*col];
    int *b = new int[row*col];
    int *temp = new int[row*col];
    // value-initialize res to all zeros, since the multiplication loops below use +=
    double *res = new double[row*col]();

    // initialization
    for(int i = 0; i < row; ++i){
        for (int j = 0; j < col; ++j) {
            a[i*col+j] = i*col+j;
            b[i*col+j] = i*col+j;
        }
    }

    // transposition of b
    for(int i = 0; i < row; ++i){
        for (int j = 0; j < col; ++j) {
            temp[i*col+j] = b[j*col+i];
        }
    }


    int i,j,k,x,y,z;

// "naive" matrix multiplication
    // double start = omp_get_wtime();
    // for (i = 0; i < row; ++i) {
    //     for (j = 0; j < col; ++j) {
    //         for (k = 0; k < row; ++k) {
    //             res[ i * col + j ] +=  a[ i * col + k ] * b[ k * col + j ];      
    //         }
    //     } 
    // }
    // double end = omp_get_wtime();
    // std::cout << end-start << std::endl;



// "transposed" matrix multiplication
    // double start = omp_get_wtime();
    // for (i = 0; i < row; ++i) {
    //     for (j = 0; j < col; ++j) {
    //         for (k = 0; k < row; ++k) {
    //             res[ i * col + j ] +=  a[ i * col + k ] * temp[ k + j * col ];
    //         }
    //     }
    // }
    // double end = omp_get_wtime();
    // std::cout << end-start << std::endl;

// tiled (parallel) matrix multiplication
// from /sys/devices/system/cpu/cpu0/cache/index0
// cat coherency_line_size returns 64;
// thus I will use 64 as the blocking size;

    const int incr = 64;
    double start = omp_get_wtime();
    for (i = 0; i < row; i += incr) {
        for (j = 0; j < col; j += incr) {
            // res is already zero-initialized at allocation, so no per-tile reset is needed
            for (k = 0; k < row; k += incr) {
                // work on one incr x incr tile; std::min clips the last,
                // possibly partial, tile at the matrix border
                for (x = i; x < std::min( i + incr, row ); x++) {
                    for (y = j; y < std::min( j + incr, col ); y++) {
                        for (z = k; z < std::min( k + incr, row ); z++) {

                            res[ x * col + y ] +=  a[ x * col + z ] * b[ z * col + y ];

                        }
                    }
                }
            }
        }
    }
    double end = omp_get_wtime();
    std::cout << end-start << std::endl;

    delete[] a;
    delete[] b;
    delete[] temp;
    delete[] res;

    return 0;
}

Results:

Below are the results for all three approaches, compiled on a Linux machine with an Intel dual-core CPU and 4 GB of DRAM. First I present the results compiled without optimization, then the results compiled with optimization. Each run is accompanied by the corresponding valgrind cachegrind output. For those unfamiliar with the tool, from http://www.valgrind.org/docs/manual/cg-manual.html:

"Cache accesses for instruction fetches are summarised first, giving the number of fetches made (this is the number of instructions executed, which can be useful to know in its own right), the number of I1 misses, and the number of LL instruction (LLi) misses."

The "naive" approach:

$ g++ -fopenmp parallel.cpp -o parallel
$ ./parallel

16.5305    

$ valgrind --tool=cachegrind ./parallel

==12558== I   refs:      39,054,659,801
==12558== I1  misses:             1,758
==12558== LLi misses:             1,738
==12558== I1  miss rate:           0.00%
==12558== LLi miss rate:           0.00%
==12558== 
==12558== D   refs:      20,028,690,508  (18,024,512,540 rd   + 2,004,177,968 wr)
==12558== D1  misses:     1,064,759,236  ( 1,064,571,085 rd   +       188,151 wr)
==12558== LLd misses:        62,877,799  (    62,689,774 rd   +       188,025 wr)
==12558== D1  miss rate:            5.3% (           5.9%     +           0.0%  )
==12558== LLd miss rate:            0.3% (           0.3%     +           0.0%  )
==12558== 
==12558== LL refs:        1,064,760,994  ( 1,064,572,843 rd   +       188,151 wr)
==12558== LL misses:         62,879,537  (    62,691,512 rd   +       188,025 wr)
==12558== LL miss rate:             0.1% (           0.1%     +           0.0%  )
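
(The miss rates are simply misses divided by accesses: for example, the D1 miss rate above is 1,064,759,236 / 20,028,690,508 ≈ 5.3%, and its read component is 1,064,571,085 / 18,024,512,540 ≈ 5.9%.)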

The "transposed" approach:

$ g++ -fopenmp parallel.cpp -o parallel
$ ./parallel

9.40104 

$ valgrind --tool=cachegrind ./parallel

==13319== I   refs:      39,054,659,804
==13319== I1  misses:             1,759
==13319== LLi misses:             1,739
==13319== I1  miss rate:           0.00%
==13319== LLi miss rate:           0.00%
==13319== 
==13319== D   refs:      20,028,690,508  (18,024,512,539 rd   + 2,004,177,969 wr)
==13319== D1  misses:        63,823,736  (    63,635,585 rd   +       188,151 wr)
==13319== LLd misses:        62,877,799  (    62,689,774 rd   +       188,025 wr)
==13319== D1  miss rate:            0.3% (           0.3%     +           0.0%  )
==13319== LLd miss rate:            0.3% (           0.3%     +           0.0%  )
==13319== 
==13319== LL refs:           63,825,495  (    63,637,344 rd   +       188,151 wr)
==13319== LL misses:         62,879,538  (    62,691,513 rd   +       188,025 wr)
==13319== LL miss rate:             0.1% (           0.1%     +           0.0%  )

The "tiled" approach:

$ g++ -fopenmp parallel.cpp -o parallel
$ ./parallel

13.4941

$ valgrind --tool=cachegrind ./parallel

==13872== I   refs:      62,967,276,691
==13872== I1  misses:             1,768
==13872== LLi misses:             1,747
==13872== I1  miss rate:           0.00%
==13872== LLi miss rate:           0.00%
==13872== 
==13872== D   refs:      35,593,733,973  (28,411,716,118 rd   + 7,182,017,855 wr)
==13872== D1  misses:         6,724,892  (     6,536,740 rd   +       188,152 wr)
==13872== LLd misses:         1,377,799  (     1,189,774 rd   +       188,025 wr)
==13872== D1  miss rate:            0.0% (           0.0%     +           0.0%  )
==13872== LLd miss rate:            0.0% (           0.0%     +           0.0%  )
==13872== 
==13872== LL refs:            6,726,660  (     6,538,508 rd   +       188,152 wr)
==13872== LL misses:          1,379,546  (     1,191,521 rd   +       188,025 wr)
==13872== LL miss rate:             0.0% (           0.0%     +           0.0%  )

Note the refs: a massive increase.

Compiled with optimization:

The "naive" approach:

$ g++ -fopenmp -O3 parallel.cpp -o parallel
$ ./parallel

4.87246

$ valgrind --tool=cachegrind ./parallel

==11227== I   refs:      9,021,661,364
==11227== I1  misses:            1,756
==11227== LLi misses:            1,734
==11227== I1  miss rate:          0.00%
==11227== LLi miss rate:          0.00%
==11227== 
==11227== D   refs:      4,008,681,781  (3,004,505,045 rd   + 1,004,176,736 wr)
==11227== D1  misses:    1,065,760,232  (1,064,572,078 rd   +     1,188,154 wr)
==11227== LLd misses:       62,877,794  (   62,689,768 rd   +       188,026 wr)
==11227== D1  miss rate:          26.5% (         35.4%     +           0.1%  )
==11227== LLd miss rate:           1.5% (          2.0%     +           0.0%  )
==11227== 
==11227== LL refs:       1,065,761,988  (1,064,573,834 rd   +     1,188,154 wr)
==11227== LL misses:        62,879,528  (   62,691,502 rd   +       188,026 wr)
==11227== LL miss rate:            0.4% (          0.5%     +           0.0%  )

The "transposed" approach:

$ g++ -fopenmp -O3 parallel.cpp -o parallel
$ ./parallel 

2.02121 

$ valgrind --tool=cachegrind ./parallel

==12076== I   refs:      8,020,662,317
==12076== I1  misses:            1,753
==12076== LLi misses:            1,731
==12076== I1  miss rate:          0.00%
==12076== LLi miss rate:          0.00%
==12076== 
==12076== D   refs:      4,006,682,757  (3,002,508,030 rd   + 1,004,174,727 wr)
==12076== D1  misses:       63,823,733  (   63,635,579 rd   +       188,154 wr)
==12076== LLd misses:       62,877,795  (   62,689,769 rd   +       188,026 wr)
==12076== D1  miss rate:           1.5% (          2.1%     +           0.0%  )
==12076== LLd miss rate:           1.5% (          2.0%     +           0.0%  )
==12076== 
==12076== LL refs:          63,825,486  (   63,637,332 rd   +       188,154 wr)
==12076== LL misses:        62,879,526  (   62,691,500 rd   +       188,026 wr)
==12076== LL miss rate:            0.5% (          0.5%     +           0.0%  )

The "tiled" approach:

$ g++ -fopenmp -O3 parallel.cpp -o parallel
$ ./parallel 

1.78285   

$ valgrind --tool=cachegrind ./parallel

==14365== I   refs:      8,192,794,606
==14365== I1  misses:            1,753
==14365== LLi misses:            1,732
==14365== I1  miss rate:          0.00%
==14365== LLi miss rate:          0.00%
==14365== 
==14365== D   refs:      4,102,512,450  (3,083,324,326 rd   + 1,019,188,124 wr)
==14365== D1  misses:        6,597,429  (    6,409,277 rd   +       188,152 wr)
==14365== LLd misses:        1,377,797  (    1,189,770 rd   +       188,027 wr)
==14365== D1  miss rate:           0.1% (          0.2%     +           0.0%  )
==14365== LLd miss rate:           0.0% (          0.0%     +           0.0%  )
==14365== 
==14365== LL refs:           6,599,182  (    6,411,030 rd   +       188,152 wr)
==14365== LL misses:         1,379,529  (    1,191,502 rd   +       188,027 wr)
==14365== LL miss rate:            0.0% (          0.0%     +           0.0%  )

My question: why does the unoptimized "tiled" approach perform worse than the unoptimized "transposed" one? Is there something wrong with my implementation of the tiling algorithm?

I mean, it is obvious that although the cache miss rates of the two versions are roughly the same, the refs (the number of fetches) dropped from 60+ billion to about 8 billion, so it is no surprise that it is faster now. What is not clear to me is where those extra 20+ billion instructions come from. It should be the fastest of the three unoptimized implementations, right?

Thanks a lot.

Regards,

Vincent

Best Answer

Your tiled approach is more complex code-wise and therefore incurs extra overhead. With optimized code this is of course not a big deal, since the matrices are large enough that proper cache usage yields more benefit than the overhead costs.

Now look at the unoptimized code:

                     for (z = k; z < std::min( k + incr, row ); z++) {
                                     -------------------------

These computations are executed inside the innermost, tight loop. That is a perfect performance killer.

Moving them to an outer scope (e.g., as soon as k is available) makes a big difference. The optimizer can of course do this on its own, but only if you ask it to optimize. This is why measuring unoptimized code is usually worthless.
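
For illustration, here is a minimal sketch of that hand optimization, hoisting the std::min bounds out of the inner loops (the xmax/ymax/zmax names are mine, not from the question's code):

    for (i = 0; i < row; i += incr) {
        const int xmax = std::min(i + incr, row);    // hoisted: computed once per block of rows
        for (j = 0; j < col; j += incr) {
            const int ymax = std::min(j + incr, col);    // hoisted: once per block of columns
            for (k = 0; k < row; k += incr) {
                const int zmax = std::min(k + incr, row);    // hoisted: as soon as k is available
                for (x = i; x < xmax; x++) {
                    for (y = j; y < ymax; y++) {
                        for (z = k; z < zmax; z++) {
                            res[ x * col + y ] += a[ x * col + z ] * b[ z * col + y ];
                        }
                    }
                }
            }
        }
    }

The inner loops now compare against plain local variables, so even an unoptimized build avoids re-evaluating std::min on every iteration: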

0m16.186s  "tiled" approach
0m11.543s  "tiled" approach with the hand optimization
0m10.919s  "transposed" approach

That is what I measured on my machine. Looks fine to me.

The original question on C++ performance analysis of tiled matrix multiplication with valgrind can be found on Stack Overflow: https://stackoverflow.com/questions/26892504/
