
CSCMM for Armadillo sparse-dense multiplication


Environment: Armadillo 4.320.0 and 4.400

Compiler: Intel C++ compiler

OS: Ubuntu 12.04

I am trying to replace Armadillo's native sparse-dense multiplication with Intel MKL's CSCMM routine (mkl_scscmm). I wrote the following code:

#include <cstdlib>
#include <iostream>
#include <mkl.h>

#define ARMA_64BIT_WORD   // Armadillo uses 64-bit indices from here on
#include <armadillo>

using namespace std;
using namespace arma;

int main(int argc, char *argv[])
{
   long long m = atoll(argv[1]);
   long long k = atoll(argv[2]);
   long long n = atoll(argv[3]);
   float density = 0.3;
   sp_fmat A = sprandn<sp_fmat>(m, k, density);   // sparse m x k matrix, CSC storage
   fmat B = randu<fmat>(k, n);                    // dense k x n matrix
   fmat C(m, n);
   C.zeros();
   // Goal: C = alpha * A * B + beta * C, via
   // mkl_scscmm(char *transa, MKL_INT *m, MKL_INT *n, MKL_INT *k, float *alpha,
   //            char *matdescra, float *val, MKL_INT *indx, MKL_INT *pntrb,
   //            MKL_INT *pntre, float *b, MKL_INT *ldb, float *beta,
   //            float *c, MKL_INT *ldc);
   char transa = 'N';
   float alpha = 1.0f;
   float beta = 0.0f;
   const char *matdescra = "GUUC";   // general matrix, zero-based (C-style) indexing
   long long ldb = k;
   long long ldc = m;
   cout << "b4 Input A:" << endl << A;
   cout << "b4 Input B:" << endl << B;
   mkl_scscmm(&transa, &m, &n, &k, &alpha, matdescra,
              const_cast<float *>(A.values),
              (long long *)A.row_indices,          // row index of each nonzero
              (long long *)A.col_ptrs,             // pntrb: column start offsets
              (long long *)(A.col_ptrs + 1),       // pntre: column end offsets
              B.memptr(), &ldb, &beta,
              C.memptr(), &ldc);                   // raw buffer of C, not the fmat object
   cout << "Input A:" << endl << A;
   cout << "Input B:" << endl << B;
   cout << "Input C:" << endl << C;
   return 0;
}

I compiled the above code and ran it as "./testcscmm 10 4 6". I am getting a segmentation fault (core dumped).
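For reference, the build line is roughly the following (reconstructed; exact paths and library names depend on the installation). Since ARMA_64BIT_WORD makes Armadillo's indices 64-bit, my understanding is that the ILP64 interface (-DMKL_ILP64 and mkl_intel_ilp64) is needed rather than the default LP64 one, but I am not certain my link line is right:

icpc -DMKL_ILP64 testcscmm.cpp -o testcscmm \
    -I${MKLROOT}/include -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm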

 

b4 Input A:
 (0, 0)         1.1123
 (4, 0)        -0.3453
 (8, 0)         0.6081
 (1, 1)         0.6410
 (4, 1)        -0.7121
 (5, 1)         1.1592
 (9, 1)        -1.7189
 (0, 2)         0.4175
 (2, 2)        -0.4001
 (4, 2)         2.2809
 (4, 3)        -2.2717
 (9, 3)         0.2251

b4 Input B:
0.1567   0.9989   0.6126   0.4936   0.5267   0.2833
0.4009   0.2183   0.2960   0.9728   0.7699   0.3525
0.1298   0.5129   0.6376   0.2925   0.4002   0.8077
0.1088   0.8391   0.5243   0.7714   0.8915   0.9190
Input A:
[matrix size: 13715672716573367337x13744746204899078486; n_nonzero: 12; density: 0.00%]

Segmentation fault (core dumped)

 

For some reason the structure of A is getting corrupted (note the garbage dimensions printed for A after the call). I have the following questions:

  1. Does mkl_scscmm modify its input arrays? If not, why would A get corrupted? (See the sanity check after this list.)
  2. I changed the output matrix C to a plain float array, but the error persists.
  3. Valgrind also reports memory errors.
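One sanity check I can think of, to test my assumption that the integer widths have to agree for the call above to be safe, is a minimal size probe:

#include <mkl.h>
#define ARMA_64BIT_WORD
#include <armadillo>
#include <iostream>

int main()
{
    // With the default LP64 interface MKL_INT is 32-bit; with -DMKL_ILP64
    // it is 64-bit. If this does not match arma::uword, mkl_scscmm would
    // misread the index arrays, which could corrupt A's header.
    std::cout << "sizeof(MKL_INT)     = " << sizeof(MKL_INT) << '\n';
    std::cout << "sizeof(arma::uword) = " << sizeof(arma::uword) << '\n';
    return 0;
}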

Please let me know how to make an Intel MKL call using Armadillo's matrix data structures, especially for sparse-dense multiplication.
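For concreteness, this is the self-contained wrapper I am ultimately after. It is only a sketch of what I believe the call should look like, assuming an ILP64 build so that MKL_INT matches arma::uword; the function name is mine:

#include <mkl.h>
#define ARMA_64BIT_WORD
#include <armadillo>

// Sketch: C = A * B with A in Armadillo's CSC storage, assuming -DMKL_ILP64
// so that MKL_INT is 64-bit like arma::uword.
arma::fmat sp_dense_mult(const arma::sp_fmat &A, const arma::fmat &B)
{
    arma::fmat C(A.n_rows, B.n_cols);
    C.zeros();
    char transa = 'N';
    float alpha = 1.0f, beta = 0.0f;
    const char *matdescra = "GUUC";            // general matrix, zero-based indexing
    MKL_INT m = A.n_rows, k = A.n_cols, n = B.n_cols;
    MKL_INT ldb = k, ldc = m;
    mkl_scscmm(&transa, &m, &n, &k, &alpha, matdescra,
               const_cast<float *>(A.values),
               (MKL_INT *)A.row_indices,       // row index per nonzero
               (MKL_INT *)A.col_ptrs,          // pntrb: column starts
               (MKL_INT *)(A.col_ptrs + 1),    // pntre: column ends
               B.memptr(), &ldb, &beta,
               C.memptr(), &ldc);
    return C;
}

If the LP64/ILP64 assumption is wrong, that mismatch alone might explain the corruption of A.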

Attachment: testprograms.tgz (337.89 KB)
