I am trying to determine whether the Intel Direct Sparse Solver for Clusters is a good parallel solver for our application. I have implemented it in Fortran to solve a linear FEA problem. For the call to the sparse solver, I see speed-up with an increasing number of MPI processes, but not good speed-up with an increasing number of OpenMP threads per MPI process.
In this case, the A matrix is generated from a finite-difference-type grid and stored in the distributed CSR (DCSR) format. The node ordering is such that distributing the matrix leaves gaps in the sparse matrix storage of each local part, similar to the example given here: https://software.intel.com/en-us/articles/intel-math-kernel-library-parallel-direct-sparse-solver-for-clusters
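To illustrate the layout (with made-up sizes and values, not our actual model), one rank's local part of A looks roughly like the sketch below; ibegin and iend are the values later passed through iparm(41) and iparm(42):

program dcsr_layout_example
  implicit none
  ! Illustration only: global n = 8, this rank owns global rows 3 to 5.
  integer, parameter :: ibegin = 3, iend = 5     ! local domain, goes into iparm(41:42)
  integer :: ia(iend - ibegin + 2)               ! local row pointers, 1-based
  integer :: ja(7)                               ! global (not local) column indices
  real*4  :: a(7)                                ! values (single precision, iparm(28) = 1)

  ia = (/ 1, 4, 6, 8 /)                          ! rows 3, 4, 5 hold 3, 2 and 2 entries
  ja = (/ 3, 6, 8,  4, 7,  5, 8 /)               ! columns outside 3..5 give each part its gaps
  a  = (/ 1.0, 2.0, 3.0,  4.0, 5.0,  6.0, 7.0 /)
  print *, 'rows', ibegin, 'to', iend, ', local nnz =', ia(iend - ibegin + 2) - 1
end program dcsr_layout_example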
This is a linear time-domain problem, so we factorize the matrix once and then solve many times with evolving boundary conditions. Should I expect good scaling over both MPI processes and OpenMP threads in the solve phase of the Direct Sparse Solver for Clusters?
I have benchmarked a 10 million DOF model on a Linux cluster with the number of MPI processes ranging from 2 to 128, one process per hardware node, and the number of OpenMP threads per node ranging from 2 to 16. I see speed-up when increasing the number of MPI processes up to about 32, but very little improvement from using more OpenMP threads: 8 or 16 threads give about the same speed-up as 2 threads.
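In case it is relevant, this is roughly how each rank's effective thread count can be checked (a minimal sketch using the standard omp_lib and MKL service routines; the program name is made up), to rule out the launch configuration rather than the solver limiting thread scaling:

program check_threads
  use omp_lib
  implicit none
  include 'mpif.h'
  integer :: rank, ierr
  integer, external :: mkl_get_max_threads
  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  ! Each rank reports the thread counts OpenMP and MKL actually see.
  print '(a,i4,a,i3,a,i3)', 'rank ', rank, &
        ': omp threads = ', omp_get_max_threads(), &
        ', mkl threads = ', mkl_get_max_threads()
  call mpi_finalize(ierr)
end program check_threads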
I am using the following iparm variables:
iparm(1)  = 1       ! supply all iparm values explicitly, no defaults
iparm(2)  = 2       ! fill-in reducing ordering: nested dissection (METIS)
iparm(10) = 8       ! pivot perturbation eps = 1.0E-8
iparm(18) = -1      ! report the number of non-zeros in the factors
iparm(27) = 0       ! input matrix checker switched off
iparm(28) = 1       ! single-precision arithmetic
iparm(40) = 2       ! distributed assembled matrix input (DCSR)
iparm(41) = ibegin  ! first global row of this rank's domain
iparm(42) = iend    ! last global row of this rank's domain
iparm(60) = 1       ! switch to out-of-core mode if the factors do not fit in memory
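For context, a minimal sketch of the factorize-once / solve-many call sequence around these settings is below. The subroutine name, mtype = -2 (real symmetric indefinite) and the use of single-precision arrays (because of iparm(28) = 1) are assumptions for illustration; the local DCSR arrays and the per-step RHS/solution are assembled elsewhere.

subroutine run_time_loop(n, a, ia, ja, b, x, iparm, nsteps)
  implicit none
  include 'mpif.h'
  integer :: n, ia(*), ja(*), iparm(64), nsteps
  real*4  :: a(*), b(*), x(*)
  integer*8 :: pt(64)
  integer :: phase, error, istep, idum(1)
  real*4  :: ddum(1)
  integer, parameter :: mtype = -2, maxfct = 1, mnum = 1, nrhs = 1, msglvl = 1

  pt = 0                        ! solver handle must be zeroed before the first call

  phase = 12                    ! reordering + numerical factorization, done once
  call cluster_sparse_solver(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, &
       idum, nrhs, iparm, msglvl, ddum, ddum, MPI_COMM_WORLD, error)

  do istep = 1, nsteps
     ! ... update b for the boundary conditions of this time step ...
     phase = 33                 ! forward/backward substitution only, factors reused
     call cluster_sparse_solver(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, &
          idum, nrhs, iparm, msglvl, b, x, MPI_COMM_WORLD, error)
  end do

  phase = -1                    ! release the solver's internal memory
  call cluster_sparse_solver(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, &
       idum, nrhs, iparm, msglvl, ddum, ddum, MPI_COMM_WORLD, error)
end subroutine run_time_loop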
Is the Direct Sparse Solver for Clusters suitable here, and what issues should I look at to try to improve scaling with the number of OpenMP threads? Thanks for your help.