We need to find the Schur complement of a sparse matrix A of the form

    A = [ A11  A12 ]
        [ A21  A22 ]
That is, we want the Schur block defined by S = A22 - A21 * A11^(-1) * A12, which the new sparse solver update makes possible. In cluster_sparse_solver, S is stored as a dense matrix.
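For reference, our current call is roughly the sketch below (C, zero-based iparm indexing, error handling and the distributed-input iparm settings omitted). The Schur-specific parts — marking the A22 rows in the perm array, enabling the computation through iparm[35], and receiving the dense S through the solution argument — reflect our reading of the documentation, so please correct anything we have misunderstood:

    #include <stdlib.h>
    #include <mpi.h>
    #include "mkl_cluster_sparse_solver.h"

    /* Rough sketch: ask cluster_sparse_solver for the dense Schur complement of
     * the trailing n22 x n22 block of an n x n real nonsymmetric matrix in CSR
     * format.  Where exactly the dense S is returned (here: via the solution
     * argument) is our assumption from the documentation.  Error handling
     * omitted. */
    void compute_schur(MKL_INT n, MKL_INT n22,
                       MKL_INT *ia, MKL_INT *ja, double *a,
                       double *S,             /* n22*n22 buffer, filled on rank 0 */
                       MPI_Comm comm)
    {
        void   *pt[64]    = { 0 };
        MKL_INT iparm[64] = { 0 };
        MKL_INT maxfct = 1, mnum = 1, mtype = 11, msglvl = 1, nrhs = 0, error = 0;
        MKL_INT phase;
        int     fcomm = MPI_Comm_c2f(comm);
        double  ddum = 0.0;

        MKL_INT *perm = (MKL_INT *)calloc((size_t)n, sizeof(MKL_INT));
        for (MKL_INT i = n - n22; i < n; i++)
            perm[i] = 1;                 /* rows/columns belonging to the A22 block */

        iparm[0]  = 1;                   /* do not use all defaults                 */
        iparm[34] = 1;                   /* zero-based indexing                     */
        iparm[35] = 1;                   /* request the Schur complement of the
                                            perm-marked rows (our understanding)    */

        phase = 12;                      /* analysis + factorization produce S      */
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                              a, ia, ja, perm, &nrhs, iparm, &msglvl,
                              &ddum, S, &fcomm, &error);
        /* At this point S holds the dense Schur block -- but only on MPI rank 0.   */

        phase = -1;                      /* release internal solver memory          */
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                              a, ia, ja, perm, &nrhs, iparm, &msglvl,
                              &ddum, &ddum, &fcomm, &error);
        free(perm);
    }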
Our problem is that S is too large to store on a single compute node, so we would like it to be distributed across all compute nodes. We can distribute the input matrix A using the current interface, but the Schur complement S is always returned on MPI process 0. Is there an option in MKL 2018 Update 2 to return S in distributed form?
If there is no such option, we know we can work around the problem by partitioning A22 further and computing the Schur complement of each section. However, this would appear to require a lot of repeated work, since every partial Schur complement involves the same A11 block. Is there some way to save the intermediate calculations involving A11 (in particular its factorization) so that they do not need to be repeated for each subsequent Schur calculation? A sketch of the kind of reuse we have in mind follows.
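To make that concrete, the sketch below (C; the function name, argument layout, and the assumption that A12 is held as a dense block are ours, not MKL's) factors A11 once with PARDISO on a single node and then reuses the stored factorization for every column block of A12, forming each block of S = A22 - A21 * A11^(-1) * A12 with a sparse-dense product:

    #include <stdlib.h>
    #include "mkl.h"

    /* Workaround sketch: build S column-block by column-block, reusing one
     * factorization of A11.  A11 is n1 x n1 in zero-based CSR (real,
     * nonsymmetric, mtype = 11); A21 is an n2 x n1 sparse handle created
     * elsewhere with mkl_sparse_d_create_csr; A12 and A22 are dense,
     * column-major; nb columns are processed at a time; S overwrites A22.
     * Error handling omitted. */
    void schur_by_blocks(MKL_INT n1, MKL_INT n2, MKL_INT nb,
                         MKL_INT *ia11, MKL_INT *ja11, double *a11,
                         sparse_matrix_t A21,
                         double *A12,        /* n1 x n2, column-major          */
                         double *A22)        /* n2 x n2, column-major -> S     */
    {
        void   *pt[64]    = { 0 };
        MKL_INT iparm[64] = { 0 };
        MKL_INT maxfct = 1, mnum = 1, mtype = 11, msglvl = 0, error = 0, phase;
        MKL_INT idum = 0, nrhs = 0;
        double  ddum = 0.0;
        iparm[0] = 1;  iparm[34] = 1;    /* user settings, zero-based indexing */

        /* Factor A11 once (analysis + numerical factorization). */
        phase = 12;
        pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n1, a11, ia11, ja11,
                &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);

        double *Y = (double *)malloc((size_t)n1 * nb * sizeof(double));
        struct matrix_descr dsc = { SPARSE_MATRIX_TYPE_GENERAL };

        for (MKL_INT j0 = 0; j0 < n2; j0 += nb) {
            MKL_INT cols = (j0 + nb <= n2) ? nb : n2 - j0;

            /* Y = A11^(-1) * A12(:, j0:j0+cols), reusing the stored factors. */
            phase = 33;
            pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n1, a11, ia11, ja11,
                    &idum, &cols, iparm, &msglvl, A12 + (size_t)j0 * n1, Y, &error);

            /* S(:, j0:j0+cols) = A22(:, j0:j0+cols) - A21 * Y. */
            mkl_sparse_d_mm(SPARSE_OPERATION_NON_TRANSPOSE, -1.0, A21, dsc,
                            SPARSE_LAYOUT_COLUMN_MAJOR, Y, cols, n1,
                            1.0, A22 + (size_t)j0 * n2, n2);
        }

        phase = -1;                      /* release the A11 factorization      */
        pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n1, a11, ia11, ja11,
                &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
        free(Y);
    }

In this layout the column blocks could be assigned to different MPI ranks so that each rank assembles its own slice of S, but every rank would then need its own copy of the A11 factorization, which is exactly the duplication we are hoping to avoid.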
Thank you,
Laura