Solving a linear system
The `.cfg` configuration files expose a wide range of options for solving linear and non-linear systems.
We now consider the following example:
include::example$mylaplacian.cpp[]
To execute this example:
# sequential
./feelpp_tut_laplacian
# parallel on 4 cores
mpirun -np 4 ./feelpp_tut_laplacian
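The solver options discussed below are usually gathered in a configuration file; assuming the standard Feel++ `--config-file` option and a hypothetical file name `laplacian.cfg`, a run might look like the following sketch:

```shell
# sequential run reading solver options from a .cfg file
# (laplacian.cfg is a hypothetical example file name)
./feelpp_tut_laplacian --config-file=laplacian.cfg

# the same options can also be passed directly on the command line
mpirun -np 4 ./feelpp_tut_laplacian --ksp-type=preonly --pc-type=cholesky
```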
As described in section
1. Direct solver
Cholesky and LU factorisations are available. However, the parallel implementation depends on the availability of MUMPS. The configuration is very simple:
# no iterative solver
ksp-type=preonly
# factorisation used as the preconditioner
pc-type=cholesky
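For comparison, a complete direct-solver configuration using an LU factorisation could look like the following sketch (delegating to MUMPS is an assumption that only holds when PETSc was built with MUMPS support):

```
# no iterative solver
ksp-type=preonly
# LU factorisation instead of Cholesky (no SPD requirement)
pc-type=lu
# delegate the factorisation to MUMPS for parallel runs
pc-factor-mat-solver-package=mumps
```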
The PETSc backend allows one to choose between different packages to compute the factorization.
|===
| Package | Description | Parallel

| petsc
| PETSc own implementation
| yes

| mumps
| MUltifrontal Massively Parallel sparse direct Solver
| yes

| umfpack
| Unsymmetric MultiFrontal package
| no

| pastix
| Parallel Sparse matriX package
| yes
|===
To choose between these factorization packages:
# choose mumps
pc-factor-mat-solver-package=mumps
# choose umfpack (sequential)
pc-factor-mat-solver-package=umfpack
In order to perform a Cholesky-type factorisation, the underlying matrix must be declared symmetric positive definite (SPD).
// matrix
auto A = backend->newMatrix(_test=...,_trial=...,_properties=SPD);
// bilinear form
auto a = form2( _test=..., _trial=..., _properties=SPD );