Finite element and virtual element library with support for polytopic elements.
In order to build PTEMS, the following is required:

- CMake
- A CMake-supported build tool, such as Make, Ninja, or Visual Studio [MSVC]
- A C++ compiler with support for `shared_ptr` to arrays

PTEMS is currently supported on Linux, is expected to work on macOS (but is untested there), and is partially supported (without MUMPS) on Windows under 32-bit builds ONLY, cf. issue #2. Windows support under the Windows Subsystem for Linux, MSYS, or similar is untested.
The following instructions document how to build PTEMS with default options using the currently configured or default CMake generator and C++ compiler for the platform. For details on how to set the generator/compiler to use, see the CMake documentation.
We strongly suggest reading the rest of the build instructions before using the commands in this section.
Note, these instructions are based on the use of the standard command line in Linux/macOS, or a suitable developer command prompt on Windows (such as the Visual Studio Developer Command Prompt), and require that the `cmake` command is available on the search `PATH`. Alternatively, the project folder can be used from Visual Studio Code, provided the C++ and CMake Tools extensions are installed; then, you can configure and build using Visual Studio Code's command palette, see Get started with CMake Tools.
Change to the directory containing the PTEMS source and then configure PTEMS with the following command:
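A typical out-of-source configure step looks like the following (the build directory name `build` is our choice here, not mandated by PTEMS):

```shell
cmake -B build
```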
See Build Options below for more detailed options to configure PTEMS.
Then build PTEMS by executing the following command:
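Assuming the build directory is named `build` as in the configure step:

```shell
cmake --build build
```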
Note that the build system can perform multiple 'jobs' at once (by running multiple compile processes). CMake and the build system will choose a default for how many jobs to run at once, which may be one or potentially the number of processors on the machine. However, you can specify the number of simultaneous jobs to run with the `-j N` argument, where `N` is the number of jobs to run.
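For example, to build with four parallel jobs (again assuming a build directory named `build`):

```shell
cmake --build build -j 4
```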
Note that running the build command multiple times only recompiles changed files (there should be none!). If you want to force a rebuild of the PTEMS library you can specify `--clean-first`:
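Assuming the build directory is named `build`:

```shell
cmake --build build --clean-first
```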
This is NOT necessary the first time the build is run.
You can just clean the build using the `clean` target:
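Assuming the build directory is named `build`:

```shell
cmake --build build --target clean
```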
The PTEMS library has support for optional external libraries. The basic build instructions above will attempt to make a sensible choice depending on the presence of these libraries (see the details for each library below for what the sensible choice is).
PTEMS contains experimental support for parallel processing using any available C MPI library. PTEMS will by default be built without MPI, but it can be forced to be enabled (see Build Options).
The default linear solver is MUMPS (MUltifrontal Massively Parallel sparse direct Solver). If MUMPS is detected by the CMake script then PTEMS will be built with MUMPS support; otherwise the default linear solver falls back to a built-in GMRES with an ILU0 preconditioner (optimality of this solver is not guaranteed).
MUMPS is usually distributed as multiple binaries, with each binary being a combination of the data type of the matrix (`float`, `double`, `complex float`, and `complex double`) and whether the solver is a parallel or sequential solver. Multiple binaries of different data types can be included in PTEMS, but it is not possible to combine parallel and sequential solvers. As such, PTEMS will use the parallel versions if MPI is enabled and the sequential versions otherwise.
Matrix Data Type | Sequential Library | Parallel Library | Header File | Build Option Name |
---|---|---|---|---|
float | smumps_seq | smumps | smumps_c.h | PTEMS_USE_MUMPS |
double | dmumps_seq | dmumps | dmumps_c.h | PTEMS_USE_MUMPS_F |
complex float | cmumps_seq | cmumps | cmumps_c.h | PTEMS_USE_MUMPS_COMPLEX |
complex double | zmumps_seq | zmumps | zmumps_c.h | PTEMS_USE_MUMPS_COMPLEX_F |
The `double` and `complex double` variants installed in standard search locations should be found by default. The `float` and `complex float` variants are not included by default. Setting the specified build option in the above table to `ON`, `OFF`, or `DEFAULT` (see Build Options) enables, disables, or enables only if the library is found, respectively, the matching data type. See Build Options for how to aid the detection.
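For example, to additionally enable the single-precision variant at configure time (assuming a build directory named `build`):

```shell
cmake -B build -DPTEMS_USE_MUMPS_F=ON
```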
MUMPS can be built from source but is also available for some Linux distributions via their package manager:
Debian/Ubuntu : `libmumps-seq-dev` and `libmumps-dev` for the sequential and parallel versions of all data types, respectively,
Red Hat/Fedora : MUMPS-devel
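On Debian/Ubuntu, for example, the sequential development package can be installed with (assuming `apt-get` and sudo access):

```shell
sudo apt-get install libmumps-seq-dev
```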
The default mesh agglomeration technique uses METIS. If METIS is detected by the CMake script then PTEMS will be built with METIS support, otherwise without. The opposite choice can be forced (see Build Options).
METIS can be built from source but is also available for some Linux distributions via their package manager:
Debian/Ubuntu : libmetis-dev
Red Hat/Fedora : metis-devel
A library installed in standard search locations should be found by default. Similarly, if the CMake project for METIS is added before the project for PTEMS in a parent CMake build it should be detected. See Build Options for how to aid the detection.
Several options can be passed to the initial configuration step using the `-D` command line option, where `NAME` is the name of the option in the below table and `VALUE` is one of the values listed for the option. If not specified, the default value in the below table is used.
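The general form is `cmake -B build -DNAME=VALUE` (with `build` being an assumed build directory name); for example, using option names from the table below:

```shell
cmake -B build -DPTEMS_USE_MPI=ON -DPTEMS_BUILD_EXAMPLES=OFF
```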
Option Name | Values | Default | Description |
---|---|---|---|
PTEMS_USE_MPI | ON , OFF | OFF | Build with MPI support |
PTEMS_USE_TRIANGLE | ON , OFF | ON | Build with Triangle library, see Included Libraries |
PTEMS_TRIANGLE_INCDIR | Folder path | "" | Path to directory containing triangle.h from Triangle, see Included Libraries |
PTEMS_USE_TETGEN | ON , OFF | ON | Build with TetGen library, see Included Libraries |
PTEMS_TETGEN_INCDIR | Folder path | "" | Path to directory containing tetgen.h from TetGen, see Included Libraries |
PTEMS_USE_METIS | DEFAULT , ON , OFF | DEFAULT | Build with METIS support for mesh agglomeration |
PTEMS_USE_MUMPS | DEFAULT , ON , OFF | DEFAULT | Build with support for MUMPS with double precision arithmetic |
PTEMS_USE_MUMPS_F | DEFAULT , ON , OFF | OFF | Build with support for MUMPS with single precision arithmetic |
PTEMS_USE_MUMPS_COMPLEX | DEFAULT , ON , OFF | DEFAULT | Build with support for MUMPS with double precision arithmetic for complex numbers |
PTEMS_USE_MUMPS_COMPLEX_F | DEFAULT , ON , OFF | OFF | Build with support for MUMPS with single precision arithmetic for complex numbers |
PTEMS_BUILD_TESTS | ON , OFF | ON | Build tests |
PTEMS_BUILD_EXAMPLES | ON , OFF | ON | Build the example programs in example |
PTEMS_LIBRARY_PATHS | List [List] | "" | Specify extra search directories for finding libraries |
PTEMS_INCLUDE_PATHS | List [List] | "" | Specify extra include directories for header files |
`ON` and `OFF` enable or disable the relevant option. `DEFAULT` indicates it assumes `ON` if the required library is found and `OFF` otherwise.
Setting `PTEMS_USE_TRIANGLE` or `PTEMS_USE_TETGEN` to `OFF`, or `PTEMS_TRIANGLE_INCDIR` or `PTEMS_TETGEN_INCDIR` to a valid path, automatically sets `PTEMS_BUILD_TESTS` and `PTEMS_BUILD_EXAMPLES` to `OFF` (see Included Libraries).
CMake caches the options provided. To reconfigure PTEMS with different options, it is necessary to clear each option out of the cache by passing `-U NAME` before the corresponding `-D` option.
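For example, to switch a previously configured option off (assuming the `build` directory from earlier):

```shell
cmake -B build -U PTEMS_USE_MPI -DPTEMS_USE_MPI=OFF
```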
The following external libraries are included in the PTEMS repository at fixed versions under the terms of their licences:
Library | Version | License |
---|---|---|
Triangle | 1.6 | see License |
TetGen | 1.4.3 [TetGen] | MIT with Commercial Restriction |
Note that the included versions of these libraries contain only a subset of the files provided by a full distribution (only the files necessary to build as a library), plus an additional CMakeLists.txt
file for use in building with CMake.
TetGen is not currently used (3D meshes not yet implemented).
Instead of using the built-in versions of Triangle or TetGen, externally built versions can be used by specifying the folder containing the include files (`triangle.h` or `tetgen.h`) using the options `PTEMS_TRIANGLE_INCDIR` and `PTEMS_TETGEN_INCDIR`, respectively. When linking a program using PTEMS, ensure that the compiled objects or library for Triangle/TetGen are linked manually.
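For example, to point PTEMS at an externally built Triangle (the path here is purely illustrative):

```shell
cmake -B build -DPTEMS_TRIANGLE_INCDIR=/opt/triangle/include
```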
Triangle and TetGen can be disabled completely by specifying `PTEMS_USE_TRIANGLE=NO` or `PTEMS_USE_TETGEN=NO`, respectively.
If Triangle or TetGen are disabled, certain mesh construction functions (such as `CreateUnstructuredMesh`) will be unavailable.
Setting `PTEMS_USE_TRIANGLE` or `PTEMS_USE_TETGEN` to `OFF`, or `PTEMS_TRIANGLE_INCDIR` or `PTEMS_TETGEN_INCDIR` to a valid path, automatically sets `PTEMS_BUILD_TESTS` and `PTEMS_BUILD_EXAMPLES` to `OFF`, as they cannot automatically find the binary objects.
The usage of PTEMS is currently fairly low level as a C++ library. High-level interfaces are in progress. The following demonstrates how to use PTEMS for a simple Poisson problem with both a discontinuous Galerkin (DG) finite element method and a conforming virtual element method (VEM). More detailed examples can be found in the `examples` directory.
As a C++ library, the program using PTEMS should be set up to find the PTEMS header and built shared library files.
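How a consuming project locates PTEMS depends on how it was installed. As a sketch only (the package and target names here are assumptions for illustration, not taken from the PTEMS documentation), a CMake-based consumer might look like:

```cmake
cmake_minimum_required(VERSION 3.15)
project(poisson_example CXX)

# Hypothetical package/target names -- check the PTEMS install for the real ones.
find_package(PTEMS REQUIRED)

add_executable(poisson main.cpp)
target_link_libraries(poisson PRIVATE PTEMS::PTEMS)
```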
In the source file, first include the key header files
and a header file for the discrete space type; e.g.,
for DG or VEM, respectively.
Within the main function, the following should be performed.
Create the domain using `ptems::DomainDefinition2D`; for example, to define the unit square domain \(\Omega=(0,1)^2\) use
Create a parallel processing communication object (this is necessary even in single thread mode; note that MPI is still experimental):
Construct a mesh from the domain:
Set initial order of approximation/polynomial degree on each element:
Create functions/lambdas to define the right-hand side forcing functions and Dirichlet boundary conditions (in this case, simply zero):
Construct a DG space
or a VEM space
Note, the VEM space requires the boundary condition at construction, unlike DG, as it strongly imposes boundary conditions.
We then iterate for a prescribed number of iterations, solving the method on a sequence of uniformly refined meshes, using the following:
Here, `SolveSystem` is a function we need to define; see Solve System.
Within the loop body we can also compute errors, a posteriori error bounds, (see examples for details), and output the solution to VTK files:
Alternatively, instead of uniform mesh refinement we can perform uniform polynomial refinement:
If we have a variable `errIndicators` of type `std::vector<double>` (or any STL-compatible sequence) containing the a posteriori error indicators, we can perform h- or p-adaptive mesh refinement:
Or even hp-adaptive:
Note that the `hpRefinement` variable should be created once, outside the loop; additionally, the following headers may be needed for adaptive refinement:
We have deferred until now the discussion of the most important routine, `SolveSystem`. This contains the bulk of the program, and should eventually be replaced with high-level alternatives (although the low-level option will remain).
The general signature of the function is as follows:
Note, for simplicity, we pass the space type (DG or VEM) and the boundary condition as template arguments so we don't have to know the exact type. From the space type we can deduce the type of the discrete function (which is returned) using the `ptems::DiscreteFunctionType` helper struct:
Similarly, we can deduce it from the variable `Vh` by:
Within the body of the code, we first initialise an empty variable of the specified type (set to zero):
It is important to pass the `0` argument for VEM (even though it is an optional argument), as this forces the constructed function to impose any strongly imposed boundary conditions (not passing `0` does not impose the boundary conditions, which may be desirable for iterative nonlinear solvers).
It is then necessary to create the sparsity pattern for the matrix used by the solver. This is the first candidate for a function to move into PTEMS. Note, that the sparsity pattern is different depending on the method type.
`globalDofMap` will map the index of each global degree of freedom to an index in the actual matrix, or to `sizeFullSystem` to denote that the degree of freedom was removed because it represents a strongly imposed boundary condition degree of freedom. It is now possible to construct the sparsity pattern:

Now that we have the sparsity pattern, we can construct the linear solver and right-hand side vector:
The last argument is designed for use in distributed construction when using MPI (it should be ignored for a single process, but we set it to `false` anyway).
We now have a matrix and right-hand side containing zeros, so we need to iterate over the elements and populate the matrix and right-hand side using numerical integration. In order to do this we first initialise a cache object for caching the quadrature points on the elements and weights, as PTEMS dynamically generates these:
Additionally, for DG we need edge integration, so we need a cache for edge quadrature points:
We now loop through each element and perform numerical integration of \((\nabla \phi_j, \nabla \phi_i)_K\) and \((f,\phi_i)_K\) for each pair of degrees of freedom with indices `i` and `j`. The following is the code for VEM:
For DG, we simply replace the lines
with
and similarly for `dofJ`; additionally, remove the strongly imposed boundary conditions case
This constructs the matrix and right-hand side for the volume integrations, but both methods require additional terms. The DG method (with `theta` set to -1, 0, or 1 for the symmetric, incomplete, and nonsymmetric variants, respectively) requires face integral terms. Inside the loop over elements we add the following, where `interiorPenalty` is set to the constant to use for the interior penalty term:

Now, we just need to solve and return the result. This is slightly different for the two methods again, due to the fact that DG gives the full solution (no strongly imposed boundaries), whereas VEM doesn't give the boundary dofs:
The above is for a linear problem. For a nonlinear problem a damped Newton solver is provided, which requires additional steps.
The nonlinear solver is designed to implement a damped Newton solver; however, it can easily be used for any damped or undamped Newton-like solver of the following form:
\[u_{n+1} = u_{n} - \varepsilon A^{-1}R(u_n)\]
where \(\varepsilon\in(0,1]\) is the damping parameter (which the algorithm may alter), \(R\) is the residual operator used to detect convergence, and \(A\) is a linear operator.
To solve a nonlinear system we change the SolveSystem
implementation. The key ingredients are as follows:
- The arguments to the `DampingParameters` constructor are the initial damping, the increment, and the minimum; therefore, passing a zero increment and initial `==` minimum removes damping. Alternatively, for no damping and a damping parameter of 1, simply pass a default-constructed object to `NonlinearSolver`.
- Create a function to compute the residual \(R\). This function takes the solution from the previous step (`phi`) and a vector (of length equal to the system size minus the strongly imposed boundary) to populate with the residual vector. The contents of this function are similar to the linear solver case, except only one set of (test) basis functions is needed.
- Create a function to solve the linear system \(\langle A(u),v\rangle = \langle R(\phi), v \rangle\) (e.g., the Jacobian for Newton):
This function takes the solution from the previous step (`phi`) and the residual returned by the previous function, and should return the solution of the linearised system at this step. This function is, therefore, almost identical to the linear case.
When constructing the solution for the linear step, it is important not to pass the second argument to the constructor in the case of strongly imposed boundary conditions, as the update must have zero boundary conditions:
`SolveWithStatus` can be used instead, where the tolerance and the number of iterations are passed by mutable reference and are updated with the achieved tolerance and actual number of iterations, respectively.

Below are listed some known issues and experimental features:
- The Cartesian product space (`ptems::DiscreteCartesianProductSpace`) is broken and should not be used (issue #1)
- Experimental 64-bit Windows support exists on the `win64` branch

The following describes the list of future features, in decreasing order of importance, to be implemented:
- Space-time methods (a `spacetime` branch exists)

The PTEMS project is licensed under the BSD 3-Clause license, see License.
The libraries used by PTEMS may use other licenses. Notably, the two libraries included with the PTEMS source (Triangle and TetGen) have licenses which are ONLY valid for private, research, or educational use; i.e., NOT FOR COMMERCIAL USE (unless permission is obtained from the respective copyright owners). See LICENSE for more details.
Note, that Triangle and TetGen can be removed from PTEMS (or replaced with different versions), see Included Libraries.