FEMSparse.jl

FEMSparse package contains sparse matrix operations specifically designed for finite element simulations. In particular, we aim to provide support for sparse matrices which are fast to fill with dense local element matrices. In the literature this is called the finite element assembly procedure, where the local degrees of freedom of each element are connected to the global degrees of freedom of the model. The main goal is to utilize multithreading on multicore computers to assemble global FEM matrices efficiently. Typically the procedure looks something like the following:

N = 10                       # total number of degrees of freedom
K = zeros(N, N)              # global matrix, stored dense
Ke = [1.0 -1.0; -1.0 1.0]    # local element matrix
dofs1 = [4, 5]               # global rows of the element dofs
dofs2 = [4, 5]               # global columns of the element dofs
K[dofs1, dofs2] += Ke        # scatter-add the local matrix into the global one

Performance test

To demonstrate the performance of the package, a one-dimensional Poisson problem is assembled (see examples/poisson1d.jl) using three different strategies (a sketch of the common element loop follows the list):

  1. assembling into a dense matrix, as shown above
  2. assembling into a sparse matrix in CSC format
  3. assembling into a sparse matrix in COO format
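All three strategies run essentially the same loop over a chain of 1D elements; only the global container and the innermost scatter-add differ. A minimal sketch of such a loop, with an assumed element count and unit element length (not the exact code from examples/poisson1d.jl):

nel = 10                       # number of elements (assumed for illustration)
Ke = [1.0 -1.0; -1.0 1.0]      # local stiffness of a linear 1D Poisson element of unit length
for e in 1:nel
    dofs = [e, e + 1]          # element e couples global dofs e and e+1
    # strategy-specific step: scatter-add Ke into the global matrix at (dofs, dofs)
end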

Assembling to a dense matrix

[ Info: Dense matrix:
 2.298 s (30004 allocations: 6.71 GiB)

A dense matrix is not suitable for global (sparse) assembly due to its massive memory requirement: storing all N² entries takes 8N² bytes for Float64, so the 6.71 GiB allocated above is consistent with a model of roughly 30,000 degrees of freedom.

Assembling to the sparse matrix format CSC

Naive attempt

[ Info: Sparse matrix (CSC format):
 15.536 s (568673 allocations: 26.97 GiB)

SparseMatrixCSC is not suitable for (naive) assembly because changing the sparsity pattern of a CSC matrix is very expensive.
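The naive approach indexes straight into an initially empty sparse matrix, so every previously unstored entry must be inserted into the compressed storage. A minimal sketch of this anti-pattern (illustrative, not the exact benchmark code):

using SparseArrays

N = 10
K = spzeros(N, N)             # CSC matrix with no stored entries
Ke = [1.0 -1.0; -1.0 1.0]
dofs = [4, 5]
# Every new nonzero forces the rowval/nzval arrays to grow and shift,
# which is what makes naive CSC assembly so expensive.
K[dofs, dofs] += Ke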

Existing sparsity pattern

However, if a "sparsity pattern" already exists (a sparse matrix where the locations of all nonzeros have already been allocated), it is possible to assemble directly into it efficiently.

For example,

julia> using SparseArrays

julia> K = sparse(Float64[1 0 1 1;
                          0 1 0 1;
                          1 0 1 0;
                          1 1 0 1]);

julia> fill!(K, 0.0);

julia> K.colptr'
1×5 LinearAlgebra.Adjoint{Int64,Array{Int64,1}}:
 1  4  6  8  11

julia> K.rowval'
1×10 LinearAlgebra.Adjoint{Int64,Array{Int64,1}}:
 1  3  4  2  4  1  3  1  2  4

Note that fill!(K, 0.0) zeroed the stored values but kept the stored pattern. Assembling into this sparsity pattern is now done by:

using FEMSparse

dofs1 = [1, 3]
dofs2 = [2, 4]
dofs3 = [1, 4]
Ke1 = ones(2, 2)
Ke2 = ones(2, 2)
Ke3 = ones(2, 2)
assembler = FEMSparse.start_assemble(K)      # wraps K for fast repeated assembly
for (dofs, Ke) in zip([dofs1, dofs2, dofs3], [Ke1, Ke2, Ke3])
    FEMSparse.assemble_local_matrix!(assembler, dofs, Ke)  # add Ke at (dofs, dofs)
end

After the loop, the content of K (here shown as a dense matrix for clarity, e.g. via Matrix(K)) is:

4×4 Array{Float64,2}:
 2.0  0.0  1.0  1.0
 0.0  1.0  0.0  1.0
 1.0  0.0  1.0  0.0
 1.0  1.0  0.0  2.0


Assembling to the sparse matrix format COO

[ Info: Sparse matrix (COO format):
 5.854 ms (73 allocations: 9.89 MiB)

SparseMatrixCOO is a suitable sparse format for assembling global matrices, yet it still has some shortcomings. In practice, solving a linear system requires converting the COO matrix to CSC format, and that conversion has a cost. It is therefore beneficial to do the first assembly in COO format, store the resulting sparsity pattern, and switch to direct CSC assembly for subsequent assemblies.
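The triplet mechanism behind COO assembly can be illustrated with base Julia alone (FEMSparse provides its own SparseMatrixCOO type; this sketch, with an assumed 1D mesh, shows only the general idea):

using SparseArrays

N = 10
Is, Js, Vs = Int[], Int[], Float64[]   # COO triplets: row index, column index, value
Ke = [1.0 -1.0; -1.0 1.0]
for e in 1:N-1                         # 1D mesh: element e couples dofs e and e+1
    dofs = [e, e + 1]
    for j in 1:2, i in 1:2
        push!(Is, dofs[i]); push!(Js, dofs[j]); push!(Vs, Ke[i, j])
    end
end
K = sparse(Is, Js, Vs, N, N)           # conversion to CSC sums duplicate entries

Appending triplets never reallocates existing structure, and the one-off sparse() call yields a CSC matrix whose pattern can then be reused with FEMSparse.start_assemble on later assemblies.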
