# [NDTensors] Roadmap to removing `TensorStorage` types #1250

Status: Open
Here is a roadmap to removing `TensorStorage` types (`EmptyStorage`, `Dense`, `Diag`, `BlockSparse`, `DiagBlockSparse`, `Combiner`) in favor of more traditional `AbstractArray` types (`UnallocatedZeros`, `Array`, `DiagonalArray`, `BlockSparseArray`, `CombinerArray`), as well as removing `Tensor` in favor of `NamedDimsArray`.
## NDTensors reorganization
Followup to the `BlockSparseArrays` rewrite in #1272:

- Move some functionality, such as `TensorAlgebra.contract`, to `SparseArrayInterface`.
- Clean up tensor algebra code in `BlockSparseArray`, making use of broadcasting and mapping functionality defined in `SparseArrayInterface`.
Followup to `SparseArrayInterface`/`SparseArrayDOKs` defined in #1270:

- `TensorAlgebra` overloads for `SparseArrayInterface`/`SparseArrayDOK`, such as `contract`.
- Use `SparseArrayDOK` as a backend for `BlockSparseArray` (maybe call it `BlockSparseArrayDOK`?).
- Consider making a `BlockSparseArrayInterface` package to define an interface and generic functionality for block sparse arrays, analogous to `SparseArrayInterface`. (EDIT: Currently lives inside the `BlockSparseArray` library.)
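To illustrate the dictionary-of-keys (DOK) idea behind `SparseArrayDOK`, here is a minimal plain-Julia sketch. It is only an illustration of the storage format; the type name, fields, and constructors are hypothetical and are not the actual NDTensors implementation:

```julia
# Minimal dictionary-of-keys (DOK) sparse array sketch.
# Stored values live in a Dict keyed by CartesianIndex; every
# other entry reads as an implicit zero.
struct SparseArrayDOK{T,N} <: AbstractArray{T,N}
  storage::Dict{CartesianIndex{N},T}
  size::NTuple{N,Int}
end

SparseArrayDOK{T}(size::Vararg{Int,N}) where {T,N} =
  SparseArrayDOK{T,N}(Dict{CartesianIndex{N},T}(), size)

Base.size(a::SparseArrayDOK) = a.size
Base.getindex(a::SparseArrayDOK{T,N}, I::Vararg{Int,N}) where {T,N} =
  get(a.storage, CartesianIndex(I), zero(T))
function Base.setindex!(a::SparseArrayDOK{T,N}, v, I::Vararg{Int,N}) where {T,N}
  # Only store explicit nonzeros; zeros stay implicit.
  iszero(v) || (a.storage[CartesianIndex(I)] = v)
  return a
end

a = SparseArrayDOK{Float64}(3, 3)
a[1, 2] = 5.0
```

Each stored entry costs one `Dict` slot, so insertion is O(1) amortized, which is why DOK is convenient as a construction and mutation format, to be distinguished from compressed formats like CSR/CSC.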
Followup to the reorganization started in #1268:

- Move low-rank `qr`, `eigen`, and `svd` definitions to the `NDTensors.RankFactorization` module. Currently they are defined in `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`; those should be wrappers around the ones in `NDTensors.RankFactorization`.
- Split off the `SparseArray` type into an `NDTensors.SparseArrays` module (maybe come up with a different name like `NDSparseArrays`, `GenericSparseArrays`, `AbstractSparseArrays`, etc.). Currently it is in `NDTensors.BlockSparseArrays`. Also rename it to `SparseArrayDOK` (for dictionary-of-keys) to distinguish it from other formats.
- Clean up `NDTensors/src/TensorAlgebra/src/fusedims.jl`.
- Remove `NDTensors.TensorAlgebra.BipartitionedPermutation` and figure out how to disambiguate between the partitioned permutation and named dimension interfaces. How much dimension name logic should go in `NDTensors.TensorAlgebra` vs. `NDTensors.NamedDimsArrays`?
- Create an `NDTensors.CombinerArrays` module. Move the `Combiner` and `CombinerArray` type definitions there.
- Create an `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt` extension. Move the `Combiner` `contract` definition from `ITensorsNamedDimsArraysExt/src/combiner.jl` to `CombinerArraysTensorAlgebraExt` (which is just a simple wrapper around `TensorAlgebra.fusedims` and `TensorAlgebra.splitdims`).
- Dispatch ITensors.jl definitions for `qr`, `eigen`, `svd`, `factorize`, `nullspace`, etc. on `typeof(tensor(::ITensor))` so that for an `ITensor` wrapping a `NamedDimsArray` we can fully rewrite those functions using `NamedDimsArrays` and `TensorAlgebra`, where the matricization logic can be handled more elegantly with `fusedims`.
- Get all the same functionality working for an `ITensor` wrapping a `NamedDimsArray` wrapping a `BlockSparseArray`.
- Make sure all `NamedDimsArrays`-based code works on GPU.
- Make `Index` a subtype of `AbstractNamedInt` (or maybe `AbstractNamedUnitRange`?).
- Make `ITensor` a subtype of `AbstractNamedDimsArray`.
- Deprecate from `NDTensors.RankFactorization`: `Spectrum`, `eigs`, `entropy`, `truncerror`.
- Decide if `size` and `axes` of `AbstractNamedDimsArray` (including the `ITensor` type) should output named sizes and ranges.
- Define an `ImmutableArrays` submodule and have the `ITensor` type default to wrapping `ImmutableArray` data, with copy-on-write semantics. Also come up with an abstraction for arrays that can manage their own memory, such as `AbstractCOWArray` (for copy-on-write) or `AbstractMemoryManagedArray`, as well as `NamedDimsArray` versions, and make `ITensor` a subtype of `AbstractMemoryManagedNamedDimsArray` or something like that (perhaps a good use case for an `isnamed` trait to opt in to automatic permutation semantics for indexing, contraction, etc.).
- Use StaticPermutations.jl for dimension permutation logic in `TensorAlgebra` and `NamedDimsArrays`.
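The copy-on-write semantics mentioned in the `ImmutableArrays` item can be sketched in plain Julia. All names here are hypothetical; a real design would presumably sit behind an `AbstractCOWArray`-style abstract type:

```julia
# Copy-on-write array sketch: reads share the parent's data,
# and the first write copies it so other holders are unaffected.
mutable struct COWArray{T,N} <: AbstractArray{T,N}
  parent::Array{T,N}
  owned::Bool  # do we hold a private copy yet?
end

COWArray(a::Array) = COWArray(a, false)

Base.size(a::COWArray) = size(a.parent)
Base.getindex(a::COWArray{T,N}, I::Vararg{Int,N}) where {T,N} = a.parent[I...]
function Base.setindex!(a::COWArray{T,N}, v, I::Vararg{Int,N}) where {T,N}
  if !a.owned
    a.parent = copy(a.parent)  # lazy copy on first mutation
    a.owned = true
  end
  a.parent[I...] = v
  return a
end

x = [1.0 2.0; 3.0 4.0]
a = COWArray(x)
b = COWArray(x)  # shares the same data until written
b[1, 1] = 10.0   # triggers the copy; `a` and `x` are untouched
```

This is the behavior that makes cheap `ITensor` copies safe: many tensors can alias one buffer, and mutation only pays for a copy when it actually happens.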
## Testing

- Unit tests for `ITensors.ITensorsNamedDimsArraysExt`.
- Run `ITensorsNamedDimsArraysExt` examples in tests.
- Unit tests for the `NDTensors.RankFactorization` module.
- Unit tests for `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`: `fusedims`, `qr`, `eigen`, `svd`.
- Unit tests for `NDTensors.CombinerArrays` and `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt`.
## `EmptyStorage`

- Define `UnallocatedZeros` (in progress in [NDTensors] `UnallocatedArrays` and `UnspecifiedTypes` #1213).
- Use `UnallocatedZeros` as the default data type instead of `EmptyStorage` in ITensor constructors.
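The idea behind `UnallocatedZeros` is an array that knows its element type and shape but allocates no data, analogous to `FillArrays.Zeros`. A plain-Julia sketch of the concept (hypothetical names and fields, not the #1213 implementation):

```julia
# Lazy all-zeros array: O(1) memory regardless of size.
struct UnallocatedZeros{T,N} <: AbstractArray{T,N}
  size::NTuple{N,Int}
end

UnallocatedZeros{T}(size::Vararg{Int,N}) where {T,N} = UnallocatedZeros{T,N}(size)

Base.size(z::UnallocatedZeros) = z.size
# Every entry is an implicit zero; nothing is stored.
Base.getindex(z::UnallocatedZeros{T,N}, I::Vararg{Int,N}) where {T,N} = zero(T)

# Dense memory only materializes when actually needed, e.g. on first write.
allocate(z::UnallocatedZeros{T}) where {T} = zeros(T, size(z)...)

z = UnallocatedZeros{Float64}(1000, 1000)  # no 8 MB allocation happens here
```

This is what makes it a drop-in replacement for `EmptyStorage` in ITensor constructors: the tensor starts out structurally zero and only allocates once elements are set.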
## `Diag`

- Define `DiagonalArray`.
- Tensor contraction, addition, QR, eigendecomposition, SVD.
- Use `DiagonalArray` as the default data type instead of `Diag` in ITensor constructors.
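A `DiagonalArray` generalizes `LinearAlgebra.Diagonal` to N dimensions: only entries whose indices are all equal are stored. A minimal sketch of the idea (hypothetical fields, not the actual NDTensors type):

```julia
# N-dimensional diagonal array: stores only the diagonal vector.
struct DiagonalArray{T,N} <: AbstractArray{T,N}
  diag::Vector{T}
  size::NTuple{N,Int}
end

DiagonalArray(diag::Vector{T}, size::Vararg{Int,N}) where {T,N} =
  DiagonalArray{T,N}(diag, size)

Base.size(d::DiagonalArray) = d.size
function Base.getindex(d::DiagonalArray{T,N}, I::Vararg{Int,N}) where {T,N}
  # Off-diagonal entries are implicit zeros.
  return all(==(I[1]), I) ? d.diag[I[1]] : zero(T)
end

d = DiagonalArray([1.0, 2.0], 2, 2, 2)  # order-3 diagonal tensor
```

Storing only the diagonal makes contraction, addition, and factorizations on diagonal tensors O(length of diagonal) instead of O(full dense size).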
## `UniformDiag`

- Replace with a `DiagonalArray` wrapping an `UnallocatedZeros` type.
## `BlockSparse`

- Define `BlockSparseArray`.
- Tensor contraction, addition, QR, eigendecomposition, SVD.
- Use `BlockSparseArray` as the default data type instead of `BlockSparse` in ITensor QN constructors.
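A `BlockSparseArray` combines blocking with sparsity: each axis is partitioned into blocks, and only some blocks store data (the rest are implicit zero blocks, which is what QN symmetry enforces). A rough plain-Julia sketch for the 2-d case; the real type in #1272 is far more general, and everything here is hypothetical:

```julia
# 2-d block sparse sketch: only nonzero blocks are stored, keyed by
# their block position, like a DOK array whose entries are matrices.
struct BlockSparseMatrix{T} <: AbstractMatrix{T}
  blocks::Dict{Tuple{Int,Int},Matrix{T}}
  rowsizes::Vector{Int}  # block sizes along dimension 1
  colsizes::Vector{Int}  # block sizes along dimension 2
end

Base.size(a::BlockSparseMatrix) = (sum(a.rowsizes), sum(a.colsizes))

function Base.getindex(a::BlockSparseMatrix{T}, i::Int, j::Int) where {T}
  # Locate the block containing (i, j) and the offset within it.
  bi = 1; while i > a.rowsizes[bi]; i -= a.rowsizes[bi]; bi += 1; end
  bj = 1; while j > a.colsizes[bj]; j -= a.colsizes[bj]; bj += 1; end
  block = get(a.blocks, (bi, bj), nothing)
  return isnothing(block) ? zero(T) : block[i, j]
end

# A 5x5 matrix with block sizes (2, 3) on each axis and only the
# two diagonal blocks stored:
a = BlockSparseMatrix(
  Dict((1, 1) => ones(2, 2), (2, 2) => fill(3.0, 3, 3)),
  [2, 3], [2, 3],
)
```

Using a DOK of blocks (rather than a DOK of scalars) is what connects this section to the `SparseArrayDOK`-as-backend item above: the sparsity pattern is over blocks, while each stored block is dense.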
## `DiagBlockSparse`

- Use a `BlockSparseArray` with blocks storing `DiagonalArray`s, and make sure all tensor operations work.
- Replace `DiagBlockSparse` in ITensor QN constructors.
## `Combiner`

- Not sure what to do with this, but much of its functionality will be replaced by the new `fusedims`/`matricize` functionality in `TensorAlgebra`/`BlockSparseArrays`, and also by the new `FusionTensor` type. It will likely be superseded by `CombinerArray`, `FusionTree`, or something like that.
## Simplify ITensor and Tensor constructors

- Make ITensor constructors more uniform by using a style `tensor(storage::AbstractArray, inds::Tuple)`; avoid constructors like `DenseTensor`, `DiagTensor`, `BlockSparseTensor`, etc.
- Use `rand(i, j, k)`, `randn(i, j, k)`, `zeros(i, j, k)`, `fill(1.2, i, j, k)`, `diagonal(i, j, k)`, etc. instead of `randomITensor(i, j, k)`, `ITensor(i, j, k)`, `ITensor(1.2, i, j, k)`, `diagITensor(i, j, k)`. Maybe make them lazy/unallocated by default where appropriate, i.e. use `UnallocatedZeros` for `zeros` and `UnallocatedFill` for `fill`.
- Consider `randn(2, 2)(i, j)` as a shorthand for creating an ITensor with indices `(i, j)` wrapping an array. Alternatively, use `setinds(randn(2, 2), i, j)`.
- Remove automatic conversion to floating point in the ITensor constructor.
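The proposed `setinds`/call-syntax shorthand could look something like the following sketch. Every name here is hypothetical, and note that overloading call syntax on `Base.Array` as shown is type piracy, included only to illustrate how `randn(2, 2)(i, j)` would read:

```julia
# Hypothetical index-attaching shorthand: wrap a plain array
# together with its index labels.
struct IndexedArray{T,N,A<:AbstractArray{T,N},I<:NTuple{N,Any}} <: AbstractArray{T,N}
  data::A
  inds::I
end

Base.size(a::IndexedArray) = size(a.data)
Base.getindex(a::IndexedArray{T,N}, I::Vararg{Int,N}) where {T,N} = a.data[I...]

# `setinds` attaches one label per dimension.
setinds(a::AbstractArray{<:Any,N}, inds::Vararg{Any,N}) where {N} =
  IndexedArray(a, inds)

# The `randn(2, 2)(i, j)` style would read as: build the array,
# then attach indices by calling it (type piracy; illustration only).
(a::Array)(inds...) = setinds(a, inds...)

i, j = :i, :j  # Symbols stand in for ITensor `Index` objects
t = randn(2, 2)(i, j)
```

In the real design the labels would be `Index`/`NamedAxis` objects rather than `Symbol`s, and the wrapper would be the `NamedDimsArray` type discussed below rather than a new struct.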
## Define `TensorAlgebra` submodule

- A `TensorAlgebra` submodule which defines `contract[!][!]`, `mul[!][!]`, `add[!][!]`, `permutedims[!][!]`, `fusedims`/`matricize`, `contract(::Algorithm"matricize", ...)`, truncated QR, eigendecomposition, SVD, etc., with generic fallback implementations for `AbstractArray` and maybe some specialized implementations for `Array`. (Started in [NDTensors] Start `TensorAlgebra` module #1265 and [TensorAlgebra] Matricized QR tensor decomposition #1266.)
- Use ErrorTypes.jl for catching errors and calling fallbacks in failed matrix decompositions.
- Move most matrix factorization logic from ITensors.jl into `TensorAlgebra`.
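The matricized contraction fallback described above can be sketched in a few lines of plain Julia: fuse the free and contracted dimensions into matrices, multiply, and split the result back. This is a simplified stand-in for what a `contract(::Algorithm"matricize", ...)` backend might do (the function name is hypothetical, and a full implementation would first `permutedims` so the contracted dimensions are adjacent):

```julia
# Contract the last `ncon` dims of `a` with the first `ncon` dims of `b`
# by fusing dimensions into matrices and using plain matrix multiplication.
function contract_matricize(a::AbstractArray, b::AbstractArray, ncon::Int)
  afree = size(a)[1:(ndims(a) - ncon)]
  bfree = size(b)[(ncon + 1):end]
  amat = reshape(a, prod(afree), :)                # fusedims: a -> matrix
  bmat = reshape(b, :, prod(bfree))                # fusedims: b -> matrix
  return reshape(amat * bmat, afree..., bfree...)  # splitdims
end

a = randn(2, 3, 4)
b = randn(4, 5)
c = contract_matricize(a, b, 1)  # contract dim 3 of `a` with dim 1 of `b`
```

Because the heavy lifting reduces to one matrix product, this fallback immediately benefits from BLAS (and from GPU matmul once the arrays live on device), which is the point of routing generic `AbstractArray` contraction through matricization.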
## New `Tensor` semantics

- Make `Tensor` fully into a wrapper array type with named dimensions, with "smart indices" for contraction and addition similar to what the `ITensor` type has right now. Rename it to `NamedDimsArray`. (Started in [NDTensors] `NamedDimsArrays` module #1267.)
- Use `struct NamedAxis{Axis,Name} axis::Axis; name::Name; end` as a more generic version of `Index`, where `Index` has a `name` that stores the ID, tags, and prime level. (Started in [NDTensors] `NamedDimsArrays` module #1267.)
- Replace `ITensors.val` for named indexing with dictionaries attached to dimensions/axes, like in AxisKeys.jl, DimensionalData.jl, NamedArrays.jl, etc.
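The name-based "smart index" semantics can be sketched around the `NamedAxis` struct quoted above: operations align dimensions by name instead of by position, so no manual `permutedims` is needed before adding two tensors. A hypothetical sketch, not the #1267 code:

```julia
# Named-dimension wrapper: operations align dimensions by name,
# not by position.
struct NamedAxis{Axis,Name}
  axis::Axis
  name::Name
end

struct NamedDimsArray{T,N,A<:AbstractArray{T,N},Names} <: AbstractArray{T,N}
  data::A
  names::Names
end

Base.size(a::NamedDimsArray) = size(a.data)
Base.getindex(a::NamedDimsArray{T,N}, I::Vararg{Int,N}) where {T,N} = a.data[I...]

# Addition permutes `b` to match `a`'s name order before adding.
function named_add(a::NamedDimsArray, b::NamedDimsArray)
  perm = map(n -> findfirst(==(n), b.names), a.names)
  return NamedDimsArray(a.data + permutedims(b.data, perm), a.names)
end

x = NamedDimsArray(reshape(1:6, 2, 3), (:i, :j))
y = NamedDimsArray(reshape(1:6, 3, 2), (:j, :i))  # same entries, names swapped
z = named_add(x, y)  # automatically permutes `y` to (:i, :j) order
```

Here `Symbol`s stand in for richer names; in the roadmap the name would carry the ID, tags, and prime level that `Index` stores today, with `NamedAxis` pairing that name with an ordinary axis.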