Remove the symCoeff() method of the Tensor module and move the
functionality into a new operator() of the symmetry classes. This makes
the Tensor module completely self-contained, without symmetry support
(even though previously that support was only a forward declaration and
an otherwise harmless trivial templated method), and also removes the
inconsistency with the rest of Eigen w.r.t. the method's naming scheme.
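A hedged usage sketch of the new entry point follows; the header paths, the
exact operator() overloads, and the assignment behaviour are assumptions
about the TensorSymmetry module rather than verified details, and the group
still carries the explicit rank parameter that the next change removes:

  #include <unsupported/Eigen/CXX11/Tensor>
  #include <unsupported/Eigen/CXX11/TensorSymmetry>

  void fill_symmetric()
  {
    Eigen::Tensor<double, 4> t(3, 3, 3, 3);
    t.setZero();

    // symmetric under exchange of indices (0,1) and of indices (2,3)
    Eigen::SGroup<4, Eigen::Symmetry<0, 1>, Eigen::Symmetry<2, 3>> sym;

    // formerly spelled via the Tensor module's symCoeff(); the group's
    // operator() now writes the value to every coefficient related by the group
    sym(t, 0, 1, 2, 3) = 42.0;
  }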
When constructing a symmetry group, make the code automatically detect
the number of indices required from the indices of the group's
generators. Also, allow the symmetry group to be applied to lists of
indices that are larger than the number of indices of the symmetry
group.
Before:

  SGroup<4, Symmetry<0, 1>, Symmetry<2, 3>> group;
  group.apply<SomeOp, int>(std::array<int, 4>{{0, 1, 2, 3}}, 0);

After:

  SGroup<Symmetry<0, 1>, Symmetry<2, 3>> group;
  group.apply<SomeOp, int>(std::array<int, 4>{{0, 1, 2, 3}}, 0);
  group.apply<SomeOp, int>(std::array<int, 5>{{0, 1, 2, 3, 4}}, 0);
This should make the symmetry group easier to use, especially if one
wants to reuse the same symmetry group for different tensors of possibly
different rank.
Static/runtime asserts remain for the case where the index list to which
a symmetry group is applied is too short.
Add a template parameter to gen_numeric_list that acts as a starting
point for the list, e.g. gen_numeric_list<int, 5, 4> will generate a
numeric_list<int, 4, 5, 6, 7, 8>.
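A minimal self-contained sketch of such a generator (illustrative only; the
actual implementation may differ in details):

  #include <cstddef>
  #include <type_traits>

  template<typename T, T... nn>
  struct numeric_list {};

  // gen_numeric_list<T, n, start> expands to numeric_list<T, start, ..., start + n - 1>
  template<typename T, std::size_t n, T start = 0, T... ii>
  struct gen_numeric_list
    : gen_numeric_list<T, n - 1, start, start + static_cast<T>(n) - 1, ii...> {};

  template<typename T, T start, T... ii>
  struct gen_numeric_list<T, 0, start, ii...> { typedef numeric_list<T, ii...> type; };

  // the example from above: five values starting at 4
  static_assert(std::is_same<gen_numeric_list<int, 5, 4>::type,
                             numeric_list<int, 4, 5, 6, 7, 8>>::value,
                "starting point is honored");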
libc++ from 3.4 onwards supports constexpr std::get, but only if
compiled with -std=c++1y. Change the detection so that libc++'s
internals are only used if either -std=c++1y is not specified or the
library is too old, making the whole hack a bit more future-proof.
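A hedged sketch of the resulting preprocessor logic (macro name, version
constant, and internal member name are illustrative assumptions, not the
verbatim Eigen code):

  // constexpr std::get needs both libc++ >= 3.4 and a C++1y dialect;
  // otherwise fall back to poking at libc++'s std::array internals
  #if defined(_LIBCPP_VERSION) && (__cplusplus <= 201103L || _LIBCPP_VERSION < 1101)
    #define EIGEN_STD_GET_ARR_HACK a.__elems_[I]   // assumed internal member name
  #else
    #define EIGEN_STD_GET_ARR_HACK std::get<I>(a)  // standard interface, constexpr here
  #endif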
Added support for additional tensor operations (see the usage sketch below):
* comparison (<, <=, ==, !=, ...)
* selection
* nullary ops such as random or constant generation
* misc unary ops such as log(), exp(), or a user-defined unaryExpr()
Cleaned up the code a little.
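A hedged usage sketch of these operations with the Tensor module's public API
(exact availability of each method may depend on the Eigen version):

  #include <unsupported/Eigen/CXX11/Tensor>

  void coefficient_wise_ops()
  {
    Eigen::Tensor<float, 2> a(3, 4), b(3, 4);
    a.setRandom();
    b.setConstant(0.5f);

    // comparison: a coefficient-wise boolean result
    Eigen::Tensor<bool, 2> mask = a > b;

    // selection: take from a where the condition holds, from b otherwise
    Eigen::Tensor<float, 2> picked = (a > b).select(a, b);

    // nullary ops: random or constant generation
    Eigen::Tensor<float, 2> r = a.random();
    Eigen::Tensor<float, 2> c = a.constant(3.0f);

    // misc unary ops, including a user-defined unaryExpr()
    Eigen::Tensor<float, 2> l = a.log();
    Eigen::Tensor<float, 2> s = a.unaryExpr([](float x) { return 2.0f * x; });
  }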
Added the ability to parallelize the evaluation of a tensor expression over multiple CPU cores.
Added the ability to offload the evaluation of a tensor expression to a GPU.
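A hedged sketch of device-based evaluation via the .device(...) modifier; the
device setup lines (ThreadPool/ThreadPoolDevice and the GPU stream/device
pair) have changed across Eigen versions and are illustrative assumptions:

  #define EIGEN_USE_THREADS                // must precede the Tensor include
  #include <unsupported/Eigen/CXX11/Tensor>

  void evaluate_on_devices()
  {
    Eigen::Tensor<float, 3> a(30, 40, 50), b(30, 40, 50), c(30, 40, 50);
    a.setRandom();
    b.setRandom();

    // parallel evaluation over multiple CPU cores
    Eigen::ThreadPool pool(4);
    Eigen::ThreadPoolDevice cpu_device(&pool, 4);
    c.device(cpu_device) = a + b;          // split across the pool's threads

    // offloading to a GPU (requires nvcc and EIGEN_USE_GPU)
    // Eigen::GpuStreamDevice stream;
    // Eigen::GpuDevice gpu_device(&stream);
    // c.device(gpu_device) = a + b;
  }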
* Added the ability to map a region of memory to a tensor
* Added basic support for unary and binary coefficient-wise expressions, such as addition or square root (see the sketch after this list)
* Provided an emulation layer to make it possible to compile the code with compilers (such as nvcc) that don't support C++11.
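A hedged sketch of the first two items, mapping pre-allocated memory with
TensorMap and evaluating a coefficient-wise expression on it (header path and
spellings assume the current Tensor module):

  #include <unsupported/Eigen/CXX11/Tensor>

  void map_and_compute(float* data)        // points to 2*3*4 pre-allocated floats
  {
    // interpret the existing memory region as a rank-3 tensor, without copying
    Eigen::TensorMap<Eigen::Tensor<float, 3>> t(data, 2, 3, 4);

    // unary and binary coefficient-wise expressions
    Eigen::Tensor<float, 3> result(2, 3, 4);
    result = t.sqrt() + t;                 // per-coefficient square root, then addition
  }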