Only compile the custom index code when EIGEN_HAS_SFINAE is defined. For the time being, EIGEN_HAS_SFINAE is a synonym for EIGEN_HAS_VARIADIC_TEMPLATES, but this might evolve in the future.
Moved some code around.
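A minimal sketch of how such a guard could look, assuming EIGEN_HAS_VARIADIC_TEMPLATES is already provided by Eigen's configuration headers (illustrative only, not the actual macro definitions):

    #if !defined(EIGEN_HAS_SFINAE) && defined(EIGEN_HAS_VARIADIC_TEMPLATES)
    // For now, EIGEN_HAS_SFINAE simply follows EIGEN_HAS_VARIADIC_TEMPLATES.
    #define EIGEN_HAS_SFINAE
    #endif

    #ifdef EIGEN_HAS_SFINAE
    // ... the custom index code is only compiled when this block is active ...
    #endif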
Use SFINAE and is_base_of to select the correct template which converts to array<Index,NumIndices> (see the sketch after the file list below).
user: Gabriel Nützi <gnuetzi@gmx.ch>
branch 'default'
added unsupported/Eigen/CXX11/src/Tensor/TensorMetaMacros.h
added unsupported/test/cxx11_tensor_customIndex.cpp
changed unsupported/Eigen/CXX11/Tensor
changed unsupported/Eigen/CXX11/src/Tensor/Tensor.h
changed unsupported/Eigen/CXX11/src/Tensor/TensorMeta.h
changed unsupported/test/CMakeLists.txt
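A hypothetical, simplified sketch of the technique named in this entry; the base class, function name, and signature below are illustrative and do not claim to match Eigen's actual code. SFINAE via std::enable_if, combined with std::is_base_of, restricts the conversion template to index types deriving from a known base, and produces the array<Index, NumIndices> that the tensor core works with:

    #include <array>
    #include <cstddef>
    #include <type_traits>

    typedef long Index;

    // Hypothetical base class that custom index types derive from in this sketch.
    struct CustomIndexBase {};

    struct MyIndex : CustomIndexBase {
      Index values[3];
      Index operator[](std::size_t i) const { return values[i]; }
    };

    // Participates in overload resolution only when CustomIndex derives from
    // CustomIndexBase (SFINAE through std::enable_if + std::is_base_of); converts
    // the custom index object into the std::array<Index, NumIndices> form.
    template <std::size_t NumIndices, typename CustomIndex>
    typename std::enable_if<std::is_base_of<CustomIndexBase, CustomIndex>::value,
                            std::array<Index, NumIndices> >::type
    toIndexArray(const CustomIndex& idx) {
      std::array<Index, NumIndices> out;
      for (std::size_t i = 0; i < NumIndices; ++i) out[i] = idx[i];
      return out;
    }

    int main() {
      MyIndex idx;
      idx.values[0] = 1; idx.values[1] = 2; idx.values[2] = 3;
      std::array<Index, 3> a = toIndexArray<3>(idx);   // overload selected via SFINAE
      return static_cast<int>(a[0] + a[1] + a[2] - 6); // returns 0
    }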
Created the TensorDimensionList class to encode the list of all the dimensions of a tensor of rank n. This could be done using TensorIndexList; however, TensorIndexList requires C++11, which isn't yet supported as widely as we'd like.
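A hypothetical, much-simplified illustration of the idea (not Eigen's actual class): because the list in question is always the full sequence {0, 1, ..., n-1}, it can be encoded by a type whose i-th entry is simply i, with no variadic templates involved.

    // Illustrative only; names and layout do not match Eigen's TensorDimensionList.
    template <typename Index, int Rank>
    struct DimensionListSketch {
      // The list stands for all dimensions of a rank-Rank tensor, i.e. 0..Rank-1,
      // so the i-th entry is just i and nothing needs to be stored.
      Index operator[](Index i) const { return i; }
      static const int count = Rank;  // compile-time length of the list
    };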
Instead, we now have a custom thread pool that ensures that functions are picked up by the worker threads in the order in which they are enqueued in the pool.
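A hedged sketch of the FIFO property being described, not Eigen's actual thread pool: tasks sit in a single queue and workers always pop from the front, so pickup order matches enqueue order (up to concurrency between workers).

    #include <condition_variable>
    #include <cstdio>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class FifoThreadPool {
     public:
      explicit FifoThreadPool(unsigned num_threads) : done_(false) {
        for (unsigned i = 0; i < num_threads; ++i)
          workers_.emplace_back([this] { workerLoop(); });
      }

      ~FifoThreadPool() {
        {
          std::lock_guard<std::mutex> lock(mu_);
          done_ = true;
        }
        cv_.notify_all();
        for (std::thread& t : workers_) t.join();  // workers drain the queue first
      }

      // Tasks go into a std::queue, so workers pick them up in the exact order
      // in which they were enqueued.
      void enqueue(std::function<void()> task) {
        {
          std::lock_guard<std::mutex> lock(mu_);
          tasks_.push(std::move(task));
        }
        cv_.notify_one();
      }

     private:
      void workerLoop() {
        for (;;) {
          std::function<void()> task;
          {
            std::unique_lock<std::mutex> lock(mu_);
            cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
            if (done_ && tasks_.empty()) return;
            task = std::move(tasks_.front());
            tasks_.pop();
          }
          task();  // run outside the lock
        }
      }

      std::vector<std::thread> workers_;
      std::queue<std::function<void()> > tasks_;
      std::mutex mu_;
      std::condition_variable cv_;
      bool done_;
    };

    int main() {
      FifoThreadPool pool(2);
      for (int i = 0; i < 8; ++i)
        pool.enqueue([i] { std::printf("task %d\n", i); });
      return 0;  // destructor joins the workers after the queue drains
    }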
* The scheduling of computation is moved out of the assignment code and into a new TensorExecutor class (see the sketch after this list)
* The assignment itself is now a regular node on the expression tree
* The expression evaluators start by recursively evaluating all their subexpressions if needed
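A hypothetical, simplified sketch of the pattern described in the list above; the class names and member functions are illustrative and are not Eigen's actual evaluator or executor interfaces. The assignment is a node pairing a destination with a source evaluator, evaluators first get a chance to pre-evaluate their sub-expressions, and the executor owns the loop that does the scheduling:

    #include <cstddef>
    #include <vector>

    // A trivial evaluator over a flat buffer.
    struct BufferEvaluator {
      std::vector<float>* data;
      void evalSubExprsIfNeeded() {}  // nothing to pre-compute for a plain buffer
      float coeff(long i) const { return (*data)[static_cast<std::size_t>(i)]; }
      float& coeffRef(long i) { return (*data)[static_cast<std::size_t>(i)]; }
      long size() const { return static_cast<long>(data->size()); }
    };

    // The assignment itself is a node: it pairs a destination evaluator with a
    // source evaluator instead of doing the work inside operator=.
    template <typename Lhs, typename Rhs>
    struct AssignEvaluator {
      Lhs lhs; Rhs rhs;
      void evalSubExprsIfNeeded() { rhs.evalSubExprsIfNeeded(); }  // recurse into the rhs
      void evalScalar(long i) { lhs.coeffRef(i) = rhs.coeff(i); }
      long size() const { return lhs.size(); }
    };

    // The executor owns the scheduling: here a plain serial loop, but the same
    // entry point could dispatch to a thread pool or a GPU device instead.
    struct SerialExecutor {
      template <typename Evaluator>
      static void run(Evaluator& ev) {
        ev.evalSubExprsIfNeeded();
        for (long i = 0; i < ev.size(); ++i) ev.evalScalar(i);
      }
    };

    int main() {
      std::vector<float> src(8, 2.0f), dst(8, 0.0f);
      BufferEvaluator s = {&src}, d = {&dst};
      AssignEvaluator<BufferEvaluator, BufferEvaluator> assign = {d, s};
      SerialExecutor::run(assign);  // dst now holds a copy of src
      return dst[0] == 2.0f ? 0 : 1;
    }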
Added the ability to parallelize the evaluation of a tensor expression over multiple CPU cores.
Added the ability to offload the evaluation of a tensor expression to a GPU.
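Both of these capabilities are exposed through the Tensor module's .device(...) mechanism. The sketch below uses the present-day API (EIGEN_USE_THREADS, Eigen::ThreadPool, Eigen::ThreadPoolDevice), which may differ from the interface that existed at the time of these entries; a GpuDevice can be passed to .device(...) in the same way when building with nvcc.

    #define EIGEN_USE_THREADS
    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      Eigen::Tensor<float, 2> a(256, 256), b(256, 256), c(256, 256);
      a.setRandom();
      b.setRandom();

      Eigen::ThreadPool pool(4);                 // 4 worker threads
      Eigen::ThreadPoolDevice device(&pool, 4);

      // Assigning through .device(...) hands the evaluation of a + b to the pool.
      c.device(device) = a + b;
      return 0;
    }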
* Added ability to map a region of the memory to a tensor
* Added basic support for unary and binary coefficient-wise expressions, such as addition or square root (see the sketch after this list)
* Provided an emulation layer to make it possible to compile the code with compilers (such as nvcc) that don't support C++11.
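A rough usage sketch of the first two bullets, written against the module's current headers; the TensorMap name and the sqrt()/operator+ coefficient-wise calls are the present-day API and are assumed here to match what these entries introduced.

    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      // Map an existing region of memory as a 2x3 tensor (no copy is made).
      float storage[6] = {1.f, 4.f, 9.f, 16.f, 25.f, 36.f};
      Eigen::TensorMap<Eigen::Tensor<float, 2> > mapped(storage, 2, 3);

      // Unary and binary coefficient-wise expressions: square root and addition.
      Eigen::Tensor<float, 2> result(2, 3);
      result = mapped.sqrt() + mapped;
      return 0;
    }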
This commit adds an initial implementation of a class template Tensor
that allows for the storage of objects with more than two indices.
Currently, only storing data and setting the object to zero for POD
data types are implemented.
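A minimal usage sketch of the class as it exists in the module today, showing construction with explicit dimensions, setZero, and coefficient access; the initial version described in this entry may have supported less.

    #include <unsupported/Eigen/CXX11/Tensor>

    int main() {
      // A rank-3 tensor of floats with dimensions 2 x 3 x 4.
      Eigen::Tensor<float, 3> t(2, 3, 4);
      t.setZero();          // the zero-initialization mentioned above
      t(0, 1, 2) = 1.0f;    // coefficient access with three indices
      return t(0, 1, 2) == 1.0f ? 0 : 1;
    }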