Commit Graph

1928 Commits

Author SHA1 Message Date
Benoit Steiner
09653e1f82 Improved the portability of the tensor code 2016-05-11 23:29:09 -07:00
Benoit Steiner
fae0493f98 Fixed a couple of bugs related to the Pascal family of GPUs
2016-05-11 23:02:26 -07:00
Benoit Steiner
886445ce4d Avoid unnecessary conversions between floats and doubles 2016-05-11 23:00:03 -07:00
Benoit Steiner
595e890391 Added more tests for half floats 2016-05-11 21:27:15 -07:00
Benoit Steiner
b6a517c47d Added the ability to load fp16 using the texture path.
Improved the performance of some reductions on fp16
2016-05-11 21:26:48 -07:00
Christoph Hertzberg
1a1ce6ff61 Removed deprecated flag (which apparently was ignored anyway) 2016-05-11 23:05:37 +02:00
Christoph Hertzberg
2150f13d65 fixed some double-promotion and sign-compare warnings 2016-05-11 23:02:26 +02:00
Benoit Steiner
217d984abc Fixed a typo in my previous commit 2016-05-11 10:22:15 -07:00
Benoit Steiner
08348b4e48 Fix potential race condition in the CUDA reduction code. 2016-05-11 10:08:51 -07:00
Benoit Steiner
cbb14ed47e Added a few tests to validate the generation of random tensors on GPU. 2016-05-11 10:05:56 -07:00
Benoit Steiner
6a5717dc74 Explicitly initialize all the atomic variables. 2016-05-11 10:04:41 -07:00
Benoit Steiner
4ede059de1 Properly gate the use of half2. 2016-05-10 17:04:01 -07:00
Benoit Steiner
661e710092 Added support for fp16 to the sigmoid functor. 2016-05-10 12:25:27 -07:00
Benoit Steiner
0eb69b7552 Small improvement to the full reduction of fp16 2016-05-10 11:58:18 -07:00
Benoit Steiner
6bf8273bc0 Added a test to validate the new non blocking thread pool 2016-05-10 10:49:34 -07:00
Benoit Steiner
4013b8feca Simplified the reduction code a little. 2016-05-10 09:40:42 -07:00
Benoit Steiner
75bd2bd32d Fixed compilation warning 2016-05-09 19:24:41 -07:00
Benoit Steiner
4670d7d5ce Improved the performance of full reductions on GPU:
Before:
BM_fullReduction/10       200000      11751     8.51 MFlops/s
BM_fullReduction/80         5000     523385    12.23 MFlops/s
BM_fullReduction/640          50   36179326    11.32 MFlops/s
BM_fullReduction/4K            1 2173517195    11.50 MFlops/s

After:
BM_fullReduction/10       500000       5987    16.70 MFlops/s
BM_fullReduction/80       200000      10636   601.73 MFlops/s
BM_fullReduction/640       50000      58428  7010.31 MFlops/s
BM_fullReduction/4K         1000    2006106 12461.95 MFlops/s
2016-05-09 17:09:54 -07:00
Benoit Steiner
c3859a2b58 Added the ability to use a scratch buffer in cuda kernels 2016-05-09 17:05:53 -07:00
Benoit Steiner
ba95e43ea2 Added a new parallelFor api to the thread pool device. 2016-05-09 10:45:12 -07:00
Benoit Steiner
dc7dbc2df7 Optimized the non blocking thread pool:
* Use a pseudo-random permutation of queue indices during random stealing, which ensures that all the queues are considered (a sketch of this victim-selection scheme follows this entry).
* Directly pop from a non-empty queue when waiting for work, instead of first noticing that a queue is non-empty and then doing another round of random stealing to re-discover it.
* Steal only 1 task from a remote queue instead of half of its tasks.
2016-05-09 10:17:17 -07:00
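A hedged sketch of the permutation-based stealing idea described in the entry above, assuming a hypothetical Queue type and StealOneTask helper rather than Eigen's actual NonBlockingThreadPool internals:

```cpp
// Sketch only: visit every queue exactly once in a pseudo-random order by
// starting at a random index and stepping with a stride coprime to the queue
// count, and steal a single task rather than half of a queue.
#include <cstdint>
#include <deque>
#include <functional>
#include <vector>

using Task = std::function<void()>;

struct Queue {                       // minimal stand-in for a per-thread queue
  std::deque<Task> tasks;
  bool PopBack(Task* t) {
    if (tasks.empty()) return false;
    *t = std::move(tasks.back());
    tasks.pop_back();
    return true;
  }
};

bool StealOneTask(std::vector<Queue>& queues, uint64_t rng, Task* out) {
  const size_t n = queues.size();
  if (n == 0) return false;
  size_t start = rng % n;
  size_t step = 1 + (rng / n) % (n > 1 ? n - 1 : 1);
  size_t a = step, b = n;            // keep step only if gcd(step, n) == 1,
  while (b != 0) { size_t t = a % b; a = b; b = t; }
  if (a != 1) step = 1;              // so the walk is a full permutation
  size_t idx = start;
  for (size_t i = 0; i < n; ++i) {
    if (queues[idx].PopBack(out)) return true;  // steal exactly one task
    idx = (idx + step) % n;
  }
  return false;                      // every queue was empty
}
```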
Benoit Steiner
691614bd2c Worked around a bug in nvcc on tegra x1 2016-05-07 13:28:53 -07:00
Benoit Steiner
c54ae65c83 Marked a few tensor operations as read only 2016-05-05 17:18:47 -07:00
Benoit Steiner
69a8a4e1f3 Added a test to validate full reduction on tensor of half floats 2016-05-05 16:52:50 -07:00
Benoit Steiner
678a17ba79 Made the testing of contractions on fp16 more robust 2016-05-05 16:36:39 -07:00
Benoit Steiner
e3d053e14e Refined the testing of log and exp on fp16 2016-05-05 16:24:15 -07:00
Benoit Steiner
9a48688d37 Further improved the testing of fp16 2016-05-05 15:58:05 -07:00
Benoit Steiner
910e013506 Relaxed an assertion that was tighter than necessary. 2016-05-05 15:38:16 -07:00
Benoit Steiner
28d5572658 Fixed some incorrect assertions 2016-05-05 10:02:26 -07:00
Benoit Steiner
2aba40d208 Avoid unnecessary type promotion 2016-05-05 09:26:57 -07:00
Benoit Steiner
a4d6e8fef0 Strongly hint but don't force the compiler to unroll some loops in the tensor executor. This results in up to 27% faster code. 2016-05-05 09:25:55 -07:00
Benoit Steiner
7875437ca0 Avoided unnecessary type promotion 2016-05-05 09:08:42 -07:00
Benoit Steiner
f363e533aa Added tests for full contractions using thread pools and gpu devices.
Fixed a couple of issues in the corresponding code.
2016-05-05 09:05:45 -07:00
Benoit Steiner
06d774bf58 Updated the contraction code to ensure that full contractions return a tensor of rank 0 2016-05-05 08:37:47 -07:00
Christoph Hertzberg
b300a84989 Fixed some signed/unsigned comparison warnings 2016-05-05 13:36:28 +02:00
Christoph Hertzberg
dacb469bc9 Enable and fix -Wdouble-conversion warnings 2016-05-05 13:35:45 +02:00
Benoit Steiner
62b710072e Reduced the memory footprint of the cxx11_tensor_image_patch test 2016-05-04 21:08:22 -07:00
Benoit Steiner
dd2b45feed Removed extraneous 'explicit' keywords 2016-05-04 16:57:52 -07:00
Benoit Steiner
968ec1c2ae Use numext::isfinite instead of std::isfinite 2016-05-03 19:56:40 -07:00
Benoit Steiner
2c5568a757 Added a test to validate the computation of exp and log on 16bit floats 2016-05-03 12:06:07 -07:00
Benoit Steiner
aad9a04da4 Deleted superfluous explicit keyword. 2016-05-03 09:37:19 -07:00
Benoit Steiner
8a9228ed9b Fixed compilation error 2016-05-01 14:48:01 -07:00
Benoit Steiner
d6c9596fd8 Added missing accessors to fixed sized tensors 2016-04-29 18:51:33 -07:00
Benoit Steiner
17fe7f354e Deleted trailing commas 2016-04-29 18:39:01 -07:00
Benoit Steiner
e5f71aa6b2 Deleted useless trailing commas 2016-04-29 18:36:10 -07:00
Benoit Steiner
44f592dceb Deleted unnecessary trailing commas. 2016-04-29 18:33:46 -07:00
Benoit Steiner
2b890ae618 Fixed compilation errors generated by clang 2016-04-29 18:30:40 -07:00
Benoit Steiner
d217217842 Added a few tests to ensure that the dimensions of rank 0 tensors are correctly computed 2016-04-29 18:15:34 -07:00
Benoit Steiner
f100d1494c Return the proper size (i.e. 1) for tensors of rank 0 2016-04-29 18:14:33 -07:00
Benoit Steiner
d14105f158 Made several tensor tests compatible with cxx03 2016-04-29 17:22:37 -07:00
Benoit Steiner
c0882ef4d9 Moved a number of tensor tests that don't require cxx11 to work properly outside the EIGEN_TEST_CXX11 test section 2016-04-29 17:13:51 -07:00
Benoit Steiner
9d1dbd1ec0 Fixed the cxx11_tensor_empty test to compile without requiring cxx11 support 2016-04-29 16:53:55 -07:00
Benoit Steiner
a8c0405cf5 Deleted unused default values for template parameters 2016-04-29 16:34:43 -07:00
Benoit Steiner
4f53178e62 Made a couple of tensor tests compile without requiring c++11 support. 2016-04-29 16:09:54 -07:00
Benoit Steiner
1131a984a6 Made the cxx11_tensor_forced_eval compile without c++11. 2016-04-29 15:48:59 -07:00
Benoit Steiner
c07404f6a1 Restore Tensor support for non c++11 compilers 2016-04-29 15:19:19 -07:00
Benoit Steiner
ba32ded021 Fixed include path 2016-04-29 15:11:09 -07:00
Benoit Steiner
a524a26fdc Fixed a few memory leaks 2016-04-28 18:55:53 -07:00
Gael Guennebaud
318e65e0ae Fix missing inclusion of Eigen/Core 2016-04-27 23:05:40 +02:00
Rasmus Munk Larsen
463738ccbe Use computeProductBlockingSizes to compute blocking for both ShardByCol and ShardByRow cases. 2016-04-27 12:26:18 -07:00
Gael Guennebaud
3dddd34133 Refactor the unsupported CXX11/Core module to internal headers only. 2016-04-26 11:20:25 +02:00
Benoit Steiner
4a164d2c46 Fixed the partial evaluation of non vectorizable tensor subexpressions 2016-04-25 10:43:03 -07:00
Benoit Steiner
fd9401f260 Refined the cost of the striding operation. 2016-04-25 09:16:08 -07:00
Benoit Steiner
4bbc97be5e Provide access to the base threadpool classes 2016-04-21 17:59:33 -07:00
Benoit Steiner
33adce5c3a Added the ability to switch to the new thread pool with a #define 2016-04-21 11:59:58 -07:00
Benoit Steiner
f670613e4b Fixed several compilation warnings 2016-04-21 11:03:02 -07:00
Benoit Steiner
32ffce04fc Use EIGEN_THREAD_YIELD instead of std::this_thread::yield to make the code more portable. 2016-04-21 08:47:28 -07:00
Benoit Steiner
2dde1b1028 Don't crash when attempting to reduce empty tensors. 2016-04-20 18:08:20 -07:00
Benoit Steiner
a792cd357d Added more tests 2016-04-20 17:33:58 -07:00
Benoit Steiner
c7c2054bb5 Started to implement a portable way to yield. 2016-04-19 17:59:58 -07:00
Benoit Steiner
2b72163028 Implemented a more portable version of thread local variables 2016-04-19 15:56:02 -07:00
Benoit Steiner
04f954956d Fixed a few typos 2016-04-19 15:27:09 -07:00
Benoit Steiner
5b1106c56b Fixed a compilation error with nvcc 7. 2016-04-19 14:57:57 -07:00
Benoit Steiner
7129d998db Simplified the code that launches cuda kernels. 2016-04-19 14:55:21 -07:00
Benoit Steiner
b9ea40c30d Don't take the address of a kernel on CUDA devices that don't support this feature. 2016-04-19 14:35:11 -07:00
Benoit Steiner
884c075058 Use numext::ceil instead of std::ceil 2016-04-19 14:33:30 -07:00
Benoit Steiner
a278414d1b Avoid an unnecessary copy of the evaluator. 2016-04-19 13:54:28 -07:00
Benoit Steiner
f953c60705 Fixed 2 recent regression tests 2016-04-19 12:57:39 -07:00
Benoit Steiner
50968a0a3e Use DenseIndex in the MeanReducer to avoid overflows when processing very large tensors. 2016-04-19 11:53:58 -07:00
Benoit Steiner
84543c8be2 Worked around the lack of a rand_r function on windows systems 2016-04-17 19:29:27 -07:00
Benoit Steiner
5fbcfe5eb4 Worked around the lack of a rand_r function on windows systems 2016-04-17 18:42:31 -07:00
Benoit Steiner
c8e8f93d6c Move the evalGemm method into the TensorContractionEvaluatorBase class to make it accessible from both the single and multithreaded contraction evaluators. 2016-04-15 16:48:10 -07:00
Benoit Steiner
7cff898e0a Deleted unnecessary variable 2016-04-15 15:46:14 -07:00
Benoit Steiner
6c43c49e4a Fixed a few compilation warnings 2016-04-15 15:34:34 -07:00
Benoit Steiner
eb669f989f Merged in rmlarsen/eigen (pull request PR-178)
Eigen Tensor cost model part 2: Thread scheduling for standard evaluators and reductions.
2016-04-15 14:53:15 -07:00
Rasmus Munk Larsen
3718bf654b Get rid of void* casting when calling EvalRange::run. 2016-04-15 12:51:33 -07:00
Benoit Steiner
40c9923a8a Fixed compilation errors with msvc 2016-04-15 11:27:52 -07:00
Benoit Steiner
a62e924656 Added ability to access the cache sizes from the tensor devices 2016-04-14 21:25:06 -07:00
Benoit Steiner
18e6f67426 Added support for exclusive or 2016-04-14 20:37:46 -07:00
Rasmus Munk Larsen
07ac4f7e02 Eigen Tensor cost model part 2: Thread scheduling for standard evaluators and reductions. The cost model is turned off by default. 2016-04-14 18:28:23 -07:00
Benoit Steiner
9624a1ea3d Added missing definition of PacketSize in the gpu evaluator of convolution 2016-04-14 17:16:58 -07:00
Benoit Steiner
6fbedf5a4e Merged in rmlarsen/eigen (pull request PR-177)
Eigen Tensor cost model part 1.
2016-04-14 17:13:19 -07:00
Benoit Steiner
bebb89acfa Enabled the new threadpool tests 2016-04-14 16:44:10 -07:00
Benoit Steiner
9c064b5a97 Cleanup 2016-04-14 16:41:31 -07:00
Benoit Steiner
1372156c41 Prepared the migration to the new non blocking thread pool 2016-04-14 16:16:42 -07:00
Rasmus Munk Larsen
aeb5494a0b Improvements to cost model. 2016-04-14 15:52:58 -07:00
Benoit Steiner
a8e8837ba7 Added tests for the non blocking thread pool 2016-04-14 15:23:49 -07:00
Benoit Steiner
78a51abc12 Added a more scalable non blocking thread pool 2016-04-14 15:23:10 -07:00
Rasmus Munk Larsen
d2e95492e7 Merge upstream updates. 2016-04-14 13:59:50 -07:00
Rasmus Munk Larsen
235e83aba6 Eigen cost model part 1. This implements a basic recursive framework to estimate the cost of evaluating tensor expressions. 2016-04-14 13:57:35 -07:00
Benoit Steiner
5912ad877c Silenced a compilation warning 2016-04-14 11:40:14 -07:00
Benoit Steiner
2b6e3de02f Added tests to validate flooring and ceiling of fp16 2016-04-14 11:39:18 -07:00
Benoit Steiner
6f23e945f6 Added simple test for numext::sqrt and numext::pow on fp16 2016-04-14 10:32:52 -07:00
Benoit Steiner
72510c80e1 Added basic test for trigonometric functions on fp16 2016-04-14 10:27:24 -07:00
Benoit Steiner
c7167fee0e Added support for fp16 to the sigmoid function 2016-04-14 10:08:33 -07:00
Benoit Steiner
f6003f0873 Made the test msvc friendly 2016-04-14 09:47:26 -07:00
Gael Guennebaud
7d1391d049 Turn a convergence check into a warning 2016-04-13 22:50:54 +02:00
Benoit Steiner
e9b12cc1f7 Fixed compilation warnings generated by clang 2016-04-12 20:53:18 -07:00
Benoit Steiner
e3a184785c Fixed the zeta test 2016-04-12 11:12:36 -07:00
Benoit Steiner
3b76df64fc Defer the decision to vectorize tensor CUDA code to the meta kernel. This makes it possible to decide whether or not to vectorize depending on the capability of the target cuda architecture. In particular, this enables us to vectorize the processing of fp16 when running on devices of capability >= 5.3 2016-04-12 10:58:51 -07:00
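The fp16 vectorization mentioned above depends on half2 intrinsics that only exist on devices of compute capability 5.3 or higher. A minimal, illustrative CUDA gate (the function is hypothetical, not Eigen's meta kernel):

```cpp
// Sketch only: use native packed-fp16 arithmetic on sm_53 and newer,
// and fall back to float math on older architectures.
#include <cuda_fp16.h>

__device__ __half2 add_half2(__half2 a, __half2 b) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
  return __hadd2(a, b);                  // native half2 add
#else
  const float2 fa = __half22float2(a);   // promote to float elsewhere
  const float2 fb = __half22float2(b);
  return __floats2half2_rn(fa.x + fb.x, fa.y + fb.y);
#endif
}
```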
Gael Guennebaud
af2161cdb4 bug #1197: fix/relax some LM unit tests 2016-04-09 11:14:02 +02:00
Gael Guennebaud
a05a683d83 bug #1160: fix and relax some lm unit tests by turning failures into warnings 2016-04-09 10:49:19 +02:00
Benoit Steiner
995f202cea Disabled the use of half2 on cuda devices of compute capability < 5.3 2016-04-08 14:43:36 -07:00
Benoit Steiner
0d2a532fc3 Created the new EIGEN_TEST_CUDA_CLANG option to compile the CUDA tests using clang instead of nvcc 2016-04-08 13:16:08 -07:00
Benoit Steiner
2d072b38c1 Don't test the division by 0 on float16 when compiling with msvc since msvc detects and errors out on divisions by 0. 2016-04-08 12:50:25 -07:00
Benoit Steiner
d962fe6a99 Renamed float16 into cxx11_float16 since the test relies on c++11 features 2016-04-07 20:28:32 -07:00
Benoit Steiner
7d5b17087f Added missing EIGEN_DEVICE_FUNC to the tensor conversion code. 2016-04-07 20:01:19 -07:00
Benoit Steiner
a02ec09511 Worked around numerical noise in the test for the zeta function. 2016-04-07 12:11:02 -07:00
Benoit Steiner
c912b1d28c Fixed a typo in the polygamma test. 2016-04-07 11:51:07 -07:00
Benoit Steiner
dc45aaeb93 Added tests for float16 2016-04-07 11:18:05 -07:00
Benoit Steiner
8db269e055 Fixed a typo in a test 2016-04-07 10:41:51 -07:00
Benoit Steiner
48308ed801 Added support for isinf, isnan, and isfinite checks to the tensor api 2016-04-07 09:48:36 -07:00
Benoit Steiner
cfb34d808b Fixed a possible integer overflow. 2016-04-07 08:46:52 -07:00
Benoit Steiner
165150e896 Fixed the tests for the zeta and polygamma functions 2016-04-06 14:31:01 -07:00
Benoit Steiner
7be1eaad1e Fixed typos in the implementation of the zeta and polygamma ops. 2016-04-06 14:15:37 -07:00
Benoit Steiner
10bdd8e378 Merged in tillahoffmann/eigen (pull request PR-173)
Added zeta function of two arguments and polygamma function
2016-04-06 09:40:17 -07:00
Benoit Steiner
7781f865cb Renamed the EIGEN_TEST_NVCC cmake option into EIGEN_TEST_CUDA per the discussion in bug #1173. 2016-04-06 09:35:23 -07:00
tillahoffmann
726bd5f077 Merged eigen/eigen into default 2016-04-05 18:21:05 +01:00
Gael Guennebaud
4d7e230d2f bug #1189: fix pow/atan2 compilation for AutoDiffScalar 2016-04-05 14:49:41 +02:00
Till Hoffmann
80eba21ad0 Merge upstream. 2016-04-01 18:18:49 +01:00
Till Hoffmann
eb0ae602bd Added CUDA tests. 2016-04-01 18:17:45 +01:00
Till Hoffmann
ffd770ce94 Fixed CUDA signature. 2016-04-01 17:58:24 +01:00
tillahoffmann
49960adbdd Merged eigen/eigen into default 2016-04-01 14:36:15 +01:00
Till Hoffmann
57239f4a81 Added polygamma function. 2016-04-01 14:35:21 +01:00
Till Hoffmann
dd5d390daf Added zeta function. 2016-04-01 13:32:29 +01:00
Benoit Steiner
3da495e6b9 Relaxed the condition used to gate the fft code. 2016-03-31 18:11:51 -07:00
Benoit Steiner
0f5cc504fe Properly gate the fft code 2016-03-31 12:59:39 -07:00
Benoit Steiner
af4ef540bf Fixed an off-by-one bug in a debug assertion 2016-03-30 18:37:19 -07:00
Benoit Steiner
791e5cfb69 Added NumTraits for type2index. 2016-03-30 18:36:36 -07:00
Benoit Steiner
483aaad10a Fixed compilation warning 2016-03-30 17:08:13 -07:00
Benoit Steiner
1b40abbf99 Added missing assignment operator to the TensorUInt128 class, and made misc small improvements 2016-03-30 13:17:03 -07:00
Benoit Steiner
aa45ad2aac Fixed the formatting of the README. 2016-03-29 15:06:13 -07:00
Benoit Steiner
56df5ef1d7 Attempt to fix the formatting of the README 2016-03-29 15:03:38 -07:00
Benoit Steiner
7b7d2a9fa5 Use false instead of 0 as the expected value of a boolean 2016-03-29 11:50:17 -07:00
Benoit Steiner
c38295f0a0 Added support for fmod 2016-03-28 15:53:02 -07:00
Benoit Steiner
6772f653c3 Made it possible to customize the threadpool 2016-03-28 10:01:04 -07:00
Benoit Steiner
1bc81f7889 Fixed compilation warnings on arm 2016-03-28 09:21:04 -07:00
Benoit Steiner
78f83d6f6a Prevent potential overflow. 2016-03-28 09:18:04 -07:00
Benoit Steiner
74f91ed06c Improved support for integer modulo 2016-03-25 17:21:56 -07:00
Benoit Steiner
a86c9f037b Fixed compilation error on windows 2016-03-24 18:54:31 -07:00
Benoit Steiner
044efea965 Made sure that the cxx11_tensor_cuda test can be compiled even without support for cxx11. 2016-03-23 20:02:11 -07:00
Benoit Steiner
41434a8a85 Avoid unnecessary conversions 2016-03-23 16:52:38 -07:00
Benoit Steiner
92693b50eb Fixed compilation warning 2016-03-23 16:40:36 -07:00
Benoit Steiner
9bc9396e88 Use portable includes 2016-03-23 16:30:06 -07:00
Benoit Steiner
393bc3b16b Added comment 2016-03-23 16:22:15 -07:00
Benoit Steiner
7a570e50ef Fixed contractions of fp16 2016-03-23 16:00:06 -07:00
Benoit Steiner
2062ee2d26 Added a test to verify that notifications are working properly 2016-03-23 13:39:00 -07:00
Christoph Hertzberg
9642fd7a93 Replace all M_PI by EIGEN_PI and add a check to the testsuite. 2016-03-23 15:37:45 +01:00
Benoit Steiner
28e02996df Merged patch 672 from Justin Lebar: Don't use long doubles with cuda 2016-03-22 16:53:57 -07:00
Benoit Steiner
3d1e857327 Fixed compilation error 2016-03-22 15:48:28 -07:00
Benoit Steiner
de7d92c259 Pulled latest updates from trunk 2016-03-22 15:24:49 -07:00
Benoit Steiner
002cf0d1c9 Use a single Barrier instead of a collection of Notifications to reduce the thread synchronization overhead 2016-03-22 15:24:23 -07:00
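A hedged sketch of the idea behind this entry (a generic counting barrier, not Eigen's actual class): one shared counter replaces a collection of per-task notification objects, so the waiting thread synchronizes once instead of once per task.

```cpp
// Sketch only: each of the `count` workers calls Notify() exactly once;
// the coordinating thread blocks in Wait() until all of them have done so.
#include <condition_variable>
#include <mutex>

class Barrier {
 public:
  explicit Barrier(unsigned count) : count_(count) {}
  void Notify() {
    std::unique_lock<std::mutex> lock(mu_);
    if (--count_ == 0) cv_.notify_all();   // last worker wakes the waiter
  }
  void Wait() {
    std::unique_lock<std::mutex> lock(mu_);
    cv_.wait(lock, [this] { return count_ == 0; });
  }
 private:
  std::mutex mu_;
  std::condition_variable cv_;
  unsigned count_;
};
```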
Benoit Steiner
bc2b802751 Fixed a couple of typos 2016-03-22 14:27:34 -07:00
Benoit Steiner
e7a468c5b7 Filter some compilation flags that nvcc warns about. 2016-03-22 14:26:50 -07:00
Benoit Steiner
6a31b7be3e Avoid using std::vector whenever possible 2016-03-22 14:02:50 -07:00
Benoit Steiner
65a7113a36 Use an enum instead of a static const int to prevent possible link error 2016-03-22 09:33:54 -07:00
Benoit Steiner
f9ad25e4d8 Fixed contractions of 16 bit floats 2016-03-22 09:30:23 -07:00
Benoit Steiner
8ef3181f15 Worked around a constness related issue 2016-03-21 11:24:05 -07:00
Benoit Steiner
7a07d6aa2b Small cleanup 2016-03-21 11:12:17 -07:00
Benoit Steiner
e91f255301 Marked variables that are only used in debug mode as such 2016-03-21 10:02:00 -07:00
Benoit Steiner
db5c14de42 Explicitly cast the default value into the proper scalar type. 2016-03-21 09:52:58 -07:00
Benoit Steiner
8e03333f06 Renamed some class members to make the code more readable. 2016-03-18 15:21:04 -07:00
Benoit Steiner
6c08943d9f Fixed a bug in the padding of extracted image patches. 2016-03-18 15:19:10 -07:00
Benoit Steiner
bb0e73c191 Gate all the CUDA tests under the EIGEN_TEST_NVCC option 2016-03-18 12:17:37 -07:00
Benoit Steiner
2db4a04827 Fixed a typo 2016-03-18 12:08:01 -07:00
Benoit Steiner
dd514de8a9 Added a test to validate the fallback path for half floats 2016-03-18 12:02:39 -07:00
Benoit Steiner
9a7ece9caf Worked around constness issue 2016-03-18 10:38:29 -07:00
Benoit Steiner
edc679f6c6 Fixed compilation warning 2016-03-18 07:12:34 -07:00
Benoit Steiner
53d498ef06 Fixed compilation warnings in the cuda tests 2016-03-18 07:04:54 -07:00
Benoit Steiner
70eb70f5f8 Avoid mutable class members when possible 2016-03-17 21:47:18 -07:00
Benoit Steiner
95b8961a9b Allocate the mersenne twister used by the random number generators on the heap instead of on the stack since they tend to keep a lot of state (i.e. about 5k) around. 2016-03-17 15:23:51 -07:00
Benoit Steiner
f7329619da Fix bug in tensor contraction. The code assumes that contraction axis indices for the LHS (after possibly swapping to ColMajor!) are increasing. Explicitly sort the contraction axis pairs to make it so. 2016-03-17 15:08:02 -07:00
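An illustrative version of the fix described above (hypothetical pair type and helper name, not the TensorContraction internals): sort the contraction axis pairs by their LHS index so the rest of the code sees them in increasing order.

```cpp
// Sketch only: restore the increasing-LHS-index ordering the contraction
// code assumes by sorting the (lhs axis, rhs axis) pairs.
#include <algorithm>
#include <utility>
#include <vector>

using IndexPair = std::pair<int, int>;  // (lhs axis, rhs axis) -- hypothetical

void SortByLhsAxis(std::vector<IndexPair>& contract_dims) {
  std::sort(contract_dims.begin(), contract_dims.end(),
            [](const IndexPair& a, const IndexPair& b) {
              return a.first < b.first;  // order by the LHS contraction index
            });
}
```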
Christoph Hertzberg
46aa9772fc Merged in ebrevdo/eigen (pull request PR-169)
Bugfixes to cuda tests, igamma & igammac implemented, & tests for digamma, igamma, igammac on CPU & GPU.
2016-03-16 21:59:08 +01:00
Benoit Steiner
ab9b749b45 Improved a test 2016-03-14 20:03:13 -07:00
Benoit Steiner
048c4d6efd Made half floats usable on hardware that doesn't support them natively. 2016-03-11 17:21:42 -08:00
Benoit Steiner
b72ffcb05e Made the comparison of Eigen::array GPU friendly 2016-03-11 16:37:59 -08:00
Benoit Steiner
25f69cb932 Added a comparison operator for Eigen::array
Alias Eigen::array to std::array when compiling with Visual Studio 2015
2016-03-11 15:20:37 -08:00
Benoit Steiner
c5b98a58b8 Updated the cxx11_meta test to work on the Eigen::array class when std::array isn't available. 2016-03-11 11:53:38 -08:00
Benoit Steiner
86d45a3c83 Worked around visual studio compilation warnings. 2016-03-09 21:29:39 -08:00
Benoit Steiner
8fd4241377 Fixed a typo. 2016-03-10 02:28:46 +00:00
Benoit Steiner
a685a6beed Made the list reductions less ambiguous. 2016-03-09 17:41:52 -08:00
Benoit Steiner
3149b5b148 Avoid implicit cast 2016-03-09 17:35:17 -08:00
Benoit Steiner
b2100b83ad Made sure to include the <random> header file when compiling with visual studio 2016-03-09 16:03:16 -08:00
Benoit Steiner
f05fb449b8 Avoid unnecessary conversion from 32bit int to 64bit unsigned int 2016-03-09 15:27:45 -08:00
Benoit Steiner
1d566417d2 Enable the random number generators when compiling with visual studio 2016-03-09 10:55:11 -08:00
Benoit Steiner
b084133dbf Fixed the integer division code on windows 2016-03-09 07:06:36 -08:00
Benoit Steiner
6d30683113 Fixed static assertion 2016-03-08 21:02:51 -08:00
Eugene Brevdo
5e7de771e3 Properly fix merge issues. 2016-03-08 17:35:05 -08:00
Eugene Brevdo
73220d2bb0 Resolve bad merge. 2016-03-08 17:28:21 -08:00
Benoit Steiner
46177c8d64 Replace std::vector with our own implementation, as using the stl when compiling with nvcc and avx enabled leads to many issues. 2016-03-08 16:37:27 -08:00