Commit Graph

1750 Commits

Author SHA1 Message Date
Benoit Steiner
deea866bbd Added tests to cover the new rounding, flooring and ceiling tensor operations. 2016-03-03 12:38:02 -08:00
Benoit Steiner
5cf4558c0a Added support for rounding, flooring, and ceiling to the tensor api 2016-03-03 12:36:55 -08:00
Benoit Steiner
dac58d7c35 Added a test to validate the conversion of half floats into floats on Kepler GPUs.
Restricted the testing of the random number generation code to GPU architecture greater than or equal to 3.5.
2016-03-03 10:37:25 -08:00
Benoit Steiner
68ac5c1738 Improved the performance of large outer reductions on cuda 2016-02-29 18:11:58 -08:00
Benoit Steiner
b2075cb7a2 Made the signature of the inner and outer reducers consistent 2016-02-29 10:53:38 -08:00
Benoit Steiner
3284842045 Optimized the performance of narrow reductions on CUDA devices 2016-02-29 10:48:16 -08:00
Benoit Steiner
609b3337a7 Print some information to stderr when a CUDA kernel fails 2016-02-27 20:42:57 +00:00
Benoit Steiner
ac2e6e0d03 Properly vectorized the random number generators 2016-02-26 13:52:24 -08:00
Benoit Steiner
caa54d888f Made the TensorIndexList usable on GPU without having to use the -relaxed-constexpr compilation flag 2016-02-26 12:38:18 -08:00
Benoit Steiner
2cd32cad27 Reverted previous commit since it caused more problems than it solved 2016-02-26 13:21:44 +00:00
Benoit Steiner
d9d05dd96e Fixed handling of long doubles on aarch64 2016-02-26 04:13:58 -08:00
Benoit Steiner
af199b4658 Made the CUDA architecture level a build setting. 2016-02-25 09:06:18 -08:00
Benoit Steiner
c36c09169e Fixed a typo in the reduction code that could prevent large full reductions from running properly on old cuda devices. 2016-02-24 17:07:25 -08:00
Benoit Steiner
7a01cb8e4b Marked the And and Or reducers as stateless. 2016-02-24 16:43:01 -08:00
Benoit Steiner
1d9256f7db Updated the padding code to work with half floats 2016-02-23 05:51:22 +00:00
Benoit Steiner
72d2cf642e Deleted the coordinate based evaluation of tensor expressions, since it's hardly ever used and started to cause some issues with some versions of xcode. 2016-02-22 15:29:41 -08:00
Benoit Steiner
5cd00068c0 include <iostream> in the tensor header since we now use it to better report cuda initialization errors 2016-02-22 13:59:03 -08:00
Benoit Steiner
257b640463 Fixed compilation warning generated by clang 2016-02-21 22:43:37 -08:00
Benoit Steiner
e644f60907 Pulled latest updates from trunk 2016-02-21 20:24:59 +00:00
Benoit Steiner
95fceb6452 Added the ability to compute the absolute value of a half float 2016-02-21 20:24:11 +00:00
Benoit Steiner
ed69cbeef0 Added some debugging information to the test to figure out why it fails sometimes 2016-02-21 11:20:20 -08:00
Benoit Steiner
96a24b05cc Optimized casting of tensors in the case where the casting happens to be a no-op 2016-02-21 11:16:15 -08:00
Benoit Steiner
203490017f Prevent unnecessary Index to int conversions 2016-02-21 08:49:36 -08:00
Benoit Steiner
1e6fe6f046 Fixed the float16 tensor test. 2016-02-20 07:44:17 +00:00
Rasmus Munk Larsen
8eb127022b Get rid of duplicate code. 2016-02-19 16:33:30 -08:00
Rasmus Munk Larsen
d5e2ec7447 Speed up tensor FFT by up to ~25-50%.
Benchmark                          Base (ns)  New (ns) Improvement
------------------------------------------------------------------
BM_tensor_fft_single_1D_cpu/8            132       134     -1.5%
BM_tensor_fft_single_1D_cpu/9           1162      1229     -5.8%
BM_tensor_fft_single_1D_cpu/16           199       195     +2.0%
BM_tensor_fft_single_1D_cpu/17          2587      2267    +12.4%
BM_tensor_fft_single_1D_cpu/32           373       341     +8.6%
BM_tensor_fft_single_1D_cpu/33          5922      4879    +17.6%
BM_tensor_fft_single_1D_cpu/64           797       675    +15.3%
BM_tensor_fft_single_1D_cpu/65         13580     10481    +22.8%
BM_tensor_fft_single_1D_cpu/128         1753      1375    +21.6%
BM_tensor_fft_single_1D_cpu/129        31426     22789    +27.5%
BM_tensor_fft_single_1D_cpu/256         4005      3008    +24.9%
BM_tensor_fft_single_1D_cpu/257        70910     49549    +30.1%
BM_tensor_fft_single_1D_cpu/512         8989      6524    +27.4%
BM_tensor_fft_single_1D_cpu/513       165402    107751    +34.9%
BM_tensor_fft_single_1D_cpu/999       198293    115909    +41.5%
BM_tensor_fft_single_1D_cpu/1ki        21289     14143    +33.6%
BM_tensor_fft_single_1D_cpu/1k        361980    233355    +35.5%
BM_tensor_fft_double_1D_cpu/8            138       131     +5.1%
BM_tensor_fft_double_1D_cpu/9           1253      1133     +9.6%
BM_tensor_fft_double_1D_cpu/16           218       200     +8.3%
BM_tensor_fft_double_1D_cpu/17          2770      2392    +13.6%
BM_tensor_fft_double_1D_cpu/32           406       368     +9.4%
BM_tensor_fft_double_1D_cpu/33          6418      5153    +19.7%
BM_tensor_fft_double_1D_cpu/64           856       728    +15.0%
BM_tensor_fft_double_1D_cpu/65         14666     11148    +24.0%
BM_tensor_fft_double_1D_cpu/128         1913      1502    +21.5%
BM_tensor_fft_double_1D_cpu/129        36414     24072    +33.9%
BM_tensor_fft_double_1D_cpu/256         4226      3216    +23.9%
BM_tensor_fft_double_1D_cpu/257        86638     52059    +39.9%
BM_tensor_fft_double_1D_cpu/512         9397      6939    +26.2%
BM_tensor_fft_double_1D_cpu/513       203208    114090    +43.9%
BM_tensor_fft_double_1D_cpu/999       237841    125583    +47.2%
BM_tensor_fft_double_1D_cpu/1ki        20921     15392    +26.4%
BM_tensor_fft_double_1D_cpu/1k        455183    250763    +44.9%
BM_tensor_fft_single_2D_cpu/8           1051      1005     +4.4%
BM_tensor_fft_single_2D_cpu/9          16784     14837    +11.6%
BM_tensor_fft_single_2D_cpu/16          4074      3772     +7.4%
BM_tensor_fft_single_2D_cpu/17         75802     63884    +15.7%
BM_tensor_fft_single_2D_cpu/32         20580     16931    +17.7%
BM_tensor_fft_single_2D_cpu/33        345798    278579    +19.4%
BM_tensor_fft_single_2D_cpu/64         97548     81237    +16.7%
BM_tensor_fft_single_2D_cpu/65       1592701   1227048    +23.0%
BM_tensor_fft_single_2D_cpu/128       472318    384303    +18.6%
BM_tensor_fft_single_2D_cpu/129      7038351   5445308    +22.6%
BM_tensor_fft_single_2D_cpu/256      2309474   1850969    +19.9%
BM_tensor_fft_single_2D_cpu/257     31849182  23797538    +25.3%
BM_tensor_fft_single_2D_cpu/512     10395194   8077499    +22.3%
BM_tensor_fft_single_2D_cpu/513     144053843  104242541    +27.6%
BM_tensor_fft_single_2D_cpu/999     279885833  208389718    +25.5%
BM_tensor_fft_single_2D_cpu/1ki     45967677  36070985    +21.5%
BM_tensor_fft_single_2D_cpu/1k      619727095  456489500    +26.3%
BM_tensor_fft_double_2D_cpu/8           1110      1016     +8.5%
BM_tensor_fft_double_2D_cpu/9          17957     15768    +12.2%
BM_tensor_fft_double_2D_cpu/16          4558      4000    +12.2%
BM_tensor_fft_double_2D_cpu/17         79237     66901    +15.6%
BM_tensor_fft_double_2D_cpu/32         21494     17699    +17.7%
BM_tensor_fft_double_2D_cpu/33        357962    290357    +18.9%
BM_tensor_fft_double_2D_cpu/64        105179     87435    +16.9%
BM_tensor_fft_double_2D_cpu/65       1617143   1288006    +20.4%
BM_tensor_fft_double_2D_cpu/128       512848    419397    +18.2%
BM_tensor_fft_double_2D_cpu/129      7271322   5636884    +22.5%
BM_tensor_fft_double_2D_cpu/256      2415529   1922032    +20.4%
BM_tensor_fft_double_2D_cpu/257     32517952  24462177    +24.8%
BM_tensor_fft_double_2D_cpu/512     10724898   8287617    +22.7%
BM_tensor_fft_double_2D_cpu/513     146007419  108603266    +25.6%
BM_tensor_fft_double_2D_cpu/999     296351330  221885776    +25.1%
BM_tensor_fft_double_2D_cpu/1ki     59334166  48357539    +18.5%
BM_tensor_fft_double_2D_cpu/1k      666660132  483840349    +27.4%
2016-02-19 16:29:23 -08:00
Benoit Steiner
46fc23f91c Print an error message to stderr when the initialization of the CUDA runtime fails. This helps debugging setup issues. 2016-02-19 13:44:22 -08:00
Benoit Steiner
670db7988d Updated the contraction code to make it compatible with half floats. 2016-02-19 13:03:26 -08:00
Benoit Steiner
180156ba1a Added support for tensor reductions on half floats 2016-02-19 10:05:59 -08:00
Benoit Steiner
f268db1c4b Added the ability to query the minor version of a cuda device 2016-02-19 16:31:04 +00:00
Benoit Steiner
a08d2ff0c9 Started to work on contractions and reductions using half floats 2016-02-19 15:59:59 +00:00
Benoit Steiner
f3352e0fb0 Don't make the array constructors explicit 2016-02-19 15:58:57 +00:00
Benoit Steiner
cd042dbbfd Fixed a bug in the tensor type converter 2016-02-19 15:03:26 +00:00
Benoit Steiner
ac5d706a94 Added support for simple coefficient wise tensor expression using half floats on CUDA devices 2016-02-19 08:19:12 +00:00
Benoit Steiner
0606a0a39b FP16 on CUDA is only available starting with CUDA 7.5. Disable it when using an older version of CUDA 2016-02-18 23:15:23 -08:00
Benoit Steiner
f36c0c2c65 Added regression test for float16 2016-02-19 06:23:28 +00:00
Benoit Steiner
7151bd8768 Reverted unintended changes introduced by a bad merge 2016-02-19 06:20:50 +00:00
Benoit Steiner
17b9fbed34 Added preliminary support for half floats on CUDA GPU. For now we can simply convert floats into half floats and vice versa 2016-02-19 06:16:07 +00:00
Benoit Steiner
9e3f3a2d27 Deleted outdated comment 2016-02-11 17:27:35 -08:00
Benoit Steiner
de345eff2e Added a method to conjugate the content of a tensor or the result of a tensor expression. 2016-02-11 16:34:07 -08:00
Benoit Steiner
9a21b38ccc Worked around a few clang compilation warnings 2016-02-10 08:02:04 -08:00
Benoit Steiner
72ab7879f7 Fixed clang compilation warnings 2016-02-10 06:48:28 -08:00
Benoit Steiner
e88535634d Fixed some clang compilation warnings 2016-02-09 23:32:41 -08:00
Benoit Steiner
6323851ea9 Fixed compilation warning 2016-02-09 20:43:41 -08:00
Benoit Steiner
d69946183d Updated the TensorIntDivisor code to work properly on LLP64 systems 2016-02-08 21:03:59 -08:00
Benoit Steiner
4d4211c04e Avoid unnecessary type conversions 2016-02-05 18:19:41 -08:00
Benoit Steiner
d2cba52015 Only enable the cxx11_tensor_uint128 test on 64-bit machines since 32-bit systems don't support the __uint128_t type 2016-02-05 18:14:23 -08:00
Benoit Steiner
fb00a4af2b Made the tensor fft test compile on tegra x1 2016-02-06 01:42:14 +00:00
Benoit Steiner
f535378995 Added support for vectorized type casting of int to char. 2016-02-03 18:58:29 -08:00
Benoit Steiner
4ab63a3f6f Fixed the initialization of the dummy member of the array class to make it compatible with pairs of elements. 2016-02-03 17:23:07 -08:00
Benoit Steiner
1cbb79cdfd Made sure the dummy element of the size 0 array is always initialized to silence some compiler warnings 2016-02-03 15:58:26 -08:00
Benoit Steiner
5d82e47ef6 Properly disable nvcc warning messages in user code. 2016-02-03 14:10:06 -08:00
Benoit Steiner
af8436b196 Silenced the "calling a __host__ function from a __host__ __device__ function is not allowed" messages 2016-02-03 13:48:36 -08:00
Benoit Steiner
dc413dbe8a Merged in ville-k/eigen/explicit_long_constructors (pull request PR-158)
Add constructor for long types.
2016-02-02 20:58:06 -08:00
Ville Kallioniemi
783018d8f6 Use EIGEN_STATIC_ASSERT for backward compatibility. 2016-02-02 16:45:12 -07:00
Benoit Steiner
99cde88341 Don't try to use direct offsets when computing a tensor product, since the required stride isn't available. 2016-02-02 11:06:53 -08:00
Ville Kallioniemi
aedea349aa Replace separate low word constructors with a single templated constructor. 2016-02-01 20:25:02 -07:00
Ville Kallioniemi
f0fdefa96f Rebase to latest. 2016-02-01 19:32:31 -07:00
Benoit Steiner
64ce78c2ec Cleaned up a tensor contraction test 2016-02-01 13:57:41 -08:00
Benoit Steiner
0ce5d32be5 Sharded the cxx11_tensor_contract_cuda test 2016-02-01 13:33:23 -08:00
Benoit Steiner
922b5f527b Silenced a few compilation warnings 2016-02-01 13:30:49 -08:00
Benoit Steiner
6b5dff875e Made it possible to limit the number of blocks that will be used to evaluate a tensor expression on a CUDA device. This makes it possible to set aside streaming multiprocessors for other computations. 2016-02-01 12:46:32 -08:00
Benoit Steiner
264f8141f8 Sharded the tensor reduction test 2016-02-01 07:44:31 -08:00
Benoit Steiner
11bb71c8fc Sharded the tensor device test 2016-02-01 07:34:59 -08:00
Benoit Steiner
e80ed948e1 Fixed a number of compilation warnings generated by the cuda tests 2016-01-31 20:09:41 -08:00
Benoit Steiner
6720b38fbf Fixed a few compilation warnings 2016-01-31 16:48:50 -08:00
Benoit Steiner
4a2ddfb81d Sharded the CUDA argmax tensor test 2016-01-31 10:44:15 -08:00
Benoit Steiner
483082ef6e Fixed a few memory leaks in the cuda tests 2016-01-30 11:59:22 -08:00
Benoit Steiner
bd21aba181 Sharded the cxx11_tensor_cuda test and fixed a memory leak 2016-01-30 11:47:09 -08:00
Benoit Steiner
9de155d153 Added a test to cover threaded tensor shuffling 2016-01-30 10:56:47 -08:00
Benoit Steiner
32088c06a1 Made the comparison between single and multithreaded contraction results more resistant to numerical noise to prevent spurious test failures. 2016-01-30 10:51:14 -08:00
Benoit Steiner
2053478c56 Made sure to use a tensor of rank 0 to store the result of a full reduction in the tensor thread pool test 2016-01-30 10:46:36 -08:00
Benoit Steiner
d0db95f730 Sharded the tensor thread pool test 2016-01-30 10:43:57 -08:00
Benoit Steiner
ba27c8a7de Made the CUDA contract test more robust to numerical noise. 2016-01-30 10:28:43 -08:00
Benoit Steiner
963f2d2a8f Marked several methods EIGEN_DEVICE_FUNC 2016-01-28 23:37:48 -08:00
Benoit Steiner
c5d25bf1d0 Fixed a couple of compilation warnings. 2016-01-28 23:15:45 -08:00
Benoit Steiner
7b3044d086 Made sure to call nvcc with the relaxed-constexpr flag. 2016-01-28 15:36:34 -08:00
Gael Guennebaud
ddf64babde merge 2016-01-28 13:21:48 +01:00
Gael Guennebaud
7802a6bb1c Fix unit test filename. 2016-01-28 09:35:37 +01:00
Benoit Steiner
4bf9eaf77a Deleted an invalid assertion that prevented the assignment of empty tensors. 2016-01-27 17:09:30 -08:00
Benoit Steiner
291069e885 Fixed some compilation problems with nvcc + clang 2016-01-27 15:37:03 -08:00
Benoit Steiner
47ca9dc809 Fixed the tensor_cuda test 2016-01-27 14:58:48 -08:00
Benoit Steiner
55a5204319 Fixed the flags passed to nvcc to compile the tensor code. 2016-01-27 14:46:34 -08:00
Benoit Steiner
9dfbd4fe8d Made the cuda tests compile using make check 2016-01-27 12:22:17 -08:00
Benoit Steiner
5973bcf939 Properly specify the namespace when calling cout/endl 2016-01-27 12:04:42 -08:00
Gael Guennebaud
9c8f7dfe94 bug #1156: fix several function declarations whose arguments were passed by value instead of being passed by reference 2016-01-27 18:34:42 +01:00
Ville Kallioniemi
02db1228ed Add constructor for long types. 2016-01-26 23:41:01 -07:00
Hauke Heibel
5eb2790be0 Fixed minor typo in SplineFitting. 2016-01-25 22:17:52 +01:00
Benoit Steiner
e3a15a03a4 Don't explicitly evaluate the subexpression from TensorForcedEval::evalSubExprIfNeeded, as it will be done when executing the EvalTo subexpression 2016-01-24 23:04:50 -08:00
Benoit Steiner
bd207ce11e Added missing EIGEN_DEVICE_FUNC qualifier 2016-01-24 20:36:05 -08:00
Benoit Steiner
cb4e53ff7f Merged in ville-k/eigen/tensorflow_fix (pull request PR-153)
Add ctor for long
2016-01-22 19:11:31 -08:00
Ville Kallioniemi
9f94e030c1 Re-add executable flags to minimize changeset. 2016-01-22 20:08:45 -07:00
Benoit Steiner
3aeeca32af Leverage the new blocking code in the tensor contraction code. 2016-01-22 16:36:30 -08:00
Benoit Steiner
4beb447e27 Created a mechanism to enable contraction mappers to determine the best blocking strategy. 2016-01-22 14:37:26 -08:00
Gael Guennebaud
6a44ccb58b Backout changeset 690bc950f7 2016-01-22 15:03:53 +01:00
Ville Kallioniemi
9b6c72958a Update to latest default branch 2016-01-21 23:08:54 -07:00
Benoit Steiner
c33479324c Fixed a constness bug 2016-01-21 17:08:11 -08:00
Jan Prach
690bc950f7 fix clang warnings
"braces around scalar initializer"
2016-01-20 19:35:59 -08:00
Benoit Steiner
7ce932edd3 Small cleanup and small fix to the contraction of row major tensors 2016-01-20 18:12:08 -08:00
Benoit Steiner
47076bf00e Reduce the register pressure exerted by the tensor mappers whenever possible. This improves the performance of the contraction of a matrix with a vector by about 35%. 2016-01-20 14:51:48 -08:00
Ville Kallioniemi
915e7667cd Remove executable bit from header files 2016-01-19 21:17:29 -07:00
Ville Kallioniemi
2832175a68 Use explicitly 32 bit integer types in constructors. 2016-01-19 20:12:17 -07:00
Benoit Steiner
df79c00901 Improved the formatting of the code 2016-01-19 17:24:08 -08:00
Benoit Steiner
6d472d8375 Moved the contraction mapping code to its own file to make the code more manageable. 2016-01-19 17:22:05 -08:00
Benoit Steiner
b3b722905f Improved code indentation 2016-01-19 17:09:47 -08:00
Benoit Steiner
5b7713dd33 Record whether the underlying tensor storage can be accessed directly during the evaluation of an expression. 2016-01-19 17:05:10 -08:00
Ville Kallioniemi
63fb66f53a Add ctor for long 2016-01-17 21:25:36 -07:00
Benoit Steiner
34057cff23 Fixed a race condition that could affect some reductions on CUDA devices. 2016-01-15 15:11:56 -08:00
Benoit Steiner
0461f0153e Made it possible to compare tensor dimensions inside a CUDA kernel. 2016-01-15 11:22:16 -08:00
Benoit Steiner
aed4cb1269 Use warp shuffles instead of shared memory access to speedup the inner reduction kernel. 2016-01-14 21:45:14 -08:00
Benoit Steiner
8fe2532e70 Fixed a boundary condition bug in the outer reduction kernel 2016-01-14 09:29:48 -08:00
Benoit Steiner
9f013a9d86 Properly record the rank of reduced tensors in the tensor traits. 2016-01-13 14:24:37 -08:00
Benoit Steiner
79b69b7444 Trigger the optimized matrix vector path more conservatively. 2016-01-12 15:21:09 -08:00
Benoit Steiner
d920d57f38 Improved the performance of the contraction of a 2d tensor with a 1d tensor by a factor of 3 or more. This helps speedup LSTM neural networks. 2016-01-12 11:32:27 -08:00
Benoit Steiner
bd7d901da9 Reverted a previous change that tripped nvcc when compiling in debug mode. 2016-01-11 17:49:44 -08:00
Benoit Steiner
c5e6900400 Silenced a few compilation warnings. 2016-01-11 17:06:39 -08:00
Benoit Steiner
f894736d61 Updated the tensor traits: the alignment is not part of the Flags enum anymore 2016-01-11 16:42:18 -08:00
Benoit Steiner
4f7714d72c Enabled the use of fixed dimensions from within a cuda kernel. 2016-01-11 16:01:00 -08:00
Benoit Steiner
01c55d37e6 Deleted unused variable. 2016-01-11 15:53:19 -08:00
Benoit Steiner
0504c56ea7 Silenced a nvcc compilation warning 2016-01-11 15:49:21 -08:00
Benoit Steiner
b523771a24 Silenced several compilation warnings triggered by nvcc. 2016-01-11 14:25:43 -08:00
Benoit Steiner
2c3b13eded Merged in jeremy_barnes/eigen/shader-model-3.0 (pull request PR-152)
Alternative way of forcing instantiation of device kernels without causing warnings or requiring device to device kernel invocations.
2016-01-11 11:43:37 -08:00
Benoit Steiner
2ccb1c8634 Fixed a bug in the dispatch of optimized reduction kernels. 2016-01-11 10:36:37 -08:00
Benoit Steiner
780623261e Re-enabled the optimized reduction CUDA code. 2016-01-11 09:07:14 -08:00
Jeremy Barnes
91678f489a Cleaned up double-defined macro from last commit 2016-01-10 22:44:45 -05:00
Jeremy Barnes
403a7cb6c3 Alternative way of forcing instantiation of device kernels without
causing warnings or requiring device to device kernel invocations.

This allows Tensorflow to work on SM 3.0 (ie, Amazon EC2) machines.
2016-01-10 22:39:13 -05:00
Benoit Steiner
e76904af1b Simplified the dispatch code. 2016-01-08 16:50:57 -08:00
Benoit Steiner
d726e864ac Made it possible to use array of size 0 on CUDA devices 2016-01-08 16:38:14 -08:00
Benoit Steiner
3358dfd5dd Reworked the dispatch of optimized cuda reduction kernels to work around an nvcc bug that prevented the code from compiling in optimized mode in some cases 2016-01-08 16:28:53 -08:00
Benoit Steiner
53749ff415 Prevent nvcc from miscompiling the cuda metakernel. Unfortunately this reintroduces some compilation warnings but it's much better than having to deal with random assertion failures. 2016-01-08 13:53:40 -08:00
Benoit Steiner
6639b7d6e8 Removed a couple of partial specialization that confuse nvcc and result in errors such as this:
error: more than one partial specialization matches the template argument list of class "Eigen::internal::get<3, Eigen::internal::numeric_list<std::size_t, 1UL, 1UL, 1UL, 1UL>>"
            "Eigen::internal::get<n, Eigen::internal::numeric_list<T, a, as...>>"
            "Eigen::internal::get<n, Eigen::internal::numeric_list<T, as...>>"
2016-01-07 18:45:19 -08:00
Benoit Steiner
0cb2ca5de2 Fixed a typo. 2016-01-06 18:50:28 -08:00
Benoit Steiner
213459d818 Optimized the performance of broadcasting of scalars. 2016-01-06 18:47:45 -08:00
Benoit Steiner
cfff40b1d4 Improved the performance of reductions on CUDA devices 2016-01-04 17:25:00 -08:00
Benoit Steiner
515dee0baf Added a 'divup' util to compute the ceiling of the quotient of two integers 2016-01-04 16:29:26 -08:00
Gael Guennebaud
8b0d1eb0f7 Fix numerous doxygen shortcomings, and work around some clang -Wdocumentation warnings 2016-01-01 21:45:06 +01:00
Gael Guennebaud
978c379ed7 Add missing ctor from uint 2015-12-30 12:52:38 +01:00
Eugene Brevdo
f7362772e3 Add digamma for CPU + CUDA. Includes tests. 2015-12-24 21:15:38 -08:00
Benoit Steiner
bdcbc66a5c Don't attempt to vectorize mean reductions of integers since we can't use
SSE or AVX instructions to divide 2 integers.
2015-12-22 17:51:55 -08:00
Benoit Steiner
a1e08fb2a5 Optimized the configuration of the outer reduction cuda kernel 2015-12-22 16:30:10 -08:00
Benoit Steiner
9c7d96697b Added missing define 2015-12-22 16:11:07 -08:00
Benoit Steiner
e7e6d01810 Made sure the optimized gpu reduction code is actually compiled. 2015-12-22 15:07:33 -08:00
Benoit Steiner
b5d2078c4a Optimized outer reduction on GPUs. 2015-12-22 15:06:17 -08:00
Benoit Steiner
1c3e78319d Added missing const 2015-12-21 15:05:01 -08:00
Benoit Steiner
1b82969559 Add alignment requirement for local buffer used by the slicing op. 2015-12-18 14:36:35 -08:00
Benoit Steiner
75a7fa1919 Doubled the speed of full reductions on GPUs. 2015-12-18 14:07:31 -08:00
Benoit Steiner
8dd17cbe80 Fixed a clang compilation warning triggered by the use of arrays of size 0. 2015-12-17 14:00:33 -08:00
Benoit Steiner
4aac55f684 Silenced some compilation warnings triggered by nvcc 2015-12-17 13:39:01 -08:00
Benoit Steiner
40e6250fc3 Made it possible to run tensor chipping operations on CUDA devices 2015-12-17 13:29:08 -08:00
Benoit Steiner
2ca55a3ae4 Fixed some compilation errors triggered by the tensor code with msvc 2008 2015-12-16 20:45:58 -08:00
Gael Guennebaud
35d8725c73 Disable AutoDiffScalar generic copy ctor for non compatible scalar types (fix ambiguous template instantiation) 2015-12-16 10:14:24 +01:00
Christoph Hertzberg
92655e7215 bug #1136: Protect isinf for Intel compilers. Also don't distinguish GCC from ICC and don't rely on EIGEN_NOT_A_MACRO, which might not be defined when including this. 2015-12-15 11:34:52 +01:00
Benoit Steiner
17352e2792 Made the entire TensorFixedSize api callable from a CUDA kernel. 2015-12-14 15:20:31 -08:00
Benoit Steiner
75e19fc7ca Marked the tensor constructors as EIGEN_DEVICE_FUNC: This makes it possible to call them from a CUDA kernel. 2015-12-14 15:12:55 -08:00
Gael Guennebaud
ca39b1546e Merged in ebrevdo/eigen (pull request PR-148)
Add special functions to eigen: lgamma, erf, erfc.
2015-12-11 11:52:09 +01:00
Benoit Steiner
6af52a1227 Fixed a typo in the constructor of tensors of rank 5. 2015-12-10 23:31:12 -08:00
Benoit Steiner
2d8f2e4042 Made 2 tests compile without cxx11.
2015-12-10 23:20:04 -08:00
Benoit Steiner
8d28a161b2 Use the proper accessor to refer to the value of a scalar tensor 2015-12-10 22:53:56 -08:00
Benoit Steiner
8e00ea9a92 Fixed the coefficient accessors use for the 2d and 3d case when compiling without cxx11 support. 2015-12-10 22:45:10 -08:00
Benoit Steiner
9db8316c93 Updated the cxx11_tensor_custom_op to not require cxx11. 2015-12-10 20:53:44 -08:00
Benoit Steiner
4e324ca6ae Updated the cxx11_tensor_assign test to make it compile without support for cxx11 2015-12-10 20:47:25 -08:00
Eugene Brevdo
fa4f933c0f Add special functions to Eigen: lgamma, erf, erfc.
Includes CUDA support and unit tests.
2015-12-07 15:24:49 -08:00
Benoit Steiner
7dfe75f445 Fixed compilation warnings 2015-12-07 08:12:30 -08:00
Gael Guennebaud
ad3d68400e Add matrix-free solver example 2015-12-07 12:33:38 +01:00
Gael Guennebaud
b37036afce Implement wrapper for matrix-free iterative solvers 2015-12-07 12:23:22 +01:00
Benoit Steiner
f4ca8ad917 Use signed integers instead of unsigned ones more consistently in the codebase. 2015-12-04 18:14:16 -08:00
Benoit Steiner
490d26e4c1 Use integers instead of std::size_t to encode the number of dimensions in the Tensor class since most of the code currently already use integers. 2015-12-04 10:15:11 -08:00
Benoit Steiner
d20efc974d Made it possible to use the sigmoid functor within a CUDA kernel. 2015-12-04 09:38:15 -08:00
Benoit Steiner
029052d276 Deleted redundant code 2015-12-03 17:08:47 -08:00
Gael Guennebaud
fd727249ad Update ADOL-C support. 2015-11-30 16:00:22 +01:00
Gael Guennebaud
da46b1ed54 bug #1112: fix compilation on exotic architectures 2015-11-27 15:57:18 +01:00
Mark Borgerding
7ddcf97da7 added scalar_sign_op (both real,complex) 2015-11-24 17:15:07 -05:00
Benoit Steiner
44848ac39b Fixed a bug in TensorArgMax.h 2015-11-23 15:58:47 -08:00
Benoit Steiner
547a8608e5 Fixed the implementation of Eigen::internal::count_leading_zeros for MSVC.
Also updated the code to silence bogus warnings generated by nvcc when compiling this function.
2015-11-23 12:17:45 -08:00
Benoit Steiner
562078780a Don't create more cuda blocks than necessary 2015-11-23 11:00:10 -08:00
Benoit Steiner
df31ca3b9e Made it possible to refer to a GPUDevice from code compiled with a regular C++ compiler 2015-11-23 10:03:53 -08:00
Benoit Steiner
1e04059012 Deleted unused variable. 2015-11-23 08:36:54 -08:00
Benoit Steiner
9fa65d3838 Split TensorDeviceType.h in 3 files to make it more manageable 2015-11-20 17:42:50 -08:00
Benoit Steiner
a367804856 Added option to force the usage of the Eigen array class instead of the std::array class. 2015-11-20 12:41:40 -08:00
Benoit Steiner
86486eee2d Pulled latest updates from trunk 2015-11-20 11:10:37 -08:00
Benoit Steiner
383d1cc2ed Added proper support for fast 64bit integer division on CUDA 2015-11-20 11:09:46 -08:00
Benoit Steiner
0ad7c7b1ad Fixed another clang compilation warning 2015-11-19 15:52:51 -08:00
Benoit Steiner
66ff9b2c6c Fixed compilation warning generated by clang 2015-11-19 15:40:32 -08:00
Benoit Steiner
f37a5f1c53 Fixed compilation error triggered by nvcc 2015-11-19 14:34:26 -08:00
Benoit Steiner
04f1284f9a Shard the uint128 test 2015-11-19 14:08:08 -08:00
Benoit Steiner
e2859c6b71 Cleanup the integer division test 2015-11-19 14:07:50 -08:00
Benoit Steiner
f8df393165 Added support for 128bit integers on CUDA devices. 2015-11-19 13:57:27 -08:00
Benoit Steiner
1dd444ea71 Avoid using the version of TensorIntDiv optimized for 32-bit integers when the divisor can be equal to one since it isn't supported. 2015-11-18 11:37:58 -08:00
Benoit Steiner
f1fbd74db9 Added sanity check 2015-11-13 09:07:27 -08:00
Benoit Steiner
7815b84be4 Fixed a compilation warning 2015-11-12 20:16:59 -08:00
Benoit Steiner
10a91930cc Fixed a compilation warning triggered by nvcc 2015-11-12 20:10:52 -08:00
Benoit Steiner
ed4b37de02 Fixed a few compilation warnings 2015-11-12 20:08:01 -08:00
Benoit Steiner
b69248fa2a Added a couple of missing EIGEN_DEVICE_FUNC 2015-11-12 20:01:50 -08:00
Benoit Steiner
0aaa5941df Silenced some compilation warnings triggered by nvcc 2015-11-12 19:11:43 -08:00
Benoit Steiner
2c73633b28 Fixed a few more typos 2015-11-12 18:39:19 -08:00
Benoit Steiner
be08e82953 Fixed typos 2015-11-12 18:37:40 -08:00
Benoit Steiner
150c12e138 Completed the IndexList rewrite 2015-11-12 18:11:56 -08:00
Benoit Steiner
8037826367 Simplified more of the IndexList code. 2015-11-12 17:19:45 -08:00
Benoit Steiner
e9ecfad796 Started to make the IndexList code compile by more compilers 2015-11-12 16:41:14 -08:00
Benoit Steiner
7a1316fcc5 Fixed compilation error with xcode. 2015-11-12 11:05:54 -08:00
Benoit Steiner
737d237722 Made it possible to run some of the CXXMeta functions on a CUDA device. 2015-11-12 09:02:59 -08:00
Benoit Steiner
1e072424e8 Moved the array code into its own file. 2015-11-12 08:57:04 -08:00
Benoit Steiner
aa5f1ca714 gen_numeric_list takes a size_t, not a int 2015-11-12 08:30:10 -08:00
Benoit Steiner
9fa10fe52d Don't use std::array when compiling with nvcc since nvidia doesn't support the use of STL containers on GPU. 2015-11-11 15:38:30 -08:00
Benoit Steiner
c587293e48 Fixed a compilation warning 2015-11-11 15:35:12 -08:00
Benoit Steiner
7f1c29fb0c Make it possible for a vectorized tensor expression to be executed in a CUDA kernel. 2015-11-11 15:22:50 -08:00
Benoit Steiner
99f4778506 Disable SFINAE when compiling with nvcc 2015-11-11 15:04:58 -08:00
Benoit Steiner
5cb18e5b5e Fixed CUDA compilation errors 2015-11-11 14:36:33 -08:00
Benoit Steiner
228edfe616 Use Eigen::NumTraits instead of std::numeric_limits 2015-11-11 09:26:23 -08:00
Benoit Steiner
20e2ab1121 Fixed another compilation warning 2015-12-07 16:17:57 -08:00
Benoit Steiner
d573efe303 Code cleanup 2015-11-06 14:54:28 -08:00
Benoit Steiner
9fa283339f Silenced a compilation warning 2015-11-06 11:44:22 -08:00
Benoit Steiner
53432a17b2 Added static assertions to avoid misuses of padding, broadcasting and concatenation ops. 2015-11-06 10:26:19 -08:00
Benoit Steiner
6857a35a11 Fixed typos 2015-11-06 09:42:05 -08:00
Benoit Steiner
33cbdc2d15 Added more missing EIGEN_DEVICE_FUNC 2015-11-06 09:29:59 -08:00
Benoit Steiner
ed1962b464 Reimplement the tensor comparison operators by using the scalar_cmp_op functors. This makes them more cuda friendly. 2015-11-06 09:18:43 -08:00
Benoit Steiner
29038b982d Added support for modulo operation 2015-11-05 19:39:48 -08:00
Benoit Steiner
fbcf8cc8c1 Pulled latest updates from trunk 2015-11-05 14:30:02 -08:00
Benoit Steiner
0d15ad8019 Updated the regressions tests that cover full reductions 2015-11-05 14:22:30 -08:00
Benoit Steiner
c75a19f815 Misc fixes to full reductions 2015-11-05 14:21:20 -08:00
Benoit Steiner
ec5a81b45a Fixed a bug in the extraction of sizes of fixed sized tensors of rank 0 2015-11-05 13:39:48 -08:00
Gael Guennebaud
589b839ad0 Add unit test for Hessian via AutoDiffScalar 2015-11-05 14:54:05 +01:00
Gael Guennebaud
9ceaa8e445 bug #1063: nest AutoDiffScalar by value to avoid dead references 2015-11-05 13:54:26 +01:00
Benoit Steiner
beedd9630d Updated the reduction code so that full reductions now return a tensor of rank 0. 2015-11-04 13:57:36 -08:00
Benoit Steiner
6a02c2a85d Fixed a compilation warning 2015-10-29 20:21:29 -07:00
Benoit Steiner
ca12d4c3b3 Pulled latest updates from trunk 2015-10-29 17:57:48 -07:00
Benoit Steiner
31bdafac67 Added a few tests to cover rank-0 tensors 2015-10-29 17:56:48 -07:00
Benoit Steiner
ce19e38c1f Added support for tensor maps of rank 0. 2015-10-29 17:49:04 -07:00
Benoit Steiner
3785c69294 Added support for fixed sized tensors of rank 0 2015-10-29 17:31:03 -07:00
Benoit Steiner
0d7a23d34e Extended the reduction code so that reducing an empty set returns the neutral element for the operation 2015-10-29 17:29:49 -07:00
Benoit Steiner
1b0685d09a Added support for rank-0 tensors 2015-10-29 17:27:38 -07:00
Benoit Steiner
c444a0a8c3 Consistently use the same index type in the fft codebase. 2015-10-29 16:39:47 -07:00
Benoit Steiner
09ea3a7acd Silenced a few more compilation warnings 2015-10-29 16:22:52 -07:00
Benoit Steiner
0974a57910 Silenced compiler warning 2015-10-29 15:00:06 -07:00
Gael Guennebaud
77ff3386b7 Refactoring of the cost model:
- Dynamic is now an invalid value
 - introduce a HugeCost constant to be used for runtime-cost values or arbitrarily huge cost
 - add sanity checks for cost values: must be >=0 and not too large
This change provides several benefits:
 - it fixes shortcomings in some cost computations where the Dynamic case was not properly handled.
 - it simplifies cost computation logic, and should avoid future similar shortcomings.
 - it allows distinguishing between different levels of dynamic/huge/infinite cost
 - it should enable further simplifications in the computation of costs (save compilation time)
2015-10-28 11:42:14 +01:00
Gael Guennebaud
d4cf436cb1 Enable mpreal unit test for C++11 compiler only 2015-10-27 17:35:54 +01:00
Benoit Steiner
1c8312c811 Started to add support for tensors of rank 0 2015-10-26 14:29:26 -07:00
Benoit Steiner
1f4c98abb1 Fixed compilation warning 2015-10-26 12:42:55 -07:00
Benoit Steiner
9dc236bc83 Fixed compilation warning 2015-10-26 12:41:48 -07:00
Benoit Steiner
9f721384e0 Added support for empty dimensions 2015-10-26 11:21:27 -07:00
Benoit Steiner
a3e144727c Fixed compilation warning 2015-10-26 10:48:11 -07:00
Benoit Steiner
f8e7b9590d Fixed compilation error triggered by gcc 4.7 2015-10-26 10:47:37 -07:00
Gael Guennebaud
a5324a131f bug #1092: fix iterative solver ctors for expressions as input 2015-10-26 16:16:24 +01:00
Gael Guennebaud
af2e25d482 Merged in infinitei/eigen (pull request PR-140)
bug #1097 Added ArpackSupport to cmake install target
2015-10-26 15:31:39 +01:00
Abhijit Kundu
0ed41bdefa ArpackSupport was missing here also. 2015-10-16 18:21:02 -07:00
Abhijit Kundu
1127ca8586 Added ArpackSupport to cmake install target 2015-10-16 16:41:33 -07:00
Benoit Steiner
de1e9f29f4 Updated the custom indexing code: we can now use any container that provides the [] operator to index a tensor. Added unit tests to validate the use of std::map and a few more types as valid custom index containers 2015-10-15 14:58:49 -07:00
Benoit Steiner
6585efc553 Tightened the definition of isOfNormalIndex to take into account integer types in addition to arrays of indices
Only compile the custom index code when EIGEN_HAS_SFINAE is defined. For the time being, EIGEN_HAS_SFINAE is a synonym for EIGEN_HAS_VARIADIC_TEMPLATES, but this might evolve in the future.
Moved some code around.
2015-10-14 09:31:37 -07:00
Gabriel Nützi
fc7478c04d name changes 2
user: Gabriel Nützi <gnuetzi@gmx.ch>
branch 'default'
changed unsupported/Eigen/CXX11/src/Tensor/Tensor.h
changed unsupported/Eigen/CXX11/src/Tensor/TensorMeta.h
2015-10-09 19:10:08 +02:00
Gabriel Nützi
7b34834f64 name changes
user: Gabriel Nützi <gnuetzi@gmx.ch>
branch 'default'
changed unsupported/Eigen/CXX11/src/Tensor/Tensor.h
2015-10-09 19:08:14 +02:00