Commit Graph

8007 Commits

Author SHA1 Message Date
Benoit Steiner
2a54b70d45 Fixed potential race condition in the non blocking thread pool 2016-05-12 11:45:48 -07:00
Benoit Steiner
a071629fec Replace implicit cast with an explicit one 2016-05-12 10:40:07 -07:00
Benoit Steiner
2f9401b061 Worked around compilation errors with older versions of gcc 2016-05-11 23:39:20 -07:00
Benoit Steiner
09653e1f82 Improved the portability of the tensor code 2016-05-11 23:29:09 -07:00
Benoit Steiner
fae0493f98 Fixed a couple of bugs related to the Pascal family of GPUs
2016-05-11 23:02:26 -07:00
Benoit Steiner
886445ce4d Avoid unnecessary conversions between floats and doubles 2016-05-11 23:00:03 -07:00
Benoit Steiner
595e890391 Added more tests for half floats 2016-05-11 21:27:15 -07:00
Benoit Steiner
b6a517c47d Added the ability to load fp16 using the texture path.
Improved the performance of some reductions on fp16
2016-05-11 21:26:48 -07:00
Benoit Steiner
518149e868 Misc fixes for fp16 2016-05-11 20:11:14 -07:00
Benoit Steiner
56a1757d74 Made predux_min and predux_max on fp16 less noisy 2016-05-11 17:37:34 -07:00
Benoit Steiner
9091351dbe __ldg is only available with cuda architectures >= 3.5 2016-05-11 15:22:13 -07:00
Benoit Steiner
02f76dae2d Fixed a typo 2016-05-11 15:08:38 -07:00
Christoph Hertzberg
131e5a1a4a Do not copy for trivial 1x1 case. This also avoids a "maybe-uninitialized" warning in some situations. 2016-05-11 23:50:13 +02:00
Benoit Steiner
70195a5ff7 Added missing EIGEN_DEVICE_FUNC 2016-05-11 14:10:09 -07:00
Benoit Steiner
09a19c33a8 Added missing EIGEN_DEVICE_FUNC qualifiers 2016-05-11 14:07:43 -07:00
Christoph Hertzberg
1a1ce6ff61 Removed deprecated flag (which apparently was ignored anyway) 2016-05-11 23:05:37 +02:00
Christoph Hertzberg
2150f13d65 fixed some double-promotion and sign-compare warnings 2016-05-11 23:02:26 +02:00
Christoph Hertzberg
7268b10203 Split unit test 2016-05-11 19:41:53 +02:00
Christoph Hertzberg
8d4ef391b0 Don't flood test output with successful VERIFY_IS_NOT_EQUAL tests. 2016-05-11 19:40:45 +02:00
Christoph Hertzberg
bda21407dd Fix help output of buildtests and check scripts 2016-05-11 19:39:09 +02:00
Christoph Hertzberg
33ca7e3c8d bug #1207: Add and fix logical-op warnings 2016-05-11 19:36:34 +02:00
Benoit Steiner
217d984abc Fixed a typo in my previous commit 2016-05-11 10:22:15 -07:00
Benoit Steiner
08348b4e48 Fix potential race condition in the CUDA reduction code. 2016-05-11 10:08:51 -07:00
Benoit Steiner
cbb14ed47e Added a few tests to validate the generation of random tensors on GPU. 2016-05-11 10:05:56 -07:00
Benoit Steiner
6a5717dc74 Explicitly initialize all the atomic variables. 2016-05-11 10:04:41 -07:00
Christoph Hertzberg
0f61343893 Workaround maybe-uninitialized warning 2016-05-11 09:00:18 +02:00
Christoph Hertzberg
3bfc9b47ca Workaround "misleading-indentation" warnings 2016-05-11 08:41:36 +02:00
Benoit Steiner
4ede059de1 Properly gate the use of half2. 2016-05-10 17:04:01 -07:00
Benoit Steiner
bf185c3c28 Extended the tests for ptanh 2016-05-10 16:21:43 -07:00
Benoit Steiner
661e710092 Added support for fp16 to the sigmoid functor. 2016-05-10 12:25:27 -07:00
Benoit Steiner
0eb69b7552 Small improvement to the full reduction of fp16 2016-05-10 11:58:18 -07:00
Benoit Steiner
0b9e3dcd06 Added packet primitives to compute exp, log, sqrt and rsqrt on fp16. This improves the performance by 10 to 30%. 2016-05-10 11:05:33 -07:00
Benoit Steiner
6bf8273bc0 Added a test to validate the new non blocking thread pool 2016-05-10 10:49:34 -07:00
Benoit Steiner
4013b8feca Simplified the reduction code a little. 2016-05-10 09:40:42 -07:00
Benoit Steiner
75bd2bd32d Fixed compilation warning 2016-05-09 19:24:41 -07:00
Benoit Steiner
4670d7d5ce Improved the performance of full reductions on GPU:
Before:
BM_fullReduction/10       200000      11751     8.51 MFlops/s
BM_fullReduction/80         5000     523385    12.23 MFlops/s
BM_fullReduction/640          50   36179326    11.32 MFlops/s
BM_fullReduction/4K            1 2173517195    11.50 MFlops/s

After:
BM_fullReduction/10       500000       5987    16.70 MFlops/s
BM_fullReduction/80       200000      10636   601.73 MFlops/s
BM_fullReduction/640       50000      58428  7010.31 MFlops/s
BM_fullReduction/4K         1000    2006106 12461.95 MFlops/s
2016-05-09 17:09:54 -07:00
Benoit Steiner
c3859a2b58 Added the ability to use a scratch buffer in cuda kernels 2016-05-09 17:05:53 -07:00
Benoit Steiner
ba95e43ea2 Added a new parallelFor api to the thread pool device. 2016-05-09 10:45:12 -07:00
Benoit Steiner
dc7dbc2df7 Optimized the non blocking thread pool:
* Use a pseudo-random permutation of queue indices during random stealing. This ensures that all the queues are considered.
* Directly pop from a non-empty queue when we are waiting for work, instead of first noticing that there is a non-empty queue and then doing another round of random stealing to re-discover the non-empty queue.
* Steal only 1 task from a remote queue instead of half of the tasks.
2016-05-09 10:17:17 -07:00
Benoit Steiner
05c365fb16 Pulled latest updates from trunk 2016-05-07 13:39:04 -07:00
Benoit Steiner
691614bd2c Worked around a bug in nvcc on tegra x1 2016-05-07 13:28:53 -07:00
Benoit Steiner
a2d94fc216 Merged latest updates from trunk 2016-05-06 19:17:57 -07:00
Benoit Steiner
8adf5cc70f Added support for packet processing of fp16 on kepler and maxwell gpus 2016-05-06 19:16:43 -07:00
Benoit Steiner
1660e749b4 Avoid double promotion 2016-05-06 08:15:12 -07:00
Christoph Hertzberg
a11bd82dc3 bug #1213: Give names to anonymous enums 2016-05-06 11:31:56 +02:00
Benoit Steiner
c54ae65c83 Marked a few tensor operations as read only 2016-05-05 17:18:47 -07:00
Benoit Steiner
69a8a4e1f3 Added a test to validate full reduction on tensor of half floats 2016-05-05 16:52:50 -07:00
Benoit Steiner
678a17ba79 Made the testing of contractions on fp16 more robust 2016-05-05 16:36:39 -07:00
Benoit Steiner
e3d053e14e Refined the testing of log and exp on fp16 2016-05-05 16:24:15 -07:00
Benoit Steiner
9a48688d37 Further improved the testing of fp16 2016-05-05 15:58:05 -07:00