Benoit Steiner
134d750eab
Completed the implementation of vectorized type casting of half floats.
2016-03-18 13:36:28 -07:00
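
A minimal sketch of what a vectorized half-to-float cast can look like on the device, using the CUDA fp16 conversion intrinsic (the guard value and function name are illustrative, not Eigen's exact code):

    #include <cuda_fp16.h>

    // Convert a packet of two halfs to two floats in a single intrinsic call.
    #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 300
    __device__ inline float2 cast_half2_to_float2(const __half2& h) {
      return __half22float2(h);  // hardware fp16 -> fp32 conversion
    }
    #endif
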
Benoit Steiner
7bd551b3a9
Make all the conversions explicit
2016-03-18 12:20:08 -07:00
Benoit Steiner
7b98de1f15
Implemented some of the missing type casting for half floats
2016-03-17 21:45:45 -07:00
Christoph Hertzberg
46aa9772fc
Merged in ebrevdo/eigen (pull request PR-169)
...
Bugfixes to cuda tests, igamma & igammac implemented, & tests for digamma, igamma, igammac on CPU & GPU.
2016-03-16 21:59:08 +01:00
Eugene Brevdo
1f69a1b65f
Change the header guard around certain numext functions to be CUDA specific.
2016-03-16 12:44:35 -07:00
Benoit Steiner
5a51366ea5
Fixed a typo.
2016-03-14 09:25:16 -07:00
Benoit Steiner
fcf59e1c37
Properly gate the use of CUDA intrinsics in the code
2016-03-14 09:13:44 -07:00
Benoit Steiner
97a1f1c273
Make sure we only use the half float intrinsics when compiling with a version of CUDA that is recent enough to provide them
2016-03-14 08:37:58 -07:00
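
A sketch of this kind of gating, with assumed thresholds: the fp16 arithmetic intrinsics such as __hadd only exist on sm_53+ devices with a recent enough toolkit, so older targets fall back to float math:

    #include <cuda_fp16.h>

    #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
    __device__ inline __half add(__half a, __half b) {
      return __hadd(a, b);  // native fp16 add
    }
    #else
    __device__ inline __half add(__half a, __half b) {
      // No fp16 ALU: widen to float, add, narrow back.
      return __float2half(__half2float(a) + __half2float(b));
    }
    #endif
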
Benoit Steiner
e29c9676b1
Don't mark the cast operator as explicit, since this is a C++11 feature that's not supported by older compilers.
2016-03-12 00:15:58 -08:00
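
For context, a simplified sketch of the portability concern: explicit on a conversion operator is C++11-only, so a header that must also build pre-C++11 either drops the keyword unconditionally (the choice made here) or gates it on the language version:

    struct half {
      unsigned short x;  // raw binary16 bits
    #if __cplusplus >= 201103L
      explicit operator float() const;  // C++11: blocks accidental implicit use
    #else
      operator float() const;           // pre-C++11: implicit is the only option
    #endif
      // conversion body omitted
    };
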
Benoit Steiner
eecd914864
Also replaced uint32_t with unsigned int to make the code more portable
2016-03-11 19:34:21 -08:00
Benoit Steiner
1ca8c1ec97
Replaced a couple more uint16_t with unsigned short
2016-03-11 19:28:28 -08:00
Benoit Steiner
0423b66187
Use unsigned short instead of uint16_t since it's more portable
2016-03-11 17:53:41 -08:00
Benoit Steiner
048c4d6efd
Made half floats usable on hardware that doesn't support them natively.
2016-03-11 17:21:42 -08:00
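
A simplified sketch of the emulation idea, assuming the unsigned-short bit representation used by the surrounding commits (this toy conversion handles only normal numbers, ignoring zeros, denormals, infinities and NaN):

    #include <cstring>

    struct half { unsigned short x; };  // raw IEEE 754 binary16 bit pattern

    inline float half_to_float(half h) {
      unsigned int sign = (unsigned int)(h.x & 0x8000u) << 16;
      unsigned int expo = (h.x >> 10) & 0x1Fu;
      unsigned int mant = h.x & 0x3FFu;
      // Re-bias the exponent from 15 to 127 and widen the mantissa to 23 bits.
      unsigned int bits = sign | ((expo + 112u) << 23) | (mant << 13);
      float f;
      std::memcpy(&f, &bits, sizeof f);
      return f;  // arithmetic can then be done in float and narrowed back
    }
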
Benoit Steiner
456e038a4e
Fixed the +=, -=, *= and /= operators to return a reference
2016-03-10 15:17:44 -08:00
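
Why the return type matters, as a standalone illustration (a stand-in type, not Eigen's half):

    #include <cstdio>

    struct Half {
      float v;  // stand-in storage; a real fp16 would pack 16 bits
      Half& operator+=(const Half& o) {
        v += o.v;
        return *this;  // returning by value here would break chaining
      }
    };

    int main() {
      Half a = {1.0f}, b = {2.0f};
      (a += b) += b;             // only updates `a` because += yields a reference
      std::printf("%g\n", a.v);  // prints 5
      return 0;
    }
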
Eugene Brevdo
836e92a051
Update MathFunctions/SpecialFunctions with intelligent header guards.
2016-03-09 09:04:45 -08:00
Eugene Brevdo
5e7de771e3
Properly fix merge issues.
2016-03-08 17:35:05 -08:00
Eugene Brevdo
73220d2bb0
Resolve bad merge.
2016-03-08 17:28:21 -08:00
Eugene Brevdo
14f0fde51f
Add certain functions to numext (log, exp, tan) because CUDA doesn't support std::
...
Use these in SpecialFunctions.
2016-03-08 17:17:44 -08:00
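
A sketch of the numext dispatch pattern, with the device/host annotation simplified (names are illustrative): under nvcc, device code cannot call std:: math, so the wrapper forwards to the global CUDA math functions instead:

    #include <cmath>

    // Simplified stand-in for Eigen's device/host annotation macro.
    #ifdef __CUDACC__
    #define DEVICE_FUNC __host__ __device__
    #else
    #define DEVICE_FUNC
    #endif

    namespace numext {
    template <typename T>
    DEVICE_FUNC inline T log(const T& x) {
    #ifdef __CUDA_ARCH__
      return ::log(x);     // device pass: CUDA's global math function
    #else
      return std::log(x);  // host pass: the standard library overload
    #endif
    }
    }  // namespace numext
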
Eugene Brevdo
0bb5de05a1
Finishing touches on igamma/igammac for GPU. Tests now pass.
2016-03-07 15:35:09 -08:00
Eugene Brevdo
5707004d6b
Fix Eigen's building of sharded tests that use CUDA, plus more igamma/igammac bugfixes.
...
0. Prior to this PR, not a single sharded CUDA test was actually being *run*.
Fixed that.
GPU tests are still failing for igamma/igammac.
1. Add calls for igamma/igammac to TensorBase
2. Fix up CUDA-specific calls of igamma/igammac
3. Add unit tests for digamma, igamma, igammac in CUDA.
2016-03-07 14:08:56 -08:00
Benoit Steiner
05bbca079a
Turn on some of the C++11 features when compiling with Visual Studio 2015
2016-03-05 10:52:08 -08:00
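
The detection trick, sketched with an assumed macro name: MSVC does not advance __cplusplus to reflect its language support, so Visual Studio 2015 (_MSC_VER 1900) has to be whitelisted explicitly:

    #if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)
    #define EXAMPLE_HAS_CXX11 1  // hypothetical name, for illustration only
    #endif
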
Eugene Brevdo
0b9e0abc96
Make igamma and igammac work correctly.
...
This required replacing ::abs with std::abs.
Modified some unit tests.
2016-03-04 21:12:10 -08:00
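
The pitfall behind the fix, for reference: the globally-qualified ::abs can resolve to the C int abs(int) overload, silently truncating floating-point arguments, while std::abs from <cmath> provides the floating-point overloads:

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    int main() {
      double x = -0.75;
      // With only <cstdlib> in scope, abs(x) may bind to int abs(int),
      // converting -0.75 to 0 before the absolute value is taken.
      std::printf("std::abs(%g) = %g\n", x, std::abs(x));  // prints 0.75
      return 0;
    }
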
Eugene Brevdo
7ea35bfa1c
Initial implementation of igamma and igammac.
2016-03-03 19:39:41 -08:00
Benoit Steiner
1032441c6f
Enable partial support for half floats on Kepler GPUs.
2016-03-03 10:34:20 -08:00
Benoit Steiner
1da10a7358
Enable the conversion between floats and half floats on older GPUs that support it.
2016-03-03 10:33:20 -08:00
Benoit Steiner
2de8cc9122
Merged in ebrevdo/eigen (pull request PR-167)
...
Add infinity() support to numext::numeric_limits, use it in lgamma.
I tested the code on my gtx-titan-black gpu, and it appears to work as expected.
2016-03-03 09:42:12 -08:00
Eugene Brevdo
ab3dc0b0fe
Small bugfix to numeric_limits for CUDA.
2016-03-02 21:48:46 -08:00
Eugene Brevdo
6afea46838
Add infinity() support to numext::numeric_limits, use it in lgamma.
...
This makes the infinity access a __device__ function, removing
nvcc warnings.
2016-03-02 21:35:48 -08:00
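
A sketch of the shape of the fix, with assumed names and a compiler builtin standing in for the actual constant: making infinity() a device-callable function means std::numeric_limits, which is host-only under nvcc, is never touched from device code:

    namespace numext {
    template <typename T> struct numeric_limits;

    template <> struct numeric_limits<float> {
      __host__ __device__ static float infinity() {
        return __builtin_huge_valf();  // compile-time +inf, valid on the device
      }
    };
    }  // namespace numext
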
Gael Guennebaud
3fccef6f50
bug #537: fix compilation with Apple's compiler
2016-03-02 13:22:46 +01:00
Gael Guennebaud
dfa80b2060
Compilation fix
2016-03-01 12:48:56 +01:00
Gael Guennebaud
bee9efc203
Compilation fix
2016-03-01 12:47:27 +01:00
Gael Guennebaud
e9bea614ec
Fix shortcoming in fixed-value deduction of startRow/startCol
2016-02-29 10:31:27 +01:00
Gael Guennebaud
8e6faab51e
bug #1172: make valuePtr and innerIndexPtr properly return null for empty matrices.
2016-02-27 14:55:40 +01:00
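
A usage-level sketch of the contract this fixes:

    #include <cassert>
    #include <Eigen/Sparse>

    int main() {
      Eigen::SparseMatrix<double> m;  // empty: nothing allocated yet
      assert(m.valuePtr() == 0);      // now null instead of a garbage pointer
      assert(m.innerIndexPtr() == 0);
      return 0;
    }
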
Gael Guennebaud
91e1375ba9
merge
2016-02-23 11:09:05 +01:00
Gael Guennebaud
055000a424
Fix startRow()/startCol() for dense Block with direct access:
...
the initial implementation failed for empty rows/columns, for which startRow()/startCol() are ambiguous.
2016-02-23 11:07:59 +01:00
Benoit Steiner
6270d851e3
Declare the half float type as arithmetic.
2016-02-22 13:59:33 -08:00
Benoit Steiner
584832cb3c
Implemented the ptranspose function on half floats
2016-02-21 12:44:53 -08:00
Benoit Steiner
95fceb6452
Added the ability to compute the absolute value of a half float
2016-02-21 20:24:11 +00:00
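
The operation itself is a one-liner on the bit pattern (a sketch, assuming the unsigned-short representation from the surrounding commits): binary16 abs just clears the sign bit:

    struct half { unsigned short x; };  // raw binary16 bits

    inline half abs(half h) {
      half r;
      r.x = h.x & 0x7FFFu;  // clear bit 15 (the sign); exponent/mantissa unchanged
      return r;
    }
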
Benoit Steiner
9ff269a1d3
Moved some of the fp16 operators outside the Eigen namespace to workaround some nvcc limitations.
2016-02-20 07:47:23 +00:00
Gael Guennebaud
d90a2dac5e
merge
2016-02-19 23:01:27 +01:00
Gael Guennebaud
6fa35bbd28
bug #1170: skip calls to memcpy/memmove for empty input.
2016-02-19 22:58:52 +01:00
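
The rationale, sketched with a hypothetical helper: passing a null pointer to memcpy/memmove is undefined behavior even for a zero-byte copy, so empty inputs are skipped outright:

    #include <cstring>

    inline void copy_doubles(const double* src, double* dst, std::size_t n) {
      if (n == 0) return;  // src/dst may legally be null for an empty range
      std::memcpy(dst, src, n * sizeof(double));
    }
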
Gael Guennebaud
6f0992c05b
Fix nesting type and complete reflection methods of Block expressions.
2016-02-19 22:21:02 +01:00
Gael Guennebaud
f3643eec57
Add typedefs for the return type of all block methods.
2016-02-19 22:15:01 +01:00
Benoit Steiner
180156ba1a
Added support for tensor reductions on half floats
2016-02-19 10:05:59 -08:00
Benoit Steiner
5c4901b83a
Implemented the scalar division of 2 half floats
2016-02-19 10:03:19 -08:00
Benoit Steiner
f7cb755299
Added support for operators +=, -=, *= and /= on CUDA half floats
2016-02-19 15:57:26 +00:00
Benoit Steiner
dc26459b99
Implemented protate() for CUDA
2016-02-19 15:16:54 +00:00
Benoit Steiner
ac5d706a94
Added support for simple coefficient-wise tensor expressions using half floats on CUDA devices
2016-02-19 08:19:12 +00:00
Benoit Steiner
0606a0a39b
FP16 types on CUDA are only available starting with CUDA 7.5. Disable them when using an older version of CUDA
2016-02-18 23:15:23 -08:00
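
A sketch of the availability gate, with an assumed macro name: cuda_fp16.h and the __half type first shipped with CUDA 7.5, so anything fp16-related is compiled out on older toolkits:

    #if defined(__CUDACC__) && defined(CUDA_VERSION) && CUDA_VERSION >= 7050
    #define EXAMPLE_HAS_CUDA_FP16 1  // hypothetical name, for illustration only
    #include <cuda_fp16.h>
    #endif
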
Benoit Steiner
7151bd8768
Reverted unintended changes introduced by a bad merge
2016-02-19 06:20:50 +00:00