Till Hoffmann
643b697649
Proper handling of domain errors.
2016-04-10 00:37:53 +01:00
Rasmus Munk Larsen
1f70bd4134
Merge.
2016-04-09 15:31:53 -07:00
Rasmus Munk Larsen
096e355f8e
Add short-circuit to avoid calling matrix norm for empty matrix.
2016-04-09 15:29:56 -07:00
Rasmus Larsen
be80fb49fc
Merged default ( 4a92b590a0
...
) into default
2016-04-09 13:13:01 -07:00
Rasmus Larsen
7a8176587b
Merged eigen/eigen into default
2016-04-09 12:47:41 -07:00
Rasmus Munk Larsen
4a92b590a0
Merge.
2016-04-09 12:47:24 -07:00
Rasmus Munk Larsen
ee6c69733a
A few tiny adjustments to short-circuit logic.
2016-04-09 12:45:49 -07:00
Till Hoffmann
7f4826890c
Merge upstream
2016-04-09 20:08:07 +01:00
Till Hoffmann
de057ebe54
Added nans to zeta function.
2016-04-09 20:07:36 +01:00
Benoit Steiner
5da90fc8dd
Use numext::abs instead of std::abs in scalar_fuzzy_default_impl to make it usable inside GPU kernels.
2016-04-08 19:40:48 -07:00
Benoit Steiner
01bd577288
Fixed the implementation of Eigen::numext::isfinite, Eigen::numext::isnan, and Eigen::numext::isinf on CUDA devices
2016-04-08 16:40:10 -07:00
Benoit Steiner
89a3dc35a3
Fixed isfinite_impl: NumTraits<T>::highest() and NumTraits<T>::lowest() are finite numbers.
2016-04-08 15:56:16 -07:00
Benoit Steiner
995f202cea
Disabled the use of half2 on cuda devices of compute capability < 5.3
2016-04-08 14:43:36 -07:00
Benoit Steiner
8d22967bd9
Initial support for taking the power of fp16
2016-04-08 14:22:39 -07:00
Benoit Steiner
3394379319
Fixed the packet_traits for half floats.
2016-04-08 13:33:59 -07:00
Rasmus Larsen
0b81a18d12
Merged eigen/eigen into default
2016-04-08 12:58:57 -07:00
Benoit Jacob
cd2b667ac8
Add references to filed LLVM bugs
2016-04-08 08:12:47 -04:00
Benoit Steiner
3bd16457e1
Properly handle complex numbers.
2016-04-07 23:28:04 -07:00
Rasmus Larsen
c34e55c62b
Merged eigen/eigen into default
2016-04-07 20:23:03 -07:00
Rasmus Munk Larsen
283c51cd5e
Widen short-circuiting ReciprocalConditionNumberEstimate so we don't call InverseMatrixL1NormEstimate for dec.rows() <= 1.
2016-04-07 16:45:40 -07:00
Rasmus Munk Larsen
d51803a728
Use Index instead of int for indexing and sizes.
2016-04-07 16:39:48 -07:00
Rasmus Munk Larsen
fd872aefb3
Remove transpose() method from LLT and LDLT classes as it would imply conjugation.
...
Explicitly cast constants to RealScalar in ConditionEstimator.h.
2016-04-07 16:28:44 -07:00
Rasmus Munk Larsen
0b5546d182
Use lpNorm<1>() to compute l1 norms in LLT and LDLT.
2016-04-07 15:49:30 -07:00
parthaEth
2d5bb375b7
Static casting scalar types so as to let the Cholesky module of Eigen work with Ceres
2016-04-08 00:14:44 +02:00
Benoit Steiner
74f64838c5
Updated the unary functors to use the numext implementation of typical functions instead of the ones provided in the standard library. The standard library functions aren't supported officially by cuda, so we're better off using the numext implementations.
2016-04-07 11:42:14 -07:00
Benoit Steiner
737644366f
Move the functions operating on fp16 out of the std namespace and into the Eigen::numext namespace
2016-04-07 11:40:15 -07:00
Benoit Steiner
b89d3f78b2
Updated the isnan, isinf and isfinite functions to make them compatible with cuda devices.
2016-04-07 10:08:49 -07:00
Benoit Steiner
df838736e2
Fixed compilation warning triggered by msvc
2016-04-06 20:48:55 -07:00
Benoit Steiner
14ea7c7ec7
Fixed packet_traits<half>
2016-04-06 19:30:21 -07:00
Benoit Steiner
532fdf24cb
Added support for hardware conversion between fp16 and full floats whenever
...
possible.
2016-04-06 17:11:31 -07:00
Benoit Steiner
58c1dbff19
Made the fp16 code more portable.
2016-04-06 13:44:08 -07:00
Benoit Steiner
cf7e73addd
Added some missing conversions to the Half class, and fixed the implementation of the < operator on cuda devices.
2016-04-06 09:59:51 -07:00
Benoit Steiner
10bdd8e378
Merged in tillahoffmann/eigen (pull request PR-173)
...
Added zeta function of two arguments and polygamma function
2016-04-06 09:40:17 -07:00
Benoit Steiner
72abfa11dd
Added support for isfinite on fp16
2016-04-06 09:07:30 -07:00
Rasmus Munk Larsen
4d07064a3d
Fix bug in alternate lower bound calculation due to missing parentheses.
...
Make a few expressions more concise.
2016-04-05 16:40:48 -07:00
Konstantinos Margaritis
2bba4ee2cf
Merged kmargar/eigen/tip into default
2016-04-05 22:22:08 +03:00
Konstantinos Margaritis
317384b397
complete the port, remove float support
2016-04-05 14:56:45 -04:00
tillahoffmann
726bd5f077
Merged eigen/eigen into default
2016-04-05 18:21:05 +01:00
Till Hoffmann
a350c25a39
Added accuracy comments.
2016-04-05 18:20:40 +01:00
Konstantinos Margaritis
bc0ad363c6
add remaining includes
2016-04-05 06:01:17 -04:00
Konstantinos Margaritis
2d41dc9622
complete int/double specialized traits for ZVector
2016-04-05 06:00:51 -04:00
Konstantinos Margaritis
988344daf1
enable the other includes as well
2016-04-05 05:59:30 -04:00
Rasmus Larsen
d7eeee0c1d
Merged eigen/eigen into default
2016-04-04 15:58:27 -07:00
Rasmus Munk Larsen
513c372960
Fix docstrings to list all supported decompositions.
2016-04-04 14:34:59 -07:00
Rasmus Munk Larsen
86e0ed81f8
Addresses comments on Eigen pull request PR-174.
...
* Get rid of code-duplication for real vs. complex matrices.
* Fix flipped arguments to select.
* Make the condition estimation functions free functions.
* Use Vector::Unit() to generate canonical unit vectors.
* Misc. cleanup.
2016-04-04 14:20:01 -07:00
Benoit Jacob
158fea0f5e
bug #1190 - Don't trust __ARM_FEATURE_FMA on Clang/ARM
2016-04-04 16:42:40 -04:00
Benoit Jacob
03f2997a11
bug #1191 - Prevent Clang/ARM from rewriting VMLA into VMUL+VADD
2016-04-04 16:41:47 -04:00
Till Hoffmann
b97911dd18
Refactored code into type-specific helper functions.
2016-04-04 19:16:03 +01:00
Benoit Steiner
c4179dd470
Updated the scalar_abs_op struct to make it compatible with cuda devices.
2016-04-04 11:11:51 -07:00
Benoit Steiner
1108b4f218
Fixed the signature of numext::abs to make it compatible with complex numbers
2016-04-04 11:09:25 -07:00
Rasmus Larsen
30242b7565
Merged eigen/eigen into default
2016-04-01 17:19:36 -07:00
Rasmus Munk Larsen
9d51f7c457
Add rcond method to LDLT.
2016-04-01 16:48:38 -07:00
Rasmus Munk Larsen
f54137606e
Add condition estimation to Cholesky (LLT) factorization.
2016-04-01 16:19:45 -07:00
Rasmus Munk Larsen
fb8dccc23e
Replace "inline static" with "static inline" for consistency.
2016-04-01 12:48:18 -07:00
Rasmus Munk Larsen
91414e0042
Fix comments in ConditionEstimator and minor cleanup.
2016-04-01 11:58:17 -07:00
Rasmus Munk Larsen
1aa89fb855
Add matrix condition estimator module that implements the Higham/Hager algorithm from http://www.maths.manchester.ac.uk/~higham/narep/narep135.pdf used in LAPACK. Add rcond() methods to FullPivLU and PartialPivLU.
2016-04-01 10:27:59 -07:00
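A minimal usage sketch (C++) of the rcond() API added by the condition-estimator commits above; the matrix values and sizes are illustrative assumptions, not taken from the Eigen test suite.

    #include <iostream>
    #include <Eigen/Dense>

    int main() {
      // Illustrative 3x3 symmetric positive-definite matrix; values are arbitrary.
      Eigen::MatrixXd A(3, 3);
      A << 4, 1, 0,
           1, 3, 1,
           0, 1, 2;

      // rcond() returns an estimate of the reciprocal condition number in the
      // l1 norm, computed from the already-available factorization.
      Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
      std::cout << "LU rcond estimate:   " << lu.rcond() << "\n";

      // The same method is exposed on the Cholesky decompositions (LLT/LDLT).
      Eigen::LDLT<Eigen::MatrixXd> ldlt(A);
      std::cout << "LDLT rcond estimate: " << ldlt.rcond() << "\n";
    }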
Till Hoffmann
80eba21ad0
Merge upstream.
2016-04-01 18:18:49 +01:00
Till Hoffmann
3cb0a237c1
Fixed suggestions by Eugene Brevdo.
2016-04-01 17:51:39 +01:00
tillahoffmann
49960adbdd
Merged eigen/eigen into default
2016-04-01 14:36:15 +01:00
Till Hoffmann
57239f4a81
Added polygamma function.
2016-04-01 14:35:21 +01:00
Till Hoffmann
dd5d390daf
Added zeta function.
2016-04-01 13:32:29 +01:00
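A short sketch of how the two-argument zeta and polygamma functions added above can be called coefficient-wise on arrays; the header location is an assumption (in current releases they ship with the unsupported SpecialFunctions module), and the input values are arbitrary.

    #include <iostream>
    #include <Eigen/Core>
    #include <unsupported/Eigen/SpecialFunctions>  // assumed location in current releases;
                                                    // at the time of these commits the
                                                    // functions lived in core

    int main() {
      // Hurwitz zeta, zeta(x, q) = sum_k (q + k)^(-x), evaluated per coefficient.
      Eigen::ArrayXd x(3), q(3);
      x << 2.0, 3.0, 4.0;
      q << 1.0, 1.0, 2.0;
      std::cout << "zeta(x, q):\n" << Eigen::zeta(x, q) << "\n";

      // polygamma(n, x) is the n-th derivative of the digamma function at x.
      Eigen::ArrayXd n(3);
      n << 0.0, 1.0, 2.0;
      std::cout << "polygamma(n, x):\n" << Eigen::polygamma(n, x) << "\n";
    }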
Benoit Steiner
0ea7ab4f62
Hashing was only officially introduced in c++11. Therefore only define an implementation of the hash function for float16 if c++11 is enabled.
2016-03-31 14:44:55 -07:00
Benoit Steiner
92b7f7b650
Improved code formatting
2016-03-31 13:09:58 -07:00
Benoit Steiner
f197813f37
Added the ability to hash a fp16
2016-03-31 13:09:23 -07:00
Benoit Steiner
4c859181da
Made it possible to use the NumTraits for complex and Array in a cuda kernel.
2016-03-31 12:48:38 -07:00
Benoit Steiner
c36ab19902
Added __ldg primitive for fp16.
2016-03-31 10:55:03 -07:00
Benoit Steiner
b575fb1d02
Added NumTraits for half floats
2016-03-31 10:43:59 -07:00
Benoit Steiner
8c8a79cec1
Fixed a typo
2016-03-31 10:33:32 -07:00
Benoit Steiner
4f1a7e51c1
Pull math functions from the global namespace only when compiling cuda code with nvcc. When compiling with clang, we want to use the std namespace.
2016-03-30 17:59:49 -07:00
Benoit Steiner
bc68fc2fe7
Enable constant expressions when compiling cuda code with clang.
2016-03-30 17:58:32 -07:00
Benoit Jacob
01b5333e44
bug #1186 - vreinterpretq_u64_f64 fails to build on Android/Aarch64/Clang toolchain
2016-03-30 11:02:33 -04:00
Benoit Steiner
1841d6d4c3
Added missing cuda template specializations for numext::ceil
2016-03-29 13:29:34 -07:00
Benoit Steiner
e02b784ec3
Added support for standard mathematical functions and transcendentals (such as exp, log, abs, ...) on fp16
2016-03-29 09:20:36 -07:00
Benoit Steiner
c38295f0a0
Added support for fmod
2016-03-28 15:53:02 -07:00
Konstantinos Margaritis
01e7298fe6
actually include ZVector files, passes most basic tests (float still fails)
2016-03-28 10:58:02 -04:00
Konstantinos Margaritis
f48011119e
Merged eigen/eigen into default
2016-03-28 01:48:45 +03:00
Konstantinos Margaritis
ed6b9d08f1
some primitives ported, but missing intrinsics and a crash with asm() are a problem
2016-03-27 18:47:49 -04:00
Benoit Steiner
65716e99a5
Improved the cost estimate of the quotient op
2016-03-25 11:13:53 -07:00
Benoit Steiner
d94f6ba965
Started to model the cost of divisions more accurately.
2016-03-25 11:02:56 -07:00
Benoit Steiner
2e4e4cb74d
Use numext::abs instead of abs to avoid an incorrect conversion of the argument to integer
2016-03-23 16:57:12 -07:00
Benoit Steiner
81d340984a
Removed executable bit from header files
2016-03-23 16:15:02 -07:00
Benoit Steiner
bff8cbad06
Removed executable bit from header files
2016-03-23 16:14:23 -07:00
Benoit Steiner
7a570e50ef
Fixed contractions of fp16
2016-03-23 16:00:06 -07:00
Benoit Steiner
fc3660285f
Made type conversion explicit
2016-03-23 09:56:50 -07:00
Benoit Steiner
0e68882604
Added the ability to divide a half float by an index
2016-03-23 09:46:42 -07:00
Benoit Steiner
6971146ca9
Added more conversion operators for half floats
2016-03-23 09:44:52 -07:00
Benoit Steiner
f9ad25e4d8
Fixed contractions of 16 bit floats
2016-03-22 09:30:23 -07:00
Benoit Steiner
134d750eab
Completed the implementation of vectorized type casting of half floats.
2016-03-18 13:36:28 -07:00
Benoit Steiner
7bd551b3a9
Make all the conversions explicit
2016-03-18 12:20:08 -07:00
Benoit Steiner
7b98de1f15
Implemented some of the missing type casting for half floats
2016-03-17 21:45:45 -07:00
Christoph Hertzberg
46aa9772fc
Merged in ebrevdo/eigen (pull request PR-169)
...
Bugfixes to cuda tests, igamma & igammac implemented, & tests for digamma, igamma, igammac on CPU & GPU.
2016-03-16 21:59:08 +01:00
Eugene Brevdo
1f69a1b65f
Change the header guard around certain numext functions to be CUDA specific.
2016-03-16 12:44:35 -07:00
Benoit Steiner
5a51366ea5
Fixed a typo.
2016-03-14 09:25:16 -07:00
Benoit Steiner
fcf59e1c37
Properly gate the use of cuda intrinsics in the code
2016-03-14 09:13:44 -07:00
Benoit Steiner
97a1f1c273
Make sure we only use the half float intrinsics when compiling with a version of CUDA that is recent enough to provide them
2016-03-14 08:37:58 -07:00
Benoit Steiner
e29c9676b1
Don't mark the cast operator as explicit, since this is a c++11 feature that's not supported by older compilers.
2016-03-12 00:15:58 -08:00
Benoit Steiner
eecd914864
Also replaced uint32_t with unsigned int to make the code more portable
2016-03-11 19:34:21 -08:00
Benoit Steiner
1ca8c1ec97
Replaced a couple more uint16_t with unsigned short
2016-03-11 19:28:28 -08:00
Benoit Steiner
0423b66187
Use unsigned short instead of uint16_t since they're more portable
2016-03-11 17:53:41 -08:00
Benoit Steiner
048c4d6efd
Made half floats usable on hardware that doesn't support them natively.
2016-03-11 17:21:42 -08:00
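A small illustration of the software-emulated half type referenced in the commit above; it assumes Eigen::half is reachable through <Eigen/Core>, as in recent releases, and the values are arbitrary.

    #include <iostream>
    #include <Eigen/Core>

    int main() {
      // Eigen::half holds a 16-bit float; on hardware without native fp16
      // support, arithmetic is carried out by converting through float.
      Eigen::half a(1.5f);
      Eigen::half b(0.25f);
      Eigen::half c = a * b + b;                   // 1.5 * 0.25 + 0.25 = 0.625
      std::cout << static_cast<float>(c) << "\n";  // exactly representable in fp16
    }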
Benoit Steiner
456e038a4e
Fixed the +=, -=, *= and /= operators to return a reference
2016-03-10 15:17:44 -08:00
Eugene Brevdo
836e92a051
Update MathFunctions/SpecialFunctions with intelligent header guards.
2016-03-09 09:04:45 -08:00
Eugene Brevdo
5e7de771e3
Properly fix merge issues.
2016-03-08 17:35:05 -08:00
Eugene Brevdo
73220d2bb0
Resolve bad merge.
2016-03-08 17:28:21 -08:00
Eugene Brevdo
14f0fde51f
Add certain functions to numext (log, exp, tan) because CUDA doesn't support std::
...
Use these in SpecialFunctions.
2016-03-08 17:17:44 -08:00
Eugene Brevdo
0bb5de05a1
Finishing touches on igamma/igammac for GPU. Tests now pass.
2016-03-07 15:35:09 -08:00
Eugene Brevdo
5707004d6b
Fix Eigen's building of sharded tests that use CUDA & more igamma/igammac bugfixes.
...
0. Prior to this PR, not a single sharded CUDA test was actually being *run*.
Fixed that.
GPU tests are still failing for igamma/igammac.
1. Add calls for igamma/igammac to TensorBase
2. Fix up CUDA-specific calls of igamma/igammac
3. Add unit tests for digamma, igamma, igammac in CUDA.
2016-03-07 14:08:56 -08:00
Benoit Steiner
05bbca079a
Turn on some of the cxx11 features when compiling with visual studio 2015
2016-03-05 10:52:08 -08:00
Eugene Brevdo
0b9e0abc96
Make igamma and igammac work correctly.
...
This required replacing ::abs with std::abs.
Modified some unit tests.
2016-03-04 21:12:10 -08:00
Eugene Brevdo
7ea35bfa1c
Initial implementation of igamma and igammac.
2016-03-03 19:39:41 -08:00
Benoit Steiner
1032441c6f
Enable partial support for half floats on Kepler GPUs.
2016-03-03 10:34:20 -08:00
Benoit Steiner
1da10a7358
Enable the conversion between floats and half floats on older GPUs that support it.
2016-03-03 10:33:20 -08:00
Benoit Steiner
2de8cc9122
Merged in ebrevdo/eigen (pull request PR-167)
...
Add infinity() support to numext::numeric_limits, use it in lgamma.
I tested the code on my gtx-titan-black gpu, and it appears to work as expected.
2016-03-03 09:42:12 -08:00
Eugene Brevdo
ab3dc0b0fe
Small bugfix to numeric_limits for CUDA.
2016-03-02 21:48:46 -08:00
Eugene Brevdo
6afea46838
Add infinity() support to numext::numeric_limits, use it in lgamma.
...
This makes the infinity access a __device__ function, removing
nvcc warnings.
2016-03-02 21:35:48 -08:00
Gael Guennebaud
3fccef6f50
bug #537 : fix compilation with Apple's compiler
2016-03-02 13:22:46 +01:00
Gael Guennebaud
dfa80b2060
Compilation fix
2016-03-01 12:48:56 +01:00
Gael Guennebaud
bee9efc203
Compilation fix
2016-03-01 12:47:27 +01:00
Gael Guennebaud
e9bea614ec
Fix shortcoming in fixed-value deduction of startRow/startCol
2016-02-29 10:31:27 +01:00
Gael Guennebaud
8e6faab51e
bug #1172 : make valuePtr and innerIndexPtr properly return null for empty matrices.
2016-02-27 14:55:40 +01:00
Gael Guennebaud
91e1375ba9
merge
2016-02-23 11:09:05 +01:00
Gael Guennebaud
055000a424
Fix startRow()/startCol() for dense Block with direct access:
...
the initial implementation failed for empty rows/columns, which are ambiguous.
2016-02-23 11:07:59 +01:00
Benoit Steiner
6270d851e3
Declare the half float type as arithmetic.
2016-02-22 13:59:33 -08:00
Benoit Steiner
584832cb3c
Implemented the ptranspose function on half floats
2016-02-21 12:44:53 -08:00
Benoit Steiner
95fceb6452
Added the ability to compute the absolute value of a half float
2016-02-21 20:24:11 +00:00
Benoit Steiner
9ff269a1d3
Moved some of the fp16 operators outside the Eigen namespace to workaround some nvcc limitations.
2016-02-20 07:47:23 +00:00
Gael Guennebaud
d90a2dac5e
merge
2016-02-19 23:01:27 +01:00
Gael Guennebaud
6fa35bbd28
bug #1170 : skip calls to memcpy/memmove for empty input.
2016-02-19 22:58:52 +01:00
Gael Guennebaud
6f0992c05b
Fix nesting type and complete reflection methods of Block expressions.
2016-02-19 22:21:02 +01:00
Gael Guennebaud
f3643eec57
Add typedefs for the return type of all block methods.
2016-02-19 22:15:01 +01:00
Benoit Steiner
180156ba1a
Added support for tensor reductions on half floats
2016-02-19 10:05:59 -08:00
Benoit Steiner
5c4901b83a
Implemented the scalar division of 2 half floats
2016-02-19 10:03:19 -08:00
Benoit Steiner
f7cb755299
Added support for operators +=, -=, *= and /= on CUDA half floats
2016-02-19 15:57:26 +00:00
Benoit Steiner
dc26459b99
Implemented protate() for CUDA
2016-02-19 15:16:54 +00:00
Benoit Steiner
ac5d706a94
Added support for simple coefficient-wise tensor expressions using half floats on CUDA devices
2016-02-19 08:19:12 +00:00
Benoit Steiner
0606a0a39b
FP16 on CUDA is only available starting with CUDA 7.5. Disable it when using an older version of CUDA
2016-02-18 23:15:23 -08:00
Benoit Steiner
7151bd8768
Reverted unintended changes introduced by a bad merge
2016-02-19 06:20:50 +00:00
Benoit Steiner
17b9fbed34
Added preliminary support for half floats on CUDA GPUs. For now we can simply convert floats into half floats and vice versa
2016-02-19 06:16:07 +00:00
Benoit Steiner
8ce46f9d89
Improved implementation of ptanh for SSE and AVX
2016-02-18 13:24:34 -08:00
Eugene Brevdo
832380c455
Merged eigen/eigen into default
2016-02-17 14:44:06 -08:00
Eugene Brevdo
06a2bc7c9c
Tiny bugfix in SpecialFunctions: some compilers don't like doubles
...
implicitly downcast to floats in an array constructor.
2016-02-17 14:41:59 -08:00
Gael Guennebaud
f6f057bb7d
bug #1166 : fix shortcoming in gemv when the destination is not a vector at compile-time.
2016-02-15 21:43:07 +01:00
Gael Guennebaud
4252af6897
Remove dead code.
2016-02-12 16:13:35 +01:00
Gael Guennebaud
2f5f56a820
Fix usage of evaluator in sparse * permutation products.
2016-02-12 16:13:16 +01:00
Gael Guennebaud
0a537cb2d8
bug #901 : fix triangular-view with unit diagonal of sparse rectangular matrices.
2016-02-12 15:58:31 +01:00
Benoit Steiner
17e93ba148
Pulled latest updates from trunk
2016-02-11 15:05:38 -08:00
Benoit Steiner
3628f7655d
Made it possible to run the scalar_binary_pow_op functor on GPU
2016-02-11 15:05:03 -08:00
Hauke Heibel
eeac46f980
bug #774 : re-added comment referencing equations in the original paper
2016-02-11 19:38:37 +01:00
Benoit Steiner
c569cfe12a
Inline the +=, -=, *= and /= operators consistently between DenseBase.h and SelfCwiseBinaryOp.h
2016-02-11 09:33:32 -08:00
Gael Guennebaud
8cc9232b9a
bug #774 : fix a numerical issue producing unwanted reflections.
2016-02-11 15:32:56 +01:00