Commit Graph

7641 Commits

Author SHA1 Message Date
Benoit Steiner
0ea7ab4f62 Hashing was only officially introduced in C++11. Therefore only define an implementation of the hash function for float16 if C++11 is enabled. 2016-03-31 14:44:55 -07:00
Benoit Steiner
92b7f7b650 Improved code formatting 2016-03-31 13:09:58 -07:00
Benoit Steiner
f197813f37 Added the ability to hash an fp16 2016-03-31 13:09:23 -07:00
Benoit Steiner
0f5cc504fe Properly gate the fft code 2016-03-31 12:59:39 -07:00
Benoit Steiner
4c859181da Made it possible to use the NumTraits for complex and Array in a cuda kernel. 2016-03-31 12:48:38 -07:00
Benoit Steiner
c36ab19902 Added __ldg primitive for fp16. 2016-03-31 10:55:03 -07:00
Benoit Steiner
b575fb1d02 Added NumTraits for half floats 2016-03-31 10:43:59 -07:00
Benoit Steiner
8c8a79cec1 Fixed a typo 2016-03-31 10:33:32 -07:00
Benoit Steiner
af4ef540bf Fixed an off-by-one bug in a debug assertion 2016-03-30 18:37:19 -07:00
Benoit Steiner
791e5cfb69 Added NumTraits for type2index. 2016-03-30 18:36:36 -07:00
Benoit Steiner
4f1a7e51c1 Pull math functions from the global namespace only when compiling cuda code with nvcc. When compiling with clang, we want to use the std namespace. 2016-03-30 17:59:49 -07:00
Benoit Steiner
bc68fc2fe7 Enable constant expressions when compiling cuda code with clang. 2016-03-30 17:58:32 -07:00
Benoit Steiner
483aaad10a Fixed compilation warning 2016-03-30 17:08:13 -07:00
Benoit Steiner
1b40abbf99 Added missing assignment operator to the TensorUInt128 class, and made misc small improvements 2016-03-30 13:17:03 -07:00
Benoit Jacob
01b5333e44 bug #1186 - vreinterpretq_u64_f64 fails to build on Android/Aarch64/Clang toolchain 2016-03-30 11:02:33 -04:00
Benoit Steiner
aa45ad2aac Fixed the formatting of the README. 2016-03-29 15:06:13 -07:00
Benoit Steiner
56df5ef1d7 Attempt to fix the formatting of the README 2016-03-29 15:03:38 -07:00
Benoit Steiner
1bcd82e31b Pulled latest updates from trunk 2016-03-29 13:36:18 -07:00
Gael Guennebaud
09ad31aa85 Add regression test for nesting type handling in blas_traits 2016-03-29 22:33:57 +02:00
Benoit Steiner
1841d6d4c3 Added missing cuda template specializations for numext::ceil 2016-03-29 13:29:34 -07:00
Benoit Steiner
7b7d2a9fa5 Use false instead of 0 as the expected value of a boolean 2016-03-29 11:50:17 -07:00
Benoit Steiner
e02b784ec3 Added support for standard mathematical functions and transcendentals (such as exp, log, abs, ...) on fp16 2016-03-29 09:20:36 -07:00
Benoit Steiner
c38295f0a0 Added support for fmod 2016-03-28 15:53:02 -07:00
Benoit Steiner
6772f653c3 Made it possible to customize the threadpool 2016-03-28 10:01:04 -07:00
Benoit Steiner
1bc81f7889 Fixed compilation warnings on arm 2016-03-28 09:21:04 -07:00
Benoit Steiner
78f83d6f6a Prevent potential overflow. 2016-03-28 09:18:04 -07:00
Benoit Steiner
74f91ed06c Improved support for integer modulo 2016-03-25 17:21:56 -07:00
Benoit Steiner
65716e99a5 Improved the cost estimate of the quotient op 2016-03-25 11:13:53 -07:00
Benoit Steiner
d94f6ba965 Started to model the cost of divisions more accurately. 2016-03-25 11:02:56 -07:00
Benoit Steiner
a86c9f037b Fixed compilation error on windows 2016-03-24 18:54:31 -07:00
Benoit Steiner
0968e925a0 Updated the benchmarking code to use Eigen::half instead of half 2016-03-24 18:00:33 -07:00
Benoit Steiner
044efea965 Made sure that the cxx11_tensor_cuda test can be compiled even without support for cxx11. 2016-03-23 20:02:11 -07:00
Benoit Steiner
2e4e4cb74d Use numext::abs instead of abs to avoid an incorrect integer conversion of the argument 2016-03-23 16:57:12 -07:00
Benoit Steiner
41434a8a85 Avoid unnecessary conversions 2016-03-23 16:52:38 -07:00
Benoit Steiner
92693b50eb Fixed compilation warning 2016-03-23 16:40:36 -07:00
Benoit Steiner
9bc9396e88 Use portable includes 2016-03-23 16:30:06 -07:00
Benoit Steiner
393bc3b16b Added comment 2016-03-23 16:22:15 -07:00
Benoit Steiner
81d340984a Removed executable bit from header files 2016-03-23 16:15:02 -07:00
Benoit Steiner
bff8cbad06 Removed executable bit from header files 2016-03-23 16:14:23 -07:00
Benoit Steiner
7a570e50ef Fixed contractions of fp16 2016-03-23 16:00:06 -07:00
Benoit Steiner
7168afde5e Made the tensor benchmarks compile on MacOS 2016-03-23 14:21:04 -07:00
Benoit Steiner
2062ee2d26 Added a test to verify that notifications are working properly 2016-03-23 13:39:00 -07:00
Benoit Steiner
fc3660285f Made type conversion explicit 2016-03-23 09:56:50 -07:00
Benoit Steiner
0e68882604 Added the ability to divide a half float by an index 2016-03-23 09:46:42 -07:00
Benoit Steiner
6971146ca9 Added more conversion operators for half floats 2016-03-23 09:44:52 -07:00
Christoph Hertzberg
9642fd7a93 Replace all M_PI by EIGEN_PI and add a check to the testsuite. 2016-03-23 15:37:45 +01:00
Benoit Steiner
28e02996df Merged patch 672 from Justin Lebar: Don't use long doubles with cuda 2016-03-22 16:53:57 -07:00
Benoit Steiner
3d1e857327 Fixed compilation error 2016-03-22 15:48:28 -07:00
Benoit Steiner
de7d92c259 Pulled latest updates from trunk 2016-03-22 15:24:49 -07:00
Benoit Steiner
002cf0d1c9 Use a single Barrier instead of a collection of Notifications to reduce the thread synchronization overhead 2016-03-22 15:24:23 -07:00