Commit Graph

4646 Commits

Author SHA1 Message Date
Gael Guennebaud
b6ed8244b4 bug #1201: optimize affine*vector products 2016-05-19 16:09:15 +02:00
Gael Guennebaud
73693b5de6 bug #1221: disable gcc 6 warning: ignoring attributes on template argument 2016-05-19 15:21:53 +02:00
Gael Guennebaud
df9a5e13c6 Fix SelfAdjointEigenSolver for some input expression types, and add new regression unit tests for sparse and selfadjointview inputs. 2016-05-19 13:07:33 +02:00
Gael Guennebaud
6a2916df80 DiagonalWrapper is a vector, so it must expose the LinearAccessBit flag. 2016-05-19 13:06:21 +02:00
Gael Guennebaud
a226f6af6b Add support for SelfAdjointView::diagonal() 2016-05-19 13:05:33 +02:00
Gael Guennebaud
ee7da3c7c5 Fix SelfAdjointView::triangularView for complexes. 2016-05-19 13:01:51 +02:00
Gael Guennebaud
b6b8578a67 bug #1230: add support for SelfAdjointView::triangularView. 2016-05-19 11:36:38 +02:00
Gael Guennebaud
84df9142e7 bug #1231: fix compilation regression regarding complex_array/=real_array and add respective unit tests 2016-05-18 23:00:13 +02:00
Gael Guennebaud
21d692d054 Use coeff(i,j) instead of operator(). 2016-05-18 17:09:20 +02:00
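
For context (not part of the commit message): in Eigen, coeff(i, j) reads an element without the debug-mode range assertion that operator()(i, j) performs, which is why internal code that already knows its indices are valid prefers it. A minimal sketch:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::Matrix3d m = Eigen::Matrix3d::Identity();
      // operator() asserts in debug builds that the indices are in range.
      double a = m(1, 2);
      // coeff() reads the same element without the assertion; the caller
      // is responsible for keeping the indices valid.
      double b = m.coeff(1, 2);
      std::cout << a << " " << b << "\n";
      return 0;
    }
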
Gael Guennebaud
8456bbbadb bug #1224: fix regression in (dense*dense).sparseView() by specializing evaluator<SparseView<Product>> for sparse products only. 2016-05-18 16:53:28 +02:00
Gael Guennebaud
b507b82326 Use default sorting strategy for square products. 2016-05-18 16:51:54 +02:00
Gael Guennebaud
747e3290c0 bug #1213: rename some enum types for consistency. 2016-05-18 13:26:56 +02:00
Rasmus Munk Larsen
0dbd68145f Roll back changes to core. Move include of TensorFunctors.h up to satisfy dependence in TensorCostModel.h. 2016-05-17 10:25:19 -07:00
Rasmus Munk Larsen
e55deb21c5 Improvements to parallelFor.
Move some scalar functors from TensorFunctors.h to Eigen core.
2016-05-12 14:07:22 -07:00
Benoit Steiner
fae0493f98 Fixed a couple of bugs related to the Pascal family of GPUs
2016-05-11 23:02:26 -07:00
Benoit Steiner
b6a517c47d Added the ability to load fp16 using the texture path.
Improved the performance of some reductions on fp16
2016-05-11 21:26:48 -07:00
Benoit Steiner
518149e868 Misc fixes for fp16 2016-05-11 20:11:14 -07:00
Benoit Steiner
56a1757d74 Made predux_min and predux_max on fp16 less noisy 2016-05-11 17:37:34 -07:00
Benoit Steiner
9091351dbe __ldg is only available with cuda architectures >= 3.5 2016-05-11 15:22:13 -07:00
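
For context (not part of the commit message): __ldg loads through the read-only data cache and only exists on compute capability 3.5 and newer, so calls to it are typically guarded on __CUDA_ARCH__. A minimal sketch of such a guard, with illustrative names (loadReadOnly and MY_HD are not Eigen's API):

    #if defined(__CUDACC__)
    #define MY_HD __host__ __device__   // illustrative macro, not Eigen's
    #else
    #define MY_HD
    #endif

    // Falls back to a plain load when __ldg is unavailable (host code, or
    // a device architecture older than sm_35).
    template <typename T>
    MY_HD inline T loadReadOnly(const T* ptr) {
    #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 350
      return __ldg(ptr);  // read-only cache load, sm_35 and newer
    #else
      return *ptr;        // plain load everywhere else
    #endif
    }
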
Benoit Steiner
02f76dae2d Fixed a typo 2016-05-11 15:08:38 -07:00
Christoph Hertzberg
131e5a1a4a Do not copy for trivial 1x1 case. This also avoids a "maybe-uninitialized" warning in some situations. 2016-05-11 23:50:13 +02:00
Benoit Steiner
70195a5ff7 Added missing EIGEN_DEVICE_FUNC 2016-05-11 14:10:09 -07:00
Benoit Steiner
09a19c33a8 Added missing EIGEN_DEVICE_FUNC qualifiers 2016-05-11 14:07:43 -07:00
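
For context (not part of the commit message): EIGEN_DEVICE_FUNC is Eigen's macro for marking functions callable from both host and CUDA device code; without nvcc it expands to nothing. A simplified sketch of the idea (the macro below is illustrative, not Eigen's exact definition):

    // When compiling CUDA, annotate functions as host+device; otherwise
    // the annotation disappears and the code is ordinary C++.
    #if defined(__CUDACC__)
    #define MY_DEVICE_FUNC __host__ __device__
    #else
    #define MY_DEVICE_FUNC
    #endif

    template <typename T>
    MY_DEVICE_FUNC T square(T x) { return x * x; }
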
Christoph Hertzberg
33ca7e3c8d bug #1207: Add and fix logical-op warnings 2016-05-11 19:36:34 +02:00
Benoit Steiner
217d984abc Fixed a typo in my previous commit 2016-05-11 10:22:15 -07:00
Christoph Hertzberg
0f61343893 Workaround maybe-uninitialized warning 2016-05-11 09:00:18 +02:00
Christoph Hertzberg
3bfc9b47ca Workaround "misleading-indentation" warnings 2016-05-11 08:41:36 +02:00
Benoit Steiner
0b9e3dcd06 Added packet primitives to compute exp, log, sqrt and rsqrt on fp16. This improves the performance by 10 to 30%. 2016-05-10 11:05:33 -07:00
Benoit Steiner
8adf5cc70f Added support for packet processing of fp16 on kepler and maxwell gpus 2016-05-06 19:16:43 -07:00
Christoph Hertzberg
a11bd82dc3 bug #1213: Give names to anonymous enums 2016-05-06 11:31:56 +02:00
Benoit Steiner
0451940fa4 Relaxed the dummy precision for fp16 2016-05-05 15:40:01 -07:00
Christoph Hertzberg
dacb469bc9 Enable and fix -Wdouble-conversion warnings 2016-05-05 13:35:45 +02:00
Ola Røer Thorsen
be78aea6b3 fix double-promotion/float-conversion in Core/SpecialFunctions.h 2016-05-04 10:52:08 +02:00
Gael Guennebaud
75a94b9662 Improve documentation of BDCSVD 2016-05-04 12:53:14 +02:00
Gael Guennebaud
e2ca478485 bug #1214: consider denormals as zero in D&C SVD. This also works around an infinite binary search when compiling with ICC's unsafe optimizations. 2016-05-03 23:15:29 +02:00
Benoit Steiner
6c3e5b85bc Fixed compilation error with cuda >= 7.5 2016-05-03 09:38:42 -07:00
Benoit Steiner
da50419df8 Made a cast explicit 2016-05-02 19:50:22 -07:00
Gael Guennebaud
b1bd53aa6b Fix performance regression: with AVX, unaligned stores were emitted instead of aligned ones for fixed size assignment. 2016-05-01 23:25:06 +02:00
Benoit Steiner
2b890ae618 Fixed compilation errors generated by clang 2016-04-29 18:30:40 -07:00
Benoit Steiner
46bcb70969 Don't turn on const expressions when compiling with gcc >= 4.8 unless the -std=c++11 option has been used 2016-04-29 15:20:59 -07:00
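
For context (not part of the commit message): one common way to express such a gate is to require both the GCC version and an active C++11 (or newer) language mode before advertising constexpr support. A minimal sketch, with an illustrative macro name:

    // Only claim constexpr support on GCC >= 4.8 when -std=c++11 (or newer)
    // is actually in effect.
    #if defined(__GNUC__) && !defined(__clang__) \
        && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)) \
        && __cplusplus >= 201103L
    #define MY_HAS_CONSTEXPR 1
    #else
    #define MY_HAS_CONSTEXPR 0
    #endif
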
Gael Guennebaud
0f3c4c8ff4 Fix compilation of sparse.cast<>().transpose(). 2016-04-29 18:26:08 +02:00
Benoit Steiner
dacb23277e Fixed the igamma and igammac implementations to make them callable from a gpu kernel. 2016-04-28 18:54:54 -07:00
Benoit Steiner
a5d4545083 Deleted unused variable 2016-04-28 14:14:48 -07:00
Justin Lebar
40d1e2f8c7 Eliminate mutual recursion in igamma{,c}_impl::Run.
Presently, igammac_impl::Run calls igamma_impl::Run, which in turn calls
igammac_impl::Run.

This isn't actually mutual recursion; the calls are guarded such that we never
get into a loop.  Nonetheless, it's a stretch for clang to prove this.  As a
result, clang emits a recursive call in both igammac_impl::Run and
igamma_impl::Run.

That this is suboptimal code is bad enough, but it's particularly bad when
compiling for CUDA/nvptx.  nvptx allows recursion, but only begrudgingly: If
you have recursive calls in a kernel, it's on you to manually specify the
kernel's stack size.  Otherwise, ptxas will dump a warning, make a guess, and
who knows if it's right.

This change explicitly eliminates the mutual recursion in igammac_impl::Run and
igamma_impl::Run.
2016-04-28 13:57:08 -07:00
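
For context (not part of the commit message): the shape of the fix is to replace two functions that call each other behind runtime guards with entry points that call shared, non-recursive helpers directly, so neither real nor apparent recursion remains for the compiler (or ptxas) to worry about. A simplified sketch with illustrative names, not Eigen's actual igamma helpers:

    // Before: guarded mutual recursion. The guards bound the call depth at
    // two, but the compiler still emits recursive calls it cannot prove safe.
    double f_old(double x, bool complement);
    double g_old(double x, bool complement);
    double f_old(double x, bool complement) { return complement ? g_old(x, false) : x * 2.0; }
    double g_old(double x, bool complement) { return complement ? f_old(x, false) : x + 1.0; }

    // After: both entry points call non-recursive helpers directly.
    double f_core(double x) { return x * 2.0; }
    double g_core(double x) { return x + 1.0; }
    double f_new(double x, bool complement) { return complement ? g_core(x) : f_core(x); }
    double g_new(double x, bool complement) { return complement ? f_core(x) : g_core(x); }
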
Benoit Steiner
2b917291d9 Merged in rmlarsen/eigen2 (pull request PR-183)
Detect cxx_constexpr support when compiling with clang.
2016-04-27 15:19:54 -07:00
Rasmus Munk Larsen
09b9e951e3 Depend on the more extensive support for constexpr in clang:
http://clang.llvm.org/docs/LanguageExtensions.html#c-1y-relaxed-constexpr
2016-04-27 14:59:11 -07:00
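
For context (not part of the commit message): clang reports this capability through __has_feature, so relaxed (C++1y) constexpr support can be detected directly rather than inferred from the compiler version. A minimal sketch, with an illustrative macro name:

    // Make the check a no-op on compilers that lack __has_feature.
    #ifndef __has_feature
    #define __has_feature(x) 0
    #endif

    // Clang advertises C++1y relaxed constexpr via this feature test.
    #if __has_feature(cxx_relaxed_constexpr)
    #define MY_HAS_CONSTEXPR 1
    #endif
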
Rasmus Munk Larsen
1a325ef71c Detect cxx_constexpr support when compiling with clang. 2016-04-27 14:33:51 -07:00
Benoit Steiner
c61170e87d fpclassify isn't portable enough. In particular, the return values of the function are not available on all the platforms Eigen supports: remove it from Eigen. 2016-04-27 14:22:20 -07:00
Benoit Steiner
f629fe95c8 Made the index type a template parameter to evaluateProductBlockingSizes
Use numext::mini and numext::maxi instead of std::min/std::max to compute blocking sizes.
2016-04-27 13:11:19 -07:00
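
For context (not part of the commit message): Eigen::numext::mini and Eigen::numext::maxi are Eigen's host/device-friendly counterparts of std::min and std::max, which is why GPU-facing code prefers them. A minimal usage sketch:

    #include <Eigen/Core>
    #include <cstddef>
    #include <iostream>

    int main() {
      std::ptrdiff_t a = 48, b = 64;
      // Behave like std::min/std::max, but are also usable in CUDA device code.
      std::ptrdiff_t lo = Eigen::numext::mini(a, b);
      std::ptrdiff_t hi = Eigen::numext::maxi(a, b);
      std::cout << lo << " " << hi << "\n";
      return 0;
    }
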
Benoit Steiner
25141b69d4 Improved support for min and max on 16 bit floats when running on recent cuda gpus 2016-04-27 12:57:21 -07:00