Hacker News
Adaptive Parallel Computation with CUDA Dynamic Parallelism (nvidia.com)
49 points by AndreyKarpov on May 7, 2014 | hide | past | favorite | 9 comments


Nvidia is YEARS ahead of the competition. They're playing a whole different game.


I recently switched from CUDA to OpenCL, so I could use AMD's cards which are (imo) much better value. So yes, Nvidia are playing a different game - premium prices.

I can see why many people prefer them though, if they have the cash - setting up AMD's cards on Linux is a real pain with drivers, profiler, etc - in that department Nvidia are miles ahead (whilst AMD and their customers are in the 7th circle of hell).

Hopefully the competition will keep spurring them both on.


So when will OpenCL provide the same support for integrated C++ and Fortran development, in the same source file, that CUDA allows for?

Only now is OpenCL taking its first steps toward a language-agnostic GPGPU bytecode and C++ support.

That premium price pays off in developer productivity.
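The single-source model is easy to show concretely. Here's a minimal sketch (illustrative names, and it assumes CUDA 6's unified memory) of host C++ and a templated device kernel living in one .cu file, compiled by nvcc:

```cuda
// single_source.cu -- host and device C++ share one translation unit.
#include <cstdio>

// A C++ template kernel; nvcc instantiates it like any other template.
template <typename T>
__global__ void scale(T *data, T factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 256;
    float *d;
    cudaMallocManaged(&d, n * sizeof(float));  // unified memory, CUDA 6+
    for (int i = 0; i < n; ++i) d[i] = 1.0f;
    scale<<<1, n>>>(d, 2.0f, n);  // host code launches the template kernel
    cudaDeviceSynchronize();
    printf("d[0] = %f\n", d[0]);
    cudaFree(d);
}
```

OpenCL, by contrast, keeps kernels as separate strings/binaries compiled at runtime, which is part of the productivity gap being described.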


> So when will OpenCL provide the same support for integrated C++ and Fortran development, on the same source file, CUDA allows for?

Pretty much any language that has an llvm backend should be easy to port to OpenCL now that we have SPIR.


I think Nvidia is at a point where the CUDA toolkit is well-established enough that it can afford to start squeezing the people already using it. From my perspective (academia), universities seem to have invested heavily in CUDA, so many faculties tend to use it (or rely on the built-in CUDA support in FEA/CFD/MATLAB packages) for HPC workloads where they need performance.

I'd be interested to see how much of the breakdown in revenue for their compute cards is for academia, government and industry though.


OpenCL 2.0 supports this feature, and so does current AMD hardware, in theory. However, there are no drivers yet, and I'm afraid it's going to be at least a year until AMD manages to push out a working OpenCL 2.0 stack.

Nvidia is just in a completely different league with driver support. And they could implement OpenCL 2.0 based on current CUDA if they wanted, but for rather obvious reasons they would rather not.


> And they could implement OpenCL 2.0 based on current CUDA if they wanted, but for rather obvious reasons they would rather not.

Which is the reason why I refuse to buy NVidia hardware. No way in hell will I support a proprietary API.


I'm not an expert in the field and I'm not going to say anything about the end-user experience^, but what I keep hearing from my professors and PhD friends doing research at my university on high-performance parallel applications on clusters of GPUs is that AMD GPUs usually have more raw (reported) performance, while nVidia GPUs offer more control and better drivers. This means that on common HPC benchmarks (bandwidth distribution, FLOPS, etc.) nVidia still seems to come out ahead, because it's just easier to squeeze performance out of their cards than out of AMD's. Parallel algorithms are apparently also easier to handle on the former than on the latter.

ps: this is all hearsay from unofficial conversations; I don't have sources to validate or disprove any of the claims I made, sorry.

^ I use an nvidia gpu and an intel integrated gpu on my linux desktop and laptop respectively and have never had problems with either; just to note, I have no AMD experience.


Launching kernels on device is nice.
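For anyone who hasn't seen the feature the article covers, here's a minimal sketch of a device-side launch (illustrative names; it needs a compute capability 3.5+ card and something like `nvcc -arch=sm_35 -rdc=true -lcudadevrt`):

```cuda
#include <cstdio>

__global__ void child(int parent_thread) {
    printf("child of thread %d\n", parent_thread);
}

__global__ void parent(const int *work) {
    // Each thread inspects its own data and, only where needed, launches a
    // child grid sized at runtime -- no round-trip to the host.
    int i = threadIdx.x;
    if (work[i] > 0)
        child<<<1, work[i]>>>(i);
}

int main() {
    int h_work[4] = {0, 2, 0, 3};
    int *d_work;
    cudaMalloc(&d_work, sizeof(h_work));
    cudaMemcpy(d_work, h_work, sizeof(h_work), cudaMemcpyHostToDevice);
    parent<<<1, 4>>>(d_work);
    cudaDeviceSynchronize();
    cudaFree(d_work);
}
```

The point of the article is exactly this: the launch configuration of the child grid can depend on data the parent computes, which the host can't know ahead of time.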

The competition has real time ray-tracing on mobile.



