
I really don't see anyone dethroning Nvidia in the deep learning market.

I'd like to see it, but I'm not sure anyone can catch up now.



I think it's highly unlikely that any traditional chip company dethrones Nvidia in DL, at least in a reasonably soonish time horizon. As others have said, CUDA is just too far ahead in terms of development and adoption.

However, I think Nvidia is still vulnerable, but to AWS/GCP/Azure, not Intel/AMD.

My opinion is that deep learning is moving to the cloud. That's a bigger conversation with a lot of nuances, but if you grant that basic assumption, then the development of ASICs like TPU and Inferentia becomes a big threat to Nvidia.

If the biggest buyers of chips in deep learning are the clouds, and the clouds are increasingly developing their own chips for deep learning, Nvidia is in a tough spot. They'll always have a place among labs that run their own machines, and of course Nvidia's business is bigger than machine learning, but in general I think the clouds are a real threat.


It is relatively trivial to hook any new accelerator you develop into the popular deep learning frameworks. In the case of AMD there already exists a mature compiler framework for their GPUs (ROCm), and their cards are mostly on par with Nvidia's. Most deep-learning researchers don't write custom CUDA kernels, but simply stitch high-level operations together in Python, as in the sketch below. So as soon as AMD delivers a performance/power advantage, there will be almost no friction to deploying an AMD-only cluster.
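
To make that concrete, here is a minimal sketch of what "stitching high-level operations together" looks like in PyTorch. Nothing in it mentions CUDA beyond the device string, and a ROCm build of PyTorch exposes AMD GPUs through that same "cuda" device type, so in principle the identical script runs on either vendor's hardware:

    import torch
    import torch.nn as nn

    # Use whatever accelerator the build exposes; a ROCm build of PyTorch
    # reports AMD GPUs through the same "cuda" device type.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    ).to(device)

    x = torch.randn(32, 784, device=device)
    loss = model(x).sum()
    loss.backward()  # every kernel here is chosen and launched by the framework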

One of Nvidia's actual moats is their system-building competency, which AMD lacks: since the acquisition of Mellanox, Nvidia can sell you a box, or a whole server-room configuration together with the networking equipment.


Yes, but there is one assumption in this hypothesis which I think is not accurate:

That the cost of hardware development is the main cost contribution.

I think that is not true for either GPU or GPGPU computing. The major cost for GPUs is the drivers, and for GPGPU it is CUDA; i.e., it is software.

Unlike ARM, where AWS/GCP/Azure can make their own chips and benefit from the software ecosystem already in place for ARM, there is no such thing for GPUs. Drivers and CUDA are the biggest moat around Nvidia's GPUs. And unless developers figure out a way to drive the cost of DL software and drivers down, there is no incentive to switch away from Nvidia's ecosystem.

That is why I am interested to see how Intel tackles this area, and whether history will repeat itself as in the Voodoo, Rage 3D, and S3 ViRGE era.


Not happening. Nvidia has a stranglehold on deep learning because of CUDA and cuDNN, and I don't see any AMD alternatives taking over either of those. So I wouldn't bet too much on AMD taking over the deep-learning chip market.


Who’s writing bare CUDA though? For most tasks a framework like TensorFlow or PyTorch is good enough.

If AMD could provide a backend for the most popular frameworks then they could skip over the CUDA patent issue completely.

The real problem is that it seems like AMD’s not investing substantially in software teams to make it happen.


In the deep learning world, every major framework works on top of cuDNN, which works on top of CUDA: PyTorch, TensorFlow, you name it.

https://github.com/pytorch/pytorch/issues/10657

That is the state of PyTorch support for AMD GPUs.
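
The cuDNN layer is visible even from high-level Python. A minimal sketch, assuming a CUDA/cuDNN build of PyTorch:

    import torch

    # PyTorch surfaces its cuDNN backend directly:
    print(torch.backends.cudnn.is_available())  # True on a CUDA/cuDNN build
    print(torch.backends.cudnn.version())       # e.g. 7605 for cuDNN 7.6.5
    torch.backends.cudnn.benchmark = True       # let cuDNN autotune conv algorithms

    conv = torch.nn.Conv2d(3, 64, kernel_size=3).cuda()
    y = conv(torch.randn(8, 3, 224, 224, device="cuda"))  # dispatched to a cuDNN kernel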


> Who’s writing bare CUDA though?

I do. Not everything you can do with a CUDA card is deep learning. In fact that's just one of many applications.


Lots of people do. We write CUDA all the time.
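
For anyone wondering what "bare CUDA" looks like next to framework code, here is a toy example: a hand-written CUDA C kernel launched from Python via CuPy. (CuPy is just one convenient launcher; the kernel string is the part the frameworks normally write for you.)

    import cupy as cp

    # A hand-written CUDA C kernel: one thread per output element.
    add = cp.RawKernel(r'''
    extern "C" __global__
    void add(const float* x, const float* y, float* out, int n) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i < n) out[i] = x[i] + y[i];
    }
    ''', 'add')

    n = 1 << 20
    x = cp.arange(n, dtype=cp.float32)
    y = cp.ones(n, dtype=cp.float32)
    out = cp.empty(n, dtype=cp.float32)
    threads = 256
    blocks = (n + threads - 1) // threads
    add((blocks,), (threads,), (x, y, out, cp.int32(n)))  # grid, block, args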


This. The software lead is just incredible; almost everything uses CUDA.

There has been some progress, but PyTorch still isn't fully functional with ROCm, and that feels like a good litmus test.

https://github.com/pytorch/pytorch/issues/10657


The Apple ecosystem, with its AMD graphics cards and a future Apple GPU, seems ready to put up a fight, or at least to keep some software from being CUDA all the way down. And AMD also dominates gaming, powering both major gaming consoles.

I really don't want just one player, and I hope the top-level players face more competition.

I'm still interested in the Taiwan part, purely from an economic point of view: how secure are we on that front if all the eggs are in one basket? Hong Kong has fallen, and Taiwan and the South China Sea are in play. That would affect the supply chain.


I think the stranglehold is about to break.

Intel is launching a GPU/deep-learning accelerator, Huawei is thinking about launching a GPU, PyTorch and TensorFlow work well enough on AMD GPUs, and there are also custom deep-learning ASICs from Google. There is simply too much competition at this point for CUDA to remain the standard.


Is there any chance that some of the upcoming open-source cross-platform standards like WebGPU could have an effect on this, if tooling around them were built to support writing more GPGPU-focused code?


DLSS is black magic


Have you heard of the billion-dollar unicorn from the United Kingdom called Graphcore?

https://m.youtube.com/watch?v=_zvU0uwIafQ

https://www.graphcore.ai/products/mk2/ipu-machine-ipu-pod


Just saying, deep learning is still mostly a marketing fad...


Last quarter, Nvidia’s datacenter segment exceeded $1B in revenue for the first time, and it’s close to overtaking gaming as the company’s largest segment.

Marketing fad or not, it’s not a bad business to be in.


I'd say the total opposite.

It is barely getting started. In 10 or 20 years it will be huge.



