CUDA Dominance vs AMD

The cluster focuses on NVIDIA's CUDA ecosystem dominance in GPU computing, particularly for AI/ML workloads, and AMD's challenges in developing competitive alternatives like ROCm, HIP, OpenCL, and Vulkan.

📉 Falling (0.4x) · AI & Machine Learning
Comments: 4,146
Years Active: 18
Top Authors: 5
Topic ID: #2136

Activity Over Time (comments per year)

2009: 3
2010: 6
2011: 24
2012: 14
2013: 28
2014: 30
2015: 73
2016: 196
2017: 241
2018: 136
2019: 214
2020: 293
2021: 268
2022: 213
2023: 947
2024: 949
2025: 504
2026: 9

Keywords

CUDA, AMD, Nvidia, GPU, GPUs, OpenCL, HIP, PyTorch, TensorFlow, DLSS, VRAM, AWS, LLNL, ML, libraries, hardware

Sample Comments

graphe (Nov 25, 2023)

How is AMD addressing CUDA dominance?

esafak (Sep 27, 2023)

It's not Nvidia's fault that the competition (AMD) does not provide the right software. There is an open alternative to CUDA called OpenCL.

shmerl (Mar 1, 2021)

AMD GPUs are better than Nvidia for that - they have had async compute for much longer. I thought the problem was CUDA lock-in and the lack of a nice programming model using something modern like Rust, maybe.

zozbot234 (Oct 2, 2019)

CUDA is a vendor lock-in scheme. Use OpenCL or Vulkan instead (yes, Vulkan includes support for compute, not just graphics!). AMD supports both, in addition to tools like HIP to help you port legacy CUDA code.
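The HIP porting path mentioned above is, in its simplest form (the hipify-perl script), largely a textual substitution of CUDA API names for their HIP equivalents; hipify-clang does a proper compiler-based translation. A toy sketch of the textual approach, with a small illustrative subset of the rename table (`toy_hipify` and `CUDA_TO_HIP` are hypothetical names for this sketch, not part of any real tool):

```python
import re

# Illustrative subset of the CUDA -> HIP rename table
# (the real hipify tools cover thousands of identifiers).
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def toy_hipify(source: str) -> str:
    """Rewrite CUDA identifiers to HIP ones, longest names first
    so that e.g. cudaMemcpyHostToDevice wins over cudaMemcpy."""
    pattern = re.compile(
        "|".join(re.escape(k) for k in sorted(CUDA_TO_HIP, key=len, reverse=True))
    )
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

cuda_src = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, 256); cudaFree(d);"
print(toy_hipify(cuda_src))
```

Because the HIP runtime API mirrors CUDA's almost one-to-one, this kind of mechanical rename covers a surprising amount of real code; the hard cases (inline PTX, CUDA-only libraries like cuDNN) are where porting actually stalls.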

Cloudef (Dec 13, 2022)

It's unfortunate. CUDA is a mess, but because it's the leading tech in the field, AMD has no choice but to come up with an (equally messy) compatible solution. Vulkan was thought of as something that could replace CUDA and (dead) OpenCL, but it's never going to take off, since the ML field is heavily CUDA, or really more PyTorch / Python :)

doublextremevil (Apr 16, 2023)

The market desperately needs an alternative to CUDA, but I just don't see AMD doing it

deepGem (Nov 9, 2017)

This is what many people outside the AI world don't seem to understand. Nvidia has a stranglehold in the form of CUDA and cuDNN. There isn't any open source equivalent to cuDNN. AMD is trying to push OpenCL in this direction, but it will be a long time before DL libraries start migrating to OpenCL. If, by some miracle, an alternative GPU as good as the 1080 Ti popped up tomorrow, it would be useless in the AI market.

nemothekid (Dec 15, 2023)

CUDA is huge and Nvidia spent a ton in a lot of "dead end" use cases optimizing it. There have been experiments with CUDA translation layers with decent performance[1]. There are two things that most projects hit:
1. The CUDA API is huge; I'm sure Intel/AMD will focus on what they need to implement PyTorch and ignore every other use case, ensuring that CUDA always has the leg up in any new frontier.
2. Nvidia actually cares about developer experience. The most prominent exa…

vishvananda (Mar 6, 2019)

I think this is primarily due to the immense effort that NVIDIA has put into CUDA. It works very well and it is extremely fast. The alternatives for AMD are OpenCL and ROCm, which have seriously lagged behind CUDA in every respect. EDIT: lots of theories and discussion here: https://www.reddit.com/r/MachineLearni

pk-protect-ai (Dec 15, 2023)

Ping me when the software stack for the AMD hardware is as good as CUDA.