It’s no surprise that NVIDIA is gradually dropping support for older video cards, with the Pascal (GTX 10xx) GPUs most recently getting axed. What’s more surprising is the terrible way t…
AMD had approximately one consumer GPU with ROCm support, so unless your framework supports OpenCL or you want to fuck around with unsupported ROCm drivers, you’re out of luck. They’ve completely failed to meet the market.
I mean… my 6700 XT doesn’t have official ROCm support, but the ROCm driver works perfectly fine on it. The difference is AMD hasn’t put the effort into testing ROCm on their consumer cards, so they can’t claim support for it.
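For anyone wanting to try the same thing: the common community workaround for RDNA2 consumer cards like the 6700 XT (gfx1031) is to tell ROCm to treat the card as the officially supported gfx1030 target via the `HSA_OVERRIDE_GFX_VERSION` environment variable. This is a sketch of the off-label setup, not anything AMD guarantees; the PyTorch check at the end assumes a ROCm build of PyTorch is installed.

```shell
# Off-label ROCm on an RX 6700 XT (gfx1031): override the reported GFX
# version to gfx1030, the closest officially supported RDNA2 target.
# This is a widely used community workaround, not a supported configuration.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Sanity check, assuming a ROCm build of PyTorch is installed.
# ROCm builds expose the GPU through the torch.cuda API, so this should
# print True if the override took effect.
python3 -c "import torch; print(torch.cuda.is_available())"
```

Whether this actually works depends on your kernel, ROCm version, and card; it has broken across ROCm releases before.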
The fact that it’s still off-label like that is kinda nuts to me. ML and AI have been huge money makers for a decade and a half, and AMD seemingly doesn’t care about GPUs. I wish they would invest more in testing and packaging drivers for the hardware that’s out there. In the year of our lord 2025 I shouldn’t have to compile from source or use AUR packages for drivers 😭
That’s fine and dandy until you need to do ML; then there is no other option.
I successfully ran local Llama with llama.cpp and an old AMD GPU. I’m not sure why you think there’s no other option.
llama.cpp now supports Vulkan, so it doesn’t matter what card you’re using.
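For reference, building llama.cpp with the Vulkan backend is a couple of CMake commands; this is a sketch based on the upstream build flags (the model path and layer count are placeholders you’d swap for your own):

```shell
# Build llama.cpp with the vendor-neutral Vulkan backend
# (works on AMD, NVIDIA, and Intel cards with Vulkan drivers).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run inference, offloading as many layers as possible to the GPU.
# model.gguf is a placeholder for whatever GGUF model you have locally.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

You need the Vulkan SDK (or your distro’s vulkan-headers/glslc packages) installed for the build to succeed.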
Devs need actual support: TensorFlow and the other frameworks.
ML?
Machine learning
Is that a huge deal for the average user?
Average user, no; power user/enthusiast/techie, absolutely critical.