



I successfully ran Llama locally with llama.cpp and an old AMD GPU. I'm not sure why you think there's no other option.
llama.cpp now supports a Vulkan backend, so it doesn't matter which card you're using.
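For anyone who wants to try it, the rough steps are below. This is a sketch, not a recipe: the model path and layer count are placeholders for your own setup, and the flag/binary names are as of recent llama.cpp builds (older versions used -DLLAMA_VULKAN=ON and a binary called main).

    # Build llama.cpp with the Vulkan backend (needs Vulkan drivers/SDK installed)
    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release

    # Run inference, offloading layers to the GPU
    # (model path is a placeholder; -ngl 99 offloads as many layers as will fit)
    ./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"

The -ngl flag controls how many layers get offloaded to the GPU; whatever doesn't fit in VRAM stays on the CPU, so it degrades gracefully on older cards with less memory.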