Really? When I was trying to get it to run a little while ago, I kept running out of memory with my 3060 12GB running 20B models, but perhaps I had it configured wrong.
You can offload them into RAM. The response time gets way slower once this happens, but you can do it. I’ve run a 70B Llama model on my 3060 12GB at 2-bit quantisation (I do have plenty of RAM, so no offloading from RAM to disk at least lmao). It took like 6-7 minutes to generate replies, but it did work.
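If anyone wants to try the partial-offload route, here’s a rough sketch with llama-cpp-python. The model path and layer count are placeholders, not a recommendation; tune n_gpu_layers to whatever fits your VRAM.

```python
# Rough sketch: partial GPU offload with llama-cpp-python.
# Assumes you already have a quantised GGUF file downloaded;
# the path and layer count below are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-70b-q2_k.gguf",  # hypothetical local file
    n_gpu_layers=20,  # layers kept in VRAM; the rest run from system RAM (much slower)
    n_ctx=2048,       # context window; larger contexts need more memory for the KV cache
)

out = llm("Explain quantisation in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```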
This is correct. The popular misconception probably comes from the marked difference between using a model and developing one. Inference is far less demanding than training in both time and energy.
And you can still train on most consumer GPUs, but for really deep networks like LLMs, well, get ready to wait.
I run models at 10-20B parameters pretty easily on my M1 Pro MacBook. You can get good response times for decent models on a $500 M4 Mac Mini. A $4000 Nvidia GPU isn’t necessary.
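For a rough sense of why that works, here’s the back-of-envelope weight-memory math. These numbers ignore the KV cache and runtime overhead, so treat them as lower bounds.

```python
# Approximate weight memory: parameters * bytes per parameter.
# Lower bounds only; the KV cache and runtime overhead add more on top.
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(weight_gb(20, 16))  # 20B at fp16  -> ~40 GB (no chance on a 12 GB card)
print(weight_gb(20, 4))   # 20B at 4-bit -> ~10 GB (tight on 12 GB once cache is added)
print(weight_gb(13, 4))   # 13B at 4-bit -> ~6.5 GB (comfortable in 16 GB unified memory)
print(weight_gb(70, 2))   # 70B at 2-bit -> ~17.5 GB (hence offloading to system RAM)
```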
For most of the good LLM models, it’s going to take a high-end computer. For image generation, a more mid-range gaming computer works just fine.
You don’t really need a top-of-the-line GPU. Waiting a minute for an answer is fine.