This is correct. The popular misconception probably arises from the marked difference between using a model and developing one: inference is far less demanding than training in both time and energy.
And you can still train on most consumer GPUs, but for really deep networks like LLMs, well, get ready to wait.
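As a rough illustration, here's a minimal PyTorch sketch (the toy MLP, batch size, and hyperparameters are made up purely for demonstration) comparing one inference pass against one full training step. The backward pass alone roughly doubles the cost of the forward pass, the optimizer update adds more on top, and training then repeats the whole step for thousands of iterations:

```python
# Toy sketch, assuming PyTorch is installed: time one inference pass
# vs. one training step on a small made-up MLP.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
x = torch.randn(256, 1024)
target = torch.randn(256, 1024)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def timed(fn, repeats=20):
    # Warm up once, then average wall-clock time over several runs.
    fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

@torch.no_grad()
def inference_step():
    model(x)  # forward pass only, no gradient bookkeeping

def training_step():
    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()   # backward pass costs roughly 2x the forward pass
    optimizer.step()  # plus parameter updates

print(f"inference: {timed(inference_step) * 1e3:.1f} ms/step")
print(f"training:  {timed(training_step) * 1e3:.1f} ms/step")
```

And that gap only widens at LLM scale, where training additionally pays for optimizer state, gradient storage, and many passes over a huge corpus, while inference serves one forward pass at a time.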