• PlzGivHugs@sh.itjust.works
    2 months ago

    For most of the good LLMs, it's going to take a high-end computer. For image generation, a more mid-range gaming computer works just fine.
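
    For the image-generation side, something like the sketch below runs comfortably on a mid-range 8-12 GB gaming card (a hedged example using Hugging Face diffusers; the model ID and settings are just illustrative defaults):

    ```python
    # Sketch: Stable Diffusion 1.5 at fp16 needs roughly 4-6 GB of VRAM.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative model ID
        torch_dtype=torch.float16,         # halves memory use vs fp32
    ).to("cuda")
    pipe.enable_attention_slicing()        # trims peak VRAM on smaller cards

    image = pipe("a watercolor fox in a forest").images[0]
    image.save("fox.png")
    ```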

      • PlzGivHugs@sh.itjust.works
        2 months ago

        Really? When I was trying to get it to run a little while ago, I kept running out of memory with my 3060 12GB running 20B models, but perhaps I had it configured wrong.
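
        For a rough sense of why that happens, here's a back-of-envelope estimate of the memory the weights alone need (a sketch only; the KV cache and runtime overhead come on top of this):

        ```python
        # Rough memory needed just to hold the model weights.
        def weight_gb(params_billion: float, bits_per_weight: int) -> float:
            return params_billion * 1e9 * bits_per_weight / 8 / 1e9

        print(weight_gb(20, 16))  # ~40 GB at fp16: far beyond a 12 GB card
        print(weight_gb(20, 4))   # ~10 GB at 4-bit: just about fits
        ```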

        • Arkthos@pawb.social
          2 months ago

          You can offload them into RAM. The response time gets way slower once this happens, but you can do it. I’ve run a 70B Llama model on my 3060 12GB at 2-bit quantisation (I do have plenty of RAM, so no offloading from RAM to disk at least lmao). It took like 6-7 minutes to generate replies, but it did work.
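
          A minimal sketch of what that partial offload looks like with llama-cpp-python (the file name and layer count are purely illustrative; pick n_gpu_layers to match your VRAM):

          ```python
          # Sketch: split a quantised GGUF model between VRAM and system RAM.
          # Layers beyond n_gpu_layers stay in RAM; it works, just much slower.
          from llama_cpp import Llama

          llm = Llama(
              model_path="llama-70b.Q2_K.gguf",  # hypothetical 2-bit quantised file
              n_gpu_layers=20,                   # only as many layers as fit in 12 GB
              n_ctx=2048,
          )

          out = llm("Q: Why is the sky blue? A:", max_tokens=128)
          print(out["choices"][0]["text"])
          ```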

      • Septimaeus@infosec.pub
        2 months ago

        This is correct. The popular misconception may arise from the marked difference between using a model and developing one: inference is far less demanding than training in terms of time and energy.

        And you can still train on most consumer GPUs, but for really deep networks like LLMs, well, get ready to wait.
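
        As a rough illustration of that gap, here's a sketch that counts only weights, gradients, and Adam state (it ignores activations and fp32 master weights, so real training needs even more):

        ```python
        # Very rough memory comparison: fp16 inference vs. Adam training.
        def inference_gb(params_b: float) -> float:
            return params_b * 2                # 2 bytes/param for fp16 weights

        def training_gb(params_b: float) -> float:
            return params_b * (2 + 2 + 4 + 4)  # weights + grads + Adam m and v

        for p in (7, 20, 70):
            print(f"{p}B params: ~{inference_gb(p):.0f} GB to run, ~{training_gb(p):.0f}+ GB to train")
        ```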

    • KoalaUnknown@lemmy.world
      2 months ago

      I run 10-20B parameter models pretty easily on my M1 Pro MacBook. You can get good response times for decent models on a $500 M4 Mac Mini. A $4000 Nvidia GPU isn’t necessary.