If even half of Intel’s claims are true, this could be a big shake-up in the midrange market, which both Nvidia and AMD have all but abandoned.

  • Fedegenerate@lemmynsfw.com · edited · 20 hours ago

    An LLM card with QuickSync would be the kick I need to turn my N100 mini into a router. Right now, my only drive to move is that my storage is connected via USB; SATA alone just isn’t enough value for a whole new box. £300 for Ollama, much faster ML in Immich, and all the transcodes I could want would be a “buy now, figure the rest out later” moment.
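
    For reference, a rough sketch of what the Ollama half of that box could look like once it exists. It assumes Ollama is already running with its default local API on port 11434, and the model name is only a placeholder:

        # Query a self-hosted Ollama instance over its local REST API.
        # Assumes the Ollama service is already running on this box and a small
        # model has been pulled; "llama3.2:3b" is just a placeholder name.
        import json
        import urllib.request

        payload = json.dumps({
            "model": "llama3.2:3b",  # placeholder model
            "prompt": "Why does a low-power mini PC make a good home server?",
            "stream": False,         # one JSON reply instead of a token stream
        }).encode()

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])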

    • brucethemoose@lemmy.world · 20 hours ago

      Oh, also: you might look at Strix Halo from AMD in 2025?

      Its IGP is beefy enough for LLMs, and it will be WAY lower power than any dGPU setup, with enough VRAM to be “sloppy” and run stuff in parallel with a good LLM.

    • brucethemoose@lemmy.world · 20 hours ago

      You could get that with 2x B580s in a single server, I guess, though you could already have done that with the A770s.

      • Fedegenerate@lemmynsfw.com · 20 hours ago

        … That’s nuts. I only just graduated to a mini from a Pi; I hadn’t considered a dual-GPU setup. Arbitrary budget aside, I should have added an “idle power” constraint too. It’s reasonable to assume that as soon as LLMs get involved, all concept of “power efficient” goes out the window. Don’t mind me, just wishing for a unicorn.