• brucethemoose@lemmy.world · 13 days ago

    Speaking as a local LLM enthusiast: it’s still unreliable, and it’s “googling” behind the scenes to get answers. It’s not like a human poking around; internally it’s looking at a wall of webpages and “hoping” to hit something right.

    But to answer your question, Perplexity is similar, and Perplexica is the open-source equivalent. There are also researcher “agent” scripts that chain LLM calls together.
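    To make that concrete, here’s a minimal sketch of what one of those “research agent” scripts does: search, stuff the results into a prompt, and keep the source URLs so you can verify the answer by hand. The helpers `web_search()` and `ask_llm()` are stand-ins I made up for illustration; a real script would call an actual search API and a local LLM endpoint.

```python
def web_search(query):
    # Placeholder: a real agent would call a search API here.
    return [{"url": "https://example.com", "snippet": "some search result text"}]

def ask_llm(prompt):
    # Placeholder: a real agent would send the prompt to a local
    # LLM server and return its completion.
    return "summary of: " + prompt[:40]

def research(question, max_rounds=2):
    """Search, feed the snippets to the LLM, and return the answer
    along with the source URLs for manual verification."""
    sources = []
    context = ""
    for _ in range(max_rounds):
        for hit in web_search(question):
            sources.append(hit["url"])
            context += hit["snippet"] + "\n"
    prompt = f"Using only this context:\n{context}\nAnswer: {question}"
    # Always return the sources too -- the LLM's answer alone
    # can't be trusted without checking where it came from.
    return ask_llm(prompt), sources
```

    The point of the loop is exactly the caveat above: the model only “sees” whatever wall of snippets the search step happened to return, so keeping the URLs around is the only way to check it.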

    • TheLoneMinon@lemm.ee · 13 days ago

      I get that, and I definitely always need to verify its results, but SEO has turned looking up information into a slog of AI articles and lists.

      I’m very rarely able to find the answers to my questions anymore. And I say that as someone who has always joked that my job was professional Googler. Whereas I used to be able to search keywords or vague snippets from memory, or look up a hyper-specific question and find a solid answer within a couple of minutes, now the first two pages are ads and “websites” with “articles” that are just vessels for more ads.

      Using something like ChatGPT gets me a little closer to that old level of access. The hand that feeds I suppose…

      Anyway, thanks for the tip. I’ll check it out!

      • brucethemoose@lemmy.world · 13 days ago

        Just remember to take anything AI says with a grain of salt. TBH, I’d suggest digging deeper into how LLMs work at a high level so you can understand just how they can be confidently “wrong.”