I've heard a bunch of explanations, but most of them seem emotional and aggressive. While I respect that this is an emotional subject, I can't really understand opinions that boil down to "theft" and are delivered aggressively.

While there are plenty of models that were trained on copyrighted material without consent (which is piracy rather than theft, but close enough when talking about small businesses or individuals), is there an argument against models that were trained legally? And if so, is it something beyond the claim that AI art is lifeless?

  • MTK@lemmy.worldOP · 21 hours ago

    Wow, thank you, I think this is the first argument that clicked for me.

    But it does raise two questions for me:

    • If the technology ever gets to a point where it doesn't degenerate into static through its own feedback loop, would it then be more like an excavator?
    • What if this is the start of a future (understandably a bad start) where you have artists who get paid to train AI models? Kind of like an engineer who designs a factory.
    • Fushuan [he/him]@lemm.ee · 17 hours ago

      About your first point: think of it like inbreeding; you need fresh genes in the pool or harmful mutations accumulate.

      A generative model will produce some relevant results and some irrelevant ones; it's the job of humans to curate them.

      However, the more content the LLM generates, the more of it ends up on the web and thus becomes part of its own training data.

      Imagine that 95% of results are accurate. Of the inaccurate ones, a small share, say 1% of all output, never gets fact-checked and gets released onto the internet, where other humans will complain, but it still ends up as LLM training input regardless. So the next round's training data is only about 99% accurate, and the model will again be only 95% accurate on top of that, and so on.

      It's a sequence that reaches very inaccurate values very fast:

      a(1) = 1
      a(n) = 0.95 * a(n-1)

      You can mitigate it by not training on generated data, but as long as AI content replaces genuine content, especially with images, AI will end up training on its own output and will degenerate fast.
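
      As a rough sketch (toy numbers, just assuming a flat 95% accuracy carried over each generation, same as the sequence above), the compounding looks like this in code:

      # Toy simulation (illustrative numbers, not measurements from any real model):
      # each training generation keeps only 95% of the previous generation's
      # accuracy, as in the sequence a(n) = 0.95 * a(n-1) above.
      accuracy = 1.0
      for generation in range(1, 15):
          accuracy *= 0.95  # 5% of the surviving content goes bad each round
          print(f"generation {generation:2d}: ~{accuracy:.0%} still accurate")
      # by generation 14, less than half of the output is still accurate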

      About the second point: you can pay artists to train models, sure, but it's less clear-cut for text-based generative models that depend on expert input to give relevant responses. The same goes for voice models: no amount of money would really compensate a voice actor, because training a model on their voice would effectively destroy their future jobs and thus their future income.

    • WoodScientist@lemmy.world · 21 hours ago
      • Can it ever get to the point where it wouldn't be vulnerable to this? Maybe. But it would require an entirely different AI architecture from anything any contemporary AI company is working on. All of today's transformer-based LLMs are vulnerable to it.

      • That would be fine. That's what they should have done to train these models in the first place. Instead, they were too cheap to do so and built their companies on IP theft. If they had hired their own artists to create training data, I would certainly still lament the commodification and corporatization of art, but that's been happening since long before OpenAI.

      • MTK@lemmy.worldOP · 21 hours ago

        Thank you; out of all of these replies, I feel like you really hit the nail on the head for me.