Will Manidis is the CEO of AI-driven healthcare startup ScienceIO

  • ERROR: Earth.exe has crashed@lemmy.dbzer0.com

    (Already said this before, but let me reiterate:)

    Typical AITA post:

    Title: AITAH for calling out my [Friend/Husband/Wife/Mom/Dad/Son/Daughter/X-In-Law] after [He/She] did [something undeniably outrageous that anyone with an IQ above 80 should know it’s unacceptable to do]?

    Body of post:

    [5-15 paragraphs of infodumping that no sane person would read]

    I told my friend this and they said I’m an asshole. AITAH?

    Comments:

    Comment 1: NTA, you are absolutely right, you should [Divorce/Go No-Contact/Disown/Unfriend] the person IMMEDIATELY. Don’t walk away, RUNNN!!!

    Comment 2: NTA, call the police! That’s totally unacceptable!

    And sometimes you get someone calling out OP… Comment 3: Wait, didn’t OP also claim to be [a totally different age, gender, and race] a few months ago? Here’s the post: [Link]


    🙄 C’mon, who even thinks any of this is real…

    • WilderSeek@lemmy.world

      It’s “reality television” on a discussion forum, used to karma-farm and to help push other kinds of misinformation.

    • Way too many…

      I was born before the Internet, so the Internet has always been lumped into the “entertainment” part of my brain. A lot of people who have grown up knowing only the Internet think the Internet is much more “real”. It’s a problem.

      • ERROR: Earth.exe has crashed@lemmy.dbzer0.com

        I’ve come up with a system to categorize reality in different ways:

        Category 1: Thoughts inside my own brain, formed by logic

        Category 2: Things I can directly observe via vision, hearing, or other direct sensory input

        Category 3: Other people’s words, stories, and anecdotes in face-to-face IRL conversations

        Category 4: Accredited news media: television, newspapers, radio (including amateur radio conversations), telegrams, etc…

        Category 5: The General Internet

        The higher the category number, the more distant the information is, and therefore the more suspicious of it I am.

        I mean, if a user on Reddit (or any internet forum or social media, for that matter) told me X is a valid treatment for Y disease without any real evidence, I’m gonna laugh in their face (well, not their face, since it’s a forum, but you get the idea).

          • ERROR: Earth.exe has crashed@lemmy.dbzer0.com

            So here’s the thing:

            I sometimes thought I saw a ghost moving in the dark corner of my eye.

            I didn’t see a ghost.

            But later I walked through the same place again and saw the same thing. Since I already held the belief that ghosts don’t exist, I investigated. It turned out to be a lamp (that was off) casting the shadow of another light source; when I happened to walk through the area, the shadow moved, and combined with the motion of my head turning, it made it appear as if a ghost were there. It was just a difference in lighting, a shadow. Not a ghost. I bet a lot of “ghosts” are just people interpreting lighting wrong, not actual ghosts.

            Keeping your own thinking and logic prioritized is important for finding the truth, instead of just believing the first thing you interpret, like a vision of a “ghost”.

      • DarkThoughts@fedia.io

        I genuinely miss the 90s. I mean, yeah, early forms of the internet and computers existed, but not everyone had a camera, and not everyone got absolutely bukkaked with disinformation. Not that I think everything about the tech is bad in and of itself, but how we use it nowadays is just so exhausting.

    • LiveLM@lemmy.zip

      Man, sometimes when I finish grabbing something I needed from Reddit, I hit the frontpage (always logged out) just out of morbid curiosity.
      Every single time that r/AmIOverreacting sub is there with the most obvious “no, you’re not” situation ever.

      I’ve never once seen that sub show up before the exodus. AI or not, I refuse to believe any frontpage posts from that sub are anything other than made-up bullshit.

    • samus12345@lemm.ee

      If it’s well-written enough to be entertaining, it doesn’t even matter whether it’s real or not. Something like it almost certainly happened to someone at some point.

    • WilderSeek@lemmy.world

      I’m thinking of pulling the plug on Reddit (at least for a while). My tipping point has been how popular the “drone” story is becoming. At first it was intriguing and mysterious (the airport shutdowns and reports of large vehicles at low altitude were fascinating), but I’m getting the vibe it’s a misinformation campaign to distract the US from how we are about to be changed.

      I was actually permabanned in the “News” sub for an innocuous comment. All I did was note that the federal authorities are likely correct in saying most of the reported “UFOs” are probably airplanes and man-made drones, and, playing devil’s advocate, that there were likely a few legitimate UAP reports too; but since the majority were probably misidentified planes, the federal agencies’ statements were technically truthful.

    • SerotoninSwells@lemmy.world

      Look at that, the detection heuristics all laid out nice and neatly. The only issue is that Reddit doesn’t want to detect bots because they are likely using them. Reddit at one point was using a form of bot protection but it wasn’t for posts; instead, it was for ad fraud.

    • zeca@lemmy.eco.br

      There isn’t as much incentive here. No advertising, and upvote counters behave weirdly in the fediverse (from what I can see).

      • GamingChairModel@lemmy.world

        No advertisement

        You don’t think commercial products can get good (or bad) coverage in a place like this? In any discussion of hardware, software (including, for example, video games), cars, books, movies, television, etc., there’s plenty of profit motive behind getting people interested in things.

        There are already popular and unpopular things here. Some of those things are pretty far removed from a direct profit motive (Linux, Star Trek memes, beans). But some are directly related to commercial products being sold now (current video games and the hardware to run them, specific types of devices from routers to CPUs to televisions to bicycles or even cars and trucks, movies, books, etc.).

        Not to mention the political motivation to influence opinion on politics, economics, foreign affairs, etc. There’s lots of money behind trying to convince people of things.

        As soon as a thread pops up in a search engine it’s fair game for the bots to find it, and for that platform to be targeted by humans who unleash bots onto that platform. Lemmy/Mastodon aren’t too obscure to notice.

    • mtchristo@lemm.ee

      There are no virtual points to earn on Lemmy. So hopefully it will resist the enshittification for a while.

        • Anarki_@lemmy.blahaj.zone

          Account age and karma make an account look more legit, which makes it more useful for spreading misinformation and/or guerrilla marketing.

        • Irelephant@lemm.ee

          Same reason people play Cookie Clicker: watching the useless number go up.

          Also, some subs are downright hostile to people with low karma.

          • Zahille7@lemmy.world

            I actually got auto-added to some bullshit sub that was “for people with over a certain amount of karma.” I don’t remember what the number was, because it was such a useless sub that no one engaged with it.

        • mtchristo@lemm.ee

          Some subreddits require a minimum karma score for posting. And the more karma you have, the less likely you are to get shadowbanned.

          • weeeeum@lemmy.world

            Ohhh right. I remember subs had that bullshit. I didn’t know about the shadowban thing though.

  • ShadowRam@fedia.io

    Anyone who did any kind of modding on Reddit could see that the majority of posts and comments were from bots.

    Bots competing against each other for upvotes, views, and clicks.

    Reddit’s been like that since ~2018.

  • GuitarSon2024@lemmy.world

    This is the whole reason that I discovered and came to Lemmy. Reddit is literally 90% bots, from the posts, to the filtering, to the censoring, to outright banning. It’s a mess.

    • GHiLA@sh.itjust.works

      Or getting this shit after you comment somewhere:

      “Excuse me but could you please send a direct message to our admins to verify your account before placing a comment? Everyone has to do it.”

      I replied “go fuck yourself” and they banned me instantly and I never even submitted anything lmao.

  • plunging365@lemm.ee

    Does Lemmy have any features that resist this kind of astroturfing?

    No one would consider bots talking to one another a real conversation, but is there anything regular users can do?

  • UnderpantsWeevil@lemmy.world

    In the age of A/B testing and automated engagement, I have to wonder who is really getting played: the people reading the synthetically generated bullshit, or the people who think they’re “getting engagement” on a website full of bots and other automated forms of engagement cultivation.

    How much of the content creator experience is itself gamed by the website to trick creators into thinking they’re more talented, popular, and well-received than a human audience would allow, so that they keep churning out new shit for consumption?

    • conicalscientist@lemmy.world

      It’s ultimately about ad money. They’ve never cared whether it’s humans or bots, either. They keep paying out either way. This long predates the LLM era. It’s bizarre.

      It’s pretty much a case of POSIWID: the purpose of a system is what it does. The system is supposed to be about genuine human engagement. What the system actually does is artificial at every step. Turns out its purpose is to fabricate things for bots to engage with. And this is all propped up by people who, for some reason, pay to keep the system running.

      • aesthelete@lemmy.world

        This reminds me of the ad-supported games that advertise other ad-supported games. I think I’ve even seen an ad-supported game run an ad for itself.

        I wonder if at some point people will walk away from these platforms and the platform and its owners won’t even be able to tell.

  • Dave@lemmy.nz

    Most people who have worked in customer service would believe every word because they have seen the absurdity of real people.

  • Mango@lemmy.world

    Tbh, I see how this can be really good for people. We can never again believe that what people are saying online is really representative of the general population. It never has been, but now we have a really solid reason to dispel that belief, one that doesn’t require much explanation.

    That said, we’ll need to combat this with more tight-knit communities where people can better identify themselves as human. Captcha doesn’t do that, but the Goth girls on VF so long ago had it figured out. We gotta do proper “salutes”.

  • NutinButNet@hilariouschaos.com

    It’s stupidly easy to make stuff up on AITA and get upvotes/comments. I made one up just for fun and was surprised at how popular it got. Well, not so surprised now, but I was back when I did it.

    If you know the audience and what gets them upset, you’ve got easy karma farming.

    • DarkThoughts@fedia.io
      link
      fedilink
      arrow-up
      7
      ·
      5 days ago

      It’s like reality TV & soap operas in text form. You can somewhat easily spot the AI posts though, which are plentiful now. They all tend to have the same “professional” writing style, a high tendency to add mid-sentence “quotes”, and em dashes (—), which you need a numpad combo to actually type out manually; a casual write-up would just use the plain - symbol, if anything. LLMs also make a lot of logic errors that may pop up. An example from one of the currently highly upvoted posts:

      He pulled out what looked like a box from a special jewelry store. My heart raced with excitement as I assumed it was a lovely bracelet or a special memento for our wedding day. But when he opened the box, I was absolutely stunned. Inside was a key to a house he supposedly bought for us. I was taken aback because I had no idea he was even looking for real estate. My first reaction was one of shock and confusion, as I thought it was a huge decision that we should have discussed together.

      As I processed the moment, I realized the house wasn’t just any house—it was a fixer-upper on the outskirts of town. Now, I get that it can be a great investment, but this particular house needed a ton of work. I’m talking major renovations and repairs, and I honestly had no desire to live there.

      Aside from the weird writing (Oh jolly! Expensive gifts! How exciting!), this lady somehow identified the house, its location, and its state of repair just by looking at some random key in that moment. Bonus frustration if you read through the comments, which eat all of this shit up, assuming they aren’t also bots.
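
      A minimal sketch of that kind of style check, purely for illustration (the threshold values and the flag_suspect name are made up here; this is nothing like a reliable detector):

      ```python
      # Naive style heuristic: count typographic tells ("professional" punctuation)
      # that casual typists rarely produce by hand. Thresholds are arbitrary.
      EM_DASH = "\u2014"   # —
      EN_DASH = "\u2013"   # –
      CURLY_QUOTES = "\u201c\u201d\u2018\u2019"  # “ ” ‘ ’

      def flag_suspect(text: str, dash_limit: int = 2, curly_limit: int = 4) -> bool:
          """Return True if the text leans heavily on typography rarely typed by hand."""
          dashes = text.count(EM_DASH) + text.count(EN_DASH)
          curly = sum(text.count(ch) for ch in CURLY_QUOTES)
          return dashes >= dash_limit or curly >= curly_limit

      sample = "He pulled out a box\u2014my heart raced\u2014and said \u201cI bought us a house.\u201d"
      print(flag_suspect(sample))  # True: two em dashes in a single sentence
      ```

      Of course, a human who simply likes em dashes (see the replies below) trips this check instantly, which is exactly the false-positive problem being described.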

      • Interesting observation about the em dash. I never thought about it that hard, but Reddit’s text editor (as well as Lemmy’s, at least in the default UI) automatically converts a double dash into an en dash, rather than an em dash.

        I use em dashes (well, en dashes, as above) in my writing all the time, because I am a nerd.

        For anyone who cares, an en dash is the same width as an N in typical typography, and looks like this: –

        An em dash is, to no one’s surprise, the same width as an M. It looks like this: —

        (For what it’s worth, Lemmy does not convert a triple dash into an em dash. It turns it into a horizontal rule instead.)

          • That’s probably because the posts are stored as plain text, and any markdown within them is just rendered at display time. This is presumably also how you can view any post or comment’s original source. So, here you go:

            Double –

            En – (alt 0150)

            Em — (alt 0151)

            And for good measure, a triple:


            Actually, I notice that if you include a triple dash that’s not on a line by itself, it does render it as an em dash rather than an en dash, like so: —
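
            For the curious, the substitution being described works roughly like this. It is only a toy sketch of the general “smart dash” idea, not Lemmy’s or Reddit’s actual renderer; the rules and the smart_dashes name are illustrative:

            ```python
            import re

            # Toy "smart dash" pass of the kind a markdown renderer might apply at
            # display time, while the stored post stays plain text.
            def smart_dashes(line: str) -> str:
                if line.strip() == "---":
                    return "<hr>"                      # a triple dash alone -> horizontal rule
                line = re.sub(r"---", "\u2014", line)  # inline triple dash -> em dash (—)
                line = re.sub(r"--", "\u2013", line)   # double dash -> en dash (–)
                return line

            print(smart_dashes("wait -- what---really?"))  # wait – what—really?
            print(smart_dashes("---"))                     # <hr>
            ```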

            • DarkThoughts@fedia.io

              You’re right; that way you don’t have to save two versions or somehow convert it back into a source format. The triple renders as a line below on mbin. I don’t remember what those are called.

        • pixelscript@lemm.ee

          I use the poor man’s em dash (two hyphens in a row) here and there as well. I guess I never noticed that Reddit auto-formats them. I have been accused of being an AI on a few occasions; I guess this is a contributing factor to why that is.

          Funny how Reddit technically formats it into the wrong glyph, though. Not like anyone but the most insufferable of pedants would notice and care, of course. I find it merely mildly amusing.

      • NoneOfUrBusiness@fedia.io

        Now that you mention it, I might be the only non-AI using em dashes on the internet (I have a program that joins two hyphens into an em dash).

        • DarkThoughts@fedia.io

          Apparently Lemmy, and Reddit (I can’t test either one), actually render it that way too. Not sure how many people know about that though.

    • Vaquedoso@lemmy.world

      Two weeks ago, someone on one of those story subs, I think it was amioverreacting, was milking karma by posting updates. They made 5 posts about the whole thing and even started selling merch to profit in real life, until they took the last post down.

  • leadore@lemmy.world

    But I mean, AI is the asshole, so maybe that’s why they went to the front page?

  • Thrife@feddit.org

    Is Reddit still feeding Google’s LLM, or was it just a one-time thing? Meaning, will the newest LLM-generated posts feed LLMs that generate more posts?

    • shittydwarf@lemmy.dbzer0.com

      The truly valuable data is the stuff that was created prior to LLMs; anything after that is tainted by slop. Any verifiably human data would be worth more, which is why they are simultaneously trying to erode any and all privacy.

      • gandalf_der_12te@discuss.tchncs.de

        I’m not sure about that. It implies that only humans are able to produce high-quality output. But that seems wrong to me.

        • First of all, not everything that humans produce is high quality; often it’s the opposite.
        • Second, as AI develops, I think it will be entirely possible for AI to generate good-quality output in the future.
        • Danquebec@sh.itjust.works

          They can produce high-quality answers now, but that’s just because they were trained on things written by humans.

          Any training on things produced by LLMs will just reproduce the same stuff, or actually something worse, because it will include hallucinations.

          For an AI to discover new things and truly innovate, or to learn about existing products, the world, etc., it would need to do something entirely different from what LLMs are doing.

        • morrowind@lemmy.ml

          Microsoft’s PHI-4 is primarily trained on synthetic data (generated by other AIs). It’s not a future thing; it’s been happening for years.

    • whotookkarl@lemmy.world

      These days the LLMs feed the LLMs, so you end up with models modeling models unless you exclude all public data from the last decade. You have to assume any user-generated public data is tainted when used for training.