• Null User Object@lemmy.world · 4 hours ago

    Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it “sounded ridiculous”, she says. “I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that’s a psychiatric term.”

    If that wasn’t sufficient to raise suspicions, Osmanovic Thunström planted many clues in the preprints to alert readers that the work was fake. Izgubljenovic works at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. One paper’s acknowledgements thank “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise”. Both papers say they were funded by “the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.

    Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.

    • fartographer@lemmy.world · 46 minutes ago

      Your comment is a little too long, but I read up to this point:

      Because she works in the medical field

      So, now I know enough to know that any AI summary of this paper is absolutely true because science said it.

      Also, I’m pleasantly surprised that Sideshow Bob is finally doing something useful.

      • Aatube@lemmy.dbzer0.com · 41 minutes ago

        Science didn’t say it either. The first thing you learn in any research class is not to trust preprints, since by definition they haven’t been peer reviewed (they’re the academic equivalent of blog posts).

        • fartographer@lemmy.world · 29 minutes ago

          You didn’t mention at the top of your comment whether you work in the medical field, which invalidates everything else you’ve claimed about science. I should know; I do my own research by reading Google AI summaries.

  • FireWire400@lemmy.world · 4 hours ago

    If it’s plausible enough based on the dataset it was trained on, it exists. Hallucinations are basically just the LLM trying to stay current by inference, I think.

    • flandish@lemmy.world · 4 hours ago

      “Hallucinations” are things humans do. An AI can only be wrong. Even when it makes up data, it’s just a stochastic parrot.

      • PushButton@lemmy.world · 3 hours ago

        They coined the term “hallucination” as soon as people realized that the “AI thing” was throwing bullshit back at us.

        They had to force that term into people’s heads; otherwise we would call it what it is: bullshit, lies, and so on.

        It’s like Google with their “sideloading”. There is no such thing, it’s just installing an app…

        It’s a word war. People are being manipulated.

      • melroy@kbin.melroy.org · 4 hours ago

        Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
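
        A minimal sketch of what that next-word prediction loop looks like, with a made-up toy_logits table standing in for a real model (every name and score here is hypothetical, not any real library’s API):

            import math, random

            # Hypothetical word scores. A real LLM computes these with a
            # neural network; only this table is fake, the sampling isn't.
            def toy_logits(prompt):
                return {"Paris": 3.0, "London": 1.0, "Narnia": 0.5}

            def next_word(prompt):
                logits = toy_logits(prompt)
                # Softmax turns scores into probabilities.
                total = sum(math.exp(v) for v in logits.values())
                probs = {w: math.exp(v) / total for w, v in logits.items()}
                # Sample from the distribution: a plausible-but-wrong word
                # ("Narnia") comes out of the exact same code path as the
                # right one ("Paris").
                r, acc = random.random(), 0.0
                for w, p in probs.items():
                    acc += p
                    if r <= acc:
                        return w
                return w

            print(next_word("The capital of France is"))

        Correct and hallucinated answers come out of the same sampling step; nothing in the loop distinguishes them.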

        • Cort@lemmy.world · 3 hours ago

          Ah, it’s always hallucinating; sometimes the hallucinations conveniently line up with reality.

          • snugglesthefalse@sh.itjust.works · 3 hours ago

            The whole goal of these algorithms is that you put an input in and the output is as close to the most likely correct answer as possible; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get comes from outside.
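
            A toy sketch of that training loop, in the same spirit (one made-up learnable score per word; real models use huge networks, but the outside guidance is the same cross-entropy idea, and every name here is hypothetical):

                import math

                # Hypothetical learnable scores for candidate next words.
                logits = {"Paris": 0.0, "London": 0.0, "Narnia": 0.0}
                target = "Paris"   # the guidance "from outside": the actual next word
                lr = 0.5           # learning rate

                for step in range(50):
                    total = sum(math.exp(v) for v in logits.values())
                    probs = {w: math.exp(v) / total for w, v in logits.items()}
                    # Cross-entropy gradient: nudge probability toward the target.
                    for w in logits:
                        grad = probs[w] - (1.0 if w == target else 0.0)
                        logits[w] -= lr * grad

                total = sum(math.exp(v) for v in logits.values())
                print({w: round(math.exp(v) / total, 2) for w, v in logits.items()})
                # After training, "Paris" is merely the most likely output;
                # the other words never disappear, they just get less probable.

            The model only ever moves toward “most likely given the examples it was shown”, which is why the output is usually close to right rather than guaranteed right.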