• flandish@lemmy.world · 10 hours ago

    “Hallucinations” are things humans do. An AI can only be wrong. Even when it makes up data, it’s just a stochastic parrot.

    • PushButton@lemmy.world · 9 hours ago

      They coined the term “hallucination” as soon as people realized the “AI thing” was throwing bullshit back at us.

      They had to force that term into people’s heads; otherwise we would call it bullshit, lies, and so on, as we should.

      It’s like Google with their “sideloading”. There is no such thing; it’s just installing an app…

      It’s a word war. People are being manipulated.

    • melroy@kbin.melroy.org · 9 hours ago

      Hallucinations are by design for AI. It’s just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
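
The “advanced next-word prediction” point above can be sketched with a toy bigram model. This is purely illustrative (real LLMs are neural networks over tokens, and the corpus here is made up), but the sampling loop has the same shape: the model only ever picks a plausible next word, whether or not the resulting sentence happens to be true.

```python
import random

# Toy bigram "language model" (illustrative only, not any real LLM):
# count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ate the fish on the mat".split()
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:
        return None  # dead end: no observed successor
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
sentence = ["the"]
for _ in range(6):
    word = next_word(sentence[-1], rng)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Correct-sounding and wrong-sounding outputs come out of the identical sampling process; nothing in the loop checks against reality.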

      • Cort@lemmy.world · 9 hours ago

        Ah, it’s always hallucinating; sometimes the hallucinations conveniently line up with reality.

        • snugglesthefalse@sh.itjust.works · 8 hours ago

          The whole goal of these algorithms is that you put an input in and the output comes out as close as possible to the most likely correct answer; training is just repeating that process. We’re several years deep into these “most likely” results, and sometimes they’re pretty close, but usually they’re not quite there, because the only guidance the models get comes from outside.
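
What “training is just repeating that process” means can be sketched minimally. This is an assumed toy example (one weight, squared-error loss, a single externally supplied target, none of which come from the thread): the model’s only guidance is the outside signal, and repetition nudges it toward the “most likely correct” output.

```python
# Toy training loop: repeatedly nudge one weight so the model's output
# gets closer to an externally supplied target. Illustrative sketch only.
weight = 0.0      # the model's only parameter
target = 3.0      # guidance from outside: the "correct" answer
x = 1.0           # fixed input
lr = 0.1          # learning rate

for step in range(100):
    prediction = weight * x
    error = prediction - target
    weight -= lr * error * x  # gradient step on (prediction - target)^2 / 2

print(round(weight, 3))  # prints 3.0
```

The model never “knows” the answer is right; it only ever gets closer to whatever target the outside world supplies, which is the point being made above.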