• OsrsNeedsF2P@lemmy.ml · 10 months ago

    About 3 percent of students in the study had positive mental health outcomes, reporting that talking to the chatbot “halted their suicidal ideation.” But researchers also found “there are some cases where their use is either negligible or might actually contribute to suicidal ideation.”

    This is referring to a bot designed to help people struggling with mental health, and it's actually a big deal. That number is way too low.

    • Gamma@beehaw.org · 10 months ago

      “hey, I know you feel like killing yourself, but if it happens then we’ll just replace you with a shitty bot” probably isn’t as helpful as they thought it would be. It’s violating and ghoulish.

      • OsrsNeedsF2P@lemmy.ml · 10 months ago

        I hate this attitude of “well, if you can’t get a professional therapist, figure out how to get one anyway”. There needs to be an option for people who either can’t afford or can’t access a therapist. I would have loved for AI to fill that gap. I understand it won’t be as good, but in many regions the wait-list for therapy is far too long, and something is better than nothing.

        • TehPers@beehaw.org · 10 months ago

          Someone close to me gave up on the hotlines in the US and now just uses ChatGPT. It’s no therapist, but at least it’ll hold a conversation. If only the hotlines here weren’t so absurdly understaffed.

            • Megaman_EXE@beehaw.org · 10 months ago

            I’ve used one called Pi, which I’m assuming is some kind of branch off of ChatGPT or something.

            You don’t have to sign up or anything (for now), which is cool. But I assume they harvest all our data and information.

            I once tested to see if I could break it, and from my brief tests, it never seemed to break out of character or tell me anything bad or negative, which I thought was interesting (and good!).

              • Powderhorn@beehaw.org · 10 months ago

              I actually used Pi as my intro to generative LLMs. It was … I guess not encouraging self-harm, but so fucking irritating that it led me to want to. Always with the irrelevant supportive words that I guess work if you’re a teen?

          • Powderhorn@beehaw.org · 10 months ago

            I’ve given up on crisis lines. Their whole premise seems to be “get back to being comfortable with the oppressive system, you little bitch.”

  • Powderhorn@beehaw.org · 10 months ago

    Imagine a 3% success rate being acceptable in any situation. That tends to get you fired.

    • sqgl@beehaw.org · 10 months ago

      3% success vs. what? 6% sent over the edge? 10%? 20%?

      What shitty investigative journalism. If the journalist asked for a specific figure but was evaded, that should be stated in the article.

      • Powderhorn@beehaw.org · 10 months ago

        I don’t much like that take. Ars commits excellent journalism.

        From the story:

        About 3 percent of students in the study had positive mental health outcomes, reporting that talking to the chatbot “halted their suicidal ideation.” But researchers also found “there are some cases where their use is either negligible or might actually contribute to suicidal ideation.”

    • halyk.the.red@lemmy.ml · 10 months ago

      Yeah, I know someone who won’t watch that episode again because of how unsettling it is. Luckily for them, it’s slowly becoming reality.