• ClusterBomb@lemmy.blahaj.zone · 4 months ago

    “My hammer is not well suited to cut vegetables” 🤷

    There is so much to say about AI; can we move on from “it can’t count letters or do math”?

    • ReallyActuallyFrankenstein@lemmynsfw.com · 4 months ago

      I get that it’s usually just a dunk on AI, but it is still a valid demonstration that AI has pretty severe and unpredictable gaps in functionality, in addition to failing to properly indicate confidence (or lack thereof).

      People who understand that it’s a glorified autocomplete will know how to disregard or prompt around some of these gaps, but this remains a litmus test because it succinctly shows you cannot trust an LLM response even in many “easy” cases.

    • Strykker@programming.dev · 4 months ago

      But the problem is more “my do-it-all tool randomly fails at arbitrary tasks in an unpredictable fashion,” which makes it hard to trust as a tool in any circumstances.

      • desktop_user [they/them]@lemmy.blahaj.zone · 4 months ago

        It would be like complaining that a water balloon isn’t useful because it isn’t accurate. LLMs are good at approximating language; numbers are too specific and have more objective answers.

        • Lovable Sidekick@lemmy.world · 4 months ago (edited)

          I really don’t get what point OP is trying to make with this example, though. It accurately answered their misspelled question and also accurately answered the question they were apparently trying to ask. I don’t see the problem.