• EvergreenGuru@lemmy.world · 2 days ago

    Your comment would be more convincing if you laid out the complex idea you’re alluding to, instead of saying that a simple example is all people need.

    As far as I can tell, thought scientists stay losing, because pretending your thoughts comprise a form of science that ends in a measurable result is sophistry.

    • Iconoclast@feddit.uk · 2 days ago

      It’s to illustrate the alignment problem. What you literally ask isn’t always what you actually want. This is usually obvious to humans but not necessarily to an AI. If you sit in a self-driving car and tell it to take you to the airport as fast as possible, you might arrive three minutes later covered in vomit with the entire police department after you. That’s obviously not what you wanted, but you got exactly what you asked for.

      The paperclip maximizer is a cartoon example of this. If you just ask it to make as many paperclips as possible, that becomes its priority number one and everything gets turned into paperclips and you might not get the chance to tell it this isn’t what you meant.
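      To make that concrete, here’s a toy sketch in Python (the plans, numbers, and constraint function are all invented for illustration, not anyone’s real system): an optimizer that maximizes exactly the stated objective will happily pick a catastrophic plan, because the objective never mentions the constraints a human would take for granted.

```python
# Toy specification-gaming sketch. Hypothetical data throughout.

def paperclips_made(plan):
    """The stated objective: how many paperclips a plan produces."""
    return plan["paperclips"]

def acceptable(plan):
    """The unstated human constraints the objective never mentions."""
    return plan["humans_unharmed"] and plan["resources_used"] < 1000

plans = [
    {"paperclips": 10,    "humans_unharmed": True,  "resources_used": 5},
    {"paperclips": 10**9, "humans_unharmed": False, "resources_used": 10**12},
]

# Literal maximization picks the catastrophic plan: it scores highest.
best = max(plans, key=paperclips_made)
print(best["humans_unharmed"])  # False - exactly what was asked for

# Only once the implicit constraints are written down does the answer change.
safe_best = max((p for p in plans if acceptable(p)), key=paperclips_made)
print(safe_best["paperclips"])  # 10
```

      The point of the sketch is that the “fix” lives entirely in `acceptable()` - constraints nobody thought to state, because to a human they go without saying.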

      A real-life example is the rat bounty story (usually told of colonial Hanoi): the city started paying people for rat tails to eradicate the rat population, only for folks to start breeding rats to make money. It’s a classic case of unintended results from an underspecified requirement.

      • MangoCats@feddit.it · 20 hours ago

        the story of a city that started paying people for rat tails to eradicate the rat population, only for folks to start breeding rats instead to make money.

        Or the real-life story of the US elementary school students who saved up money to buy and then free slaves - which, when examined more closely, was found to be driving growth in the slave trade, not slowing it down.

        In both cases - you figure out what’s off kilter, and you stop doing that.

        It’s a lot easier to turn off “AI machines” than, for instance, powerful industries like oil and gas…

      • MangoCats@feddit.it · 20 hours ago

        you might not get the chance to tell it this isn’t what you meant.

        And that is where the thought experiment left the tracks - lifted off with escape velocity and is now passing Voyager 2…

        In what cartoon world do we not get a chance to shut off the Doomsday Device? I mean, it was a funny little twist at the end of Dr. Strangelove, but as realistic as many elements of that story were, that was not one of them.

        • Iconoclast@feddit.uk · 13 hours ago (edited)

          I don’t think you fully appreciate the implications of creating something orders of magnitude more intelligent than us. You can’t outsmart something smarter than you. Even if it was only as smart as the smartest human, being a computer it would still process information a million times faster. Everything would happen in super-slow motion from its perspective. It would have so much time to consider each move.

          Humans aren’t anywhere near the strongest primate on Earth, yet we’re by far the dominant one. I don’t think a gorilla has any idea just how much smarter we are, and even if it did, it would probably still assume that a war with humans would mean us outnumbering them, hitting, biting, and throwing things at them. They’d have no clue we can end them from a distance without them ever knowing what hit them. They can’t even imagine all the ways we could screw things up for them - and have - even when we have nothing against gorillas.

          The point isn’t that I think this is absolutely going to happen, but just to highlight that we’re effectively rolling the dice on it and seeing what happens - which I find incredibly irresponsible. This whole “it’ll be fine, we can always turn it off” attitude is incredibly naive and short-sighted.

          • MangoCats@feddit.it · 4 hours ago

            You can’t outsmart something smarter than you.

            And yet, we have rich idiots making all our top-level decisions. https://www.cnbc.com/2014/07/16/icahn-too-many-companies-run-by-morons.html

            I don’t think a gorilla has any idea just how much smarter we are

            I don’t think most people have any idea just how smart a gorilla, or dolphin, or squid, or pig, or any of thousands of other species are.

            it would probably still assume that a war with humans would mean us outnumbering them, hitting, biting, and throwing things at them.

            Many people - though not all - are very rigid in their thinking, while plenty of animals turn out to be quite adaptable: https://theconversation.com/city-animals-act-in-the-same-brazen-ways-around-the-world-279977

            They’d have no clue we can end them from a distance without them ever knowing what hit them.

            Many hunted animals have evolved a fear of humans at a distance. All the megafauna of Africa remaining today are only there because they evolved alongside humans, instead of being blindsided and hunted to extinction before they figured out what we can do.

            Will we be blindsided by our computers (any more than we already have been)? Undoubtedly. Will they turn around and start eating us because they’re so fast and smart? Probably not.

            we’re effectively rolling the dice on it and seeing what happens - which I find incredibly irresponsible.

            Yep. Pretty much like developing the fossil fuel industry, or cutting all the mature trees off the face of three continents, hunting whales to near extinction, killing all the megafauna in Europe and the Americas, desertifying the cradle of civilization through unsustainable farming, etc. etc.

            I agree, it’s irresponsible. I disagree with those who liken it to a world war IV global apocalypse in a millisecond singularity.

        • Iconoclast@feddit.uk · 2 days ago

          It’s not a matter to decide but a problem to try to solve. In most cases we get to learn from our mistakes, but when it comes to AGI we might not.

          Or are you suggesting we shouldn’t even think about it but rather just roll the dice and see what happens?

          • eleitl@lemmy.zip · 2 days ago

            Undecidable in the sense that no solution can exist for that problem class. You can start with the definition of what exactly you’re aligning with, how you measure that, how you derive applicable system-evolution constraints from your measurements, and just what “humanity” even is, in an iterative context.

            Apart from that, we’re already in an out-of-control, winner-takes-all arms race in which competing nations use AI, including for social control and on the battlefield. The ivory tower is a meal ticket with no practical relevance.

    • idiomaddict@lemmy.world · 2 days ago

      The “experiment” is one you conduct on yourself - it’s not a substitute for studying a real process, where you’d be using your imagined results as the basis of further study. It’s very useful in a number of non-scientific fields, and it can serve as an aid in scientific education, so it shouldn’t be written off entirely.

      The paperclip thought experiment is a punchy, memorable example of the gap between the input you give a computer and what the computer interprets from it. The goal is for people who hear it to remember that they need to be thoughtful about what exactly they want, and precise in their phrasing, when they’re programming or training an AI.
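      That gap fits in a few lines of Python. This is a made-up sketch (the routes and penalty weights are invented) of the self-driving-car example from upthread: minimizing the literal request, “time to the airport,” selects a different answer than minimizing what the rider actually meant.

```python
# Toy example of a literal objective vs. the intended one. All data invented.

routes = [
    {"name": "reckless", "minutes": 3,  "laws_broken": 12, "passenger_vomits": True},
    {"name": "normal",   "minutes": 18, "laws_broken": 0,  "passenger_vomits": False},
]

def literal_cost(route):
    """What was asked for: 'as fast as possible'."""
    return route["minutes"]

def intended_cost(route):
    """What was actually wanted: speed, minus the things the rider
    never thought to spell out, weighted heavily."""
    return (route["minutes"]
            + 1000 * route["laws_broken"]
            + 1000 * route["passenger_vomits"])

print(min(routes, key=literal_cost)["name"])   # reckless
print(min(routes, key=intended_cost)["name"])  # normal
```

      The only difference between the two results is whether the implicit preferences made it into the objective at all - which is the whole lesson of the thought experiment.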