• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
    • Salamendacious@lemmy.worldOP
      2 years ago

      If an AI were to gain sentience, basically becoming an AGI, then I think it’s probable that it would develop an ethical system independent of its programming and be able to make moral decisions, such as: murder is wrong. Fiction deals with killer robots all the time because fiction is a narrative, and narratives work best with both a protagonist and an antagonist. Very few people in the real world have an antagonist who actively works against them. Don’t let fiction influence your thinking too much; it’s just words written by someone. It isn’t a crystal ball.

      • FarceOfWill@infosec.pub
        2 years ago

        You realise those robots were made by humans to win a war? That’s the trick: the danger is humans using AI or trusting it, not Skynet or other fantasies.

        • Salamendacious@lemmy.worldOP
          2 years ago

          My point is that everything written up to now has been just fantasy, stories dreamed up by authors. They reflect the fears of their time more than they accurately predict the future. The more old science fiction you read, the more you realize it’s about the environment in which it was written, and it almost universally doesn’t even come close to actually predicting the future.