AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather: The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.

  • treefrog@lemm.ee · 2 years ago

    Business Insider warning about late stage capitalism feels more than a little ironic.

    • Random Dent@lemmy.ml · 2 years ago

      As does being warned of technological oligarchs monopolizing AI by someone who works for fucking Meta.

      • Peanut@sopuli.xyz · 2 years ago

        And he’s the reason we can all fuck around with LLaMA models despite that. Props to Yann and the other Meta AI researchers. Also eager to see future JEPA stuff.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 years ago

      Today on PBS, we got an insider warning from a lifelong Republican that the fascism got out of hand and is going for full autocracy, even though he’d been pushing through pro-fash policies for the last thirty years.

      Everyone thinks The One Ring will be theirs to control.

      • Mirshe@lemmy.world · 2 years ago

        And in other news, the Leopards Eating Faces Party continues to eat faces, confusing Leopards Eating Faces voters…

      • MirthfulAlembic@lemmy.world · 2 years ago

        Was that the Adam Kinzinger one? It’s a low bar, but I’ll give him a modicum of credit for saying his vote against the first impeachment was cowardice and that he’d vote for Biden in 2024 if Trump is the Republican nominee. It doesn’t totally feel like a lesson learnt, though, that he still considers himself a Republican.

    • frezik@midwest.social · 2 years ago

      I’ve been thinking about how to do that. The code for most AI is pretty basic and uninteresting. It’s mostly massaging the input into something usable. Companies could open source their entire code base without letting anything important out.

      The dataset is the real problem. Say you want to classify fruit to check if it’s ripe enough for harvesting. You’ll need a whole lot of pictures of your preferred fruit, both ripe and not ripe. You’ll want people who know the fruit to label those images, and then you can feed them into a model. It’s a lot of work, and it needs to attract a bunch of volunteers, largely the sort of people who haven’t traditionally been part of open source software.
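      The labeling-and-split workflow described above can be sketched in a few lines. Everything here is a hypothetical illustration (the file names, label names, and split ratio are assumptions, not any real project’s layout):

```python
import csv
import io
import random

# Hypothetical volunteer labels: (image_path, label) pairs, where each
# label was assigned by a person who actually knows the fruit.
labels = [
    (f"images/mango_{i:04d}.jpg", "ripe" if i % 3 else "unripe")
    for i in range(300)
]

# Shuffle, then hold out 20% for validation so any model trained on this
# gets scored on images it never saw during training.
random.seed(42)
random.shuffle(labels)
split = int(len(labels) * 0.8)
train, val = labels[:split], labels[split:]

# Persist as CSV so the volunteers' work is reusable by any framework.
buf = io.StringIO()
csv.writer(buf).writerows(train)
print(len(train), len(val))  # 240 60
```

      The point isn’t the code; it’s the 300 (realistically, tens of thousands of) human judgments behind the `labels` list. Organizing that labor is the part open source would have to solve.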

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 years ago

      AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.

      And I, for one, welcome our new robot overlords!

      • zbyte64@lemmy.blahaj.zone · 2 years ago

        Any AI safety expert who believes these oligarchs are going to get AGI, and not some monkey’s paw, is also drinking the Kool-Aid.

        • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 years ago

          Actually, AI safety experts are worried that corporations are just interested in getting technology that achieves specific ends, and don’t care that it’s dangerous or insufficiently tested. Our rate of industrial disasters kind of demonstrates their views regarding risk.

          For now, we are careening towards giving smart drones autonomy to detect, identify, target and shoot weapons at enemies long before they’re smart enough to build flat-packed furniture from the IKEA visual instructions.

      • PsychedSy@sh.itjust.works · 2 years ago

        If we have to choose between corporations and the government ruling us with AI, I think I’m gonna just take a bullet.

        • Kedly@lemm.ee · 2 years ago

          Anarchy will never exist as anything but the exception to the rule; governments are a form of power that the population can at least influence. A weaker government will always mean stronger nobility or stronger corporations.

            • Kedly@lemm.ee · 2 years ago

              Maybe in the future we can go back to smaller tribes/groups of people that take care of each other, but in the world as it exists today? An entity will come by sooner or later to conquer said group. We influence our government FAR better than we influence a corporation or dictator. Right now we need an equalizing big power, and democratic governments at least have to pretend to work for their people. Which, again, corporations and dictators do not.

    • errer@lemmy.world · 2 years ago

      Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).

    • r3df0x ✡️✝☪️@7.62x54r.ru · 2 years ago

      Yep. As dangerous as that could be, it’s better than centralizing it. There are already systems like GPT4All that come with good models that are slower than things like ChatGPT but work similarly well.

  • echo64@lemmy.world · 2 years ago

    God, I can’t stand these people who are basically only worried about AI’s effect on the stock market. No normal person would even notice. We have more realistic issues with AI.

  • AutoTL;DR@lemmings.world · 2 years ago

    This is the best summary I could come up with:


    He named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei in a lengthy weekend post on X.

    “Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to these founders’ role in shaping regulatory conversations about AI safety.

    That’s significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet. Altman, Hassabis, and Amodei did not immediately respond to Insider’s request for comment.

    Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone.

    In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.

    Those risks include worker exploitation and data theft that generates profit for “a handful of entities,” according to the Distributed AI Research Institute (DAIR).


    The original article contains 768 words, the summary contains 163 words. Saved 79%. I’m a bot and I’m open source!

  • Beefalo@midwest.social · 2 years ago

    Well we know that, but anybody who does anything less than clap and sing about it gets treated like trash by the huge wave of people who immediately trusted the crazy thing with their lives. It’s the fucking iPhone all over again. So hooray for AI.

    • Lucidlethargy@sh.itjust.works · 2 years ago

      Yeah, my own Dad calls me an “activist” now (in a derogatory manner). I never leave my house most days… But okay. I’m an activist because I think AI is a tangible threat to the working class. I’ve said only a few sentences to my Dad about it. But yeah… I guess I’m the problem for not finding some creative way to profit off LLMs yet.

    • Silinde@lemmy.world · 2 years ago

      That’s great. Now try training that model on a 4080 and you’ll see it’ll take significantly longer. Try amassing the data needed for training on your home PC and see how much longer beyond that you’ll need. There’s a reason the current race is down to just a few companies: it costs pennies to run queries on an existing model, but millions to build and train that model in the first place.
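      The cost asymmetry in the comment above can be made concrete with a back-of-envelope estimate. All numbers below are illustrative assumptions (the common 6 · parameters · tokens rule of thumb, plus an optimistic sustained throughput for one consumer GPU), not measured specs:

```python
# Rule of thumb: training compute ~ 6 * parameters * training tokens.
params = 7e9             # assume a ~7B-parameter model
tokens = 1e12            # assume ~1 trillion training tokens
train_flops = 6 * params * tokens          # 4.2e22 FLOPs

# Assume one consumer GPU sustains ~40 TFLOP/s end to end (optimistic).
sustained = 4e13
years = train_flops / sustained / (3600 * 24 * 365)
print(f"~{years:.0f} GPU-years for a single training run")  # ~33 GPU-years
```

      Inference, by contrast, is one forward pass per generated token, which is why a quantized model runs fine on a single machine while training one does not.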

  • jcdenton@lemy.lol · 2 years ago

    No one can fucking run it locally right now; only people who have 1%er money can run it.