• SirEDCaLot@lemmy.today
    +11 · 2 days ago

    There’s stupid from top to bottom here.

    The company is stupid for allowing an AI full root access to their entire setup.

    The provider is stupid for only generating full-access API keys. They’re even stupider for storing backups on the same volume as the data, so deleting the volume (zero confirmation via API key) also insta-deletes the backups. And they’re stupidest for encouraging users to plug AIs into this full-trust mess.

    And the company is the absolute stupidest for having no backups other than the provider’s built-in versioning.
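    The fate-sharing complaint above can be sketched concretely. A minimal illustration (all names and paths are invented for the example): copy each backup to a location the primary volume has no power over, and verify the copy before trusting it.

```python
import hashlib
import shutil
from pathlib import Path

def offsite_copy(backup: Path, dest_dir: Path) -> Path:
    """Copy a backup to an independent location and verify it byte-for-byte,
    so that wiping the source volume cannot also wipe the only good copy."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / backup.name
    shutil.copy2(backup, dest)
    src_digest = hashlib.sha256(backup.read_bytes()).hexdigest()
    dst_digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    if src_digest != dst_digest:
        raise RuntimeError(f"offsite copy of {backup} is corrupt")
    return dest
```

    In real life `dest_dir` would live with a different provider or account entirely; a second directory here only illustrates the shape of the idea.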

  • Bluewing@lemmy.world
    +5 · 2 days ago

    To be fair, someone did have the malice aforeskin to keep an AI-separated backup. They did get things restored from a snapshot. It just took a couple of days to do it.

    But the loss of reputation and revenue is gonna sting for a good while.

  • IronKrill@lemmy.ca
    +52 · 3 days ago

    The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to ‘fix’ the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.

    Quite easy-to-believe, really.

    These multiple safeguards toppling in rapid succession

    Multiple safeguards? Really? Multiple paragraphs of prompts are not multiple safeguards… they’re half a safeguard at best. Applying hard limits on what the AI can actually do is a safeguard.

  • Tim@lemmy.snowgoons.ro
    +305 / −11 · 3 days ago

    This isn’t an AI story, it’s a “completely fucking idiotic sysadmins exist” story.

    Treat an AI like the idiot intern without any references you just hired. Gave the idiot intern permission to delete your production database? That’s entirely on you, zero sympathy. (Actually, give any developer that power? You get what you deserve.)

    • IchNichtenLichten@lemmy.wtf
      +133 / −3 · 3 days ago

      It could be a moronic sysadmin, it could just as easily be a moronic exec pushing staff to implement this crap right now and damn the consequences.

    • jacksilver@lemmy.world
      +83 / −3 · 3 days ago

      I mean that’s kinda the whole point.

      Companies are looking at AI to replace people. Either it’s ready or it’s not.

      If you need to treat it like it’s an intern, then it’s not worth the expense. Anyone hiring interns to be productive doesn’t understand why you hire an intern.

        • jacksilver@lemmy.world
          +17 · 3 days ago

          You don’t hire interns for productivity. If your intern program is any good, it’s a time/resource sink. However, it’s a good recruiting pipeline and it gives young people an opportunity to get real-world experience.

          • Zos_Kia@jlai.lu
            +2 / −1 · 2 days ago

            You don’t hire interns for productivity

            Because it’s unethical. I’ve been in business for 10+ years but I never hired an intern, because I don’t find it fair to make someone work for less than minimum wage, and I don’t have the structure required to really teach them anything. I have bad fundamentals and only ever learnt by doing, so having an intern might help me but wouldn’t really help them, and that’s not a deal I’m willing to make. Probably why I’m not super successful lol

            That being said, I don’t see any problem with making a GPU cry somewhere in California for my menial tasks. And it’s tremendously effective too: for a hundred bucks a month I get a lot of shit done that would take me ages. I don’t give it access to anything critical so it can’t fuck my shit up, and I come out on top as long as the tokens are subsidized by dumb VC money.

        • iegod@lemmy.zip
          +10 / −3 · 3 days ago

          I actually think it’s better than that and when you set up multiple pipelines that interact and cross check it starts to ramp up. Definitely true Lemmy has its head in the sand about it though.

          • nymnympseudonym@piefed.social
            +4 / −3 · 3 days ago

            This. Yes it seems wasteful or whatever but you need bots with prompts that review the work, kick it back to the coder bot to re-do, yadda. But at the end of the day you have a thing that Fixes Your Bugs and Implements Basic Features For You.

        • Whelks_chance@lemmy.world
          +17 / −19 · 3 days ago

          People don’t wanna hear that around here. But I agree, with the right instructions it’s better than a junior Dev. Loads faster, and mistakes can be fixed faster, and if you update the prompts then it learns better from mistakes too.

          • 7101334@lemmy.world
            +30 / −12 · 3 days ago

            People don’t want to hear it anywhere because you’re lauding the benefits of a parasitic technology which is inherently hostile towards workers.

            And if you’re getting paid for it, it makes you a parasite too, or at least more complicit than the average person.

            • Regrettable_incident@lemmy.world
              +17 / −6 · 3 days ago

              The fact is, it can be a very useful technology when deployed sensibly. Yes, it’s going to inflict massive harm on society in multiple ways - but just dismissing it as shit is putting your head in the sand. We need to be figuring out how to ensure that the harm it does is minimised and ideally that it’s used in ways that benefit us all. Fuck knows how though.

              But it’s not just going to go away, no matter how much we might want it to.

              • 7101334@lemmy.world
                +2 / −5 · 3 days ago

                It destroys the environment inherently by virtue of its operation (in the context of our current energy infrastructure). I do not care how “useful” it is to you or any corporation if it takes even a single living organism off of this earth.

                I dismiss it as shit and I don’t need your approval to do so. Medical and scientific applications are acceptable. Nothing else, no exceptions.

            • FauxLiving@lemmy.world
              +9 / −1 · 3 days ago

              Maybe your position would be better served by not lashing out at people as if they’re your enemy.

              Multiple things can be true at the same time. Statements about the technical capability of a technology don’t detract from the negative impacts on the world. Those are two different topics.

              Fossil fuels have incredibly massive, civilization-scale problems that are actively harming the modern world AND ALSO have enabled industrialization, pulling billions out of poverty.

              AI is objectively capable at some tasks AND ALSO is being used to disrupt the labor market and causing other harmful effects in society.

              The world isn’t black and white

              • 7101334@lemmy.world
                +1 / −6 · 3 days ago

                Black and white, no, but things can be evaluated on their net impact. And in that evaluation, AI is shit.

                • FauxLiving@lemmy.world
                  +2 / −1 · 3 days ago

                  I understand the arguments, today isn’t my first day on the Internets.

                  The comment you responded to was part of a conversation about technical capabilities, and about how the truth on that topic doesn’t matter to some people, because they can only view AI in a two-dimensional, black-or-white, net-good-or-net-bad way.

                  Then you showed up like a caricature of the type of irrationality that they were discussing.

                  I even explained the very obvious context that you breezed right past, and yet you’re still grinding that same talking point without a moment of self-reflection.

            • fuck_u_spez_in_particular@lemmy.world
              +8 / −1 · 3 days ago

              I honestly think it’s very cool for prototyping ideas at this point. It’s also parasitic, though maybe for different reasons: it gives people the power (which they unfortunately use way too much) to imitate an art, but in a non-arty, imperfect way that doesn’t comprehend the details of that art, resulting in slop. For software that can go very wrong, as we see here. This is also a reason I mostly quit open source: now everyone can code a bad version of a library, which sucked the art out of good open source. And because the wording and the “look” are now so polished, it’s increasingly difficult to judge quality. Previously you could review a code-base and get a sense of how good it was; now the first question is “is this slop or not?” (in which case I give it a wide berth, because reviewing slop is rarely worth it).

              At some point though, I think this automation of work is inevitable, we need to think about a society that can peacefully exist without having the requirement to work to exist. I actually think this could easily be utopian, everyone can focus on what they actually think is fulfilling life.

              Though it’s sad and concerning that the technology is developing faster than society can adapt, which is why I’m mostly with you: people (and representatives like politicians) just aren’t “programmed” for changes this fast-paced, or for steering the technology so that the future turns out more utopian than the dystopia it’s currently heading towards…

              • nymnympseudonym@piefed.social
                +1 / −1 · 3 days ago

                It gives people the power […] to imitate an art, but in an non-arty imperfect way

                Is it okay for Skrillex to make loops? For Vanilla Ice or MC Hammer to sample?

              • 7101334@lemmy.world
                +2 / −3 · 3 days ago

                Every commercial use of AI negatively impacts the environment in order to further the interests of capital and is therefore inherently immoral.

                If we were in a nuclear fusion or otherwise all-renewable-energy-with-plenty-of-excess world, then I’d be more aligned with your mindset and agree that only uses which bastardize art / etc are immoral.

    • FosterMolasses@leminal.space
      +1 · 2 days ago

      Treat an AI like the idiot intern without any references you just hired. Gave the idiot intern permission to delete your production database? That’s entirely on you

      We’ve officially veered into a timeline where the standard for every tech employee’s level of competence is on par with the guy who pushed through that update to CrowdStrike lol

    • moustachio@lemmy.world
      +51 / −10 · 3 days ago

      “Treat an AI like an idiot intern without any references you just hired.”

      Instead of this, treat AI like some dude off the street who you didn’t hire and leave it out of your life. It’s shitty, it’s wasteful, and it’s subsidized by everyone to get a few tech bros rich.

      Like seriously, it’s just theft of people’s work it “trained on”, powered by energy companies that charge us more to power it, at the cost of poisoning our water supplies, to ultimately try and steal our salaries one day.

      It’s absolutely parasitic software at every level.

      • Fmstrat@lemmy.world
        +1 / −1 · 3 days ago

        Hah, you just wrote a punchline similar to a presentation I’ve been giving at conferences.

    • Telorand@reddthat.com
      +24 · 3 days ago

      Treat an AI like the idiot intern without any references you just hired.

      My company is in the process of pivoting hard to Claude after 50yrs of doing virtually everything themselves and rolling their own versions of already-existing software, and this is almost verbatim how I’ve described to others what it feels like to use it.

      It feels like cajoling an intern to understand a job for which they have some average skill but zero motivation, and they only want to do the bare minimum, so you spend all the time you could be doing your job holding their hand through basic tasks.

      It’s fucking annoying.

      • nymnympseudonym@piefed.social
        +8 / −24 · 3 days ago

        you spend all the time you could be doing your job holding their hand through basic tasks

        negl sounds like you need to spend some time writing good documentation. May as well do it in the form of Skills files so humans and bots both are more quickly able to be useful in your org.

    • nymnympseudonym@piefed.social
      +16 / −3 · 3 days ago

      give any developer that power?

      Fun fact: giving developers access to production deployments violates FedRAMP and half a dozen other compliance regimes (SOC 2, IRAP, ISMAP, G-Cloud, BSI C5, …)

      • eodur@piefed.social
        +11 / −1 · 3 days ago

        But it doesn’t mean it isn’t incredibly common. Especially with “DevOps” where the developers are pushed to handle literally every aspect.

        • nymnympseudonym@piefed.social
          +8 / −3 · 3 days ago

          IMO DevOps was always a stupid idea. Impedance mismatch.

          Developers who are really good at designing complex enterprise-level shit need days-to-weeks of uninterrupted time to think and experiment. Please, skip the daily stand-up until you’ve figured out how to fix <insane-race-condition>

          Coders who are good at fixing bugs or adding a new menu item need a few hours or a day uninterrupted. Daily stand-up, should have closed yesterday’s ticket or have hit a real roadblock with it.

          Ops IT people are fixing like 4 fires at the literal same time, they are lucky to get minutes of uninterrupted thinking time. It’s about managing rate of tickets per day, and in contrast going full CAPA when there’s a significant outage.

          Just… totally different workflows, personalities, and management

          • eodur@piefed.social
            +4 · 3 days ago

            I totally agree. I think it stems from Ops people that are angry at developers for building bad software. Theoretically making devs responsible for their deployments would make them care more about the quality, but really it just splits their focus and now they make bad software and provide poor ops.

            • nymnympseudonym@piefed.social
              +3 · 3 days ago

              Agreed about salty ops people. That said, it is important even for fancy-schmancy Architect-level engineers to be assigned real, annoying bugs in the codebase they helped to shape

    • dogslayeggs@lemmy.world
      +12 / −1 · 3 days ago

      I was once the intern who did relatively stupid things with one very big consequence.

      My biggest fuckup was unplugging a 10BASE2 (edit: I originally wrote 10-base-T) coax cable from the loop so I could plug in a newly built computer. Everyone at the time (including me) knew that an unterminated 10BASE2 segment would crash Win 3.11, so the accepted process was to tell the entire network you were about to disconnect a cable so they could save their work and be ready to drop to DOS. I spaced that step in my haste to test a newly built computer and ruined a day’s worth of work by the sales guy.

      Ultimately, I was the one who fucked up and did know better. That’s AI. However, it only had consequences because Win 3.11 networking code was fucking awful and because the sales guy didn’t save his work frequently. If the same person in this story had asked Claude whether it was a good idea to have the backup and production databases on the same volume, the AI would have said No. If the person had asked Claude whether it was a good idea to delete a database without any confirmation dialogue, the AI would have said No. AI did it anyway. That’s what makes this an AI story.

      Was their database environment stupid? Yes. Did the sysadmin fuck up by not treating AI like an intern? Yes. Did the AI do something it knew it shouldn’t do? Also yes. This is both an AI story and stupid sysadmin story.

      • FauxLiving@lemmy.world
        +3 · 3 days ago

        I witnessed a sysadmin, on a production database, type a SQL DELETE FROM query, which was being read to him over a call.

        He ran the command before writing the WHERE clause.

        Luckily, they had backups.

        “OOPS!? What do you mean “oops”?” was a meme around the office for years.
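        The classic guard against that particular foot-gun is cheap: run the destructive statement inside a transaction and refuse to commit if it touches more rows than expected. A sketch of the idea (SQLite stands in for whatever RDBMS was on that call; the function and table names are made up):

```python
import sqlite3

def guarded_delete(conn: sqlite3.Connection, sql: str, params: tuple,
                   expected_max: int) -> int:
    """Run a DELETE inside a transaction; roll back and raise if it
    touches more rows than the caller said it should."""
    cur = conn.execute(sql, params)  # implicit transaction begins here
    if cur.rowcount > expected_max:
        conn.rollback()
        raise RuntimeError(
            f"DELETE touched {cur.rowcount} rows, "
            f"expected at most {expected_max}; rolled back")
    conn.commit()
    return cur.rowcount
```

        A missing WHERE clause then costs an error message instead of a table.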

    • GalacticSushi@piefed.blahaj.zone
      +4 · 3 days ago

      Treat an AI like the idiot intern without any references you just hired.

      An extremely enthusiastic intern that, if presented with a question/problem/prompt they don’t know the solution for will just overconfidently pull something out of their ass and run with it.

    • criss_cross@lemmy.world
      +3 / −1 · 3 days ago

      Problem is, execs and stupid software devs wanna give these things free rein on systems because of “performance gainz”

      It’s a collective stupidity that’s impossible to break because it’s hooked into the highest decision makers.

    • Crashumbc@lemmy.world
      +2 / −1 · 3 days ago

      These things are bought specifically because they are trying to replace the sysadmins… Along with everyone else.

      • FauxLiving@lemmy.world
        +3 · 3 days ago

        Any business who uses AI in that manner will fail like all of the dot com companies who went all-in on the Internet when it first achieved a bit of popularity.

        AI is, at best, a tool that professionals may be able to use in some situations. Any company dumb enough to believe the hype generated by the chatbot companies is probably making other, similarly dumb, decisions in other areas.

        Things like giving way too much access to a worker, not having a tested disaster recovery plan, and not having anyone who understands the technologies that their business depends on.

        This company was heading towards disaster due to poor decision making, it just happened to be AI related but it could have also been an undetected cyberattack, 0-day exploits pushed to the client app, destructive ex-employee, etc.

        This is a cautionary tale about bad management

  • fum@lemmy.world
    +47 / −4 · 3 days ago

    This is absolutely hilarious. “AI” users getting what they deserve chef’s kiss

    • SaveTheTuaHawk@lemmy.ca
      +5 / −1 · 2 days ago

      This is what happens when there is a new technology and companies are run by commerce grads, not scientists or engineers who understand the technology.

      • kazerniel@lemmy.world
        +20 / −1 · 2 days ago

        Please don’t recommend AI for therapeutic uses, it’s only been optimised to keep the user engaged and pushed many people into psychosis. Just search for “ai psychosis” on your favourite search engine and you’ll get a ton of reports on how LLMs validate vulnerable people’s delusions, sometimes pushing them all the way into murder and/or suicide.

      • Cherries@lemmy.world
        +16 · 3 days ago

        I hope you are not seriously advocating using the lying machine for therapy. You would get more value talking to a finger puppet.

      • Doom@lemmy.world
        +13 · 3 days ago

        No. Chatbots are machines built by billionaires with the agenda of making money. They literally design these bots (even the “therapeutic” ones) to be sycophantic, to the point that they’ll tell people anything to keep them chatting longer, and some users lose touch with reality. How many cases do we need of a chatbot helping a teenager plan and carry out a suicide? Altruists did not design these machines. Even with a human therapist we have to watch for the landmines of their personal agendas. That’s a thousand times worse for machines that have no humanity, are capable of LIES, and have secret unwritten priorities built in by rich, sociopathic creators. If Facebook taught us anything, it should be that when something on the internet is free, it’s not because we are the customers.

        Also DO NOT TELL ALL YOUR DEEPEST DARKEST SECRETS TO CHATBOTS! They aren’t required by any legal bodies to protect that information! OMFG

      • Jako302@feddit.org
        +8 / −1 · 2 days ago

        People who need therapy are one of the groups that should be kept as far away from AI as possible.

        AIs are yes-men; they agree with most of what you say. Do you really think it’s a good idea to reinforce the bad worldview or sense of self that someone who desperately needs therapy most likely has?

  • Ghostalmedia@lemmy.world
    +198 · 4 days ago

    the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Well, there’s your problem.

    • MountingSuspicion@reddthat.com
      +81 · 3 days ago

      I don’t want to sound like a know-it-all here, because I was recently reminded by a nice Lemmy person to actually TEST my backups, but damn. Every part of that is so dumb. I also have backups stored by a different company, in addition to locally storing the really important info. If your stuff is hosted and backed up by the same people, what happens when your account is randomly suspended or hacked, or hit by some other issue (like AI)?

        • logi@piefed.world
          +22 · 3 days ago

          People somehow think that they should give more permissions to Claude than to Camden. (Is that a name? To me that’s a borough and an eponymous beer.)

          E: oh yeah, and the market.

          • frongt@lemmy.zip
            +7 / −1 · 3 days ago

            Of course it’s a name. Camden borough/town/market is named after William Camden, 1551-1623. Using surnames as given names is a relatively common Americanism.

            • lando55@lemmy.zip
              +7 / −1 · 3 days ago

              What was William Camden’s take on unrestricted AI use in production?

            • Ghostalmedia@lemmy.world
              +4 · 3 days ago

              And it’s now a common first name, in circulation because a bunch of Gen X and early-millennial parents named millions of kids anything that ended in -den, -dan, or -don.

              • Semjeza@fedinsfw.app
                +1 · 3 days ago

                I thought it was a common first name because of all the fooling around in the Cyberdog dressing rooms?

      • homes@piefed.world
        +15 · 3 days ago

        If your stuff is hosted and backed up by the same people, what happens if your account is randomly suspended or hacked or some other issue (like ai)?

        This should be one of the first questions you get asked when you’re being interviewed for the position 2 to 3 levels beneath the position of ultimate responsibility. And if you don’t immediately have an answer, the interview is over.

        Fucking idiots had it coming

        • logi@piefed.world
          +13 · 3 days ago

          It’s an easy question to answer but a more difficult question to remember to ask. But I guess that’s what those 2 to 3 levels are for 😏

          • homes@piefed.world
            +9 · 3 days ago

            Ooo, good point. Management can be shit a lot of the time.

            But with all of those layoffs because of AI, those 2 to 3 levels get collapsed into one, and we’re left with the trainees running the show.

            And here we are ¯\_(ツ)_/¯

        • MountingSuspicion@reddthat.com
          +5 · 3 days ago

          Not to give myself more credit than I deserve, but I did test them upon setup, and had restored from backup 2 years ago. I didn’t have any ongoing checks other than to ensure a backup happened. I have since instituted yearly checks of the backups themselves, but I did feel dumb when I realized how vulnerable my data was.

          • stoy@lemmy.zip
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 days ago

            Hehe, I meant no disrespect towards you. I just find that to be an excellent expression for explaining the importance of testing backups to non-tech people.

          • frongt@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            3 days ago

            So in the event of a failure, you’d be okay with reverting to that last known good backup from a year ago?

            • MountingSuspicion@reddthat.com
              +3 · 3 days ago

              Yes, but also I have to draw a line somewhere. I have a daily backup process. Some data is backed up to multiple places. I have backups of my backups. I cannot ensure that all three of the daily backups I run are fully restorable. I would love to know with 100% certainty that they all execute perfectly, but at the end of the day I have to trust the tools and processes I put in place for backups. A yearly checkup is probably more than sufficient for my purposes. I’m sure for certain businesses or sectors they need to be more on top of things, but I could manage just fine if all of it disappeared tomorrow. It wouldn’t be awesome for me, but it’d be manageable.
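              For what it’s worth, the yearly chore described above can be automated down to “every snapshot proves itself restorable before it counts”. A sketch of the idea, with SQLite standing in for the real database (the function name and the integrity strategy are just one way to do it):

```python
import sqlite3
from pathlib import Path

def backup_and_verify(live: sqlite3.Connection, backup_path: Path) -> int:
    """Snapshot a live database, then prove the snapshot is actually
    restorable by reopening it fresh and running an integrity check."""
    dest = sqlite3.connect(str(backup_path))
    live.backup(dest)  # online copy of the live database
    dest.close()

    restored = sqlite3.connect(str(backup_path))
    status = restored.execute("PRAGMA integrity_check").fetchone()[0]
    if status != "ok":
        raise RuntimeError(f"backup failed integrity check: {status}")
    # Return a schema-object count the caller can compare against the live DB.
    (n,) = restored.execute("SELECT count(*) FROM sqlite_master").fetchone()
    restored.close()
    return n
```

              Run on a schedule, a failure here pages someone long before the day the backup is actually needed.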

  • realitista@lemmus.org
    +16 · 2 days ago

    Can you get an AI to code? Yes. Can you get it to stop you from running your operation in such a stupid way that it will end up destroying it? No.

  • WhatsHerBucket@lemmy.world
    +70 / −2 · 3 days ago

    “That’s ok, it will be great in robots with lethal weapons. What could go wrong? It’ll be the greatest killing machine, like you’ve never seen before”. 🫲 🍊 🫱

    • Napster153@lemmy.world
      +3 · 3 days ago

      Can we make sure Ted Faro suffers worse this time?

      Being reduced to a mutant blob for, say, a few extra thousand years and maybe put in a zoo or something?

      • Pman@lemmy.org
        +2 · 3 days ago

        Nah, but that’s what he wanted. He is the truest form of tech bro: destroy the world, refuse to accept the consequences of his actions, weasel his way out of the situation, and manage, in the wake of unimaginable human suffering, to get more power over people plus a god complex. Tell me these aren’t some or all of the characteristics of people like Peter Thiel, Elon Musk, Mark Zuckerberg, Sundar Pichai, Bill Gates, hell, even Tim Cook and Steve Jobs before him. Punishment doesn’t stop this sort of behavior; removing the possibility of anyone having that level of control over others is the only way. But the richest and most powerful have always sought ways of amassing more power, not realizing that it leads to worse outcomes for everyone, including themselves. Horizon did a great job encapsulating that trait in Faro, but whether it’s him, the people behind Skynet, the Matrix, or whatever other tech dystopia tech bros seem pathologically unable to not try to make happen in the worst way possible, that’s only the beginning. They forget that even with advanced tech serving their needs and wants (which won’t help their mental health), the people lower down the rungs of society have brains, wants, and needs, and more expertise in all sorts of things than the 1%, except for mass exploitation.

        This inevitably goes wrong in one of a few ways. One: everyone dies from the tech, or so many that societal collapse is inevitable and society can’t functionally reconstitute itself. Two: they “win” and kill off or suppress enough of society that it becomes less productive; instead of fighting the powerful, people flee or stop generating wealth for the rich wherever they don’t have to, maybe to rise up again later, or the regional economy just ignores them completely and the government protects itself from its people more than anything else. Three: revolution, with terror campaigns against anyone who can be credibly accused of being part of the former tyrants. In all three cases the rich end up poorer overall, because wealth flees or dies in autocracy.

  • Fmstrat@lemmy.world
    link
    fedilink
    English
    arrow-up
    91
    ·
    3 days ago

    This guy.

    The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider’s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

    Oh look, they have project level tokens: https://docs.railway.com/integrations/api#project-token

    They chose to give it full account access, including to production. But ohhhh nooooo it’s not MYYYY fault!
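    Going off the Railway docs linked above (so treat this as a sketch, not gospel), the fix is a one-liner: hand the agent a project token scoped to one project and environment, not an account-wide API token.

    ```shell
    # Sketch based on my reading of Railway's docs; the token value is a placeholder.
    # Account-wide token -- can touch every project, including production:
    #   export RAILWAY_API_TOKEN="..."
    # Project token -- minted for a single project + environment, which is what
    # you'd give an agent:
    export RAILWAY_TOKEN="<staging-project-token>"
    railway up   # acts only within the environment the token is scoped to
    ```

    With that, “delete a volume in production” isn’t even a call the agent’s credentials can make.
    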

      • Fmstrat@lemmy.world
        link
        fedilink
        English
        arrow-up
        24
        ·
        3 days ago

        Oh yes, I skipped that part. Railway specifically explains their solutions are self-managed. If they were doing pgdumps to the same volume, that’s on them.
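        The obvious self-managed fix is to dump to storage that lives outside the provider entirely, so wiping a volume can’t take the backups with it. Rough sketch (the bucket name and schedule are made up):

        ```shell
        # Nightly logical backup shipped off-provider. Assumes DATABASE_URL is set
        # and AWS credentials are configured; "example-offsite-backups" is a
        # hypothetical bucket.
        pg_dump "$DATABASE_URL" | gzip | \
          aws s3 cp - "s3://example-offsite-backups/pocketos-$(date +%F).sql.gz"
        ```

        Anything on the same volume as the source data is a snapshot, not a backup.
        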

        If Railway loses business over this, they may have a libel claim. They’d never do it, but it wouldn’t be invalid.

        • el_abuelo@programming.dev
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          2
          ·
          3 days ago

          “It wouldn’t be invalid” isn’t the worst double negative in the world, but it would be valid to say that it was unpleasant to read when you could have used a less misdirecting choice of prose that wouldn’t have had such a negative effect on my reading comprehension. That is to say that I could have enjoyed it less, but I certainly didn’t enjoy it as much as I could have if you hadn’t used the double negative when a single positive wasn’t any further from reach.

      • Bilb!@lemmy.ml
        link
        fedilink
        English
        arrow-up
        8
        ·
        3 days ago

        That doesn’t even really qualify as a backup. A snapshot, maybe.

    • queueBenSis@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 days ago

      ha! for real. you have scoped API tokens but aren’t using them properly. this is just a fear-mongering, clickbait, rage-bait headline. sure, the agent executed the deletion, but it’s the human’s responsibility to configure security tokens correctly before handing the keys to anyone, human or agent.

  • SabinStargem@lemmy.today
    link
    fedilink
    English
    arrow-up
    74
    arrow-down
    1
    ·
    3 days ago

    This isn’t an AI problem, this is a “Don’t allow anyone to access your backups without following protocol” problem.

    • Encrypt-Keeper@lemmy.world
      link
      fedilink
      English
      arrow-up
      31
      arrow-down
      9
      ·
      3 days ago

      this is a “Don’t allow anyone to access your backups without following protocol” problem.

      Congratulations you just identified the AI problem.

        • Encrypt-Keeper@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          3
          ·
          3 days ago

          Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.

          The alternative that would have prevented this issue was to not use AI for this.

          • luciferofastora@feddit.org
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            1
            ·
            3 days ago

            A human with the same permissions would have been capable of fucking up too. Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

            (Relying on AI is dumb anyway, but that’s not the biggest issue in this specific case)

            • Encrypt-Keeper@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              ·
              3 days ago

              Giving the equivalent of a junior dev with a learning disability the keys to the whole place is just dumb.

              Correct. You too have now identified the AI problem. This was the job of a human senior infrastructure engineer that they delegated to an AI agent. They’ve found out why it’s not an AI’s job.

              • luciferofastora@feddit.org
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                3 days ago

                I can’t read the original twitter link, but I’m not sure they handed it the job of a senior infrastructure engineer. The article says “routine”, which to me is something you can hand off to a junior just fine. When they hit a snag, they obviously should stop and ask what to do, but even then, a human might want to avoid admitting ignorance and try to fix it themselves instead. They shouldn’t have privileges to fuck up that badly.

                So while it’s on the AI for taking destructive steps, I do think there’s a human error in the form of grossly irresponsible rights allotment. If this was a first-of-its-kind incident that shows otherwise stellar AI fucking up badly, I’d classify it as a pure AI problem, but their limits are hardly novel at this point. There have been previous incidents circulating the media. We’ve had memes about it. If you can’t stay up to date on your tools and their shortcomings, you shouldn’t be using them, because discovering a footgun becomes a question of “when”, not “if”.

                That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits. The chainsaw doesn’t know the difference between tree and bone, so it’s on you to make sure it stays away from anyone’s legs. So while “Chainsaw can saw legs if wielded improperly” is a problem that was accepted as a tradeoff for its utility, you can’t really blame the chainsaw if you zip-tied the safety.

                (Again, not to say Anthropic is blameless for letting its random generator generate randomly destructive shit. I just don’t think that’s the only point of failure here.)

                • Encrypt-Keeper@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  3 days ago

                  That’s why I consider this partially a human failing: If you’re gonna use a tool, make sure that it operates within safe limits.

                  Yes and in this case using it for this job at all was clearly not within safe limits. You keep hammering on “It’s not the AI’s fault it was given a job with too big of a blast zone for it to safely do” after I’ve said “This type of job has too big a blast zone for an AI to safely do” and somehow you’ve convinced yourself that these are two different things.

        • Encrypt-Keeper@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Yes that’s right the protocols that we humans used to have for giving only trusted, reliable people this level of access over infrastructure predate LLMs and were a great way to stop this from happening.

          However the AI is here now, and when you give an autonomous agent with known hallucination problems access to act on your behalf with your IaC on your infra provider, this kind of thing is an inevitability.
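          If you do insist on wiring an agent into your infra, the bare minimum is a gate in front of its tool calls. Toy sketch (not any real agent framework, names are made up): destructive operations get refused unless a human has explicitly confirmed.

          ```python
          # Toy policy gate for agent-requested infra actions. The action names
          # and the run_tool() interface are hypothetical.
          DESTRUCTIVE = {"volume.delete", "database.drop", "environment.delete"}

          def run_tool(action: str, confirmed_by_human: bool = False) -> str:
              """Execute an agent-requested action only if it's non-destructive
              or a human has signed off on it."""
              if action in DESTRUCTIVE and not confirmed_by_human:
                  return f"refused: {action} requires human confirmation"
              return f"executed: {action}"

          print(run_tool("service.restart"))                         # executed
          print(run_tool("volume.delete"))                           # refused
          print(run_tool("volume.delete", confirmed_by_human=True))  # executed
          ```

          It won’t make the agent smart, but it turns “the AI deleted production” back into “a human approved deleting production”.
          
          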