• athatet@lemmy.zip
      2 months ago

      Honestly. At this point, after it having happened to multiple people, multiple times, this is the only appropriate response.

  • fubarx@lemmy.world
    2 months ago

    Given that the infrastructure description included the DataTalks.Club website, this resulted in a full wipe of the setup for both sites, including a database with 2.5 years of records, and database snapshots that Grigorev had counted on as backups. The operator had to contact Amazon Business support, which helped restore the data within about a day.

    Non-story. He let Terraform zap his production site without offsite backups, and then support restored it all anyway.

    I’d be more alarmed that a ‘destroy’ command is reversible.
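
    For what it’s worth, Terraform has a guardrail for exactly this case: a `prevent_destroy` lifecycle flag that makes any plan that would delete the resource fail instead of executing. Rough sketch only – the resource type and names here are made up:

    ```hcl
    resource "aws_db_instance" "production" {
      # ... engine, storage, credentials, etc. ...

      lifecycle {
        # Terraform refuses to execute any plan that would
        # destroy this resource while this flag is set.
        prevent_destroy = true
      }
    }
    ```

    It doesn’t help with mistakes made outside Terraform, but it would have stopped a stray destroy of the database.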

  • eleitl@lemmy.zip
    2 months ago

    “and database snapshots that Grigorev had counted on as backups” – yes, this is exactly how you run “production”.

    • Nighed@feddit.uk
      2 months ago

      With some cloud providers, the built-in backups are linked to the resource. So even if you have super duper geo-zone-redundant backups going back years, they still get nuked if you drop the server.

      It’s always felt a bit stupid, but the backups can still normally be restored by support.

      • eleitl@lemmy.zip
        2 months ago

        That’s because these are not backups. With backups you still have your data even if the cloud provider has gone away.
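
        The distinction is easy to show with a toy shell sketch (every filename here is invented): a copy that lives outside the provider’s control survives the provider being wiped.

        ```shell
        # Toy model: "provider" stands in for the cloud, "offsite" for storage you control.
        mkdir -p provider offsite
        printf '2.5 years of records' > provider/db.dat

        tar -czf backup.tar.gz -C provider db.dat   # take a backup
        cp backup.tar.gz offsite/                   # copy it OUTSIDE the provider
        rm -rf provider backup.tar.gz               # provider (and its snapshots) vanish

        tar -xzf offsite/backup.tar.gz              # data is still recoverable
        cat db.dat                                  # -> 2.5 years of records
        ```

        Swap the `cp` for an rsync/rclone to a box you own and that’s the 3-2-1 idea in miniature.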

        • Nighed@feddit.uk
          2 months ago

          They are backups; you potentially get copies of the data in multiple locations across continents.

          BUT I agree, you are relying entirely on them for it. Lots of vendor tie-in stuff in the industry, unfortunately.

          • EffortlessGrace@piefed.social
            2 months ago

            Is everyone in commercial software development finally saying, “Fuck it, we’ll run the shit ourselves”?

            I’m an infrastructure and devops noob here; take my words with a grain of salt.

            I need GPU clusters with ECC VRAM for research and found it’s cheaper to just have my own high-ish-performance compute in my own office, paid for once, than to pay AWS/Azure/GCS/etc. forever, or at least every time I want to train a custom DNN model. Sometimes I use Linode, but that’s for monitoring. But I can run shit at will and I have data sovereignty.

            Has the paradigm shifted back to developing and serving things in-house, now that big-tech vendor lock-in has so many dark patterns that scaling with them isn’t cost-effective? Or is it just my own pipe dream?

            • Nighed@feddit.uk
              1 month ago

              If you are going to use it enough to pay for it, sure. But that’s always been the case.

              The main benefits of cloud are its ability to scale quickly, its ability to provide geographic reach, and the conversion of capex to opex.

  • Katherine 🪴@piefed.social
    2 months ago

    sigh

    Use LLMs as instructional models, not as production/development models. It’s not hard, people. You don’t need to connect credentials to any LLMs, just like you’d never write your production passwords on Post-its and stick them on your computer monitor.

      • thebestaquaman@lemmy.world
        2 months ago

        Meh, they work well enough if you treat them as a rubber duck that responds. I’ve had an actual rubber duck on my desk for some years, but I’ve found LLMs taking over its role lately.

        I don’t use them to actually generate code. I use them as a place where I can write down my thoughts. When the LLM responds, it has likely “misunderstood” some aspect of my idea, and by reformulating myself and explaining how it works I can help myself think through what I’m doing. Previously I would argue with the rubber duck, but I have to admit that the LLM is actually slightly better for the same purpose.

          • thebestaquaman@lemmy.world
            2 months ago

            You’re absolutely right. I mostly run a pretty simple local model though, so it’s not like it’s very expensive either.

          • thebestaquaman@lemmy.world
            2 months ago

            I think you’ve misunderstood the purpose of a rubber duck: the point is that by formulating your problems and ideas, either out loud or in writing, you can better activate your own problem-solving skills. This is a very well-established method for reflecting on and solving problems when you’re stuck. It’s a concept far older than chatbots, because the point isn’t the response you get, but the process of formulating your own thoughts in the first place.

            • prole@lemmy.blahaj.zone
              2 months ago

              Right, but a rubber duck isn’t a sycophantic chatbot that can’t actually conceptualize anything yet responds to you anyway.

              • thebestaquaman@lemmy.world
                2 months ago

                That is correct. However, an LLM and a rubber duck have in common that they are inanimate objects that I can use as targets when formulating my thoughts and ideas. The LLM can also respond to things like “what part of that was unclear”, to help keep my thoughts flowing. NOTE: The point of asking an LLM “what part of that was unclear” is NOT that it has a qualified answer, but rather that it’s a completely unqualified prompt to explain a part of the process more thoroughly.

                This is a very well-established process, whether you use an actual rubber duck, your dog, a blog post / personal memo (I do the last quite often), or a friend who’s not at all in the field. The point is to have some kind of process that helps you keep your thoughts flowing and touching on topics you might not think are crucial, thus helping you find a solution. The toddler that answers every explanation with “why?” can be ideal for this, and an LLM can emulate it quite well in a workplace environment.

  • kamen@lemmy.world
    2 months ago

    You either have a backup or will have a backup next time.

    Something that is always online and can be wiped while you’re working on it (by yourself or with AI, doesn’t matter) shouldn’t count as backup.

    • MIDItheKID@lemmy.world
      2 months ago

      AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

      For me it was my entire “Junior Project” in college, which was a music album. My Windows install (Vista at the time - I know, Vista was awful, but it was the only thing that would utilize all 8 GB of my RAM, because x64 XP wasn’t really a thing) bombed out, and I was like “no biggie, I keep my OS on one drive and all of my projects on the other, I’ll just reformat and reinstall Windows.”

      Well… I had two identical 250gb drives and formatted the wrong one.

      Woof.

      I bought an unformat tool that was able to recover mostly everything, but I lost all of my folder structure and file names. It was just like 000001.wav, 000002.wav, etc. I was able to re-record and rebuild, but man… never made that mistake again. Like I said, I now obsessively back up. Stacks of drives, cloud storage, drives in different locations, etc.

      • SirEDCaLot@lemmy.today
        2 months ago

        AI or not, I feel like everybody has had “the incident” at some point. After that, you obsessively keep backups.

        Yup!

        Also totally unrelated helpful tip- triple check your inputs and outputs when using dd to clone a drive. dd works great to clone an old drive onto a new blank one. It is equally efficient at cloning a blank drive full of nothing but 0s over an old drive that has some 1s mixed in.
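
        A safe way to drill that into muscle memory is to rehearse on throwaway files instead of real devices (filenames here are invented): if= is what dd reads, of= is what it overwrites.

        ```shell
        printf 'precious data' > old_drive.img                       # the full "drive"
        dd if=/dev/zero of=new_drive.img bs=1 count=13 2>/dev/null   # the blank one

        # Correct direction: read the old drive, overwrite the new one.
        dd if=old_drive.img of=new_drive.img 2>/dev/null
        cat new_drive.img   # -> precious data

        # Swapping if= and of= above would clone the zeros over
        # the only copy of the data -- exactly the failure mode.
        ```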

        • kamen@lemmy.world
          2 months ago

          And that’s a great example where a GUI could be way better at showing you what’s what and preventing such errors.

          If you’re automating stuff, sure, scripting is the way to go, but for one-off stuff like this seeing more than text and maybe throwing in a confirmation dialogue can’t hurt - and the tool might still be using dd underneath.

          • SirEDCaLot@lemmy.today
            1 month ago

            Quite true.
            It’s an argument I often have with the CLI-only people, and have been having for years. Like “with this Cisco router I can do all kinds of shit with this super powerful CLI”. Yeah, okay, how do I forward a port? Well, that takes 5 different commands…

            Or I just want to understand what options are available- a GUI does that far better than a CLI.

            • kamen@lemmy.world
              1 month ago

              IMO it’s important to recognise that both are valid in different scenarios. If you want to click through and change something that’s actually doable with a couple of clicks, that’s fine. If you want to do this through the CLI, it’s also fine - if you’re someone who’s done 10 deployments today and configured the same thing, it would be muscle memory even if it’s 5 commands.

              • SirEDCaLot@lemmy.today
                1 month ago

                Quite true, there is absolutely a place for both. And it’s why I hate absolutists who think GUIs are some sort of disease. GUIs are discoverable and intuitive: you can lay out all the options for the user so they know what they can choose and make the right choice. CLIs are powerful and scriptable, easy to automate.
                Neither is bad.