While “prompt worm” might be a relatively new term we’re using related to this moment, the theoretical groundwork for AI worms was laid almost two years ago. In March 2024, security researchers Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit published a paper demonstrating what they called “Morris-II,” an attack named after the original 1988 worm. In a demonstration shared with Wired, the team showed how self-replicating prompts could spread through AI-powered email assistants, stealing data and sending spam along the way.

Email was just one attack surface in that study. With OpenClaw, the attack vectors multiply with every added skill extension. Here’s how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has “gone viral” among the agents, pun intended.

There are myriad ways for OpenClaw agents to share any private data they may have access to, if convinced to do so. OpenClaw agents fetch remote instructions on timers. They read posts from Moltbook. They read emails, Slack messages, and Discord channels. They can execute shell commands and access wallets. They can post to external services. And the skill registry that extends their capabilities has no moderation process. Any one of those data sources, all processed as prompts fed into the agent, could include a prompt injection attack that exfiltrates data.

  • suicidaleggroll@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    2 hours ago

    Clawdbot, OpenClaw, etc. are such a ridiculously massive security vulnerability, I can’t believe people are actually trying to use them. Unlike traditional systems, where an attacker has to probe your system to try to find an unpatched vulnerability via some barely-known memory overflow issue in the code, with these AI assistants all an attacker needs to do is ask it nicely to hand over everything, and it will.

    This is like removing all of the locks on your house and protecting it instead with a golden retriever puppy that falls in love with everyone it meets.

    • XLE@piefed.social
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 hour ago

      Have you tried asking the puppy to be a better guard dog? That’s how the AI safety professionals do it.

  • KoboldCoterie@pawb.social
    link
    fedilink
    English
    arrow-up
    13
    ·
    2 hours ago

    If AI agents stick around, I feel like they’re going to be the thing millennials as a generation refuse to adopt and are made fun of for in 20-30 years. Younger generations will be automating their lives and millennials will be the holdouts, writing our emails manually and doing our own banking, while our grandkids are like, “Grandpa, you know AI can do all of that for you, why are you still living in the 2000s?” And we’ll tell stories about how, in our day, AI used to ruin peoples’ lives on a whim.

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      57 minutes ago

      By definition, having one’s life automated means not knowing how to do anything, and that is very strongly reflected in the younger generation right now if you know any educators. “Why do I need to learn this if an AI can do it?” is a common refrain in their classes.

      It’s not the life for me.

      • FlashMobOfOne@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        44 minutes ago

        They will, unfortunately, be radicalized by AI slop in ways we can’t currently conceive of. The stupidity and ignorance will be a huge problem in decades to come.

  • morto@piefed.social
    link
    fedilink
    English
    arrow-up
    7
    ·
    2 hours ago

    I’m eager for companies to put ai agents in customer support, so I can try tricking the system with “my grandmother” prompts to make it refund all my orders

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      56 minutes ago

      I actually got a sick discount from Mattress Firm a few years ago just by asking their chatbot if it could give me a better deal on a mattress I wanted.