TL;DR MIT researchers have developed an antitampering ID tag that is tiny, cheap, and secure. It is several times smaller and significantly cheaper than the traditional radio frequency identification (RFID) tags used to verify product authenticity. The tags use a glue containing microscopic metal particles, which forms unique patterns that can be detected using terahertz waves. The system uses AI to compare glue patterns and calculate their similarity. The tags could be used to authenticate items too small for traditional RFID tags.
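
To make the comparison step concrete, here is a minimal sketch of what “calculate their similarity” could look like in practice: each terahertz scan of the glue pattern is reduced to a feature vector, and a rescan is accepted only if it stays close to the fingerprint recorded at enrollment. The vector size, threshold, and function names are illustrative assumptions, not details from the MIT work.

```python
# Minimal sketch: treat each terahertz scan of the glue pattern as a feature
# vector and decide "same tag or not" by similarity. Sizes and the threshold
# are illustrative assumptions, not values from the paper.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_tag(enrolled: np.ndarray, scanned: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the rescan only if it is close enough to the enrolled fingerprint."""
    return cosine_similarity(enrolled, scanned) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                        # fingerprint stored at enrollment
rescan = enrolled + rng.normal(scale=0.05, size=256)   # later scan, slight measurement noise
tampered = enrolled + rng.normal(scale=1.0, size=256)  # glue pattern disturbed by peeling

print(is_same_tag(enrolled, rescan))    # True  - untouched tag still matches
print(is_same_tag(enrolled, tampered))  # False - disturbed pattern is rejected
```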

      • lolcatnip@reddthat.com · 2 years ago

        My problem is that “AI” is an overly broad term that leads people to conflate very different technologies. I just want people to use more specific language.

          • uis@lemm.ee · 2 years ago

            Machine learning: we don’t know how it works.
            AI: we don’t want you to know how it works.

        • lando55@lemmy.world · 2 years ago

          There’s a corporate initiative where I work that we’re going to offer AI in 2024. When I politely asked them to expound on that, I was met with blank stares.

          Like, motherfucker, do you realize even MS Teams uses AI for meeting transcription?

  • TheOneCurly@lemm.ee · 2 years ago

    We made a tag that can’t be reliably and deterministically scanned, so we also included a machine learning model that takes a good guess at it.

    I just don’t see how you could possibly rely on a black-box model for anything important. You have no way to mathematically prove whether there are collisions in the model output, and newer versions of the model can’t be made backwards compatible. So if you have a database of thousands of these tags scanned, and then they discover a critical vulnerability and release a new model, you’re SOL and everything you have is worthless.
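
    To put the collision worry in concrete terms: a collision would be two physically different tags whose fingerprints the model nevertheless scores as a match. You can’t prove there are none, but you could at least sweep an existing database for near-collisions. A rough sketch, with the vector size, data, and threshold purely illustrative:

    ```python
    # Sweep a hypothetical database of enrolled fingerprints for pairs of
    # *different* tags that would nevertheless be accepted as a match.
    import numpy as np

    rng = np.random.default_rng(1)
    THRESHOLD = 0.95

    # Hypothetical database: 1,000 enrolled fingerprints, 256 dimensions each.
    fingerprints = rng.normal(size=(1000, 256))

    # Normalize rows, then get all pairwise cosine similarities in one product.
    normed = fingerprints / np.linalg.norm(fingerprints, axis=1, keepdims=True)
    similarity = normed @ normed.T
    np.fill_diagonal(similarity, 0.0)  # ignore each tag's similarity to itself

    collisions = np.argwhere(similarity >= THRESHOLD)
    print(f"near-collision pairs found: {len(collisions) // 2}")  # pairs counted twice
    ```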

    • TimeSquirrel@kbin.social · 2 years ago

      Can you imagine if your house doorknob had to think about the shape of your key before letting you in, and had the possibility of just saying, “No. Not today.”?

    • lemmyvore@feddit.nl · 2 years ago

      If there were collisions in the output, you’d see them while scanning those thousands of entries. And if they release a new model, you can use it going forward and keep scanning the old items with the old one.

      This happens in inventory sometimes: new technology comes out and you have to update asset tags.
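
      Something like the sketch below is how that usually gets handled: every stored tag remembers which model version produced its fingerprint, and verification routes the scan back through that same version. The class and function names are hypothetical, just to show the shape of the idea.

      ```python
      # Keep the old model for old items: each record carries the model version
      # it was enrolled under, and verification uses that same version.
      from dataclasses import dataclass
      from typing import Callable, Dict

      import numpy as np

      Model = Callable[[np.ndarray], np.ndarray]  # raw scan -> fingerprint

      @dataclass
      class TagRecord:
          tag_id: str
          model_version: str
          fingerprint: np.ndarray

      MODELS: Dict[str, Model] = {
          "v1": lambda scan: scan[:128],       # stand-in for the original model
          "v2": lambda scan: scan[::2][:128],  # stand-in for a later, incompatible model
      }

      def verify(record: TagRecord, raw_scan: np.ndarray, threshold: float = 0.95) -> bool:
          """Re-fingerprint the scan with the model version the tag was enrolled under."""
          fp = MODELS[record.model_version](raw_scan)
          sim = float(np.dot(fp, record.fingerprint) /
                      (np.linalg.norm(fp) * np.linalg.norm(record.fingerprint)))
          return sim >= threshold
      ```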

  • QuadratureSurfer@lemmy.world · 2 years ago

    To clarify what OP meant by his ‘AI’ statement:

    The system uses AI to compare glue patterns […]

    The researchers noticed that if someone attempted to remove a tag from a product, it would slightly alter the glue containing the metal particles, making the original signature slightly different. To counter this, they trained a model:

    The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.

    It’s a good use case for an ML model.
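
    As a very rough illustration (not the researchers’ actual model), the task amounts to training a classifier that decides whether two fingerprints belong to the same tag despite the slight distortion a re-scan introduces. Everything below, including the synthetic data, is an assumption for illustration only:

    ```python
    # Train a toy "same tag or not" classifier on synthetic fingerprint pairs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    DIM = 64

    def pair_features(same: bool) -> np.ndarray:
        """Return |a - b| features for a pair of fingerprints."""
        a = rng.normal(size=DIM)
        if same:
            b = a + rng.normal(scale=0.1, size=DIM)  # rescan of the same tag
        else:
            b = rng.normal(size=DIM)                 # a different tag entirely
        return np.abs(a - b)

    X = np.array([pair_features(same=i % 2 == 0) for i in range(2000)])
    y = np.array([i % 2 == 0 for i in range(2000)], dtype=int)

    clf = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
    print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
    ```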

    In my opinion, this should only be used to keep identifying the product itself.
    The danger I can see with this product is management deciding they can rely on it to detect tampering without considering other factors.

    The use case provided in the article was for something like a car wash sticker placed on a customer’s car.

    If the customer tried to peel it off and reattach it to a different car, the business could detect that as tampering.

    However, in my opinion, there are a number of other reasons why this model could falsely accuse someone of tampering:

    • Temperature swings. A hot day could warp the glue/sticker slightly, which would cause the antitampering detection to go off the next time it’s scanned.
    • Having to get the windshield replaced because of damage/cracks. The customer would transfer the sticker and unknowingly void it.
    • Kids, just don’t underestimate them.

    In the end, most management won’t really understand this device much beyond statements like, “You can detect tampering with more than 99 percent accuracy!” And unless they inform the customers of how the anti-tampering works, customers won’t understand why they’re being accused of tampering with the sticker.
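
    A back-of-the-envelope illustration of why that “99 percent” line still needs context; the scan volume and the 1 percent figure are assumptions for illustration, not numbers from the article:

    ```python
    # Even a small error rate produces a steady stream of wrongly flagged customers.
    scans_per_month = 10_000   # hypothetical car-wash sticker scans
    false_flag_rate = 0.01     # pessimistic reading of "more than 99% accuracy"

    falsely_accused = scans_per_month * false_flag_rate
    print(f"Customers wrongly flagged per month: {falsely_accused:.0f}")  # 100
    ```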

  • dyathinkhesaurus@lemmy.world · 2 years ago

    I read somewhere about a similar implementation using glitter mixed into clear nail polish. Take a close-up photo at any time and visually compare it with the original; no ML/AI model necessary.
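
    A minimal sketch of how that check could be automated without any model, assuming reasonably consistent photos; the difference metric and threshold are arbitrary stand-ins for the human eyeball:

    ```python
    # Compare an enrollment photo of the glitter seal with a later photo by
    # mean pixel difference after normalizing size and converting to grayscale.
    import numpy as np
    from PIL import Image

    def load_gray(path: str, size: tuple = (256, 256)) -> np.ndarray:
        return np.asarray(Image.open(path).convert("L").resize(size), dtype=float) / 255.0

    def looks_untouched(original_path: str, current_path: str, max_diff: float = 0.05) -> bool:
        original = load_gray(original_path)
        current = load_gray(current_path)
        return float(np.mean(np.abs(original - current))) <= max_diff

    # Usage (paths are placeholders):
    # print(looks_untouched("seal_enrolled.jpg", "seal_today.jpg"))
    ```

    In practice the photos would need consistent framing and lighting for such a naive difference to mean anything, which is part of why the original scheme just relies on a person looking at the glitter.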

  • Pulptastic@midwest.social · 2 years ago

    These tags should be smaller and cheaper, offloading the tech to the scanners. Since a store uses lots of tags and only a couple of scanners, this could make financial sense even if the scanners are more expensive, as long as the tags are cheap enough and needed in large enough quantities.
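
    With made-up numbers purely to show the shape of that trade-off, the cheap-tag/expensive-scanner scheme only pays off once the tag volume is high enough:

    ```python
    # Hypothetical cost comparison: every figure here is invented for illustration.
    def total_cost(n_tags, tag_cost, n_scanners, scanner_cost):
        return n_tags * tag_cost + n_scanners * scanner_cost

    for n_tags in (10_000, 1_000_000):
        rfid = total_cost(n_tags, tag_cost=0.10, n_scanners=3, scanner_cost=500)
        glue = total_cost(n_tags, tag_cost=0.01, n_scanners=3, scanner_cost=5_000)
        print(f"{n_tags:>9,} tags: RFID ${rfid:,.0f} vs glue-tag ${glue:,.0f}")
    ```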