• 🇦🇺𝕄𝕦𝕟𝕥𝕖𝕕𝕔𝕣𝕠𝕔𝕕𝕚𝕝𝕖@lemm.ee
    1 day ago

    I still don't see anything about encrypted inference. This mostly looks like ways to avoid having the model retain sensitive data during its training. What I really would like to see is a way to send encrypted data off to a cloud where I can pay for inference compute, then receive something I can unscramble to get the actual response, without said cloud ever having the raw unencrypted data.

    EDIT: did some research and it seems that fully encrypted inference without leaking data via semantic meaning is possible, at the small cost of making inference 52000 times more computationally expensive. Seems more research is required.
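
    For anyone curious, this is roughly what that encrypt → compute → decrypt flow looks like with a CKKS homomorphic-encryption library (TenSEAL here, purely as an example). It's a toy sketch: a single tiny linear layer stands in for "the model", the numbers are made up, and real LLM inference would also need polynomial approximations of all the nonlinearities, which is a big part of where that huge overhead comes from.

    ```python
    import tenseal as ts  # pip install tenseal

    # --- client side: set up CKKS keys and encrypt the sensitive input ---
    ctx = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=8192,
        coeff_mod_bit_sizes=[60, 40, 40, 60],
    )
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()       # needed for the rotations inside matmul

    x = [0.1, 0.7, -0.3, 0.5]        # the raw data the cloud must never see
    enc_x = ts.ckks_vector(ctx, x)   # ciphertext that gets sent off

    # --- server side: compute directly on the ciphertext ---
    # (in a real setup the server only gets a copy of the context WITHOUT
    #  the secret key, so it can compute but never decrypt)
    W = [[0.2,  0.5],
         [-0.1, 0.1],
         [0.4, -0.2],
         [0.3,  0.0]]                # toy 4x2 "model" weights
    b = [0.1, -0.05]
    enc_y = enc_x.matmul(W) + b      # result stays encrypted, unreadable to the server

    # --- client side: only the key holder can unscramble the response ---
    print(enc_y.decrypt())           # approximately x @ W + b
    ```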

    • WhatAmLemmy@lemmy.world
      1 day ago

      52000 times more computationally expensive

      Are you referring to retraining models with the same training data encrypted with your own key, and only interacting with the model via the same key? That’s the only way I’ve heard of it being possible, but that was a year or more ago.

    • hendrik@palaver.p3x.de
      1 day ago

      And learning from the dataset is kinda the whole point of LLMs, right? I see some fundamental problems there. If you ask it where alpacas are from, or which symptoms point to some medical condition, you want it to return what it memorized earlier. It kind of doesn’t help if it makes something else up to “preserve privacy”.

      Do they address that? I see lots of flowery words like

      Integrating privacy-preserving techniques often entails trade-offs, such as reduced accuracy or increased computational demands, […]

      But I mean that’s just silly.