I'm using Ollama on my server with the WebUI. It has no GPU, so it's not quick to reply, but not too slow either.

I'm thinking about removing the VM as I just don't use it. Are there any good uses or integrations into other apps that might convince me to keep it?
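
(For anyone weighing in on integrations: nearly everything that hooks into Ollama just talks to its REST API. A minimal sketch in Python, assuming the default endpoint on localhost:11434 and a model like llama3 already pulled; the prompt is only a placeholder.)

```python
import requests

# Query a local Ollama instance over its HTTP API.
# Assumes Ollama is listening on its default port (11434)
# and that "llama3" has been pulled; swap in whatever model you run.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarize this in one sentence: ...",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```

Anything that can make an HTTP request (scripts, home-automation hooks, other self-hosted apps) can integrate the same way.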

  • dwindling7373@feddit.it · 1 year ago

    Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!

    Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to then paste into a search engine? Yes.

    • kitnaht@lemmy.world · 1 year ago

      Once the model is trained, the electricity it uses for inference is trivial compared to training. LLMs can run on a single local GPU. So you're completely wrong.