In addition to making people stupid, I wonder what effect LLMs like Claude will have on programmers. How will new programmers learn if companies start using Claude?

  • StarryPhoenix97@lemmy.world
    2 days ago

    That’s on my to-do list. I’m currently reworking my entire build because I realized I had enough last-generation parts to build a media server. Once I have Windows set up to run only in a VM and get my stuff moved and backed up, I’m going to install an LLM.

        • Franconian_Nomad@feddit.org
          2 days ago

          I have a Radeon RX 7800 XT.

          Qwen 3.5-9b is blazingly fast on it. However, while it’s impressive for its size, it has its limitations: complex tasks with several steps are too much for it.

          So now I run the 3.6-35B model with llama.cpp. It’s too big for my VRAM, so I had to split it: everything that doesn’t fit on the graphics card runs in normal RAM. That slows everything down, but with the right flags I get a bit over 20 tokens/s.
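          In case it helps anyone trying the same split: the key llama.cpp flag is `--n-gpu-layers` (alias `-ngl`), which controls how many transformer layers are offloaded to VRAM while the rest run from system RAM. A rough sketch of such an invocation is below; the model path, quantization, and layer count are placeholders, not my exact settings, so tune them to your own card.

          ```shell
          # Sketch of a GPU/CPU split with llama.cpp (paths and numbers are examples).
          # --n-gpu-layers: layers offloaded to VRAM; layers that don't fit stay in RAM.
          # --threads: CPU threads used for the layers left in RAM.
          ./llama-cli \
            -m ./models/example-35b-q4_k_m.gguf \
            --n-gpu-layers 40 \
            --threads 8 \
            --ctx-size 8192
          ```

          If you see out-of-memory errors, lower `--n-gpu-layers` until the model loads; raising it as high as VRAM allows is what gets the tokens/s up.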

          If you have problems with speed and you’re using Ollama, I would replace it with something faster like llama.cpp.