In addition to making people stupid, I wonder what effect LLMs like Claude will have on programmers. How will new programmers learn if companies start using Claude?
That’s on my to-do list. I’m currently reworking my entire build because I realized I had enough last-generation parts to build a media server. Once I have Windows set up to run only in a VM and my stuff moved and backed up, I’m going to install an LLM.
I recommend Qwen3.6, either the 27B dense model or the 35B MoE. Both are outstanding local models.
What hardware are you using?
I am using Qwen3.5 9B, and it is barely working.
I have a Radeon RX 7800 XT.
Qwen3.5-9B is blazingly fast on it. But while it’s impressive for its size, it has its limitations: complex tasks with several steps are too much for it.
So now I run the 3.6-35B model with llama.cpp. It’s too big for my VRAM, so I had to split it: everything that doesn’t fit on the graphics card runs in normal RAM. That slows everything down, but with the right flags I get a bit over 20 tokens/s.
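For anyone wanting to try the same split, here’s a rough sketch of the kind of llama.cpp invocation I mean. The model filename and flag values are placeholders, not my exact setup; tune them to your own hardware:

```shell
# Hypothetical example: filename and values are placeholders.
# -ngl sets how many transformer layers get offloaded to the GPU;
# the layers that don't fit stay in system RAM and run on the CPU.
# -c is the context size, -t the number of CPU threads for the RAM layers.
llama-cli -m ./qwen3.6-35b-q4.gguf -ngl 24 -c 8192 -t 8
```

The main knob is `-ngl`: raise it until you run out of VRAM, since every layer you can keep on the GPU speeds things up noticeably.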
If you have problems with speed and you’re using Ollama, I would replace it with something faster like llama.cpp.