

I use koboldcpp with the Vicuna model. Generation is reasonably fast (<1 minute) on a 4th-gen i7, and it would probably be on par with ChatGPT in terms of speed if you used a GPU.
SearXNG is a metasearch engine; how is that relevant to a large language model?