I think context is what’s going to kill LLMs. They keep throwing hacks on top to try to make the model understand context, but it never really “understands”; it just looks like it’s following the context by simulating a few cues. Every interaction is a fresh slate, and actually preserving a model’s context at the level we would call real “intelligence”, never mind long-term planning and actual “thinking”, would explode in cost so fast there probably aren’t enough resources in the universe to do it even once.