Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”
If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.
There are definitely idiots, but these idiots don’t get their ideas of how the world works out of thin air. These AI chatbot companies push hard, in their advertising, the cartoon reality that this is a smart robot that knows things; to learn otherwise you have to either listen to smart people or read a lot of text.
I just assumed it was BS at first, but I also once nearly went unga bunga caveman against a computer from 1978. So I probably have a deeper understanding of how dumb computers can be.
Yeah, the average person is the idiot here, for something they never asked for and see no value in. Companies threw billions of dollars at this emerging technology. Many products like Google Search now have hallucinating, error-prone AI forced into the main product, with no way to opt out or use the (working) legacy version…
Nobody is forcing you to use it.
I’m using it and I see great value in it.
Yes, people are being forced to use it if they want to search using Google or Bing, for instance.
As the parent comment suggested, there’s currently no way to opt out.
I’m glad you see value in it; I think injecting LLM output into search results, which I want to contain accurate results and nothing more, is a useless waste of power.
Injecting that into search results is a bad thing; I’m with you on that. Try DuckDuckGo. They use Bing but don’t insert all of that AI crap. The results are much more vanilla. It’s actually easier to find stuff because it’s not as cluttered.
I always ask all the people defending AI, or rather LLMs, what this great value is that they all mention in their comments. So far the “best” answer I got was one dude using LLMs to extract info from decades-old reports that no one has checked in 20 years hahaha. So glad we are letting LLMs destroy the environment and plagiarize all creative work for that lol.
So, what is the great value you see, man?
It was never made for information retrieval. It’s made for high-level reasoning and language understanding. That is where it shines. You completely misunderstand what this is all about. You’re trying to use a car to paint a wall.
There is really no argument against LLMs if they are used correctly. Just relax a bit and embrace it with a bit more curiosity. It won’t kill mankind, just as fire, agriculture, and the steam engine didn’t.
Me? I’m not using LLMs at all hahaha. I’m asking you, who say they have great value, to provide examples of their uses. I just provided pretty much the only one I have heard, which some random dude told me in a different thread. Everyone else, like you, just keeps it abstract and bullshits and bullshits hahaha.
Sir, are you telling me AI isn’t a panacea for conveying facts? /s
Some services will use glorified RAG to put more current info in the context.
But yeah, if it’s just the raw model, I’m not sure what they were expecting.
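For anyone curious what that “glorified RAG” step amounts to, here is a minimal sketch of the idea, not any particular service’s implementation: fetch current documents, pick the ones relevant to the question, and paste them into the prompt so the model isn’t answering from stale training data alone. The helper names and the toy keyword-overlap retrieval below are illustrative assumptions; real systems would call a search or news API and rank with embeddings.

```python
# Minimal RAG-style sketch: retrieve fresh snippets, stuff them into the
# prompt, and let the model answer from that context instead of its memory.
# fetch_recent_articles / build_prompt are hypothetical names for illustration.

def fetch_recent_articles() -> list[str]:
    # Stand-in for a real news/search API call returning fresh text snippets.
    return [
        "2024-07-14: Example Corp announced a recall of its smart toaster.",
        "2024-07-15: Heavy rain is expected across the region this weekend.",
    ]

def top_matches(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Crude relevance scoring by keyword overlap; production systems use
    # embeddings and a vector index instead.
    q_words = set(question.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    # The retrieved snippets become part of the context window, so the model
    # can quote them instead of guessing past its training cutoff.
    context = "\n".join(top_matches(question, fetch_recent_articles()))
    return (
        "Use only the sources below to answer.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Is Example Corp recalling its smart toaster?"))
```

Even with retrieval bolted on, the answer is only as current as whatever the service fetches into the context, which is why the raw model alone falls over on breaking news.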