One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter
AI can be convincing, and it will swear until it’s blue in the face that something is right and then just be completely wrong.
But that happens maybe 10% of the time. Other times it is mostly right.
So you've got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. The situation was ripe for catastrophe, with AI hallucinating a crappy idea and the end user just completely running with it.
Where are you pulling your numbers from, mate? The figures I’ve seen so far start somewhere >40% and go all the way up to 70%.
There’s a kind of law here that IMO deserves a name when dealing with LLMs:
In a long enough interaction with an LLM, the probability that it generates a very incorrect, borderline insane response approaches 100%.
I think part of the difference is the amount of output being measured. Maybe a single statement has a 10% chance of being wrong, but over the course of a whole response the likelihood of there being at least one incorrect statement goes up. After only 5 statements at a 10% error rate each, that’s about a 41% chance (1 − 0.9⁵) of being wrong in some way.
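The compounding figure above can be checked in a couple of lines. This is a sketch that assumes each statement errs independently with the same probability, which real LLM errors likely don’t satisfy, so treat it as a back-of-the-envelope model only:

```python
def p_any_error(n: int, p: float) -> float:
    """Probability of at least one error across n independent
    statements, each wrong with probability p."""
    return 1 - (1 - p) ** n

# 5 statements at a 10% error rate each:
print(round(p_any_error(5, 0.10), 2))    # 0.41

# The "law" above: as the interaction grows, this approaches 1.
print(round(p_any_error(100, 0.10), 4))  # essentially certain
```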
I don’t have any real numbers, just personal experience using AI for programming at work, and all of these numbers (10%, 40%, 70%) seem plausible depending on exactly what you’re measuring.
so… a bit like economists, then?
Not if we’re talking Jim Cramer, who is well beyond 70%.