It’s not meant to be a specific prediction; it’s just a scenario that was plausible when it was written. Don’t worry about the actual years, which could be off by an order of magnitude; just decide for yourself whether any of the assumptions are completely wrong.
postscarce@lemmy.dbzer0.com to Technology@lemmy.world • AI Is Destroying Grocery Supply Chains (English)
5 points · 1 month ago
Nobody is programming those laws because it’s not possible with the way LLMs are currently built and trained. Instead of the Three Laws, which are inviolable but insufficient in certain edge cases, we have Anthropic’s Constitution: 23,000 words’ worth of good intentions which Claude should keep in the back of its mind while it does whatever it wants to do.
postscarce@lemmy.dbzer0.com to Technology@lemmy.world • Microsoft’s $440 billion wipeout, and investors angry about OpenAI’s debt, explained (English)
2 points · 2 months ago
LLMs could theoretically give a game a lot more flexibility by responding dynamically to player actions, creating custom dialogue, and so on, but, as you say, it would work best as a module within an existing framework.
I bet some of the big game dev companies are already experimenting with this, and in a few years (maybe a decade considering how long it takes to develop a AAA title these days) we will see RPGs with NPCs you can actually chat with, which remain in-character, and respond to what you do. Of course that would probably mean API calls to the publisher’s server where the custom models are run, with all of the downsides that entails.
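That “module within an existing framework” idea can be sketched in a few lines. Everything here is hypothetical: the names, the prompt shape, and the stub function standing in for the API call to the publisher’s hosted model.

```python
# Hypothetical sketch of an LLM-backed NPC dialogue module inside a game
# framework. The remote model call is stubbed out; no real API is used.

from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    persona: str                                 # character sheet the model must stay within
    memory: list = field(default_factory=list)   # events this NPC has witnessed

    def build_prompt(self, player_line: str) -> str:
        # The framework, not the model, decides what context the NPC sees,
        # which is how it can keep the NPC in-character.
        events = "; ".join(self.memory) or "nothing notable"
        return (
            f"You are {self.name}. {self.persona} "
            f"You remember: {events}. "
            f"Stay in character. The player says: {player_line!r}"
        )

def stub_model_call(prompt: str) -> str:
    # Stand-in for the round trip to the publisher's server.
    return f"[reply generated from {len(prompt)} chars of context]"

def npc_dialogue(npc: NPC, player_line: str, model=stub_model_call) -> str:
    reply = model(npc.build_prompt(player_line))
    npc.memory.append(f"player said {player_line!r}")  # dialogue feeds back into game state
    return reply

guard = NPC("Brenn", "A weary gate guard who distrusts strangers.")
print(npc_dialogue(guard, "Let me through."))
print(guard.memory)  # the NPC now remembers the exchange
```

The point of the sketch is that the model only generates the line of dialogue; the framework owns the character sheet, the memory, and what the model is allowed to see, which is also where the publisher-server dependency would live.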
Ha! True, very true. I find this scenario compelling, but it’s based on a series of assumptions which individually seem plausible, and I have no way to evaluate them all together. It’s like the Drake Equation: because the probabilities are multiplicative, even tiny adjustments to a few of them make a huge difference to the final answer.
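The multiplicative sensitivity is easy to demonstrate with made-up numbers (the probabilities below are purely illustrative, not taken from the scenario):

```python
# Toy illustration of the Drake Equation point: when a final estimate is a
# product of probabilities, modest tweaks to each factor compound.
# All numbers are invented for illustration.

from math import prod

assumptions = [0.9, 0.8, 0.7, 0.6, 0.5]        # five plausible-looking odds
tweaked = [p - 0.15 for p in assumptions]      # each nudged down a little

print(prod(assumptions))  # ~0.151
print(prod(tweaked))      # ~0.042, a roughly 3.6x smaller final answer
```

Nudging each factor down by 0.15 never changes any single assumption from “plausible” to “implausible”, yet the product drops by more than a factor of three.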
The thing is, though, if there really is even a tiny chance of this thought experiment’s ultimate outcome being true (i.e. the end of humanity), then we should probably address it. And what that would look like is stopping the AI companies from doing any more research until they can prove their models will be safe, which should also make the people who are more concerned about AI slop happy. Everybody wins by hitting the brakes. (Edit: well, Sam Altman doesn’t, but I’m not going to lose sleep over that.)