

Well fuck that


Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.


aka enshittification


It pains me to argue this point, but are you sure there isn’t a legitimate use case just this once? The text says this was aimed at making Wikipedia more accessible to less advanced readers, like (I assume) people whose first language is not English. Judging by the screenshot, they’re also being fully transparent about it. I don’t know if this is actually a good idea, but it seems like the least objectionable use of generative AI I’ve seen so far.


It’s actually kind of worrisome that they have to guess it was his code based on the function/method name. Do these people not use version control? I guess not; they sure as hell don’t do code reviews either, if this guy managed to get this code into production.
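For what it’s worth, if the repo were in git, pinning down authorship would be a couple of commands away; the file path and function name below are placeholders, not anything from the article:

```
# Show who last touched each line of the file in question
git blame path/to/file.c

# List every commit that ever added or removed the function name
git log --all -S 'someFunction' --oneline
```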


Yeah I see what you mean. There’s a decent argument to be made that something like reasoning appears as an emergent property in this kind of system, I’ll admit. Still, the fact that fundamentally the code works as a prediction engine rules out any sort of real cognition, even if it makes an impressive simulacrum. There’s just no ability to invent, no true novelty, which – to my mind at least – is the hallmark of actual reasoning.


It’s real. https://en.wikipedia.org/wiki/XAI_(company)


“an open source reasoning AI”
It’s still an LLM, right? I’m going to have to take issue with your use of the word ‘reasoning’ here.


At least in Doom they had sense enough to do it on Mars


Sounds quite similar to Nick Cave’s letter on the topic, read here by Stephen Fry. (Anyone feel free to reply with a piped link; for some reason it’s never worked for me.)