

It’s saying that you can invent an infinite number of hypothetical futures but they are not useful for making decisions in the here and now
I’d love to take the credit but i actually stole it from that link that made the rounds on Hacker News
Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.
To me it’s like sending a “let me google that for you” link to answer a question. It’s just bad form. I don’t want your whole reasoning trace man, i just want to know what you understand of it, and maybe you’ll catch some detail i’m missing or whatever. It’s simple: i won’t read LLM output. My colleagues know it and i get shit for it, but no, i am not digesting this material for you. Give me a 3-bullet-point version in your own words; the point is not just the data exchange, it’s also to make sure you are aware of the answer and we have a common truth.
Or failing that, just give me the fucking prompt and at least i’ll know if you understand the question.


VMs mostly
oh yeah i see how that can be hungry
How are you hosting Minecraft without it using >=4 gigs?
Just a vanilla server i play on with my son, it’s got 2G and i haven’t noticed anything out of the ordinary. Chunk gen is slow-ish but i suppose that’s CPU-bound.
BTW i exaggerated in my initial comment, i looked at the machine and it’s sitting just under 8G of used RAM.
Also ZFS
Jesus christ 😅 no idea if you’re jesting
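In case it’s not a jest: my understanding is that the ZFS ARC gets reported as plain “used” memory on Linux rather than as cache, so a box can look stuffed while a big chunk of it is actually evictable. Quick sketch to check, assuming ZFS on Linux where the stats live in /proc/spl/kstat/zfs/arcstats:

```python
# Quick check of how much "used" RAM is actually the ZFS ARC.
# Assumes ZFS on Linux, which exposes ARC stats at /proc/spl/kstat/zfs/arcstats.

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:   # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

def mem_total_bytes(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024   # /proc/meminfo reports kB
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    arc = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC size  : {arc['size'] / gib:.2f} GiB")
    print(f"ARC max   : {arc['c_max'] / gib:.2f} GiB")
    print(f"RAM total : {mem_total_bytes() / gib:.2f} GiB")
```

If a few gigs of “used” RAM turn out to be ARC, they’re not used in the scary sense; the ARC shrinks when something else actually needs the memory.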


Serious question: what does RAM help with in the context of self-hosting? I recently bought 32G for my server, and it’s DDR3 ECC so it’s so cheap I could have afforded 64, but I kept wondering what I would use it for. I rarely go north of 6G usage, and that’s with half a dozen services, a Minecraft server, etc. I just don’t know what kind of services are RAM-hungry.
And that’s great for you but I still think you’ll be in a minority. Which is not necessarily bad of course.
Open Source devs mostly come from the industry and the penetration of agentic coding in the industry has been massive over the last six months. I don’t think I’ve ever seen anything of this scale.
I think disclosure is good and should be tackled as soon as possible because being transparent in your communication is just good practice in general.
However I feel like this will soon be rendered useless as all projects will move to agentic (or otherwise ai-assisted) coding.
Maybe there’ll be a movement of hand coded FOSS but realistically they’ll have a hard time. Resources are already tight for most projects, and rejecting productivity in favor of aesthetics is a rich guy’s strategy.
This whole debacle is showing that people fundamentally misunderstand how code works. They are trying to declare code good or code bad because of some silly heuristics like ai/not-ai, as if it wasn’t literal lines of text which you can read before you form an opinion and make a fool of yourself.


The best way to learn to write is to write and have someone critique you. That someone can be an AI; it doesn’t change anything about the process, as long as the initial input is your own best effort and the final result is your own edit based on the feedback you received.


That’s an excellent point! On that topic I recently listened to an interview with the founder of EleutherAI, who focuses on training small language models. She said they were able to train a 1B-parameter reasoning model with 50K Wikipedia articles and carefully curated RL traces. The thing could run on your smartphone and is at parity with much larger models trained on trillions of tokens.
She also scoffed at Common Crawl and said it contained mostly cookies and porn. She had a kind of attitude like “no wonder the big labs need to slurp trillions of tokens when the tokens are such low quality”. Very interesting approach; if you understand French I can only recommend the interview.


OK, that’s a fair observation. Honestly my naive guess would be that they simply do not optimize mainline GPT models for the kind of use case you generally have on the API (tool use, multi-step actions, etc.). They need it to be a perky everyday assistant, not necessarily a reliable worker. Already on GPT-4 i found it extremely mediocre compared to the Claude models of the same time.
I think that’s a more likely explanation than model collapse which is a really drastic phenomenon. A collapsed model will not just fail tasks at a higher rate, it will spit garbled text and go completely off the rails, which would be way more noticeable. It would also be weird that Claude models keep getting better and better while they’re probably fed roughly the same diet of synthetic data.


The switch you mention (from 4th gen to 5th gen GPT) is when they introduced the model router, which created a lot of friction. Basically the router tries to answer your question with as cheap a model as possible, so most of the time you won’t be talking to flagship 5.2 but to a 5.2-mini or 5.2-tiny, which are seriously dumber. This is done to save money of course, and the only way to guarantee pure 5.2 usage is to go through the API, where you pay for every token.
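If it helps, the routing idea is roughly the following. This is a toy sketch only; every model name, price and heuristic in it is invented, since the real router isn’t public:

```python
# Toy sketch of a cost-first model router.
# Every model name, price and heuristic below is invented; the real router is not public.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_mtok: float   # hypothetical price per million output tokens
    capability: int       # made-up capability score

TIERS = [                 # ordered cheapest first
    Model("5.2-tiny", 0.2, 1),
    Model("5.2-mini", 1.0, 2),
    Model("5.2", 10.0, 3),
]

def estimate_difficulty(prompt: str) -> int:
    """Stand-in for the (unknown) difficulty classifier."""
    text = prompt.lower()
    if len(prompt) > 2000 or "prove" in text or "step by step" in text:
        return 3
    if any(k in text for k in ("code", "debug", "analyze")):
        return 2
    return 1

def route(prompt: str) -> Model:
    """Pick the cheapest tier believed to be able to handle the request."""
    needed = estimate_difficulty(prompt)
    for model in TIERS:
        if model.capability >= needed:
            return model
    return TIERS[-1]

if __name__ == "__main__":
    for p in ("what's the capital of France?",
              "debug this race condition in my scheduler"):
        m = route(p)
        print(f"{p!r} -> {m.name} (${m.usd_per_mtok}/Mtok)")
```

The heuristics don’t matter; the point is that the chat product is optimized for cost per answer, while on the API you name the model explicitly and get billed accordingly.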
There’s also a ton of affect and personal bias. Humans are notoriously bad at evaluating others’ intelligence, and this is especially true of chatbots, which try to mimic specific personalities that may or may not mesh well with your own. For example, OpenAI’s signature “salesman & bootlicker” personality is grating to me and i consistently think it’s stupider than it is. I’ve even done a bit of blind evaluation on various cognitive tasks to confirm my impression, but the data really didn’t agree with me. It’s smart, roughly as smart as other models of its generation, but it’s just fucking insufferable. It’s like i see Sam Altman’s shit-eating grin each time i read a word from ChatGPT; that’s why i stopped using it. That’s a property of me, the human, not GPT, the machine.
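For the curious, the blind test was nothing fancy, roughly this shape (model names, tasks and the ask_model stub are placeholders; you’d wire in real API calls and real tasks):

```python
# Sketch of a blinded pairwise comparison between two chatbots.
# Model names, the task list and ask_model() are placeholders; wire in real API calls.
import random

def ask_model(model: str, task: str) -> str:
    # Placeholder so the sketch runs; replace with an actual API call.
    return f"(pretend answer to {task!r})"

def blind_compare(model_a: str, model_b: str, tasks: list[str]) -> dict:
    wins = {model_a: 0, model_b: 0}
    for task in tasks:
        answers = [(model_a, ask_model(model_a, task)),
                   (model_b, ask_model(model_b, task))]
        random.shuffle(answers)          # hide which model produced which answer
        print(f"\nTASK: {task}")
        for i, (_, text) in enumerate(answers, 1):
            print(f"--- answer {i} ---\n{text}")
        pick = int(input("better answer, 1 or 2? ")) - 1
        wins[answers[pick][0]] += 1      # only unblinded after the judgment is made
    return wins

if __name__ == "__main__":
    print(blind_compare("model-a", "model-b",
                        ["summarize this RFC in three bullets",
                         "spot the bug in this function"]))
```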


I’m sorry but no, models are definitely not collapsing. They still have a million issues and are subject to a variety of local optima, but they are not collapsing in any way. It is not known whether this can even happen in large models, and if it can it would require months of active effort to generate the toxic data and fine-tune models on that data. Nobody is gonna spend that kind of money to shoot themselves in the foot.


Yeah i remember that Ed article! I don’t think the technical aspects are relevant to the newer generation of models, but yeah, of course any attempt to compress inference costs can have side effects: either response quality degrades from using dumber models, or you eat re-inference costs when the dumb model shits its pants. In fact the re-inference can become super costly, as dumber models tend to get lost in reasoning loops more easily.
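Back-of-the-envelope version of that trade-off, with completely made-up numbers, just to show the shape of it:

```python
# Back-of-the-envelope: when does "cheap model first, big model on failure" stop saving money?
# All costs and probabilities below are invented for illustration.

def expected_routed_cost(cheap_cost, big_cost, fail_rate, loop_factor=1.0):
    """Cheap model answers first; the big model re-runs the request when it fails.
    loop_factor > 1 models the cheap model burning extra tokens in reasoning loops."""
    success = (1 - fail_rate) * cheap_cost
    failure = fail_rate * (cheap_cost * loop_factor + big_cost)
    return success + failure

if __name__ == "__main__":
    big = 10.0   # pretend the flagship costs 10 units per answer, the small one 1
    for fail_rate in (0.1, 0.5, 0.8):
        for loop_factor in (1.0, 3.0):
            cost = expected_routed_cost(1.0, big, fail_rate, loop_factor)
            verdict = "still cheaper" if cost < big else "MORE expensive than the big model alone"
            print(f"fail={fail_rate:.0%}, loop x{loop_factor:g}: expected {cost:.1f} vs {big:g} -> {verdict}")
```

The loop factor is the nasty part: a dumb model that spins in circles burns a multiple of its normal cost before the big model even gets to retry.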


Yeah that’s also something that you have to train for, i’m not super aware of the technicals but model routing is definitely important to the AI companies. I suspect that’s part of why they can pretend that “inference is profitable” as they are already trying to squeeze it down as much as possible.


To clarify: model collapse is a hypothetical phenomenon that has only been observed in toy models under extreme circumstances. This is not related in any way to what is happening at OpenAI.
OpenAI made a bunch of choices in their product design which basically boil down to “what if we used a cheaper, dumber model to reply to you once in a while”.
But this is just speculation. The fact is, systemd introduced a new optional field in the local database. They don’t publish an OS, so they have no obligation to do anything more; actual implementation would have to happen in other projects.
What this is is a spite-fork by some random AI researcher, and anybody installing that on their system has way bigger problems here and now than hypothetical ID verification in some maybe-future.