" …
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.
Large language models learn by predicting which word comes next in a sequence. They have been trained on vast quantities of human-generated text, and the results have been remarkable: ChatGPT, Claude, and Gemini have demonstrated an ability to generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.
His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail (the approach that makes generative AI both powerful and prone to hallucination), JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience."
It’s pretty crazy to me that Zuck let an actual academic like Yann LeCun go for a kid like Alex Wang. Seems like some very short-term thinking.
I’m overall still skeptical, but this does sound a lot more like how I imagine a true AI would work. I’ve also thought LLMs were a dead end for a while now.
LLMs are an obvious dead end when it comes to actual “intelligence” or understanding how the world works.
But, this sounds like a “draw the rest of the owl” situation.
“JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail.”
Oh, it’s that simple is it? Just have it “learn abstract representations of how the world works”. Amazing how nobody thought to do that before!
I think I understand the distinction they’re trying to draw. Current models are trained on billions of pictures of cats and billions of pictures of dogs. You feed it an image of Fido and it finds a point in 2500 dimensional space and knows whether that point is in the “cat space” or “dog space”. It can be very good, but it doesn’t have any “understanding” of what makes something a cat vs. a dog. Humans, OTOH, aren’t trained on billions of images. But, they learn about things like “teeth” and “whiskers” and “snouts” and “eyes”. Within their knowledge of eyes, they spot that vertical slit pupils are unusual and different, and part of what makes something “catlike”. AFAIK, nobody has ever managed to create a system that learns abstract features without intensive human training.
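For what it’s worth, the “point in cat space or dog space” picture can be caricatured in a few lines of plain Python. Everything here is invented for illustration (three-dimensional embeddings, hand-picked centroids); a real system would get thousands of dimensions from a trained encoder.

```python
import math

# Pretend class centroids learned from many labelled images.
# Real embeddings have thousands of dimensions, not three.
CENTROIDS = {
    "cat": (0.9, 0.1, 0.4),
    "dog": (0.2, 0.8, 0.5),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(embedding):
    """Return the label whose centroid is closest by cosine similarity."""
    return max(CENTROIDS, key=lambda label: cosine(embedding, CENTROIDS[label]))

fido = (0.25, 0.75, 0.5)   # hypothetical embedding of a photo of Fido
print(classify(fido))       # prints: dog
```

The point of the toy: the classifier is just measuring geometric proximity. There is no notion of “whiskers” or “slit pupils” anywhere, which is exactly the gap the comment above is describing.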
I like that they’re trying something new. But, are they counting on a massive breakthrough on a problem that has existed since people first started theorizing about AI? Or, is it just a matter of refining a known process?
Good luck getting your model to learn how to code through physical experience instead of through text.
Tell it to Lecun. He won the Turing prize. I figure he knows what he’s doing. Let him cook I sez.
PS: I didn’t down vote you. It’s good to be skeptical.
I’m skeptical, but it makes a lot more sense. You don’t just “learn to code.” Writing the text is the easy part. It’s about solving problems. This is next to impossible to do reasonably without actually understanding what the solution needs to do and what capabilities you have to do it. That’s why the LLM method has produced such shit code. It’s just reproducing text. It doesn’t actually understand the problem or what it can use to get it done.
Coding is a solved problem; people with zero understanding can do it by copypasta from stack overflow, and similarly skilled LLMs can do it right now, cheaper. If you’re a “coder”, you have a lovely hobby but no career. Sorry.
If you’re a software engineer though, you have nothing to fear from current LLMs. But there is much more chance of LeCun’s models learning engineering - i.e. problem solving, in which writing code is just one of the tools, and not even the most important one - through physical experience and not just text. It is, after all, how all the software engineers today did the vast majority of their learning.
This page is broken. I accepted the cookies and instead of letting me read the article it shows me a full page about cookies that I can’t close.
The Wayback Machine has not archived that URL.
Yes, it has, though it seems that particular URL borked itself between me posting it and you looking at it.
Here - have the raw text, copy pasted.
Yann LeCun just raised $1bn to prove the AI industry has got it wrong
By Ana-Maria Stanciuc
The Turing Award winner left Meta four months ago convinced that large language models are a dead end. Today he announced $1.03 billion in seed funding, Europe’s largest ever, to build something different.
In November 2025, Yann LeCun walked into Mark Zuckerberg’s office and told his boss he was leaving. He had spent twelve years building Meta’s AI research operation into one of the most respected in the world, and had become one of the industry’s most vocal critics of the technology dominating it.
Large language models, he argued, were a statistical illusion. Impressive, yes. Intelligent, no. He thought he could build something better, and he thought he could do it faster outside Meta than inside it. On Tuesday, investors put $1.03 billion behind that conviction.
Advanced Machine Intelligence Labs, AMI, pronounced like the French word for “friend”, announced its seed round on 10 March 2026, just four months after its founding. The round values the company at $3.5 billion on a pre-money basis and is believed to be the largest seed round ever raised by a European startup.
Five firms co-led it: Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, the vehicle through which Amazon founder Jeff Bezos makes personal investments. Nvidia, Toyota, Samsung, and Singapore’s Temasek also participated, alongside French VC firm Daphni, South Korean investor SBVA, and a long list of prominent individuals including Tim and Rosemary Berners-Lee, venture capitalist Jim Breyer, entrepreneur Mark Cuban, and former Google chief executive Eric Schmidt.
LeCun initially sought around €500 million, according to a leaked pitch deck reported by Sifted. Demand exceeded that figure significantly. He ended up with €890 million, roughly $1.03 billion, and told journalists this week that interest had been high enough that AMI could be selective about which investors it accepted.
The company’s headquarters are in Paris, with additional offices planned in New York, Montreal, and Singapore. LeCun, who holds dual French-American citizenship and remains a professor of computer science at New York University, will serve as executive chairman. Day-to-day operations will be led by Alexandre LeBrun, a French entrepreneur who previously founded and ran Nabla, the medical AI startup, and who now becomes AMI’s chief executive.
The rest of the founding team is drawn almost entirely from Meta’s AI research organisation. Michael Rabbat, Meta’s former director of research science, joins as vice president of world models. Laurent Solly, Meta’s former vice president for Europe, becomes chief operating officer. Pascale Fung, a former senior director of AI research at Meta, takes the role of chief research and innovation officer. Saining Xie, previously at Google DeepMind, becomes chief science officer.
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.

The case against LLMs
Large language models learn by predicting which word comes next in a sequence. They have been trained on vast quantities of human-generated text, and the results have been remarkable: ChatGPT, Claude, and Gemini have demonstrated an ability to generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.
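The next-word objective itself is simple enough to sketch with bigram counts. The example below is an illustrative toy, not how any production LLM works; real models learn the same objective with neural networks over billions of tokens rather than a frequency table.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real models train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Most frequent word observed after `word` in the training text."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))   # prints: cat  ("cat" follows "the" twice here)
```

However large the model, the training signal is the same shape: given the context so far, score candidate next tokens. LeCun’s argument is about the limits of that signal, not about model size.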
His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail (the approach that makes generative AI both powerful and prone to hallucination), JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience.
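Nothing public describes AMI’s actual code, but the joint-embedding idea from LeCun’s published JEPA work can be caricatured in a few lines: the loss is computed between abstract representations, not raw inputs, so unpredictable surface detail never penalises the model. Everything below (the toy encoder, the identity predictor, the two-region “images”) is an invented stand-in, not the real architecture.

```python
def encoder(observation):
    # Real JEPA: a learned network. Toy stand-in: keep only coarse
    # structure (mean value per region) and discard fine detail.
    return [sum(region) / len(region) for region in observation]

def predictor(latent_now):
    # Real JEPA: a learned model of dynamics. Toy stand-in: identity.
    return latent_now

def latent_loss(pred, target):
    # The key idea: error is measured between representations,
    # not between raw pixels or words.
    return sum((p - t) ** 2 for p, t in zip(pred, target))

frame_now  = [[0.1, 0.2], [0.8, 0.9]]   # toy "image": two regions
frame_next = [[0.2, 0.1], [0.9, 0.8]]   # same structure, shuffled detail

loss = latent_loss(predictor(encoder(frame_now)), encoder(frame_next))
print(loss)   # 0.0: identical abstract state despite different "pixels"
```

A generative model predicting raw pixels would be penalised for every shuffled value in `frame_next`; here the two frames encode to the same abstract state, so the predictor is never punished for detail it could not have known.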
Within one to two years, LeCun told AFP, AMI plans to begin discussions with corporate partners. Within three to five years, he said, the goal is to produce “fairly universal intelligent systems” capable of being deployed across almost any domain requiring machine intelligence. He wants AMI, he added, to become “the main provider of intelligent systems.”
The timing and geography of the announcement are not coincidental. LeCun has been explicit about AMI’s positioning as a European, and specifically French, counter to the American and Chinese AI giants. “We are one of the few frontier AI labs that are neither Chinese nor American,” he has said. The choice of Paris as headquarters, and the involvement of French investors Cathay Innovation and Daphni, reflects that framing.
Whether that ambition is achievable remains genuinely open. AMI has no product, no revenue, and no near-term prospect of either. LeCun acknowledged to journalists this week that the company would spend its first year focused entirely on research and development. World models, by his own account, are a long-term scientific project, not the kind of AI startup that ships a product in three months and posts revenue in six.
What the $1.03 billion seed round demonstrates, for now, is that the investors backing it are willing to wait. LeCun has one of the most credible research records in AI: he shared the Turing Award in 2018 for work on convolutional neural networks that underpins most of modern machine vision, and his argument that LLMs have fundamental architectural limits has been consistent enough, and long enough, that dismissing it is no longer the safe assumption it once was. The question is whether being right about the problem is the same as being right about the solution.
If he thinks there is any promise in any sort of AI at all, he is as idiotic as the lot of them.
Switching the sauce doesn’t make a shit sandwich any more edible than it was before…