Artificial General Intelligence (AGI) is not coming.

I came across the post below from the Twitter account Sphinx (@protosphinx) and thought it was worth sharing beyond that platform. I hope you find it interesting as well.

AGI is not coming.

We are nowhere near AGI. What we have today is inference, not learning.

Models get trained once on huge fixed datasets, then frozen. You ask questions, they remix patterns they already saw. Nothing updates. Nothing sticks. Talking to the model does not make it smarter. It does not learn from you. Ever.

Learning is still slow, expensive – and offline.

Look at self-driving. You drive around a pothole, make a U-turn, and come back. The car’s AI does not learn that you just solved that exact problem. It reacts the same way every time, using sensors and rules. Do this 20 times a day and it still has zero memory that the pothole exists. It just re-sees it. That is why edge cases never die. There is no local learning. No accumulation.

No ‘oh yeah, I’ve seen this before’.

LLMs work the same way. Tell it your name and it does not remember. The only reason it looks like memory is because scaffolding keeps shoving your name back into the prompt every time and sanitizing the output.

The model itself has no idea who you are and cannot learn from interaction. It is structurally incapable.
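The "memory" trick described above can be sketched in a few lines. This is a toy illustration, not any real system: `frozen_model` is a hypothetical stand-in for a stateless LLM, and the two chat loops differ only in whether the scaffolding re-injects the transcript into every prompt.

```python
def frozen_model(prompt: str) -> str:
    """A stateless stand-in for a frozen LLM: it can only use what is in
    the prompt it is handed right now. Nothing persists between calls."""
    if "My name is " in prompt and "What is my name?" in prompt:
        # The name is only "remembered" because it sits in the prompt text.
        name = prompt.rsplit("My name is ", 1)[1].split(".")[0]
        return f"Your name is {name}."
    if "What is my name?" in prompt:
        return "I don't know your name."
    return "OK."

def chat_without_scaffolding(turns):
    """Each turn is sent alone: the model has no trace of earlier turns."""
    return [frozen_model(t) for t in turns]

def chat_with_scaffolding(turns):
    """Scaffolding keeps a transcript and shoves it back into every prompt."""
    history, replies = [], []
    for t in turns:
        history.append(t)
        replies.append(frozen_model("\n".join(history)))
    return replies

turns = ["My name is Ada.", "What is my name?"]
print(chat_without_scaffolding(turns))  # → ['OK.', "I don't know your name."]
print(chat_with_scaffolding(turns))     # → ['OK.', 'Your name is Ada.']
```

The model function is identical in both runs; only the wrapper changes. That is the whole illusion: the "memory" lives in the scaffolding, never in the weights.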

And the scaffolding is the worst part. It is pure duct tape. Just prompts on prompts on prompts around a frozen model. When something breaks, nobody fixes learning. They add another layer. Another rule. Another retry. Another evaluator model judging the first model.

So you end up with systems that are insanely complex but mentally shallow. Debugging is hell because behavior comes from hack interactions, not a learnable core. Tiny prompt tweaks cause wild behavior shifts. Latency goes up. Costs go up. Reliability goes down. None of this compounds into intelligence. It just hides the cracks.
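The duct-tape pattern above can also be sketched. This is a deliberately toy example with two hypothetical stub models: a frozen generator, an evaluator model judging its output, and a retry loop that bolts another rule onto the prompt on each failure instead of fixing learning.

```python
def generator_model(prompt: str) -> str:
    """Stateless frozen model; its behavior shifts only because the
    scaffolding mutates the prompt on each retry."""
    if "Be extra careful." in prompt:
        return "42"          # the tweak that happens to pass the evaluator
    return "maybe 42?"       # vague answer the evaluator rejects

def evaluator_model(answer: str) -> bool:
    """A second frozen model whose only job is judging the first one."""
    return answer.strip().isdigit()

def answer_with_scaffolding(prompt: str, max_retries: int = 3) -> str:
    """Retry loop: each failure adds another rule to the prompt.
    Nothing is learned for next time; the same dance repeats per query."""
    out = generator_model(prompt)
    for _ in range(max_retries):
        if evaluator_model(out):
            return out
        prompt += " Be extra careful."   # another layer, another rule
        out = generator_model(prompt)
    return out  # give up

print(answer_with_scaffolding("What is 6 * 7?"))  # → 42 (after one retry)
```

Note that a tiny string change ("Be extra careful.") flips the behavior entirely, which is the fragility the paragraph above is pointing at: behavior emerges from hack interactions between layers, not from a learnable core.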

Until we have real persistent learning and real memory inside the system, there is no AGI.

LLMs are not built for this. You cannot prompt your way out of it. You need a totally different architecture. Yann LeCun is right.

And even then, what architecture can actually learn online, store memory, and stay stable on today’s hardware?

Best case, maybe 5–10 years.

Right now it is all inference. It looks magical, but the emperor has no clothes. A lot of people see it. Almost nobody says it out loud.

Original source: https://x.com/i/status/2020197544559829188

My related micropost: https://mothcloud.com/micropost/todays-artificial-intelligence-is-assisted-or-augmented-intelligence/

Last modified: February 12, 2026

