Noah Smith says superintelligence is already here, and he's not talking about lab demos.
The Signal
Smith's argument cuts through the AGI definition debates by pointing at what's actually happening in research labs right now. AI systems are already discovering novel materials, solving protein-folding problems humans couldn't crack, and generating physics hypotheses that humans need only verify, not invent. The superintelligence threshold, he argues, isn't some Turing-test milestone. It's whether these systems can recursively improve scientific knowledge faster than human researchers can. By that measure, we've crossed it.
The piece lands at a critical moment. Major labs are shifting from "will we get AGI" to "how do we govern systems that already exceed human capability in narrow but expanding domains." DeepMind's GNoME materials-discovery model predicted 2.2 million new crystal structures, roughly 380,000 of them stable. Human materials scientists had catalogued about 48,000 stable structures in all of history. That's not incremental improvement. That's a different kind of intelligence at work.
Smith flags the control problem that keeps AI safety researchers up at night. Not the Hollywood version where Skynet launches nukes. The prosaic version where optimization systems pursue goals in ways we didn't anticipate and can't easily stop. When your AI can redesign itself, discover new physics, and operate faster than human oversight, the question isn't whether it's superintelligent. It's whether we can maintain meaningful control.
The shift matters because it changes what we're building toward. If superintelligence is already deployed in research contexts, then the agent economy isn't a future state. It's the transition we're living through. Companies racing to build autonomous AI systems aren't preparing for superintelligence. They're packaging it.
The Implication
Stop waiting for a single AGI moment that never comes. The real question is deployment pace and control architecture. Watch which companies are building agents with clear sandboxes versus those optimizing for autonomy. The winners in the agent economy will be whoever figures out how to harness superintelligent subsystems without losing the reins. If you're building in this space, your governance model matters more than your model parameters.
Source: Noahpinion