The former chief business officer of Google X says three predictions he made in 2020 about AI have already come true, and the world isn't ready for what comes next.

The Signal

When someone who helped scale Google in emerging markets and ran business operations at Google X tells you AI will break the old rules, you should probably listen. Mo Gawdat isn't a futurist selling newsletters. He spent years inside the room where AI went from research project to product infrastructure.

In 2020, right after leaving Google, Gawdat made a prediction that seemed obvious to him but bold to everyone else: AI is inevitable, and there's no way to stop it. Not because of some technological determinism, but because of basic incentives. Once AI proved useful, competition took over. Nations and corporations can't afford to fall behind, so they push forward faster than humans can manage the consequences.

"If anyone is watching that video that we're recording right now, it's because an AI recommended it to them."

The second prediction was about agency. Gawdat argued that modern AI isn't "just software" in the traditional sense. It's a new kind of intelligence that learns, improves, and will soon operate with real-world agency through robots and autonomous systems. That's not a 2030 problem. That's shipping now. Robotics companies are integrating foundation models into physical systems that can navigate warehouses, manipulate objects, and make decisions in unstructured environments.

The third prediction cuts deeper: the biggest near-term danger isn't the intelligence itself, but what humans tell powerful systems to do. The risk stack looks like this:

  • Persuasion at scale (already deployed in political campaigns)
  • Misinformation generation (cheaper and more convincing than ever)
  • Surveillance infrastructure (AI makes mass monitoring economically viable)
  • Cyber conflict (automated attacks that adapt faster than human defenders)
  • Automated warfare (the obvious endpoint nobody wants to talk about)

These aren't hypotheticals. They're business models and military procurement categories.

But Gawdat's most interesting claim is economic. He predicts we're entering a turbulent transition that will force the world to rethink capitalism itself. When AI drives the marginal cost of production toward zero across more sectors, the assumptions that underpin market economies start to crack. What happens when software can write itself, design products, manage supply chains, and handle customer service without human labor costs?

"We will likely see disruption before we reap the benefits of AI."

Gawdat describes an AI "arms race" in which more decisions get handed over to machines as systems scale faster than humans can manage. That's not a warning about Skynet. It's a warning about institutional velocity. When your competitor is moving at machine speed and you're moving at committee speed, the market punishes you. So you automate more decisions. Then everyone automates more decisions. Then the feedback loops get tight and the error correction gets thin.

The Implication

If Gawdat is right, the next five years won't be about AI getting smarter. They'll be about societies adjusting to systems that are already smart enough to remake labor markets, persuade populations, and operate at scales humans can't supervise. The people who win won't be the ones with the best models. They'll be the ones who figure out how to build with AI while maintaining human judgment in the loop.

For anyone building in this space: the technical challenge is solved. The governance challenge just started. Watch for companies that ship AI tools with real guardrails, not just terms of service. Watch for new economic models that account for abundance rather than fighting it. And watch for the places where humans insist on staying in control, because those will be the spaces where trust still matters.

Sources

Business Insider Tech