Y Combinator's CEO bragged about shipping 37,000 lines of AI code per day. A developer cracked it open and found exactly what you'd expect from AI without adult supervision.

The Summary

  • Garry Tan claimed his AI agents ship 37K lines of code daily across five projects. A developer audited the output and found bloated, broken code shipped to production.
  • Tan's AI-built blog loads 6.42MB and makes 169 server requests. Hacker News (also a YC property) loads 12KB with 7 requests.
  • The site ships 28 test files and 78 unused JavaScript controllers to users, plus the same logo in eight formats, including one 0-byte empty file.
  • Volume metrics tell you nothing about code quality. AI agents optimize for throughput, not craftsmanship.

The Signal

The Garry Tan roast is funny, but the real story is what happens when you mistake agent output for engineering. Tan runs Y Combinator, the most influential startup accelerator on the planet. He's not some random founder experimenting with Cursor. When he posts about 37,000 lines per day and calls it "absolutely insane" progress, thousands of founders take notes.

Polish developer Gregorein did what good engineers do: he looked under the hood. What he found is a master class in why lines of code is the worst possible metric for AI agent performance. The blog loads 535 times more data than Hacker News does. It ships developer test files that end users should never see. It includes 78 JavaScript controllers for features that don't exist on the homepage. The bear logo shows up in eight formats, one of which is literally empty.
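This kind of audit doesn't require special tooling. As a minimal sketch (not Gregorein's actual method), here is a Python-standard-library asset counter that tallies how many external scripts, stylesheets, and images a page's HTML references, a rough proxy for request count:

```python
from html.parser import HTMLParser


class AssetCounter(HTMLParser):
    """Collects external assets (scripts, stylesheets, images) a page references."""

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.assets.append(attrs["src"])


def count_assets(html: str) -> int:
    """Return the number of external assets referenced in the given HTML."""
    parser = AssetCounter()
    parser.feed(html)
    return len(parser.assets)
```

Feed it a page's HTML (fetched however you like) and a bloated build shows up immediately: a homepage that references dozens of controllers and duplicate logos produces a count wildly out of proportion to what's visible on screen.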

This isn't just sloppy. It's what happens when you treat AI coding agents like employees instead of tools. Agents are phenomenal at generating code. They're terrible at architectural decisions, performance optimization, and knowing what not to build. Without a human setting constraints and reviewing output, they optimize for the wrong things. More code. More files. More "just in case" logic. They're like a junior developer who learned that busy looks productive.

The broader pattern: as AI coding tools get faster, the gap between shipping and shipping well is widening. Tan's 72-day streak sounds impressive until you realize the site is a performance disaster. Speed matters. But speed producing garbage just means you're failing faster. The agents aren't broken. The human workflow around them is.

The Implication

If you're building with AI agents, audit their output like you would any junior engineer. Set explicit performance budgets. Review pull requests. Just because the agent can generate 10,000 lines doesn't mean you should ship them. The companies winning in the agent economy will be the ones who figure out how to constrain AI creativity, not unleash it. Speed is good. Speed with judgment is better.
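A performance budget can be enforced mechanically. As a sketch (the 500KB budget and the CLI shape are hypothetical choices, not anything from the source), a CI step could sum everything in the build output and fail the pipeline when the agent's output blows past the limit:

```python
import sys
from pathlib import Path

# Hypothetical budget: fail the build if shipped assets exceed 500 KB.
BUDGET_BYTES = 500 * 1024


def total_size(build_dir: str) -> int:
    """Sum the size of every file the build would ship to users."""
    return sum(p.stat().st_size for p in Path(build_dir).rglob("*") if p.is_file())


def check_budget(build_dir: str, budget: int = BUDGET_BYTES) -> bool:
    """Report shipped bytes against the budget; True means within budget."""
    size = total_size(build_dir)
    print(f"{size} bytes shipped (budget: {budget})")
    return size <= budget


if __name__ == "__main__":
    # Exit nonzero over budget, so CI blocks the merge.
    sys.exit(0 if check_budget(sys.argv[1]) else 1)
```

Run against the build directory in CI, this turns "review the agent's output" from a good intention into a gate: 10,000 generated lines that balloon the bundle simply don't merge.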


Source: Fast Company Tech