Goldman Sachs just said the quiet part out loud: the AI they're using to stay competitive might be the same AI that could break them.

The Summary

The Signal

Goldman Sachs is running Claude and now finds itself in the strange position of partnering with Anthropic to defend against the offensive capabilities of Anthropic's own models. Solomon's "hyper-aware" comment wasn't corporate fluff. It was an acknowledgment that the tools banks need to compete are the same tools hackers will use to attack them. The bank has been monitoring advances in LLMs as part of its broader cyber defense, but Mythos appears to have crossed a threshold that demanded public positioning.

Anthropic itself issued warnings about Mythos's cybersecurity risks, a rare move for an AI lab launching a flagship model. Most companies tuck capability concerns into responsible AI frameworks buried in appendices. Anthropic put the warning front and center. That says something about what Mythos can do.

"The US bank had been monitoring the rapid advances in artificial intelligence, including large language models, as part of wider efforts to protect itself from hackers."

The release of Mythos sets up a familiar pattern: Anthropic ships, OpenAI responds. The tech press is already speculating about whether OpenAI's rumored "Spud" model will follow. But the real story isn't the model horse race. It's that we've entered an era where:

  • Major financial institutions publicly acknowledge AI as dual-use infrastructure
  • AI companies preemptively warn about their own products' offensive capabilities
  • The gap between "tool that makes us more efficient" and "weapon that could destroy us" has collapsed to zero

Goldman isn't unique here. Every bank, and every enterprise with assets worth protecting, is making the same calculation. Do we adopt the newest models to stay competitive, knowing adoption also expands our attack surface? Do we trust the AI companies building offensive capabilities to also build our defenses? The answer appears to be: yes, but nervously, and with the CEO on record saying he sees the risk.

The Implication

Watch for more corporate leaders to follow Solomon's lead. Publicly naming the AI you're using and the risks you're managing isn't just transparency theater. It's liability management. If Goldman gets breached by an AI-powered attack, Solomon can point to this moment and say he was on it. Expect other banks, insurers, and regulated industries to make similar statements in the coming quarter.

For AI companies, Anthropic just set a new standard. You can't ship a model this capable without addressing offensive use cases out loud. OpenAI, Google DeepMind, and whoever else is racing to the next capability threshold will need to do the same or look reckless by comparison. The era of "we built it, you figure out the risks" is over.

Sources

The Guardian Tech | Big Technology