CAGI TechTV Special on Mythos: Is AI Now a Danger?

Peter Warren and Mike Loginov from CAGI TechTV host a debate on Mythos, Anthropic’s latest model. 

They ask: Can Anthropic’s new model, Claude Mythos, outperform humans at hacking and cybersecurity tasks? And if it can, what are the implications?

They are joined by: 

  • Sal Kimmich (Security Architect, GadflyAI and Author) 
  • Sharon Gai (Author of How to do more with less with AI) 
  • Neil Sahota (Chief AI Officer, Consolidated Analytics) 
  • Ramin Farassat (Chief Product Officer, Menlo Security)

This inaugural CAGI TechTV panel discussion tackled the risks, opportunities, and governance challenges posed by advanced AI, framed around Anthropic’s recent release of what the panel refers to as “Project Mythos” (or “Mythos”) — an AI system capable of finding and exploiting software vulnerabilities, which some argue is too dangerous to release. The conversation ranged across cybersecurity, regulation, liability, ethics, and public education, featuring contributions from Sal, Neil, Sharon, Ramin, and host Pete.

The central tension: threat or opportunity?

The panel opened by acknowledging the “existential moment” AI represents. Ramin emphasised that AI risks cannot be contained within national borders — multinational corporations operate across jurisdictions including the US, UK, and China, and data moves between them constantly. He lamented that genuine international collaboration on AI governance is still largely absent.

Neil countered that AI is both threat and opportunity, with national security, military, and economic dimensions driving competitive rather than cooperative behaviour. He pointed to the UN’s “AI for Good” initiative as one attempt to coordinate globally, but argued that traditional regulatory processes are fundamentally outdated — regulations typically respond after harm occurs, and AI moves too fast for that model to work. He called for a broader table including technologists, industry, and academia, and introduced the concept of a “double deadlock”: technologists assume business leaders are flagging risks, businesses assume technologists understand the dangers, regulators expect businesses to self-police, and businesses wait for regulatory guidance — with the result that nobody acts.

An optimistic counter-view

Sal pushed back against the prevailing gloom. Because AI operates within the bounded combinatoric space of computer chips, he argued, we are actually on the cusp of closing long-standing gaps in cybersecurity vulnerability disclosure. If hygiene and supply chain practices improve in ecosystems like Python, Java, C, and Kubernetes, the binaries consumers rely on could become safer than ever before. He framed the current moment as a “cyber arms race” in which tools like Mythos can both find and patch vulnerabilities — and the closed, slow-disclosure model being used is precisely the right approach while defences catch up with adversaries.

AI as tool or infrastructure?

Pete asked whether AI is still a tool or has become essential infrastructure. Neil maintained it is a tool — albeit a deeply interwoven one, like email. Sal drew a historical parallel to the printing press, which was also once demonised. But he drew a sharp distinction between LLMs and agentic AI: deploying autonomous agents inside a business is effectively creating an artificial insider actor, requiring guardrails from the kernel level up through human rights considerations. He insisted ethics discussions must move beyond philosophy to mathematically provable “secure primitives.”

On agents, Ramin noted the industry phrase that “the next billion users are going to be AI agents.” Sal added that agentic models require a new regulatory threat model — controls must be tight in time and space, and critically, agents must be ephemeral, dissolving once their task completes to prevent the accumulation of what Java developers call “god class” privileges.
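The panel did not discuss implementation, but the “ephemeral, tightly scoped” principle Sal describes can be sketched as a time-boxed capability grant that is revoked the moment the agent’s task completes, so no agent accumulates standing privileges. All names and APIs below are illustrative, not anything from Mythos or Anthropic:

```python
import time
from contextlib import contextmanager

class CapabilityStore:
    """Tracks which capabilities each agent currently holds, with expiry times."""
    def __init__(self):
        self._grants = {}  # agent_id -> {capability: expiry timestamp}

    def grant(self, agent_id, capability, ttl_seconds):
        self._grants.setdefault(agent_id, {})[capability] = time.monotonic() + ttl_seconds

    def has(self, agent_id, capability):
        expiry = self._grants.get(agent_id, {}).get(capability)
        return expiry is not None and time.monotonic() < expiry

    def revoke_all(self, agent_id):
        self._grants.pop(agent_id, None)

@contextmanager
def ephemeral_agent(store, agent_id, capabilities, ttl_seconds):
    """Grant narrowly scoped, time-boxed capabilities; revoke everything on exit."""
    for cap in capabilities:
        store.grant(agent_id, cap, ttl_seconds)
    try:
        yield agent_id
    finally:
        store.revoke_all(agent_id)  # the agent "dissolves" with its task

store = CapabilityStore()
with ephemeral_agent(store, "scan-bot-1", {"read:repo"}, ttl_seconds=60) as agent:
    assert store.has(agent, "read:repo")        # scoped in space to the granted capability...
    assert not store.has(agent, "write:repo")   # ...nothing beyond the explicit grant
assert not store.has("scan-bot-1", "read:repo")  # ...and scoped in time: gone after the task
```

The `finally` clause is the point: revocation is unconditional, so even a crashed or misbehaving agent cannot keep its privileges past its task, avoiding the “god class” accumulation the panel warns about.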

Education, liability, and the human gap

Sharon highlighted what she sees as the biggest blind spot: basic AI education for non-technical people. She referenced a viral chart showing that most of the world’s population has never even touched a chatbot, while a tiny sliver builds and trains these models. The gulf between the two groups will create serious societal shock when tools like Mythos are widely deployed.

Pete raised David Brin’s argument that lawyers — “super predators” — will ultimately regulate AI through liability. Neil agreed that major law firms see this as a business opportunity and are repositioning as risk management advisors, but noted that traditional liability frameworks break down with technologies like autonomous vehicles, where ownership models themselves are shifting. He gave a sobering example of a Canadian case involving deepfaked child imagery generated from social media photos — a use case nobody at organisations like UNICEF had anticipated. His point: humans are good at building toward positive outcomes but terrible at imagining misuse, and proactive adversarial thinking is exactly what responsible AI requires.

Conclusion

Ramin closed by noting that Anthropic’s Mythos project embodies precisely this adversarial mindset — using AI to find vulnerabilities so they can be fixed. Pete wrapped up on an optimistic note, contrasting the panel’s call for guardrails and slow rollout with JD Vance’s recent Munich Security Summit declaration calling for deregulation. The panellists broadly agreed: the conversation is shifting back toward thoughtful governance, broad consultation, and recognition that AI is powerful enough to warrant real caution — provided that caution is matched by the technical rigour to implement it.

About the hosts:

  • Tech TV Presenter
  • Co-Founder of The Cybersecurity & AI Governance Initiative (CAGI)
