Threat Intelligence: How Anthropic stops AI cybercrime

Anthropic · August 27, 2025

Anthropic (@anthropic-ai)

About

We’re an AI safety and research company. Talk to our AI assistant Claude on claude.com. Download Claude on desktop, iOS, or Android. We believe AI will have a vast impact on the world. Anthropic is dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI.

Video Description

AI helps people work more efficiently. Unfortunately, this also applies to criminals. We've discovered that our own AI models are being used in sophisticated cybercrime operations, including a large-scale fraud scheme run by North Korea. What is Anthropic doing to detect and prevent AI cybercrime? How exactly are criminals using large language models to scam their victims? And what is "vibe hacking"? In this video, Anthropic's Jacob Klein and Alex Moix answer all these questions and more, discussing their work disrupting AI cybercrime and their new Threat Intelligence report.

Read the Threat Intelligence report: https://www.anthropic.com/news/detecting-countering-misuse-aug-2025

Sections:
Introduction [00:00]
"Vibe hacking" [01:49]
Our response to abuses of our models [08:34]
North Korea's employment scam [15:02]
Ransomware, romance scams, and other abuses [23:50]
How concerned should we be? [31:46]