The Email That Looked Real: AI Phishing and the People We Are Leaving Behind
A developer reflects on AI-powered phishing after a colleague nearly fell for a near-perfect fake insurance email. How attackers target ordinary people, where they get their data, and what happens to those who do not have the tools to verify. A personal take on the human cost of AI-accelerated fraud.
📅 April 2026
✍️ Gianluca
This week Anthropic announced that its new Claude Mythos Preview model can discover vulnerabilities in virtually any operating system or browser, and autonomously develop working exploits for them. Access has been restricted to a small consortium of defenders from Microsoft, Apple, Google, and the Linux Foundation, under an initiative called Project Glasswing. The announcement reignited a debate that has been simmering in security circles for years: has AI finally changed the game, or is this just the latest chapter in a long cycle of hype?
I will leave that debate to the researchers for now. Because while experts argue about sophisticated exploit chains and nation-state threat models, something simpler is already happening, today, in inboxes around the world. And it is affecting people who have never heard of a large language model.
The email that looked 98% real
A colleague showed me an email this week. It appeared to come from her health insurance company. The sender domain looked right. The logo was correct. The formatting matched what she had seen in legitimate communications before. The message asked her to verify her account information and click a link to reset her password. Nothing in the email looked obviously wrong.
We spent time analyzing it together. The domain had a subtle character variation, easy to miss at a glance. The link pointed to a lookalike site designed to harvest credentials. Only after she went directly to her insurer's real website and checked her account there did it become clear: the email was a fraud. A well-constructed, patient, and targeted fraud.
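That subtle character variation is exactly the kind of thing a machine checks more reliably than a tired human. Here is a minimal Python sketch of the idea, using only the standard library; the domain names are hypothetical stand-ins, not the insurer from the actual email.

```python
import unicodedata
from difflib import SequenceMatcher

# Hypothetical legitimate domain, standing in for the insurer's real one.
LEGIT_DOMAIN = "example-insurance.com"

def domain_red_flags(domain: str, legit: str = LEGIT_DOMAIN) -> list[str]:
    """Collect simple red flags for a sender domain."""
    flags = []
    # Non-ASCII characters can hide homoglyphs (e.g. Cyrillic 'а' for Latin 'a').
    for ch in domain:
        if ord(ch) > 127:
            name = unicodedata.name(ch, "UNKNOWN CHARACTER")
            flags.append(f"non-ASCII character {ch!r} ({name})")
    # Nearly-but-not-quite identical to the real domain: classic typosquatting.
    similarity = SequenceMatcher(None, domain.lower(), legit.lower()).ratio()
    if domain.lower() != legit.lower() and similarity > 0.85:
        flags.append(f"near-match to {legit} (similarity {similarity:.2f})")
    return flags

# A digit '1' standing in for the letter 'l': easy to miss at a glance.
print(domain_red_flags("examp1e-insurance.com"))
# A Cyrillic 'а' that renders identically to a Latin 'a'.
print(domain_red_flags("ex\u0430mple-insurance.com"))
```

Mail clients do far more than this internally, but the point stands: the signals are mechanical, which is exactly why the people without tooling are the ones exposed.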
What struck me was not that such an attack existed. What struck me was that she had almost clicked. And I am a developer. I work with these tools every day. I know what phishing is. I know the patterns to look for. And even I needed a second look.
How did the attacker know?
The most unsettling question is not about the email itself. It is about how the attacker knew she was a customer of that specific insurance company. Was it pure coincidence, a mass campaign that happened to match? Was her data exposed in a breach she never heard about? Or was it sourced from one of the hundreds of data broker companies that aggregate, package, and sell personal information, including insurance relationships, medical categories, and financial affiliations, to anyone willing to pay?
Data brokers collect information from loyalty programs, public records, social media profiles, app permissions, and third-party trackers. They sell it to advertisers, insurers, employers, and, indirectly, to anyone who can access the same data pipelines through less legitimate channels. A targeted phishing campaign does not need a sophisticated exploit when it can simply buy a list of people associated with a specific insurer and craft a convincing email for each of them. AI makes the crafting part trivially fast and virtually free.
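The "breach she never heard about" hypothesis is at least partially checkable. Have I Been Pwned exposes breach data through an API; the sketch below assumes you hold a valid API key and that the v3 endpoint still works as documented at the time of writing, both of which you should verify against the current documentation before relying on it.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Placeholder: the breached-account endpoint requires a paid API key.
HIBP_API_KEY = "your-api-key-here"

def known_breaches(email: str) -> int:
    """Return how many known breaches include this address (0 if none)."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(email))
    req = urllib.request.Request(url, headers={
        "hibp-api-key": HIBP_API_KEY,
        "user-agent": "breach-check-sketch",  # the API rejects anonymous agents
    })
    try:
        with urllib.request.urlopen(req) as resp:
            return len(json.load(resp))
    except urllib.error.HTTPError as err:
        if err.code == 404:  # 404 here means "no breach on record", not an error
            return 0
        raise

print(known_breaches("someone@example.com"))
```

A hit does not prove the insurance email came from that breach, and a miss proves nothing at all; it only narrows the question of where the data might have come from.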
The people we are forgetting
The security community tends to frame AI threats in terms of sophisticated attacks against organizations, critical infrastructure, and technical professionals. Those concerns are legitimate. But they tend to overshadow a quieter and more immediate problem.
Think about who else receives email. Your parents. Your grandparents. A teenager who has never encountered the concept of a phishing attack. A neighbor who uses a computer mainly to stay in touch with family. These are not edge cases. They represent the majority of people online. And they are facing the same AI-generated, data-brokered, near-perfect fake emails that almost fooled a developer who was looking for them.
The attack my colleague received was designed to pass a visual inspection. It did not need to be technically sophisticated. It needed to look real. AI makes that trivially easy now: correct grammar, localized language, personalized details, appropriate urgency. The effort required to produce a convincing phishing email has dropped to nearly zero. The effort required to detect one has never been higher.
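There is one place where a fake like this usually still betrays itself: the message headers. The receiving mail server records the outcome of SPF, DKIM, and DMARC checks in an Authentication-Results header, which the sender cannot convincingly forge. A minimal sketch with Python's standard library, assuming the raw message was saved as suspicious.eml; the exact header contents vary by provider.

```python
from email import policy
from email.parser import BytesParser

# Assumes the raw message was saved to disk as 'suspicious.eml'.
with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:       ", msg["From"])
print("Return-Path:", msg["Return-Path"])  # often mismatches From in phishing
# The receiving server's verdict on SPF, DKIM, and DMARC. Look for
# 'spf=fail', 'dkim=fail', or 'dmarc=fail' in these values.
for value in msg.get_all("Authentication-Results", []):
    print("Auth:", value)
```

None of this helps someone who does not know the headers exist, which is rather the point of this piece.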
What is at stake for non-technical users
For a developer, clicking a phishing link is a serious incident that requires immediate action: credential rotation, device audit, incident reporting. It is recoverable, if discovered quickly and handled correctly.
For an elderly person who uses the same password for their email, their bank, and their health portal, the same click can compromise their entire digital life. Savings accounts drained. Medical identity stolen. Email access lost permanently. These are not abstract risks. They are documented outcomes that happen every day, increasingly accelerated by tools that cost almost nothing to deploy.
Children are equally exposed. A child receiving an email that appears to come from a platform they use, a game, a school service, a streaming account, has no framework to evaluate what is real and what is manufactured. And increasingly, attacks targeting younger users are not random: they are constructed using data purchased from brokers who track app usage, device identifiers, and behavioral patterns collected across years of activity.
The infrastructure that never got the update
There is another dimension to this problem that rarely enters the conversation about AI and cybersecurity: the systems that cannot be patched.
Post offices, government counters, public transport displays, hospital administrative terminals, and industrial control panels are often running operating systems that are decades old. Windows 98. Windows 2000. Early versions of Windows XP. These systems were never designed to connect to a world of AI-generated threats. Many cannot receive security updates. Many are embedded in hardware that was never meant to be replaced. And many are operated by staff who were trained on those specific systems and have no context for modern cybersecurity practices.
These are not unusual examples. Look at the screen showing departure times at your local train station. Look at the terminal at the post office counter. These systems hold payroll data, citizen records, transit schedules, patient information, and utility access points. A well-targeted phishing campaign against a single employee at one of these organizations, using AI to craft a convincing internal-looking email, can open a door into infrastructure that has no modern defenses whatsoever.
We are moving too fast
I am not opposed to technological progress. I work in this industry. I use these tools. I find genuine value in them. But I think we are being dishonest with ourselves about the pace of change and what it costs the people who are not at the center of the conversation.
AI capabilities are accelerating at a pace that leaves almost everything else behind. Legal frameworks for data brokers remain weak in most jurisdictions. Consumer protection laws around digital fraud have not been meaningfully updated to reflect AI-assisted targeting. Security awareness programs in schools, hospitals, and public institutions are underfunded or nonexistent. And the economics of the problem are completely asymmetric: launching a phishing campaign is cheap, fast, and increasingly automated. Defending against one, especially for individuals without technical resources, remains expensive and requires expertise most people simply do not have.
Consider what it costs a large organization to migrate away from legacy infrastructure. The technical debt alone can span decades of work and millions in investment. A hospital cannot shut down its patient management system for a six-month migration. A public transit authority cannot replace embedded display terminals across an entire city on a software security timeline. These are real constraints, and they do not disappear because the threat landscape has changed overnight.
The reckoning that is already underway
Anthropic frames Mythos Preview as a warning of what is coming. The debate about whether it crosses a genuine threshold is worth having. But the more urgent reckoning is not in the future. It is happening now, in inboxes, in bank accounts, in the compromised credentials of people who had no way to know they were specifically targeted.
Progress in AI is not something we can or should stop. But we need to be honest about who bears the cost when it outpaces the systems designed to protect people. Those people are not in any consortium of defenders. They are not reading threat intelligence reports. They are reading what looks like a password reset email from their health insurance company, and most of them have no one nearby to ask.
What should actually change
Regulatory pressure on data brokers is overdue. The ability to purchase a list of people segmented by insurance type, medical category, or financial relationship and use it to target them with personalized fraud is not a feature of a functional market. It is a structural failure that enables every social engineering campaign at scale. Addressing it would not stop AI-powered phishing, but it would significantly raise the cost of targeted attacks against ordinary people.
Security education needs to reach outside the developer and IT community. Schools, libraries, community centers, and general practitioners are the distribution channels for digital literacy among the populations most at risk. The investment required is modest. The impact would be substantial and immediate.
And for those of us in the industry, the most useful thing we can do is not to debate the theoretical capabilities of the next frontier model. It is to help the people around us understand what a suspicious email looks like, how to verify before clicking, and that it is always safer to navigate directly to a website than to follow a link from an unsolicited message. That colleague showed me the email. I helped her recognize it. That is where the real security work is happening right now, and it scales only through people, not through systems.
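"Verify before clicking" can be made concrete. One check worth teaching, and worth automating: does the visible text of a link match where the link actually points? A small sketch over an HTML email body, standard library only; the domains are again hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag links whose visible text names one domain but whose target is another."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag != "a" or self._href is None:
            return
        text = "".join(self._text).strip()
        # Only compare when the visible text itself looks like a domain or URL.
        if "." in text and " " not in text:
            shown = urlparse(text if "://" in text else "https://" + text).hostname
            actual = urlparse(self._href).hostname
            if shown and actual and shown != actual:
                print(f"MISMATCH: text shows {shown}, link goes to {actual}")
        self._href = None

# Hypothetical example: the text names the insurer, the href does not.
LinkAuditor().feed(
    '<a href="https://examp1e-insurance.com/reset">example-insurance.com</a>'
)
```

It is the kind of check mail clients could surface far more aggressively by default, and for the most part they do not.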
Sources and Further Reading
The announcement of Anthropic Mythos Preview and Project Glasswing was reported by Wired (April 2026), which collected expert reactions from security practitioners including Alex Zenla (Edera CTO), Niels Provos, and Jen Easterly, former director of CISA. Background on data broker practices and their role in enabling targeted attacks is documented by the FTC Data Broker Report and by EPIC (Electronic Privacy Information Center). For an overview of AI-powered phishing at scale, see the IBM Cost of a Data Breach Report 2025 and automated spear-phishing research (arXiv 2412.00586), which found that fully AI-automated spear-phishing emails matched expert human performance with a click-through rate of approximately 54%.
Published April 2026. This is an independent opinion piece and personal reflection, not a sponsored post. CodeHelper has no commercial relationship with the companies or organizations mentioned.