The Real Problem Is Not Future Job Loss. It Is Today's Anxiety.

Matt Shumer's viral essay "Something Big Is Happening" shocked millions. But beyond the debate about future job loss, a more immediate crisis is already unfolding: the psychological damage of anticipatory AI anxiety on workers today.

✍️ Gianluca

In early February 2026, a nearly 5,000-word essay titled "Something Big Is Happening" exploded across the internet. Written by Matt Shumer, CEO of HyperWrite AI, it accumulated over 73 million views across platforms, was republished by Fortune, Inc., and Yahoo, and earned its own Wikipedia article. Shumer appeared on CNBC and CBS Mornings to discuss it. For a piece of writing about the future of work, it provoked something rare: genuine, widespread panic.

The essay's catalyst was February 5, 2026, when two major AI labs released new frontier models simultaneously. Shumer described encountering the new generation of AI not as a tool upgrade but as a qualitative leap: something that felt like judgment, like taste. His most striking admission was personal: "I am no longer needed for the actual technical work of my job." He is a tech CEO and founder, not a junior clerk. The implication was unsettling precisely because he was not the kind of person anyone expected to say this first.

"I think we're in the 'this seems overblown' phase of something much, much bigger than Covid."

Matt Shumer, "Something Big Is Happening" (February 2026)

The core of his argument was simple and brutal: nothing that happens on a screen is safe in the medium term. He cited Dario Amodei, CEO of Anthropic, who estimated that AI would eliminate 50% of entry-level white-collar jobs within one to five years. He cited METR's autonomous task measurement, which found AI could already complete tasks requiring nearly five hours of human expert work, with that number doubling roughly every seven months. He was not writing speculation. He was writing a warning that he felt he had waited too long to send.
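To make the pace of that claim concrete, here is a minimal back-of-the-envelope sketch of the extrapolation it implies. The five-hour baseline and seven-month doubling time come from the figures cited above; the clean exponential compounding is an illustrative assumption, not METR's own methodology.

```python
# Back-of-the-envelope extrapolation of the doubling claim cited above.
# Assumptions (illustrative only): a ~5-hour task horizon today and a
# doubling time of roughly 7 months, compounding smoothly.

def projected_task_horizon(months_from_now: float,
                           baseline_hours: float = 5.0,
                           doubling_months: float = 7.0) -> float:
    """Projected task length (hours of human expert work) AI can complete."""
    return baseline_hours * 2 ** (months_from_now / doubling_months)

if __name__ == "__main__":
    for months in (0, 7, 14, 24, 36):
        hours = projected_task_horizon(months)
        print(f"{months:>2} months out: ~{hours:,.0f} hours "
              f"(~{hours / 40:,.1f} work weeks)")
```

On those assumptions, the projected horizon passes a full 40-hour work week within roughly two years. That compounding arithmetic, whether or not the trend holds, is what gives the essay its urgency.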

On CNBC, Shumer clarified that the essay was not meant to scare people but to push workers to start experimenting with AI tools immediately, so they can understand what is actually coming before it arrives. He chose honesty over comfort. The internet's reaction suggested that many people were not prepared for either.

This Is a Real Essay About a Real Problem. But There Is a Larger One.

The debate that followed Shumer's essay centered, predictably, on whether his claims were accurate. AI skeptic Gary Marcus published a detailed rebuttal. Business journalists asked whether the timeline was plausible. Quantitative analysts tested the specific claims against institutional evidence. All of this is legitimate and important.

But there is a problem that does not require the debate to be resolved before it starts causing damage. The anxiety itself is already here. And that anxiety, sustained and unaddressed, is doing measurable harm to real workers right now, regardless of whether the predictions in the essay prove accurate in two years or twenty.

The Numbers on Worker Anxiety

The scale of current psychological distress is substantial and well-documented.

  • Pew Research Center (February 2025)

    52% of U.S. workers report being worried about AI's future impact on their workplace. 50% of U.S. adults say AI makes them more concerned than excited. Only 10% are more excited than concerned.

  • Mercer Global Talent Trends 2026 (12,000 workers globally)

    Concern about job loss due to AI jumped from 28% in 2024 to 40% in 2026, a 12-point spike in two years. 62% of employees feel their leaders underestimate AI's emotional and psychological impact on them.

  • Jobs for the Future (National Survey 2025)

    44% of workers say AI is doing more harm than good. Workers of color are disproportionately affected: 38% plan to change career pathways due to AI, compared to 23% of the overall workforce.

  • World Economic Forum: FOBO

    The WEF has named a new phenomenon: FOBO, Fear of Becoming Obsolete. 64% of workers are "job hugging," clinging to their current roles despite burnout because they do not trust their ability to compete elsewhere in a labor market being reshaped by AI.

The Psychology of Anticipatory Loss

What makes AI-related anxiety structurally different from ordinary job insecurity is its anticipatory nature. Classical job loss anxiety is triggered by an event: a layoff notice, a restructuring announcement, a performance review. It is reactive. AI anxiety is proactive: it begins before any event occurs, activated by the perception that one's skills are degrading in real time, that the ground is shifting beneath you even as you stand still.

A peer-reviewed 2025 study, indexed in PubMed Central, examining AI-induced displacement anxiety among IT professionals confirmed this pattern in clinical terms. Participants described layered psychological disruptions: acute shock at AI's capabilities, erosion of professional identity (when machines perform the tasks that defined their expertise), and what researchers identified as classic signs of anticipatory grief, including repetitive thoughts about financial insecurity, skill irrelevance, and employability. Crucially, these symptoms appeared before any actual displacement. The threat was sufficient to trigger the response.

Key finding (Frontiers in Psychology, 2026): A paper on "algorithmic anxiety" found that the anticipatory nature of AI-related threat is specifically corrosive because it depletes coping resources chronically, before any event actually occurs. Workers experience a kind of pre-traumatic stress: the wound arrives before the injury.

A separate study found that the more human-like and personalized AI tools become, the more they amplify job replacement anxiety. This creates a perverse dynamic: the more helpful and capable AI becomes, the more threatening it psychologically appears, even to workers who are actively benefiting from it.

The Burnout Paradox: More Capable, More Exhausted

There is a second dimension to today's AI-at-work crisis that receives far less attention than job loss: the workers who are not being replaced are being burned out.

UC Berkeley research published in February 2026 found that AI tools create an implicit pressure to take on more tasks and more variety. Having a capable "partner" at work does not create breathing room: it creates the expectation of expanded output. Workers described cognitive fatigue and a progressive erosion of the boundary between work and rest. The researchers warned this trajectory leads toward burnout, not productivity.

A BCG study published in March 2026 named this "AI brain fry": sustained AI-assisted work makes people more exhausted, not more productive. Teams with high AI-related burnout showed 18 to 20 percent lower productivity than baseline, directly undermining the efficiency gains that AI is supposed to deliver.

The double pressure on workers in 2026

  • Workers threatened by AI replacement

    Primary stress: anticipatory grief and identity erosion. Documented effect: pre-traumatic stress before any actual event.

  • Workers actively using AI tools

    Primary stress: expanded output expectations. Documented effect: cognitive fatigue and 18 to 20% lower productivity under burnout.

What the Economists Actually Say

It is worth stepping back from the panic and examining what the most rigorous economists have actually found.

Daron Acemoglu (MIT, Nobel Laureate in Economics) is one of the most prominent skeptics of AI disruption narratives. His quantitative estimate is that AI will increase total factor productivity by no more than 0.66% over ten years. His diagnosis of the problem is structural: firms are using AI primarily for automation (replacing workers) rather than for machine usefulness (augmenting worker expertise). His conclusion is not optimistic by default but conditional: the outcome depends heavily on policy choices we have not yet made.

David Autor (MIT) takes a more hopeful long view. His argument is that AI could reverse four decades of job polarization and rebuild the middle class, but only if developed as a collaboration tool rather than a pure automation engine. His historical reading: the Industrial Revolution took roughly 60 years to benefit rank-and-file workers. Computers transformed offices, but they took nearly a decade after reaching the market to become commonplace. Every major technology wave eventually created jobs that did not exist before.

The Yale Budget Lab's assessment is sobering in its precision: in the 33 months since ChatGPT's release, the broader labor market has not experienced a discernible aggregate disruption. But employment growth in marketing, graphic design, office administration, and call centers has fallen below trend. The broad numbers look stable. The specific, exposed sectors are already contracting.

History Has Seen This Before. And History Has Limits.

The historical record on technological anxiety is consistent: we always overestimate short-term disruption and underestimate long-run transformation.

  • The Luddites (1811 to 1816)

    Skilled English textile workers who destroyed machinery not because they feared technology, but because industrialization enriched factory owners while impoverishing them. England eventually grew richer and employment rose, but gains took decades to reach ordinary workers.

  • Keynes and Technological Unemployment (1930)

    Keynes coined "technological unemployment" during the Depression and predicted a 15-hour work week by 2030. The anxiety was real. The actual outcome was the longest sustained period of productivity and employment growth in history: the postwar economic boom.

  • The Computerization Panic (1980s to 1990s)

    A Second Luddite Congress issued a manifesto in 1996 against "the increasingly bizarre and frightening technologies of the Computer Age." Computers eventually transformed offices but the occupational mix changed far less than feared in any single decade.

  • What Is Genuinely Different This Time

    AI targets cognitive work, not just physical tasks. ChatGPT reached 100 million users in two months. Steam engines took nearly 100 years from first installation to peak adoption. The pace of diffusion is unlike anything in prior waves, even if the eventual outcome is uncertain.

A Reflection: The Problem We Need to Name

Matt Shumer's essay was not really about the future. It was about the present: the moment when a person who builds AI for a living looked at what he had built and felt genuinely disoriented. That disorientation, multiplied across tens of millions of workers, is a public health problem that is already in progress.

The policy conversation about AI and employment is almost entirely focused on future scenarios: how many jobs will be lost, which sectors are most exposed, what safety nets should be designed. These are serious questions. But they are the wrong priority for right now. The worker who is anxious today, losing sleep today, "job hugging" today out of fear rather than satisfaction, is being harmed today. No future outcome changes that.

Research on anticipatory grief teaches us something important: chronic low-level fear is often more damaging than an acute event. The person who loses their job in a layoff experiences a defined crisis with a defined endpoint. The person who spends three years wondering whether they will be replaced experiences an open-ended erosion. It never resolves. There is no closure. The BCG burnout data suggests this is not hypothetical: the anxiety is already lowering productivity, undermining the very argument that AI tools are making workers more effective.

We are debating whether AI will take jobs in 2028 while ignoring the damage that the fear of AI is doing to workers in 2026. Both problems deserve a response. Only one of them is already here.

Shumer said his goal was to push people to start experimenting, learning, adapting. That is reasonable advice. But it places the burden entirely on the individual worker, which is where most of this conversation tends to land. The harder question is institutional: what do organizations, governments, and educational systems owe workers who are navigating a transition they did not choose and cannot fully control?

History suggests the transition will take longer than the most alarmed predictions, and that new work will eventually emerge. It also suggests the workers who bear the cost of the transition rarely benefit proportionally from the gains that follow. The Luddites were not wrong about what was happening. They were wrong about what could stop it. The question worth asking now is not whether AI will transform work, but how we choose to distribute the cost of that transformation while it is happening, not after.

Sources and Further Reading