Why AI Companies Want You to Be Afraid of Them
AI companies routinely issue warnings about their own products, telling us their tools are dangerous even as they sell them. Anthropic, one of the leading firms, is the latest example: it claims its AI tool, Claude Mythos, can spot cybersecurity gaps better than human experts. Yet the same companies that sound these alarms continue to market and deploy their AI.
The Fear Narrative: A Double-Edged Sword
Anthropic describes Mythos as both enormously powerful and seriously risky. In an April blog post, the company warned that if bad actors gained access to such technology, economies and public safety could suffer. Other AI leaders have issued similarly dire warnings.
Yet the message is mixed, and arguably deliberate. Why would a firm emphasize the dangers of its own technology? In most industries, companies present their products as safe and helpful. Many AI firms, by contrast, choose to spread fear about what they build.
Critics argue that this fear-mongering is a calculated move. It can obscure more concrete harms, such as environmental damage and social disruption, while casting the companies as the only actors capable of controlling these tools. Shannon Vallor, a professor of ethics, argues that framing AI as nearly supernatural makes the public feel powerless, leaving us to trust only the companies themselves.
Anthropic’s Claude Mythos: A Closer Look
Anthropic’s CEO, Dario Amodei, has warned about risky tools before. In 2019, he and his colleagues at OpenAI declared that GPT-2 was too dangerous for public release. GPT-2 was nonetheless released later that year.
Now, some experts question the claims made for Mythos. Anthropic says the tool has uncovered thousands of serious cybersecurity flaws and that more than 40 organizations are helping to patch them. Outside experts, however, say they have seen little supporting evidence. Heidy Khlaaf, chief AI scientist at the AI Now Institute, points to missing data such as false-positive rates, noting that while AI can scan large volumes of code, portraying Mythos as all-powerful does not add up. Some reports suggest Anthropic may have delayed a public release because of high computing costs; the company has not elaborated.
The Broader Industry Context
Anthropic is not alone in warning about AI. Companies such as OpenAI and Google DeepMind, along with figures like Bill Gates and Elon Musk, have voiced similar fears, signing statements that urge the world to treat AI risk alongside pandemics and nuclear war. Meanwhile, calls for a pause on advanced AI work are issued quickly and just as quickly followed by new projects.
Emily M. Bender, a professor at the University of Washington, sees this as a tactic. “They want you to focus on this one danger. Meanwhile, they hide the harm done to nature and workers.” The pattern, she argues, distracts us from deeper issues.
OpenAI says it wants to keep AI from concentrating power and supports shared decision-making on safety. Yet the tension between warning of future doom and profiting today remains.
The Stakes: Market Dominance Versus Safety
Both OpenAI and Anthropic began with safety as their stated mission. OpenAI was founded as a nonprofit meant to serve humanity; Anthropic was formed by former OpenAI employees who believed safety was not taken seriously enough there. Today, both are pushing toward profit and even public offerings.
This shift raises hard questions about their true aims. As Vallor reminds us, “Look at an organization’s incentives to know its behavior.” When companies drop bans on risky AI applications or quietly revise their policies, the profit motive may be winning out over safety.
AI is also being deployed in critical fields such as healthcare, despite risks like misdiagnosis, and its energy consumption leaves a heavy environmental footprint. By many measures, AI’s costs are already high.
Conclusion: A Call for Balanced Perspective
Companies claim their AI tools may one day end the world. That claim is at once a marketing device and a reflection of genuine worry. While AI risks matter, exaggerating them can obscure present harms and delay much-needed oversight.
Experts urge the public and policymakers to stay alert: demand clear rules, accountability, and shared decision-making. Only then can AI serve us rather than merely frighten us.
As the debate continues, remember this: even when companies warn us of danger, staying informed, watchful, and engaged is the surest way to shape a better future with AI.