House Lawmakers Exposed to Risks of ‘Jailbroken’ AI in Chilling Demonstration
Washington, D.C. — On Wednesday, researchers from the National Counterterrorism Innovation, Technology and Education Center (NCITE) met with House lawmakers to show how AI models behave once their safety features are removed. In a closed-door session, the researchers demonstrated that jailbroken AI loses its guardrails, allowing bad actors to use the models to plan terrorist attacks, build bombs, or launch cyberattacks.
NCITE, a university-led DHS Center of Excellence funded by the Department of Homeland Security, co-hosted the session with the House Homeland Security Committee. Lawmakers tested several jailbroken AI models, and Representative Gabe Evans (R-Colo.) described his shock. “What we saw in there with the jailbroken AI is what happens when you take those guardrails off of AI, and ask, ‘How do I make a nuclear bomb?’” Evans told POLITICO. He added that these unfiltered models “gave answers to all of those things.”
Censored vs. Abliterated AI Models
NCITE researchers drew a clear distinction. Censored AI models, such as Anthropic’s Claude and OpenAI’s ChatGPT, retain their safety features and refuse to produce harmful content. Abliterated models have those safety checks stripped out. In one demonstration, lawmakers watched two models respond to the same prompt: one refused to draft an attack plan for the upcoming America 250 celebration in Washington, D.C., while the abliterated model offered step-by-step instructions for an attack.
House Homeland Security Chairman Andrew Garbarino (R-N.Y.) shared another disturbing example. “I asked one large language model how to kidnap a member of Congress,” he told reporters. “It spit out an answer in under three seconds — ways to find them, where to look for them, the best spots to do it.”
Concerns Over AI Safety and Weaponization
The session stressed that even the most advanced AI can be misused. Companies work hard to add safety guardrails, yet attackers find ways around them, from masking harmful questions in convoluted language to jailbreaking the models outright. Lawmakers also face serious international challenges. Groups linked to Russia have used AI to spread disinformation, while Beijing-backed hackers reportedly used Anthropic’s Claude model in one of the first largely automated cyberattacks.
Rep. Andy Ogles (R-Tenn.), chair of the House Homeland Security Committee’s cyber subcommittee, warned, “What’s extraordinary about this presentation is how most of [the AI tools] are readily off-the-shelf and easy to access. That just increases the probability that the wrong person gets their hands on this.”
Growing Calls for Regulatory Action
After the demonstration, lawmakers pressed for stronger AI safety rules, noting that the technology is advancing faster than federal regulation can keep pace. Several states are now pushing for tighter safety protocols. The topic hit home in Florida, where Attorney General James Uthmeier expanded an investigation into OpenAI after a deadly shooting at Florida State University. Officials say the suspect discussed his plans using ChatGPT before the attack.
President Donald Trump has offered a legislative proposal. His plan would install AI guardrails focused on protecting underage users and would help prevent a patchwork of conflicting state laws.
Rep. August Pfluger (R-Texas) said, “It’s really scary. What AI is supposed to do is have some guardrails on certain things like, ‘How would you terrorize a school?’ ‘What type of weapons would you use?’ And now those guardrails can be removed.”
Looking Ahead
The NCITE briefing and the Homeland Security Committee’s session add to a growing call for careful AI regulation. As AI models become more powerful and more widely available, lawmakers and experts face the difficult task of balancing innovation, safety, and security.
Correction: An earlier version of this article misstated the relationship between the Department of Homeland Security and NCITE. NCITE is a DHS Center of Excellence and receives funding from DHS.
Filed under: Cybersecurity, Terrorism, Artificial Intelligence