On December 17, two US House Homeland Security subcommittees held a joint hearing to address rising cybersecurity threats linked to artificial intelligence (AI) and quantum computing.
The Subcommittee on Oversight, Investigations, and Accountability and the Subcommittee on Cybersecurity and Infrastructure Protection questioned tech companies and cybersecurity experts about strategies Congress could adopt to protect U.S. digital infrastructure from advanced cyber threats.
No legislative proposals were introduced, but lawmakers explored the increasing complexity of AI- and quantum-driven cyberattacks, noting that such threats are only growing.
“The rapid advancement of AI and quantum computing accelerates cyber risks,” said Oversight Ranking Member Shri Thanedar, D-Mich. “These technologies strengthen the cyber capabilities of nations like China and empower less-resourced actors, including organized crime groups. AI-assisted attacks are now faster, more widespread, and harder to detect.”
The hearing coincided with warnings from AI developers, including Anthropic and OpenAI, that their models could be misused to launch sophisticated cyberattacks. Thanedar highlighted that nation-state actors—China, North Korea, and Russia—along with organized crime groups, have spent decades refining cyberattacks for espionage, intellectual property theft, infrastructure disruption, and ransom. He urged Congress to renew the Cybersecurity Information Sharing Act before it expires in January.
Cybersecurity and Infrastructure Protection Chair Rep. Andy Ogles, R-Tenn., stressed the need for bipartisan solutions, suggesting a working group to develop actionable proposals. “If we don’t get this right, it changes everything forever,” Ogles said. “This is about national security, not politics. The future is coming whether we prepare or not.”
The hearing used Anthropic’s report as a foundation to examine how AI could reshape cybersecurity risks and responses.
Anthropic Report Highlights AI-Driven Cybersecurity Threats
During the hearing, Homeland Security subcommittees focused on an Anthropic report revealing that Chinese hackers exploited the AI model Claude’s coding feature to launch autonomous attacks against roughly 30 global organizations. The hackers tricked Claude into thinking it was performing legitimate defensive cybersecurity tasks.
Logan Graham, head of Anthropic’s Frontier Red Team, clarified that the attacks did not compromise Claude’s internal code or Anthropic itself. He noted that agentic AI systems could automate up to 80–90% of human tasks required for a successful cyberattack.
“This dramatically increases the speed and scale of operations compared to traditional methods,” Graham said. Hackers used sophisticated networks to bypass safeguards and deceived the model into performing malicious tasks under the guise of ethical cybersecurity work.
Rep. Morgan Luttrell, R-Texas, questioned the risks as AI models increasingly automate tasks humans once oversaw. “If AI removes the human element needed to detect attacks, what happens when attackers lie in wait for the next opportunity?” he asked.
Graham explained that automated detection measures were triggered, but the hackers’ obfuscation network masked their location and split the attack into smaller parts, partially evading detection. He recommended Congress support measures such as rapid national security testing of AI models, threat intelligence sharing for developers, and equipping cyber defenders with AI capabilities.
“This is the first time we’ve seen these dynamics,” Graham said. “Sophisticated actors are already preparing for the next model and next exploit. Rapid detection and mitigation are critical.”
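To make the evasion Graham described more concrete, here is a small, hypothetical Python sketch (not Anthropic's actual detection pipeline) of why splitting one campaign across many proxy sources can stay under per-source rate thresholds, while aggregating events by a shared behavioral fingerprint still surfaces it. The thresholds, fingerprint label, and IP addresses are invented for illustration.

```python
# Toy illustration: a campaign split across many sources evades per-source
# thresholds, but an aggregate view of a shared fingerprint still flags it.
from collections import defaultdict

PER_SOURCE_THRESHOLD = 100    # alert if any single source exceeds this
FINGERPRINT_THRESHOLD = 100   # alert if a shared pattern exceeds this overall

def detect(events):
    """events: list of (source_ip, fingerprint) pairs, e.g. tool-call signatures."""
    by_source = defaultdict(int)
    by_fingerprint = defaultdict(int)
    for source, fingerprint in events:
        by_source[source] += 1
        by_fingerprint[fingerprint] += 1

    per_source_alerts = [s for s, n in by_source.items() if n > PER_SOURCE_THRESHOLD]
    fingerprint_alerts = [f for f, n in by_fingerprint.items() if n > FINGERPRINT_THRESHOLD]
    return per_source_alerts, fingerprint_alerts

# The same campaign spread over 50 proxy nodes: each node stays under the
# per-source threshold, but the shared fingerprint crosses the aggregate line.
events = [(f"10.0.0.{i}", "recon-scan-v1") for i in range(50) for _ in range(40)]
src_alerts, fp_alerts = detect(events)
print(src_alerts)  # [] -- no single source looks abnormal
print(fp_alerts)   # ['recon-scan-v1'] -- the aggregated pattern does
```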
Automating Cyber Defenses with AI
Royal Hansen, Google’s VP for Privacy, Safety, and Security Engineering, highlighted a recent shift in cyber threats: malicious actors are increasingly using AI not just for productivity, but to deploy “novel AI-enabled malware” in active attacks.
Hansen stressed that cybersecurity professionals must adopt similar automation and experiment with advanced AI tools to keep pace with criminals and nation-state actors. With much of commerce still relying on legacy systems, AI-driven automation is crucial for patching vulnerabilities and strengthening defenses.
“This represents a new operational phase of AI abuse, with tools that adapt mid-execution,” Hansen said. “While still emerging, these threats can be countered, and AI can supercharge our cyber defenses. Large language models can sift through complex telemetry, aid secure coding, discover vulnerabilities, and streamline operations—unlocking new opportunities to enhance collective security.”
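As a rough illustration of the telemetry-sifting idea, the Python sketch below shows one plausible pipeline shape: a cheap deterministic pre-filter followed by a structured prompt handed to an analyst-facing model. The summarize() placeholder, keyword list, and log lines are assumptions for illustration, not Google's tooling.

```python
# Minimal sketch of LLM-assisted telemetry triage. The summarize() call is a
# hypothetical stand-in for whatever model endpoint a security team uses.
import json

def prefilter(log_lines, keywords=("denied", "failed", "escalation", "unsigned")):
    """Cheap, deterministic first pass: keep only lines worth model attention."""
    return [line for line in log_lines if any(k in line.lower() for k in keywords)]

def build_triage_prompt(suspect_lines):
    """Pack the filtered lines into a structured prompt for an analyst model."""
    return (
        "You are assisting a SOC analyst. For each log line, label it "
        "benign / suspicious / likely-malicious and explain in one sentence.\n\n"
        + json.dumps(suspect_lines, indent=2)
    )

def summarize(prompt):
    # Hypothetical placeholder for an LLM call (internal gateway or vendor SDK).
    # Its output would be reviewed by a human analyst, not acted on blindly.
    raise NotImplementedError("wire up to your organization's model endpoint")

logs = [
    "2025-12-17T10:01Z auth ok user=svc-backup",
    "2025-12-17T10:02Z auth FAILED user=admin src=203.0.113.7",
    "2025-12-17T10:02Z privilege escalation attempt blocked pid=4411",
]
prompt = build_triage_prompt(prefilter(logs))
print(prompt)  # hand this to the model; keep a human in the loop on the output
```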
Quantum Computing and the Future of Cybersecurity
Lawmakers focused on how to prioritize securing government data against future quantum-enabled attacks that could break current encryption.
Quantum Xchange CEO Eddy Zervigon urged a proactive “architectural approach,” calling for the U.S. to reinforce networks with post-quantum cryptography. “For more than 50 years, encryption safeguarded our data with a set-it-and-forget-it mindset,” he said. “That era is ending with quantum computing.”
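For a concrete sense of what post-quantum cryptography looks like in code, here is a minimal sketch of a key-encapsulation handshake. It assumes the open-source liboqs-python bindings and a build that exposes the NIST-standardized ML-KEM-768 algorithm (older builds list it as "Kyber768"); it illustrates the primitive Zervigon refers to, not Quantum Xchange's products.

```python
# Hedged sketch of a post-quantum key encapsulation handshake using the
# liboqs-python bindings (pip install liboqs-python). Illustrative only.
import oqs

ALG = "ML-KEM-768"  # assumption: supported by the installed liboqs build

# Receiver generates a keypair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against that public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates; both sides now hold the same symmetric secret.
    # In practice this is often combined with a classical ECDH secret
    # ("hybrid mode") during the migration period.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
print("shared secret established:", shared_secret_sender.hex()[:16], "...")
```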
Michael Coates, founding partner of Seven Hill Ventures, outlined five ways Congress can enhance cyber resilience for AI and quantum threats. Recommendations included secure-by-design standards for hardware and software, automated and streamlined cyber defenses, and transparent AI development.
“Intelligent automation makes attacks continuous rather than episodic, challenging the assumption that organizations can recover between incidents,” Coates said. “AI and quantum computing are accelerating forces that reshape cybersecurity. Our success depends on whether technical, operational, and institutional responses can keep pace.”
Frequently Asked Questions
What did the US House subcommittees discuss about AI and cybersecurity?
They explored how AI and quantum computing could accelerate cyberattacks and strategies to protect U.S. digital infrastructure.
How are hackers using AI models like Claude in cyberattacks?
Hackers trick AI models into performing malicious tasks, automating up to 80–90% of the human-required steps in attacks.
Did these AI attacks compromise the companies’ systems?
No. Companies like Anthropic reported that their internal code and systems were not breached.
How can AI help improve cybersecurity defenses?
AI can automate threat detection, patch vulnerabilities, analyze telemetry, and strengthen secure coding practices.
What risks does quantum computing pose to current encryption?
Quantum computers could break traditional encryption, making sensitive government and corporate data vulnerable if not secured proactively.
What strategies did experts recommend to combat AI and quantum-enabled cyberattacks?
Experts advised post-quantum cryptography, secure-by-design principles, automated defenses, transparent AI development, and threat intelligence sharing.
Why is rapid detection and automation critical in modern cyber defense?
Continuous AI-driven attacks leave little time for human intervention, requiring faster detection and adaptive, automated responses.
Conclusion
The joint House Homeland Security subcommittee hearing underscored the growing urgency of addressing AI- and quantum-driven cybersecurity threats. Lawmakers and experts emphasized that sophisticated attacks are evolving faster than traditional defenses, making proactive measures essential. From securing government data with post-quantum cryptography to leveraging AI for automated threat detection, the path forward requires coordinated, bipartisan action.
