Here’s a quick look at the biggest AI news from the past week: the headlines shaping technology, business, and policy.
Pentagon labels Anthropic a supply chain risk
Anthropic is facing an escalating conflict with the U.S. government after refusing Pentagon requests to remove safeguards that limit the use of its AI models for mass domestic surveillance and fully autonomous weapons. The dispute intensified after the Pentagon designated Anthropic a “supply chain risk,” a move that could require some defense contractors to stop using its Claude model. For legal ops teams, the episode shows how a vendor’s AI policies can quickly become procurement and contracting issues, demanding closer scrutiny of model usage restrictions and vendor governance. [NPR]
U.S. considers permits for AI chip sales
U.S. officials are reportedly drafting regulations that would require government approval for nearly all exports of advanced AI chips. The proposal would position Washington as a gatekeeper for the global distribution of high-performance chips from companies like Nvidia and AMD, potentially giving the U.S. broad influence over where AI infrastructure can be built. For legal ops teams, the shift highlights emerging supply-chain and vendor risk considerations as hardware restrictions reshape the AI ecosystem. [Seeking Alpha]
OpenClaw vulnerabilities highlight risks of AI agents
Security researchers have identified critical vulnerabilities in OpenClaw, an open-source AI agent platform designed to automate tasks across systems. The flaws could allow attackers to hijack the agent through malicious websites and exploit trusted local connections to execute commands or access sensitive data. For legal ops teams, this reinforces the need for strict governance around AI agents, including access controls, audit logging, and security review before deploying automation tools. [CNET]
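The governance controls above can be made concrete in only a few lines of code. The sketch below is illustrative only, not OpenClaw’s actual API: names like run_tool, guarded_call, and ALLOWED_TOOLS are hypothetical. It shows the general pattern of gating every agent tool call behind an explicit allowlist and writing each decision to an audit log.

```python
# Minimal sketch of an allowlist-and-audit wrapper for AI agent tool calls.
# All names here (run_tool, guarded_call, ALLOWED_TOOLS) are hypothetical
# illustrations of the pattern, not OpenClaw APIs.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

# Explicit allowlist: the agent may only invoke tools named here.
ALLOWED_TOOLS = {"search_docs", "summarize_file"}

def run_tool(name: str, args: dict) -> str:
    """Placeholder for whatever actually executes a tool call."""
    return f"ran {name}"

def guarded_call(name: str, args: dict, user: str) -> str:
    """Reject non-allowlisted tools and log every decision for audit."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": name,
        "args": args,
    }
    if name not in ALLOWED_TOOLS:
        record["decision"] = "denied"
        logging.info(json.dumps(record))
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    record["decision"] = "allowed"
    logging.info(json.dumps(record))
    return run_tool(name, args)

if __name__ == "__main__":
    print(guarded_call("search_docs", {"query": "NDA template"}, user="jdoe"))
```

The design choice worth noting is deny-by-default: anything not explicitly approved is refused and logged, which is what security review of an agent deployment would typically look for.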
“Brain fry” highlights emerging AI burnout concerns
As AI tools become embedded in daily workflows, some workers report increased cognitive load from reviewing AI outputs, validating results, and managing automated processes. Instead of eliminating work, AI shifts employees into oversight roles that require constant monitoring. Researchers and workplace surveys increasingly describe this phenomenon as “AI brain fry,” where productivity gains are offset by mental fatigue. For legal ops teams, this reinforces the importance of thoughtful implementation, training, and workflow design to ensure AI reduces friction rather than creating additional burdens. [Harvard Business Review]
AI layoffs accelerate across tech
Block announced plans to cut roughly half its workforce, with leadership pointing to AI-driven efficiency gains as a key reason the company can operate with fewer employees. Some analysts, however, say the layoffs reflect organizational cleanup after rapid pandemic hiring rather than AI replacement. Reports also surfaced that Oracle is considering job cuts as it reallocates billions toward AI infrastructure and data center expansion. These announcements reflect a broader shift in which companies restructure around AI and the massive capital required to support it. For legal ops teams, this underscores the need to evaluate vendor stability and potential workforce impacts. [The Guardian] [Wall Street Pit]
“Quit ChatGPT” campaign spreads online
A growing online movement is urging users to cancel ChatGPT subscriptions following reports of OpenAI’s involvement in military-related AI contracts. The backlash has spread across social media platforms, where activists argue AI systems should not be used in defense or surveillance contexts. While it remains unclear how much the campaign will affect adoption, it highlights how AI companies are becoming targets of broader ethical and political scrutiny. For legal ops teams, movements like this show how public controversy around AI can shape employee trust in new tools and their willingness to adopt them. [MIT Technology Review]