Here’s a quick look at the biggest AI news from the past week. We’ve pulled together the headlines shaping technology, business, and policy.
Sanctions mount as courts grapple with growing use of generative AI
Courts are increasingly sanctioning lawyers for submitting AI-generated filings with fabricated citations, even as adoption of AI tools across the legal system continues to accelerate. Judges and attorneys are using AI for research, drafting, and case analysis, but errors and hallucinations are leading to fines and professional consequences. Despite these risks, experts say AI is becoming deeply embedded in legal workflows, raising concerns about overreliance and erosion of critical thinking. For legal ops teams, this underscores the importance of governance, validation processes, and clear guidelines for AI-assisted work product. [NPR]
Critical vulnerability emerges in Anthropic’s Claude Code following source leak
A newly discovered security vulnerability in Anthropic’s Claude Code surfaced just days after portions of its source code were leaked, raising concerns about the risks of exposing sensitive AI systems and intellectual property. Security researchers warned that the flaw could be exploited depending on deployment context, highlighting how quickly vulnerabilities can surface once sensitive code is exposed. For legal ops teams, this reinforces the need to evaluate the security posture of AI vendors and to understand how code exposure or integrations could introduce downstream risk. [Security Week]
OpenAI reportedly pulls back Sora amid concerns over impact and readiness
OpenAI reportedly pulled back or limited aspects of its Sora video-generation tool, with leadership expressing concerns about the technology’s societal impact and readiness for broad release. Discussions around the decision highlighted tensions between rapid innovation and responsible deployment, particularly in media and entertainment contexts. For legal ops teams, this illustrates how quickly AI companies can launch and then sunset products, raising questions about how thoroughly vendors evaluate market impact before release. It also reinforces the need for careful vendor due diligence and contingency planning, including assessing whether reliance on a given AI tool could become a single point of failure if it is scaled back or discontinued. [Variety]
Claude Code leak reveals Anthropic’s broader AI product strategy
Analysis of the leaked Claude Code repository suggests Anthropic is developing more advanced agentic coding tools, with features aimed at automating complex software development workflows. The leak provides insight into how leading AI labs are building systems that go beyond simple copilots toward more autonomous execution. For legal ops teams, this signals a continued shift toward AI agents that can perform multi-step tasks, increasing both productivity potential and the need for oversight of autonomous systems. [Ars Technica]
Disconnect emerges between executive AI ambitions and frontline adoption
There’s a growing gap between executive enthusiasm for AI and the reality of adoption among employees, with many workers lacking training, clarity, or incentives to use AI effectively. While leadership teams push AI-first strategies, frontline teams often struggle with implementation, governance, and trust in outputs. This dynamic suggests organizations may be entering the “trough of disillusionment” in the AI adoption curve, where early hype gives way to real-world friction. For legal ops teams, this underscores the need for structured enablement, training, and clear policies to bridge the gap between AI ambition and practical usage. [IT Pro]
Debate grows over whether OpenAI could justify a $1 trillion valuation
Analysts are increasingly questioning whether OpenAI could realistically support a $1 trillion valuation in a future IPO, weighing massive growth potential against significant uncertainty around monetization, competition, and infrastructure costs. While demand for AI products continues to surge, skeptics point to unclear long-term margins and the capital intensity of building frontier models. For legal ops teams, this reflects the broader volatility and hype surrounding AI vendors, reinforcing the need for careful vendor evaluation and long-term viability assessment. [Yahoo Finance]