AI Weekly News Roundup

Here’s a quick look at the biggest AI and legal-tech news from the past week. We’ve pulled together the headlines shaping technology, business, and legal operations.

  1. The New York Times files new lawsuit against Perplexity over alleged copyright infringement 
    The Times accused Perplexity of scraping and redistributing its articles without permission, including alleged verbatim or near-verbatim copies and fabricated misattributions. This latest legal challenge reflects escalating tensions between legacy publishers and genAI platforms. It underscores increasing liability risk and reinforces the need for rigorous vendor due diligence, licensing, and intellectual property governance before relying on AI-sourced material. [Reuters]
  1. Companies ramp up AI risk warnings in SEC filings 
    More than 400 U.S. companies have added AI-related risk factors to their SEC filings this year. Firms warn that AI could damage reputations through biased or incorrect outputs, security lapses, or IP violations, with sectors from financial services to consumer products highlighting potential harm. Despite these concerns, companies continue accelerating AI adoption to stay competitive, reflecting a balancing act between disclosing risk and pursuing productivity gains. [Business Insider]
  1. New York introduces new AI court system policy 
    New York’s Unified Court System issued an AI policy that restricts genAI use, mandates training, and reinforces human oversight and confidentiality standards. For legal and legal ops teams, these developments highlight tightening state-level AI governance with direct implications for compliance and workflow practices. [Nexstar Media Inc]
  1. Lawsuit highlights growing risk of AI-driven defamation 
    A Minnesota solar company, Wolf River Electric, sued Google after its AI-powered search results falsely stated the firm had settled a lawsuit with the state attorney general. No such settlement ever occurred, but the false claims reportedly led to canceled contracts and significant lost revenue. The case raises questions about who is liable when AI systems generate and circulate false statements. Legal experts note that if courts find companies responsible for AI-generated content, it could unleash a wave of litigation, making this a closely watched issue for navigating AI risk. [The New York Times]