Category: Artificial Intelligence

AI Bias in Legal Work: Why the Source and Training of Your LLM Matters 


Artificial intelligence, and especially large language models (LLMs), is being marketed as the next big innovation in legal work. From drafting contracts to summarizing case law, the potential is real. But there’s a problem lurking beneath the surface: AI bias in legal work. 

If you don’t know how an LLM was programmed, what data it was trained on, or how it processes information, you could be opening the door to serious risks, including ethical pitfalls and even legal liability. 

Where Does AI Bias Come From? 

AI systems learn from data. If that data is incomplete, skewed, or inaccurate, the outputs will reflect those flaws. Examples of AI bias in legal work: 

| Bias | Description | Example |
| --- | --- | --- |
| Algorithmic or model bias | The way the AI is trained or optimized to predict can shape how it interprets legal language. | Heavy reliance on SEC filings and public disclosures to train LLMs could bias outputs toward public company compliance standards, overlooking the standards of private companies. |
| Anchoring bias | Early outputs shape user expectations, even if flawed. | If the model’s first summary is wrong, reviewers may unconsciously look for evidence to support it. |
| Automation bias | Users over-trust AI outputs. | Relying on AI-suggested citations without verification, assuming the system must be correct. |
| Confirmation bias | The tendency of models to reinforce patterns seen in data rather than challenge or nuance them. | An LLM might surface only majority opinions in case law while ignoring dissenting views. |
| Cultural or representation bias | Some groups, topics, or perspectives are underrepresented or overrepresented in the training data. | More case law available from large U.S. courts than from smaller jurisdictions may skew outputs. |
| Framing bias | The way a prompt is worded influences the model’s response. | Asking “Why is this contract unenforceable?” may lead the AI model to assume unenforceability and search only for problems. |

Think of it this way: garbage in, garbage out. If you can’t verify what went in, how can you trust what comes out? 
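One practical guard against the framing and anchoring biases described above is to pose the same question in a neutral and a loaded form and compare the answers before relying on either. Below is a minimal Python sketch of that check; ask_llm is a hypothetical placeholder for whichever approved LLM tool or API your team actually uses.

```python
# Minimal framing-bias check: send a neutral and a loaded version of the same
# question and compare the answers side by side before relying on either one.
# `ask_llm` is a hypothetical stand-in for your LLM vendor's API call.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your approved LLM tool here and return its text response."""
    raise NotImplementedError("Wire this up to your team's legal AI tool.")

def framing_check(contract_text: str) -> dict:
    neutral = f"Assess the enforceability of this contract, listing strengths and weaknesses:\n{contract_text}"
    loaded = f"Why is this contract unenforceable?\n{contract_text}"
    return {
        "neutral_prompt": ask_llm(neutral),
        "loaded_prompt": ask_llm(loaded),
    }

# Usage: review both outputs; if the loaded prompt produces a one-sided answer,
# treat that as a signal to verify the conclusions independently.
```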

Why AI Bias in Legal Work Matters

Bias isn’t just a technical issue; in law, it can become a compliance and liability problem. 

  • Client Risk: Imagine relying on AI to summarize employment law and it systematically overlooks protections for certain groups. It’s potentially discriminatory advice. 
  • Court Risk: Submitting AI-generated briefs without verifying citations (as seen in recent sanctions cases) can erode credibility or even lead to professional discipline. 
  • Compliance Risk: Regulations like GDPR or EEOC guidelines could be violated if AI introduces biased assumptions into decisions about people or contracts. 

For legal teams, bias can translate into lawsuits, reputational damage, or ethical violations. 

Questions Legal Professionals Should Ask About LLMs 

Before adopting AI tools, here are some key questions: 

  1. What data was the model trained on? Public data, proprietary legal documents, or something else? 
  1. How is bias monitored and mitigated? Does the vendor or internal team test outputs for fairness or accuracy? 
  1. Is there transparency in sources? Can you see where the AI is pulling its information from? 
  1. How often is the model updated? Outdated data can perpetuate old, irrelevant, or incorrect assumptions. 
  1. What’s my role as a human? Am I verifying, editing, and fact-checking AI outputs before use? 

The Legal Professional’s Responsibility 

Even if AI can pass a bar exam, it’s not a lawyer. You can delegate drafting, summarizing, or analyzing. But at the end of the day, you cannot delegate judgment, ethics, or accountability. 

Every legal professional using AI should approach it with caution: 

  • Treat outputs as first drafts, not final work. 
  • Trust, but verify sources and check citations. 
  • Document how AI tools are used in your workflow. 
  • Stay informed about regulatory guidance on AI use by remaining active in the Legal Ops Community.  

The Gist of It 

AI bias is already shaping outcomes in industries from legal to finance to HR. If you don’t understand the source of your LLM’s data, or if it wasn’t programmed with bias safeguards in mind, you may create risks instead of mitigating them. 

The best defense? AI literacy, skepticism, and oversight. AI can make your work faster and more efficient, but only if you keep your professional judgment front and center.  

Downloadable resource

AI Weekly News Roundup

AI and Legal Tech News

Here’s a quick look at the biggest AI and legal-tech news from the past week. We’ve pulled together the headlines shaping technology, business, and legal operations. 

  1. Paper highlights “verification-value paradox” in legal AI use 
    A new study argues that while generative AI promises efficiency gains in legal practice, those benefits may be offset by the need to manually verify outputs, especially given recent disciplinary cases over AI-generated filings. The authors call this the “verification-value paradox,” urging legal teams to prioritize governance and accuracy over speed. [The University of Auckland Faculty of Law]
  1. OpenAI debunks “ban” on health and legal advice 
    Recent headlines claimed that ChatGPT banned all legal and medical advice. OpenAI clarified that no new policy was introduced. ChatGPT has never been a substitute for professional advice but continues to provide general information and educational guidance. When tested, the chatbot declined to offer diagnoses or tailored legal advice but gave general next steps and referenced professional resources. [Indy100]
  1. Even AI can experience “brain rot” 
    Large language models exposed to “junk” short-form content—similar to viral posts on TikTok or Reels—show measurable declines in reasoning, attention, and safety. “Brain rot,” once used to describe the effects of doomscrolling on humans, may also impair AI trained on low-quality data. Researchers warn that both humans and AI need safeguards to counter the cognitive toll of short-form, shallow content. [Yahoo News]
  1. AI darlings prop up Wall Street amid bubble concerns 
    Nvidia, Amazon, and other AI leaders lifted Wall Street this week even as most other stocks fell. Amazon surged 4% after announcing a $38 billion deal with OpenAI to host its AI workloads, while Microsoft signed a $9.7 billion contract with IREN for access to Nvidia chips. Despite the gains, analysts warn that AI valuations may be inflating into a new tech bubble reminiscent of the dot-com era. [Associated Press]

Prompt Like a Pro: AI Prompting Tips for Legal Teams


When it comes to working with AI prompts for legal teams, one simple truth stands out: the quality of your prompt determines the quality of the AI’s output. Think of prompting like giving instructions to an intern—the clearer and more detailed you are, the better the work product you’ll get back. 

Whether you’re asking AI to summarize a contract, draft a policy, or explain e-billing, learning how to structure prompts can make all the difference. 

Start Simple 

Don’t dive in expecting perfect results right away. Begin with basic, straightforward prompts: 

  • “Summarize this contract in plain English.” 
  • “List the key obligations for the vendor in this agreement.” 

These simple requests let you test the system’s capabilities and limitations while you sharpen your prompting skills. From there, you can build more complex requests. 

The 3Ps of Prompting 

A handy framework for crafting better prompts is the 3Ps: Prompt, Priming, Persona. 

  • Prompt: The core instruction. Be clear, specific, and provide examples when possible. 
  • Priming: Set the stage with context. Just as you’d brief a colleague, give the AI background information, client needs, or definitions upfront. 
  • Persona: Ask the AI to “think” like a certain type of lawyer. For example: “Respond as a patent attorney reviewing this invention for potential risks.” 

This structure keeps your instructions sharp and your outputs more relevant. 
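To make the 3Ps concrete, here is a small sketch that assembles persona, priming, and the core prompt into the kind of message list most chat-style LLM APIs accept. The structure is illustrative, not any particular vendor’s API.

```python
# Illustrative sketch: assembling the 3Ps (Prompt, Priming, Persona) into a
# chat-style message list. The exact format your AI tool expects may differ.

def build_3p_messages(persona: str, priming: str, prompt: str) -> list[dict]:
    return [
        # Persona: how the model should "think" (e.g., a patent attorney)
        {"role": "system", "content": persona},
        # Priming: background, client needs, definitions
        {"role": "user", "content": f"Background:\n{priming}"},
        # Prompt: the core, specific instruction
        {"role": "user", "content": prompt},
    ]

messages = build_3p_messages(
    persona="Respond as a patent attorney reviewing this invention for potential risks.",
    priming="Client is a medical-device startup; the invention is a wearable glucose sensor.",
    prompt="List the top five patentability and freedom-to-operate risks in plain English.",
)
```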

Anatomy of a Strong AI Prompt for Legal Teams 

The best prompts share a few key qualities: 

✔ Clarity – No ambiguity in your wording  
  • Basic: Summarize this 
  • Strong: Summarize this quarterly Legal Ops KPI deck in plain English, focusing on contract cycle time, spend vs. budget, and the top 3 risks. End with 2 actions for the next quarter. 
✔ Specifics – Define exactly what you want  
  • Basic: Tell me about e-billing 
  • Strong: In 5 bullet points, explain common outside counsel billing rules (e.g., block billing, travel caps, staffing limits) and why each rule reduces cost variance.  
✔ Context – Supply enough background for nuanced answers  
  • Basic: Build a checklist 
  • Strong: Build a vendor onboarding compliance checklist covering sanctions screening, Data Privacy Impact Analysis (DPIA) triggers, SOC 2/ISO evidence, data transfer mechanisms, and a record-of-processing entry; add a RACI with Legal/IT/Security. 
✔ Examples – Show what a good output looks like  
  • Basic: Draft a how-to for Legal Holds 
  • Strong: Draft a how-to for legal holds in this style: “Trigger: Litigation Notice. Do now (5 min): Notify custodians, pause deletions. Do next (24 hours): confirm systems on hold, log acknowledgements.” Use 3 bolded headers and short imperative lines.   
✔ Constraints – Set limits (length, jurisdiction, format, etc.) 
  • Basic: Compare outside counsel proposals. 
  • Strong: Score each of these outside counsel proposals using the following rubric: expertise 30%, staffing plan 25%, pricing/AFA 25%, timeline 10%, innovation/DEI 10%. Your output should be a table with scores and a 5-sentence recommendation capped at 120 words. 

Prompt like a Pro Using AI

View on-demand webinar on the topic of AI prompting

The Gist of It 

Prompting is a skill you build with practice. Over time, you’ll learn how to balance detail, context, and constraints to get outputs that truly elevate your legal work. 

Want to go deeper on AI prompts for legal teams? Download the Onit eBook: The Legal Professional’s Handbook: Generative AI Fundamentals, Prompting, and Applications

Prompting Exercise

Download to share with your colleagues.

AI Weekly News Roundup

AI and Legal Tech News

Here’s a quick look at the biggest AI & legal-tech news from the past week. We’ve pulled together the headlines shaping technology, business, and legal operations. 

  1. Research paper warns that AI agents could exploit legal procedures 
    A new academic paper titled “LegalSim: Multi‑Agent Simulation of Legal Systems for Discovering Procedural Exploits” models how AI agents might exploit procedural rules (e.g., discovery tactics, calendar pressure) within legal systems. The findings signal emerging risks around litigation strategy, model behavior, and oversight. For legal ops, the takeaway: AI-agent risks aren’t only technical; they may become procedural and substantive. [Cornell University]
  1. Legal AI funding boom pushes 2025 investment past record levels 
    Investment into legal-AI startups has already exceeded $2.4 billion in 2025, setting a record and signaling that automation in law is no longer a side project but a strategic pivot. As money flows in, legal operations must assess not only capabilities but also data governance, risk, and vendor sustainability. [Payment Week]
  1. Italy becomes first EU nation to pass comprehensive AI law 
    Italy has enacted the EU’s first national AI law, introducing prison terms for harmful uses such as deepfakes and fraud, and requiring parental consent for minors under 14 to access AI tools. The law aligns with the EU AI Act and sets strict rules on transparency, copyright, and oversight. Legal and legal ops teams should watch how this framework shapes cross-border compliance across Europe. [The Guardian]
  1. What the wave of AI-driven layoffs could mean for legal teams 
    As AI-driven layoffs sweep companies like Amazon, Meta, and Microsoft, the legal profession isn’t entirely exempt. Still, AI can’t replace lawyers’ judgment or ethical reasoning—recent hallucinations and flawed filings make that clear. Instead, these layoffs underscore a turning point: lawyers and legal ops teams must embrace AI, refine their skills, and focus on higher-value, human-led legal strategy. [Above the Law]
  1. Big Tech doubles down on AI investments 
    Meta, Microsoft, Google, Amazon, and Apple all reported strong quarterly results and signaled plans to further expand their massive AI spending. Despite brief market jitters over Meta’s heavy capital outlays, executives emphasized that demand for AI infrastructure continues to exceed expectations. The acceleration among these tech giants points to continuing investment across the broader AI ecosystem, from chipmakers to data centers and power infrastructure. [Investor’s Business Daily]

AI Weekly News Roundup 

AI and Legal Tech News
Here’s a quick look at the biggest AI news from the past week. We’ve pulled together the headlines shaping technology, business, and policy. 
  1. Survey warns of looming legal disputes for tech 
    Gartner, Inc. projects that by 2028, AI-regulatory violations will produce a 30% increase in legal disputes for tech companies. The report also found that more than 70% of IT leaders consider regulatory compliance one of their top three challenges in deploying generative AI. This highlights the urgent need to operationalize AI governance and audit readiness. [Gartner]  
  1. California signs targeted AI laws
    Governor Gavin Newsom signed a batch of laws aimed at AI and social media: requiring labels on AI-generated content and regulating chatbots for minors. At the same time, he vetoed broader bills seen as too sweeping. The developments deepen state-level regulation while reflecting a careful balance between innovation and safety. Legal departments will now face a patchwork of state rules that demand proactive policy tracking and compliance strategies. [San Francisco Chronicle]
  1. Boardroom lens on generative AI 
    A new KPMG survey found that companies continue to face major obstacles in deploying GenAI, including shortages in talent and challenges with data quality, security, and compliance. As boardrooms weigh AI’s risks and rewards, legal ops leaders will play a growing role in aligning governance frameworks and risk assessments with enterprise AI strategy. [KPMG]
  1. Reddit files suit against Perplexity over data rights 
    Reddit has sued Perplexity and several other data-mining firms, accusing them of scraping and misusing its content despite the company spending millions on anti-scraping and data-protection systems. The complaint alleges Perplexity devised schemes to evade those safeguards and access proprietary data without authorization. The case spotlights growing tensions over AI training data. Legal and legal ops teams should reinforce data-protection measures and monitor the outcome for precedents shaping future data-use standards. [Business Insider]
  1. EU AI Act and patent strategies 
    The EU AI Act’s conformity assessment is already influencing how patent portfolios are managed. US-based companies with EU exposure are urged to adjust claim drafting and global IP strategy to align with AI regulatory thresholds. [Bloomberg]

Part 3: Transform Spend Data into Intelligence

See how legal teams transform legal spend data into intelligence with AI
Executive Summary:
Transforming legal spend data into intelligence requires two fundamentals: unifying the data onto a single platform and using conversational AI to “talk” to spend data. Together, these steps create a more strategic approach to financial management, leading to better cost control and risk mitigation.

This is Part 3 of our 5-part series “Stop the Leak: A Guide to Mastering Legal Spend.”

In the first two parts of this series, we explored the hidden costs of manual reviews and outlined how to build effective, operationalized Outside Counsel Billing Guidelines. Those are foundational steps to stop financial leakage, but simply having clean invoices and clear rules is only half the battle. For most legal departments, spend data remains “trapped”: it may be used for historical accounting, but not for strategic foresight.

The true transformation happens when legal teams turn streams of historical data into a forward-looking intelligence engine. This article explores how to make that leap, demonstrating how AI plays a major role in extracting insights from legal spend data and shifting legal financial planning from reactive to proactive.

Unifying the Data

It’s difficult, if not impossible, for an organization to transform spend management when cost, matter status, and work-in-progress data live in disconnected systems across emails, spreadsheets, and legacy platforms. The first step is therefore to centralize the data onto a single, unified platform. Once the data is unified, management can begin the shift from backward-looking reporting to forward-looking intelligence. What’s the difference?

  • Backward-looking: “What did we spend last quarter?”
  • Forward-looking: “What are we likely to spend next quarter?” or “How can we optimize our firm selection for better value?”

This is where AI offers a real breakthrough. By looking across complete datasets, AI can spot patterns, identify anomalies, and surface insights invisible to the human eye, enabling a new level of strategic decision-making.

“Talking” to Spend Data

One of the most powerful examples of AI in action is chat. Yes, chat. The simple experience of using an AI chatbot to converse directly with spend data transforms legal finances into a search-powered conversation. Leaders no longer need to ask an analyst to spend a week building a complex report. They can simply ask the chatbot a question in plain English and get an instant, data-backed answer. Imagine:

  • The CFO: “Show me our total spend with our top five law firms this quarter and flag any matters that are trending more than 10% over their approved budget.”
  • The General Counsel (GC): “What is our total cost for employment litigation matters in Texas versus California?”
  • The Legal Ops Leader: “Which of our practice areas has the highest rate of billing guideline violations this year?”

With this capability, leaders move beyond static reporting and toward a strategic, real-time financial dialogue.
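Under the hood, a spend chatbot typically translates a plain-English question like the CFO’s into a query against the unified dataset. The sketch below is a simplified illustration of that idea using a hypothetical matters/invoices schema and SQLite; a real platform handles both the translation and the schema for you.

```python
# Simplified illustration: the kind of query a spend chatbot might run behind the
# question "flag matters trending more than 10% over their approved budget."
# The schema and figures are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE matters (matter_id TEXT, firm TEXT, approved_budget REAL);
CREATE TABLE invoices (matter_id TEXT, amount REAL);
INSERT INTO matters VALUES ('M-1', 'Firm A', 100000), ('M-2', 'Firm B', 50000);
INSERT INTO invoices VALUES ('M-1', 85000), ('M-1', 30000), ('M-2', 20000);
""")

over_budget = conn.execute("""
    SELECT m.matter_id, m.firm, m.approved_budget, SUM(i.amount) AS spend_to_date
    FROM matters m JOIN invoices i ON i.matter_id = m.matter_id
    GROUP BY m.matter_id
    HAVING SUM(i.amount) > 1.10 * m.approved_budget
""").fetchall()

for matter_id, firm, budget, spend in over_budget:
    print(f"{matter_id} ({firm}): ${spend:,.0f} spent vs ${budget:,.0f} budget")
```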

Core Pillars for Improved Financial Strategy

Once you can talk to your data, you can build a more proactive and strategic approach to financial management. This new capability supports three core strategic pillars.

Pillar 1. Control Costs

Historically, legal budgeting has been relative guesswork based on last year’s numbers. An AI-native system can analyze historical data to model future spend with a much higher degree of accuracy, transforming the annual budget conversation with the CFO from a defensive negotiation into a strategic planning session. A recent study published by Harvard Business Review demonstrated that AI consistently outperformed professionals in budget optimization thanks to its ability to learn from past data and metrics. A minimal sketch of the idea follows the example below.

Example:
  • The Ask: “Show me our average spend on Phase 1 discovery for all patent litigation matters in the last 24 months, broken down by our top three IP firms.”
  • The Outcome: The Legal Ops Manager uses this data to build a highly accurate, defensible budget for upcoming litigation, creating traceability to any forecast.
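As a rough illustration of grounding a forecast in past data, the sketch below fits a simple linear trend to hypothetical quarterly spend totals (Python 3.10+ for statistics.linear_regression). An AI-native platform models far richer signals, but the principle is the same.

```python
# Minimal sketch: project next quarter's spend from a simple linear trend over
# hypothetical historical quarterly totals. Real systems model far richer signals
# (matter mix, rates, phases, seasonality), but the grounding idea is the same.
from statistics import linear_regression

quarterly_spend = [1.8e6, 2.0e6, 2.1e6, 2.4e6]   # last four quarters (hypothetical)
quarters = list(range(len(quarterly_spend)))      # 0, 1, 2, 3

slope, intercept = linear_regression(quarters, quarterly_spend)
next_quarter = slope * len(quarterly_spend) + intercept
print(f"Trend-based estimate for next quarter: ${next_quarter:,.0f}")
```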

Pillar 2. Optimize Value

Controlling spend isn’t just about cutting costs; it’s about maximizing value. AI enables smarter fee negotiations and more effective vendor management.

Smarter Fee Arrangements:

Alternative Fee Arrangements (AFAs) shift the focus from elapsed time to value by pricing work by scope or outcome rather than purely by hours billed. According to the American Bar Association, some common forms of AFAs include:

  • Flat fees. A predetermined fee for specific services or projects.
  • Contingency fees. Fees based on the outcome of a case, where attorneys receive a percentage of the settlement or award.
  • Retainers. An up-front fee that secures a lawyer’s services for a defined period or specific tasks.
  • Success fees. Additional fees earned upon achieving specific outcomes or milestones.
  • Risk-sharing arrangements. Fees are adjusted based on the results achieved, creating a partnership-like relationship between the law firm and the client.

The aim is to improve cost certainty and to make services more affordable, accessible, and transparent in pricing. However, organizations cannot negotiate an effective AFA without credible data; the example and sketch below show how that data can anchor the negotiation.

Example:
  • The Ask: “Show me the average cost and cost range for Phase 1 discovery in all employment litigation matters we’ve had over the last 3 years.”
  • The Outcome: The legal team moves from hourly billing to a fixed-fee negotiation, supported by internal data. “Our internal data shows that a Phase 1 discovery for this type of litigation typically costs between $X and $Y. Let’s work together to build a fixed fee based on that data.” This transforms the negotiation into a data-driven discussion about value and predictability.
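Here is a hedged sketch of how that internal data point might be produced: pull historical Phase 1 discovery totals for comparable matters and compute the range and median to anchor the fixed-fee discussion. The figures below are hypothetical.

```python
# Sketch: derive a fixed-fee anchor from hypothetical historical Phase 1 discovery
# costs on comparable employment litigation matters.
from statistics import median

phase1_costs = [182_000, 145_000, 210_000, 167_000, 198_000]  # hypothetical matter totals

low, high = min(phase1_costs), max(phase1_costs)
mid = median(phase1_costs)
print(f"Phase 1 discovery historically ran ${low:,}-${high:,} (median ${mid:,.0f}); "
      "use this range to frame the fixed-fee negotiation.")
```
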
Managing the Vendor Panel:

AI offers the potential to easily compare vendors on performance, efficiency, and compliance, helping consolidate spend with top performers and address underperformance.

Example:
  • The Ask: “Compare the average blended hourly rate and invoice compliance score for Firm A versus Firm B on all commercial contract matters this year.”
  • The Outcome: The organization uses this to negotiate better rates with the more expensive firm and shift work to the vendor that provides the best value.

Pillar 3. Mitigate Risk

The choice to settle early or commit to a lengthy legal battle is often based on an outside counsel’s initial budget and the GC’s own experience. The process is more art than science, with little financial discipline. This creates a significant challenge for the CFO, who must try to forecast the potential impact of a liability based on incomplete information.

AI provides a disciplined alternative by analyzing past matters to model cost, duration, and outcomes.

Example:
  • The Ask: “For all employment discrimination cases in California over the past three years that we settled before trial, what was the average total legal spend?”
  • The Outcome: When a new, similar case arises, the GC can understand the likely cost of litigation. In this case, the data might offer a baseline for deciding whether to pursue an early settlement or commit to a lengthy legal battle.

The Gist of It

Simply having clean invoice data is not enough; its value is lost if it’s only used for looking at the past. The real value comes from centralizing data on a unified platform and using AI to transform it into forward-looking intelligence. By “chatting” with spend data, leaders gain instant insights for proactive budgeting, data-driven AFAs, and risk-smart decision-making.


Coming up in Part 4: We’ll explore the post-automation playbook, outlining five high-value strategic functions to which you can redeploy your essential knowledge workers. 

AI Weekly News Roundup 

AI and Legal Tech News
Here’s a quick look at the biggest AI news from the past week. We’ve pulled together the headlines shaping technology, business, and policy.  

1. Legal AI startup Eve hits $1B valuation with major funding 
Eve, which builds AI tools for plaintiffs’ law firms (case evaluation, drafting, discovery), raised $103M and now commands a $1B valuation. The milestone signals growing investor confidence in AI’s role in litigation practice. [Reuters]

2. Anthropic cofounder argues for cautious optimism in “Technological Optimism and Appropriate Fear” 
Anthropic cofounder Jack Clark shared an essay urging a middle ground between AI hype and fear. He describes frontier AI as powerful but unpredictable. His framing reinforces the need for governance that balances innovation with risk, compliance, and liability safeguards. [Import AI]

3. Taiwan Semiconductor rides the AI wave with stellar earnings 
Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest chip producer, announced record profits alongside an improved revenue forecast, reflecting multiyear demand from AI infrastructure growth that will continue shaping global tech and compliance landscapes. [Barron’s]

4. Zelda Williams demands respect over deepfakes of her father 
Robin Williams’ daughter, Zelda, publicly urged people to stop generating AI videos of her late father, calling them disrespectful and deeply distressing. This underscores the urgency around rights of publicity, posthumous likeness laws, and enforcement mechanisms against misuse of identity in AI content. [Guardian]

5. Illinois advances guardrails for AI in mental health 
A new Illinois law aims to limit how AI is used in mental health settings, requiring oversight and protections to prevent harm or misdiagnosis. More than 250 bills targeting AI in healthcare have been proposed in state legislatures across the country. Legal teams should pay close attention to regulatory requirements and liability risks when deploying AI in sensitive domains. [Association of Health Care Journalists]

6. OpenAI’s Sora app crosses 1M downloads in under 5 days 
The media-generation tool Sora hit 1 million downloads in just days. The rapid adoption shows that consumers are experimenting with AI-generated video at scale. This puts pressure on legal and compliance teams to clarify use rights, consent, and liability before synthetic media becomes ubiquitous. [BBC]

California Frontier AI Act: Key Takeaways for Legal Teams 

California Frontier AI Act requires disclosures and transparency to regulate frontier AI.

Summary of SB 53 

California recently enacted Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). It targets large “frontier” AI developers and requires: 

  • Public safety disclosures: Big AI developers must publish a frontier AI framework that explains how they assess catastrophic risks, apply standards, test mitigations, secure model weights, and govern internal processes. Redactions are allowed to protect trade secrets and security. 
  • Transparency reports before deployment: When releasing a new or substantially modified frontier model, developers must publish a report that includes intended use, restrictions, and summaries of catastrophic risk assessments and third-party evaluations. 
  • Incident reporting to the state: Developers must report critical safety incidents to California’s Office of Emergency Services within 15 days of discovery, or within 24 hours if there is an imminent risk of death or serious injury (timelines illustrated in the sketch after this summary). 
  • Whistleblower protections: Covered employees who raise concerns about catastrophic risks or violations receive legal protections. Large developers must also provide an internal anonymous reporting channel. 
  • Possible civil penalties: Up to $1 million per violation, enforced by the Attorney General. 
  • “Frontier” scope: The law applies to large developers training models above a high compute threshold and with revenue above a specified level. Smaller developers are not the primary target. 
  • CalCompute concept: A state-led framework for a public cloud cluster to support safe, equitable AI research, subject to future appropriation. 

This approach differs from the EU AI Act, which generally emphasizes private disclosures to regulators. California’s law requires public safety frameworks and transparency reports, with sensitive content redacted where necessary. 
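To make the incident-reporting timelines concrete, here is a small sketch that computes when a report would be due under the 15-day and 24-hour rules. It illustrates the arithmetic only and is not compliance advice; consult counsel on how the statute applies to a given incident.

```python
# Illustrative only: compute SB 53 reporting deadlines from the discovery time.
# 15 days from discovery in the ordinary case; 24 hours if there is an imminent
# risk of death or serious injury. Not legal advice.
from datetime import datetime, timedelta

def sb53_report_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

discovered = datetime(2025, 11, 3, 9, 30)
print("Standard deadline:      ", sb53_report_deadline(discovered, imminent_risk=False))
print("Imminent-risk deadline: ", sb53_report_deadline(discovered, imminent_risk=True))
```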

What this means for companies using AI 

Even if a company does not build frontier models, this law affects how AI is procured, integrated, and governed. 

  1. Vendors will be more transparent. Expect system cards, safety frameworks, and incident reporting practices to become standard. Procurement teams should request and evaluate these artifacts. 
  1. Incident handling will mature. Vendors will need defined incident taxonomies, timelines, and contacts. Downstream users should align internal playbooks to vendor timelines to manage operational risk. 
  1. Security and access controls matter. The focus on protecting model weights signals higher expectations for cybersecurity. Companies that host or fine-tune models should review controls, access logging, and key management. 
  1. Public disclosures have ripple effects. Public transparency reports may reveal capabilities, limits, and risk mitigations. Risk management, communications, and compliance should be ready to interpret and operationalize these disclosures. 
  1. Whistleblower readiness is now part of AI governance. Even non-developers benefit from clear, trusted channels to report AI risks within the enterprise and with vendors. 
  1. Multi-jurisdiction complexity increases. Companies operating in California, the EU, and other regions will need a harmonized compliance posture that can satisfy different transparency and reporting obligations. 

What legal teams need to know  

1) Map exposure 

  • Inventory AI use across the organization. Note systems that rely on third-party models, fine-tuning, or internal tools. 
  • Classify vendors by risk and determine which suppliers may fall under “frontier developer” definitions. 

2) Update contracts and procurement 

  • Require transparency artifacts: frontier AI framework, system/model cards, safety testing summaries, and redaction policies. 
  • Add incident clauses: notification within set timelines, incident definitions aligned to SB 53, cooperation and evidence preservation, and remediation plans. 
  • Strengthen security terms: controls for model weights and parameters, credential management, third-party testing, and audit rights where appropriate. 
  • Whistleblower and non-retaliation provisions: reflect legal protections and specify internal channels for concerns. 

3) Build an internal AI safety and disclosure framework 

  • Adopt clear policies for AI selection, testing, deployment, and monitoring. Tie policies to recognized standards and internal risk tiers. 
  • Define an incident taxonomy that maps to vendor and regulatory categories. Include triggers, severity levels, and escalation paths. 
  • Document the verification process for safety claims and use limits, including how business units validate vendor statements. 

4) Coordinate with security, risk, and comms 

  • Create a unified playbook that covers legal, security, privacy, and communications actions for AI incidents. 
  • Run tabletop exercises that include AI-specific scenarios such as deceptive model behavior, loss of control, or significant misuse. 

5) Train the organization 

  • Educate stakeholders on safe AI use, data handling, and limitations. 
  • Explain disclosure expectations so teams understand what may be shared publicly, what must be redacted, and how to route questions. 

6) Monitor the landscape 

  • Track California guidance from the Office of Emergency Services and the Attorney General. 
  • Align with other regulations such as the EU AI Act and sector-specific rules that may interact with AI obligations. 

The Gist of It 

California’s SB 53 sets a new baseline for public safety transparency, incident reporting, and whistleblower protections in frontier AI. Even companies that only use AI will feel the effects through vendor terms, security expectations, and incident coordination. Legal teams should move now to update procurement language, tighten governance and playbooks, and train stakeholders so the organization can adopt AI confidently and stay ahead of evolving rules. 

AI Glossary for Legal Professionals

AI is reshaping how legal professionals research, draft, review, and manage work. With new technology comes new technical jargon. This glossary breaks down the key terms so you can talk about AI with confidence. 

  • Agent – A software system that can perceive its environment, make decisions, and take actions to achieve specific goals—often autonomously. AI agents can use tools, respond to user input, and adapt based on context or feedback. 
  • AI Governance – The frameworks, policies, and controls that organizations use to manage AI responsibly. 
  • Alignment – Making sure AI systems behave in ways that match human goals and values. Since AI models don’t always follow instructions reliably, alignment is about making their responses safe, consistent, and predictable in real-world use. 
  • Algorithm – A set of rules or instructions that a computer follows to perform a task or solve a problem. 
  • Artificial Intelligence (AI) – Technology that simulates human intelligence to perform tasks like reasoning, learning, and decision-making. 

  • Bias – LLMs form biases from their training data, leading to potentially unfair outcomes. 

  • Classification – Teaching AI to group data into categories (e.g., “privileged vs. non-privileged” documents). 
  • Context Limits – The maximum amount of text that can be input into an AI model 

  • Deep Learning – A field of machine learning that uses multi-layered neural networks to learn complex representations of data, enabling the model to discover patterns. Deep learning is often used for tasks such as image recognition and natural language processing/generation. 

  • Embedding – A way of turning words, sentences, or other data into numbers that capture meaning and similarity. 
  • Entity Recognition – The AI’s ability to identify key terms like names, dates, or clauses in a contract. 
  • Explainability – The ability to understand and justify how an AI reached its conclusion. 

  • Fine Tuning – Adjusting pre-trained AI models to a specific task or subject using task-specific data to improve performance and adapt to narrower subjects. 
  • Frontier AI – The most advanced artificial intelligence systems at the cutting edge of current technology. These are large-scale foundation models trained with massive computing resources on broad datasets, designed to perform a wide range of tasks rather than a single function. Because of their scale and adaptability, frontier AI models have the potential to deliver powerful new capabilities, but also introduce higher levels of risk. 

  • Generative AI – A type of AI that can create new content—such as text, images, or audio—based on patterns it has learned from large amounts of data. For example, drafting a contract clause or generating a summary of case law. 
  • Grounding – Connecting an AI’s answers to trusted sources, such as a database or document, so that responses are more accurate and hallucinations are reduced. 

  • Hallucination – When an AI system generates information that sounds convincing but is actually wrong or made up. For example, citing a legal case that doesn’t exist. 

  • Inference – The process of generating an output from an AI model after it’s been trained. 

  • Large Language Model (LLM) – An artificial intelligence model trained on extensive datasets, able to understand and generate human-like language for a range of natural language processing tasks. 

  • Machine Learning (ML) – A branch of AI in which systems learn from data and improve over time without being explicitly programmed. 
  • Model – An algorithm designed to perform specific tasks, make predictions, or solve problems by learning patterns from data. 
  • Model Drift – Over time, AI performance can decline as data or conditions change. Regular monitoring is required. 

  • Natural Language Generation (NLG) –  A field of natural language processing that involves the generation of human-like language by AI models. NLG can include text summarization and chatbot responses. 
  • Natural Language Processing (NLP) – A field of artificial intelligence that focuses on enabling models to understand, interpret, and generate human language. NLP can include translation, sentiment analysis, and speech recognition. 
  • Neural Network – A model inspired by the human brain, made up of interconnected nodes (“neurons”) that help AI process data and recognize patterns. 

  • Overfitting – When a model learns the training data too well, including noise, and performs poorly on new data. 

  • Parameters – The internal settings a model adjusts during training (often billions in modern LLMs). 
  • Predictive Analytics – Using AI to forecast outcomes (e.g., case results, contract risk) from historical data. 
  • Prompt – Providing specific input/queries to an AI model to shape the nature and content of the output 

  • Reinforcement Learning – A training method where AI learns by trial and error, receiving rewards or penalties for its actions. 

  • Supervised Learning – Trains AI models using labeled data, where the AI learns to map inputs to corresponding outputs 
  • Semi-supervised Learning – Combining labeled and unlabeled data to guide training and improve understanding of patterns and relationships within the data 
  • System Prompt – The initial input or instruction given to a language model that dictates output and performance of tasks, guiding subsequent responses based on the provided context. 

  • Token – A chunk of text (often a few characters or words) that AI models process. Costs and limits are usually based on tokens. 
  • Training – Teaching AI models to perform specific tasks through exposure to relevant data 
  • Training Data – The data the AI is fed to learn language and patterns. 

  • Unsupervised Learning – Trains AI models on unlabeled data to uncover inherent patterns, often used for anomaly detection 

  • Validation – The process of assessing the performance, accuracy and reliability of an AI model 

  • Zero-Shot / Few-Shot Learning – An AI’s ability to complete tasks without (zero) or with minimal (few) examples. 
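Several of these terms come together in a short example. The sketch below builds a few-shot classification prompt (see Prompt, Few-Shot Learning, and Classification above); the documents and labels are fabricated, and ask_llm is a placeholder for whichever LLM interface your team uses.

```python
# A few glossary terms in action: a few-shot prompt asking a model to classify
# documents as privileged vs. non-privileged. Examples are fabricated;
# `ask_llm` is a placeholder for whichever LLM interface your team uses.

def ask_llm(prompt: str) -> str:
    """Placeholder: call your approved LLM tool here."""
    raise NotImplementedError

few_shot_prompt = """Classify each document as PRIVILEGED or NON-PRIVILEGED.

Document: Email from outside counsel analyzing litigation strategy.
Label: PRIVILEGED

Document: Press release announcing a new product line.
Label: NON-PRIVILEGED

Document: Memo from in-house counsel advising on contract risk.
Label:"""

# The model completes the label; the examples ("shots") steer the classification.
print(ask_llm(few_shot_prompt))
```
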
The Gist of It 

This AI glossary for legal professionals is a living document. New terms will be added as they surface in community conversations and at events. 

How AI Is Changing What We See and Believe 


OpenAI’s new tool, Sora, marks a real turning point. Unlike earlier AI tools that mostly generated text or supported tasks, Sora can create full videos from a simple prompt. You can drop yourself, or anyone, into a video with a likeness, voice, and identity that feels real. 

That’s where things get slippery. What looks real might not be real at all. When your name, your face, and your voice can be replicated in seconds, the stakes rise for both individuals and organizations. 

From Tool to Media 

Up until now, AI has been mostly about assistance. It helped with drafting, research, and analysis. Sora and tools like it change the equation. AI is no longer just a behind-the-scenes helper. It is creating the actual media we consume and shaping what people see and believe. 

The Psychological Toll 

More people are beginning to experience what experts call AI psychosis—the feeling of blurred reality when synthetic content becomes too believable. When the mind can’t separate truth from fabrication, it fills in the gaps. That’s dangerous territory, especially in professions that rely on evidence, trust, and precision. 

Anthropic’s 4D Framework for AI Literacy 

To navigate this, we need to build new habits around AI media: 

  • Delegation: Know when and why you’re letting AI create content. 
  • Description: Be transparent about what is AI-generated and what isn’t. 
  • Discernment: Develop the ability to question and validate media before trusting it. 
  • Diligence: Create legal, technical, and operational safeguards to protect identity and ensure authenticity. 

The Bigger Question 

How do we build AI into our lives and work without losing our grip on reality? Or our ability to think critically?  

Final question: What do you think of the new AI capabilities? Did you think it was Nick in the video? 

Nick Whitehouse, Chief Artificial Intelligence Officer at Onit, explains how AI is evolving to create content.