Ayliea — AI Security Assessment & Compliance Consulting

THREAT PROFILE · LEGAL

AI Threats Targeting Legal

MITRE ATLAS v5.6.0 techniques targeting legal AI — hallucinated citations, privilege bypass via prompt injection in document review, RAG poisoning of case-law corpora, agentic billing-tool credential harvesting — mapped to the AISS sub-controls that mitigate each.

10 ATLAS techniques · Mitigated via AISS

Why Legal AI is a Distinct Threat Surface

Legal AI is governed by professional-responsibility rules that map awkwardly onto AI risk. Model Rules 1.1 (competence), 1.4 (communication), 1.6 (confidentiality), and 5.3 (responsibilities regarding non-lawyer assistance) all bear on AI use — and bar associations are actively interpreting their reach. Failure modes that another vertical would treat as embarrassing become bar-discipline cases in legal practice.

The threat surface is also unusually adversarial. In litigation, the other side wants your AI to misbehave. Prompt-injection attacks via productions, footnotes, and exhibits are a documented pattern, not a theoretical concern. Adversaries have motive and direct access to inputs your model will read.

AISS maps every legal-relevant ATLAS technique to specific sub-controls — most commonly in AC-3 (Data Protection), AC-6 (Output Validation), AC-9 (Training and Awareness), and AC-10 (Model Security). Running an AISS assessment against your firm's AI surface gives you a defensible record under Model Rule 1.1 — that you took reasonable steps to understand and control the technology you were using.

ATLAS TECHNIQUES

Top AI threats in this vertical

Drawn from MITRE ATLAS v5.6.0, contextualized for the threat surface this vertical actually faces. Each entry lists the AISS sub-controls that mitigate it — so the assessment-to-mitigation path is auditable end-to-end.

AML.T0060 · Resource Development

Publish Hallucinated Entities

Documented sanctions in 2023 (Mata v. Avianca, US v. Cohen) and continuing reprimands through 2025 establish a clear pattern: courts treat hallucinated citations as a competence failure under Model Rule 1.1, not a footnote issue. Without citation-verification controls, a single bad brief becomes a bar-discipline case and a malpractice exposure.

Mitigated by

AC-6.1 · AC-6.2 · AC-6.3 · AC-9.1
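The output-validation controls above imply a concrete gate: no citation leaves a draft unless it resolves against a verified index. A minimal sketch of that check, where `KNOWN_CITATIONS` is a hypothetical stand-in for a real citator lookup and the regex covers only a toy subset of federal reporters (a production verifier needs a full citation grammar, as in eyecite-class tools):

```python
import re

# Hypothetical verified-citation index; in practice this is a lookup
# against a citator service, not a hardcoded set.
KNOWN_CITATIONS = {
    "410 U.S. 113",
    "578 F. Supp. 3d 100",
}

# Toy pattern for a few federal reporters (e.g. "410 U.S. 113",
# "999 F.3d 999"). Illustrative only.
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.\dd|F\. Supp\. \dd)\s+\d+\b")

def unverified_citations(draft: str) -> list[str]:
    """Citations present in the draft but absent from the index."""
    return [c for c in CITATION_RE.findall(draft) if c not in KNOWN_CITATIONS]
```

Any non-empty return blocks filing and routes the draft back to the attorney, which is the record Model Rule 1.1 diligence asks for.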

AML.T0051 · Execution

LLM Prompt Injection

Document-review AI ingests opposing counsel's productions. A hostile prompt embedded in a footnote, header, or image OCR can redirect the model — surface privileged communications across matters, mark privileged content as non-privileged, or bias review categorization. The attacker is on the other side of the litigation, with motive and access.

Mitigated by

AC-6.4 · AC-6.5 · AC-10.1
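One layer of the input-validation side can be sketched as a screen over ingested productions before they reach the model. The patterns below are illustrative heuristics, not a complete defense; real mitigation pairs screening with strict separation of untrusted document text from system instructions:

```python
import re

# Heuristic signatures of embedded instructions in opposing-counsel
# productions. Hypothetical examples; a real list is maintained and
# tuned against observed attack patterns.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) .*instructions", re.I),
    re.compile(r"mark .* as non-privileged", re.I),
    re.compile(r"system prompt", re.I),
]

def screen_production(text: str) -> list[str]:
    """Patterns matched in the document; non-empty means quarantine
    the document and route it to human review before AI ingestion."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

The point of the sketch is architectural: flagged documents never enter the review model's context at all.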

AML.T0057 · Exfiltration

LLM Data Leakage

LLMs trained or fine-tuned on a firm's corpus memorize client identifiers, deal terms, and matter-specific phrasing. Cross-matter leakage — a query in matter A elicits text from matter B — violates Model Rule 1.6 (confidentiality), and may breach engagement-letter representations about information segregation.

Mitigated by

AC-3.1 · AC-3.4 · AC-6.2 · AC-10.4
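A minimal detection sketch for the cross-matter case: screen each response for identifiers that belong to matters other than the one being queried. The registry below is hypothetical; in practice it would be populated from the firm's DMS or matter-management system:

```python
# Hypothetical per-matter identifier registry (client names, codenames).
MATTER_TERMS = {
    "matter-a": {"Acme Corp", "Project Falcon"},
    "matter-b": {"Globex Ltd", "Project Osprey"},
}

def foreign_matter_terms(output: str, active_matter: str) -> set[str]:
    """Identifiers from *other* matters that surfaced in a response
    scoped to the active matter: a cross-matter leakage signal."""
    leaked: set[str] = set()
    for matter, terms in MATTER_TERMS.items():
        if matter == active_matter:
            continue
        leaked |= {t for t in terms if t in output}
    return leaked
```

Detection is the backstop; the primary controls are segregating training and retrieval corpora per matter so the leak cannot occur.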

AML.T0070 · Persistence

RAG Poisoning

Legal-research RAG pulls from internal knowledge bases, Westlaw / Lexis exports, and increasingly from public case-law databases. An attacker — or a careless ingestion process — that adds a false case summary to the index shapes every subsequent query's output. The poisoning persists across all attorneys, all matters, until the index is audited.

Mitigated by

AC-3.7 · AC-5.2 · AC-6.1
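The supply-chain and data-protection controls here amount to provenance gating at ingestion: an index entry is admitted only if its origin is allow-listed and its content hash matches what was recorded when the source document entered the pipeline. A sketch, with hypothetical source names:

```python
import hashlib

# Hypothetical allow-list of ingestion origins for the research index.
TRUSTED_SOURCES = {"internal-kb", "westlaw-export"}

def entry_fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def admit(entry: dict, ledger: dict[str, str]) -> bool:
    """Admit a RAG index entry only if its origin is allow-listed and
    its content hash matches the ingestion-time ledger record."""
    if entry["source"] not in TRUSTED_SOURCES:
        return False
    return ledger.get(entry["id"]) == entry_fingerprint(entry["text"])
```

The ledger also makes the periodic index audit tractable: re-hash every entry and diff against the ledger instead of re-reading every summary.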

AML.T0067 · Defense Evasion

LLM Trusted Output Components Manipulation

Contract-redline AI emits diff-style outputs that downstream tools (DMS, e-signature platforms) treat as trusted edits. Manipulated outputs — extra clauses, altered indemnification language, removed termination triggers — can flow through to signature without human catch when reviewers rubber-stamp "low-risk" AI suggestions. This is the AI-era equivalent of last-minute scrivener fraud.

Mitigated by

AC-6.1 · AC-6.3 · AC-10.4
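One way the output-validation controls break the rubber-stamp path: force human review whenever an AI-proposed redline touches high-risk clause language. A minimal sketch, with an illustrative term list that a real deployment would tailor per practice group:

```python
import difflib

# Illustrative stems for clause language that must never change
# without attorney sign-off.
HIGH_RISK_TERMS = ("indemnif", "terminat", "liability", "warrant")

def review_required(original: str, redlined: str) -> bool:
    """True if any changed line in the AI redline touches a
    high-risk clause term."""
    diff = difflib.unified_diff(
        original.splitlines(), redlined.splitlines(), lineterm="")
    for line in diff:
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            if any(t in line.lower() for t in HIGH_RISK_TERMS):
                return True
    return False
```

The design choice is to gate on the diff rather than the full document, so routine edits stay fast and only the scrivener-fraud surface gets mandatory eyes.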

AML.T0084 · Discovery

Discover AI Agent Configuration

Conflict-of-interest screening agents have configuration that reveals which clients and matters the firm has handled. An attacker probing the agent — directly or via an embedded prompt in a new-client intake — can enumerate the firm's confidential client list, breaching Model Rule 1.6 and triggering disclosure obligations to existing clients.

Mitigated by

AC-4.3 · AC-8.1 · AC-10.5

AML.T0098 · Credential Access

AI Agent Tool Credential Harvesting

Agentic legal assistants chain tools: docket monitoring, court e-filing, billing system, trust-account reconciliation. Each tool has credentials the agent uses on behalf of the firm. Prompt injection extracting these credentials in benign-looking conversations exposes IOLTA trust-account access — the highest-severity breach a firm can suffer short of theft.

Mitigated by

AC-4.3 · AC-10.5 · AC-10.6
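The mitigation pattern behind these controls is a credential broker: the agent requests a tool action by name, the secret lives only server-side inside the broker, and nothing secret ever enters model context, so there is nothing for an injected prompt to exfiltrate. A minimal sketch with hypothetical tool names:

```python
# Credential-broker sketch: the agent never holds or sees secrets.
class ToolBroker:
    def __init__(self, secrets: dict[str, str], allowed: set[str]):
        self._secrets = secrets   # never serialized into any prompt
        self._allowed = allowed   # per-agent tool allow-list

    def call(self, tool: str, action: str) -> str:
        if tool not in self._allowed:
            raise PermissionError(f"agent not scoped for {tool}")
        # The credential is used here, server-side, and only here.
        _ = self._secrets[tool]
        return f"{tool}:{action}:ok"
```

Scoping per agent also means a compromised docket-monitoring assistant cannot reach billing or trust-account tools at all.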

AML.T0040 · AI Model Access

AI Model Inference API Access

A firm fine-tunes a model on its proprietary precedents, brief banks, or argument libraries — the firm's actual intellectual property. An adversary with inference-API access (a departing associate, a compromised vendor account, an unsecured staging endpoint) extracts the model via query crafting and walks out with the firm's accumulated know-how.

Mitigated by

AC-4.2 · AC-4.3 · AC-8.2
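Model extraction needs query volume, so one monitoring-side control is per-principal inference budgets. The sketch below keys on volume alone with an illustrative threshold; real extraction defense also examines query diversity and output entropy, and pairs monitoring with endpoint access controls:

```python
from collections import Counter

# Illustrative per-principal daily query budget; an assumption for
# this sketch, not a recommended value.
DAILY_BUDGET = 500

def over_budget(query_log: list[str]) -> set[str]:
    """Principals whose daily inference volume exceeds the budget,
    flagged for review as possible extraction activity."""
    return {p for p, n in Counter(query_log).items() if n > DAILY_BUDGET}
```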

AML.T0020 · Resource Development

Poison Training Data

Models trained on public case-law corpora (CourtListener, RECAP) are vulnerable to upstream poisoning by adversaries filing intentionally crafted briefs into public dockets that distort the training distribution. Once ingested at scale, the model learns adversary-chosen patterns — for example, systematically misclassifying a specific defense argument.

Mitigated by

AC-3.2 · AC-3.3 · AC-5.1 · AC-5.4

AML.T0010 · Initial Access

AI Supply Chain Compromise

Legal AI products almost always wrap a foundation model and add legal-specific fine-tuning. A backdoor introduced anywhere in the chain — base model, fine-tune, vendor SaaS — flows directly into the firm's privileged workflows. Vendor SOC 2 reports do not cover model integrity; due diligence here is its own discipline.

Mitigated by

AC-5.1 · AC-5.2 · AC-5.3 · AC-5.4
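The smallest piece of that due-diligence discipline is artifact pinning: record the model checkpoint's digest at procurement, and refuse to load anything whose digest no longer matches. A sketch, assuming the pin is captured out of band during vendor review:

```python
import hashlib

def artifact_digest(path: str) -> str:
    """Stream the model file so large checkpoints don't sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_sha256: str) -> bool:
    """True only if the digest matches the pin recorded at procurement.
    A mismatch means: do not load."""
    return artifact_digest(path) == pinned_sha256
```

Pinning does not prove the base model is clean, but it does guarantee that what you evaluated during due diligence is what runs in your privileged workflows.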

Assess your AI against these threats

An AISS assessment scores your organization on the AISS sub-controls that mitigate each ATLAS technique in this profile — and shows you the gaps, with audit-grade transparency.

Or browse other verticals at /threats