THREAT PROFILE · FINANCIAL SERVICES
AI Threats Targeting Financial Services
MITRE ATLAS v5.6.0 techniques targeting financial-services AI — fraud-detection evasion, trading-model extraction, agentic credential harvesting, supply-chain compromise of underwriting models — mapped to the AISS sub-controls that mitigate each.
Why Financial Services AI is a Distinct Threat Surface
Financial-services AI sits at the intersection of three compounding pressures: high-value targets (every model touches money, sensitive PII, or material non-public information), commoditized attacker tooling (adversarial-input crafting and model-extraction services are now openly sold), and a regulatory regime that punishes both the breach and the failure to anticipate it (FINRA, SEC, OCC, state banking regulators, NYDFS Part 500, EU DORA).
The threats below are not generic LLM concerns. Fraud-detection evasion at major card networks has been independently demonstrated in 2024-2026 research. Trading-model extraction via API has been documented as a competitive-intelligence technique. Agentic back-office automation has been shown to leak service-account credentials under prompt injection. Threat actors targeting FinServ are well-funded, patient, and rapidly adopting AI offensively.
AISS maps every FinServ-relevant ATLAS technique to specific sub-controls, most commonly in AC-4 (Access Control), AC-5 (Supply Chain), AC-10 (Model Security), and AC-6 (Output Validation). Running an AISS assessment against your FinServ AI surfaces the gaps in audit-grade detail, with crosswalks to the SOC 2 and ISO 27001 controls your auditors already care about.
ATLAS TECHNIQUES
Top AI threats in this vertical
Drawn from MITRE ATLAS v5.6.0, contextualized for the threat surface this vertical actually faces. Each entry lists the AISS sub-controls that mitigate it — so the assessment-to-mitigation path is auditable end-to-end.
Evade AI Model
Adversaries probe fraud-detection and AML-transaction-monitoring models with synthetic transactions until they find input patterns the model classifies as benign. This has been documented at scale against major card networks and is now sold as a service on illicit marketplaces. Once a working evasion is found, it can be replayed at transaction volume for near-zero marginal cost.
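One detection-side countermeasure can be sketched directly: score how similar a client's current submission is to its recently declined ones, and alert when the pattern looks like an evasion search. Everything below (names, window sizes, thresholds) is illustrative, not a prescribed implementation.

```python
# Sketch: flag clients that resubmit near-duplicate transactions after a
# decline -- the signature of an evasion search. Thresholds are placeholders.
from collections import defaultdict, deque
import math

WINDOW = 50          # recent submissions remembered per client
SIM_THRESHOLD = 0.95 # cosine similarity that counts as "near-duplicate"
ALERT_AFTER = 10     # near-duplicates of declined inputs before alerting

history = defaultdict(lambda: deque(maxlen=WINDOW))  # client_id -> (features, declined)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def record_and_score(client_id, features, declined):
    """Return True if this client's query pattern looks like model probing."""
    probes = sum(
        1 for past, was_declined in history[client_id]
        if was_declined and cosine(past, features) >= SIM_THRESHOLD
    )
    history[client_id].append((features, declined))
    return probes >= ALERT_AFTER
```

In production this signal would feed the same case-management queue as other fraud alerts rather than blocking inline.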
Mitigated by
Craft Adversarial Data
Trading-strategy models can be probed for inputs that elicit signals the attacker can profit from. With API access, even rate-limited access, an adversary can craft inputs that nudge the model into producing exploitable signals, then place trades on the implied direction faster than the defender can detect the pattern.
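One common hardening response is to coarsen what the API returns, since black-box input crafting depends on observing fine-grained score changes. A minimal sketch, with invented tier names and placeholder noise parameters:

```python
# Sketch: blunt black-box input crafting by hardening the API's output.
# Coarse, slightly jittered tiers make numerical gradient estimation far
# noisier. Bucket edges and tier names here are invented for illustration.
import random

BUCKETS = [(-1.0, "sell"), (-0.2, "hold"), (0.2, "buy")]  # hypothetical tiers

def harden_signal(raw_score: float) -> str:
    """Map a continuous model score to a coarse tier, with small noise so
    repeated identical queries don't reveal exact threshold positions."""
    jittered = raw_score + random.gauss(0.0, 0.02)  # calibrated jitter
    label = BUCKETS[0][1]
    for edge, name in BUCKETS:
        if jittered >= edge:
            label = name
    return label
```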
Mitigated by
Exfiltration via AI Inference API
Customer-facing FinServ LLMs handle PII, account balances, and transaction history on every inference call. A compromised API key, or an insider with read access to the inference layer, can extract sensitive data through normal queries, a path that bypasses traditional DLP because it looks like ordinary product use.
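A mitigation sketch at the layer that actually sees the data: screen inference responses for sensitive patterns and budget how much any single key may disclose. Patterns and limits below are placeholders:

```python
# Sketch: a DLP pass at the inference layer, since network DLP sees only
# normal-looking product traffic. Patterns and budgets are illustrative.
import re
from collections import Counter

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN shape
    re.compile(r"\b\d{13,19}\b"),          # bare card/account numbers
]
MAX_HITS_PER_KEY = 25  # sensitive disclosures tolerated per key per window

hits = Counter()

def screen_response(api_key: str, text: str) -> str:
    found = sum(len(p.findall(text)) for p in PII_PATTERNS)
    hits[api_key] += found
    if hits[api_key] > MAX_HITS_PER_KEY:
        raise PermissionError("key exceeded sensitive-output budget; review")
    for p in PII_PATTERNS:
        text = p.sub("[REDACTED]", text)  # redact whatever still goes out
    return text
```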
Mitigated by
Poison Training Data
Credit-underwriting and fraud-classification models are retrained on rolling transaction windows. An adversary with access to upstream data pipelines — or to the alternative-data providers many FinServ orgs ingest from — can introduce subtly biased labels that flip a small but profitable fraction of future decisions.
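A minimal pre-retrain gate illustrates the defensive idea: compare the incoming window's label distribution to a vetted baseline before the pipeline consumes it. The baseline and drift threshold here are assumptions to be tuned per portfolio:

```python
# Sketch: block retraining when the incoming window's fraud-label rate
# drifts suspiciously from a trusted baseline. Thresholds are placeholders.

BASELINE_FRAUD_RATE = 0.012   # rate observed in a vetted historical window
MAX_RELATIVE_DRIFT = 0.25     # block retraining beyond +/-25% drift

def gate_training_window(labels: list[int]) -> None:
    if not labels:
        raise ValueError("empty training window")
    rate = sum(labels) / len(labels)
    drift = abs(rate - BASELINE_FRAUD_RATE) / BASELINE_FRAUD_RATE
    if drift > MAX_RELATIVE_DRIFT:
        raise ValueError(
            f"label rate {rate:.4f} drifted {drift:.0%} from baseline; "
            "quarantine the window and audit upstream providers first"
        )
```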
Mitigated by
AI Supply Chain Compromise
Trading floors, robo-advisors, and customer-chat platforms rarely train base models in-house. Foundation models pulled from public hubs or consumed through managed marketplaces (Hugging Face, AWS Bedrock, Azure OpenAI Service) can contain pre-positioned backdoors triggered by specific market-condition inputs. Vendor SOC 2 reports do not address model integrity.
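One control this motivates: pin hub-sourced weights to an exact reviewed revision rather than a floating branch. A sketch using the Hugging Face hub client, with a hypothetical repository and a placeholder commit hash:

```python
# Sketch: pull hub-hosted weights only at a pinned, reviewed commit rather
# than a branch tip. The repo id is hypothetical and the revision is a
# placeholder for a commit hash vetted by model risk review.
from huggingface_hub import hf_hub_download

PINNED_REVISION = "<40-char commit hash approved by model risk review>"

weights_path = hf_hub_download(
    repo_id="example-org/finserv-base-model",  # hypothetical repository
    filename="model.safetensors",
    revision=PINNED_REVISION,  # rejects silently-updated weights on the branch
)
```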
Mitigated by
Publish Poisoned Models
Public model hubs host fine-tunes of foundation models marketed for specific FinServ use cases ("trained on SEC filings", "optimized for credit scoring"). Adversaries publish convincing poisoned variants that organizations download and deploy. Hash-verification of model weights is rare in production pipelines.
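The gap is cheap to close. A sketch of digest verification before load, with an illustrative manifest maintained by model risk review:

```python
# Sketch: verify downloaded weights against an internally pinned digest
# before anything loads them. The manifest contents are illustrative.
import hashlib

APPROVED_DIGESTS = {
    # recorded by model risk review when an artifact is vetted
    "model.safetensors": "<sha256 hex digest recorded at approval time>",
}

def verify_artifact(path: str, name: str) -> None:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    if h.hexdigest() != APPROVED_DIGESTS.get(name):
        raise RuntimeError(f"{name}: digest mismatch -- do not load")
```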
Mitigated by
LLM Jailbreak
Customer-service LLMs enforce policy boundaries: "don't quote unverified pricing", "don't disclose other customers' balances." Multi-turn prompt-engineering attacks have been demonstrated to bypass these controls, leading to documented disclosures of competitor pricing, internal compensation discussions, and other customers' transaction summaries.
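One mitigation pattern is to enforce the policy on the model's output, outside the prompt, so a jailbreak that defeats the system prompt still hits a second gate. The deny rules below are illustrative stand-ins for a production policy engine:

```python
# Sketch: a second, prompt-independent gate on outbound replies. A jailbreak
# can defeat the system prompt but not a filter the model never sees.
# Rules are illustrative; real deployments would add a trained classifier.
import re

DENY_RULES = [
    (re.compile(r"\d[\d.]*\s?%\s?(?:APR|APY)", re.I), "unverified pricing"),
    (re.compile(r"\b(?:another|other)\s+customer", re.I), "cross-customer reference"),
]

def release_or_block(reply: str) -> str:
    for pattern, reason in DENY_RULES:
        if pattern.search(reply):
            return f"[blocked: {reason}; escalated to human review]"
    return reply
```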
Mitigated by
AI Model Inference API Access
Proprietary FinServ models — credit scorecards, risk-tier classifiers, churn predictors — are competitive IP. Adversaries with inference-API access run extraction queries to clone the model's decision boundary into a substitute, which they then probe offline at no cost and use to game subsequent submissions.
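Extraction economics can be degraded at the API itself: cap per-key query volume and return decisions rather than scores. A sketch with placeholder limits:

```python
# Sketch: make decision-boundary cloning expensive. Cap per-key queries and
# return only the label, never the score an extraction attack learns from.
# The budget is a placeholder to calibrate against legitimate usage.
from collections import Counter

DAILY_QUERY_BUDGET = 5_000
queries = Counter()  # reset daily by an external job (not shown)

def serve_decision(api_key: str, score: float) -> str:
    queries[api_key] += 1
    if queries[api_key] > DAILY_QUERY_BUDGET:
        raise PermissionError("query budget exhausted; contact support")
    return "approve" if score >= 0.5 else "decline"  # label only, no score
```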
Mitigated by
AI Agent Tool Credential Harvesting
Agentic FinServ workflows chain high-privilege tools: brokerage execution, wire-transfer initiation, account-update APIs. Prompt-injection attacks against the agent runtime have been demonstrated to extract service-account credentials embedded in agent configuration, opening an abuse path that carries more trust than any ordinary user session.
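The structural fix is to keep credentials out of anything the model can read, resolving them at tool-execution time instead. A sketch where `fetch_from_vault` and the tool name are hypothetical stand-ins:

```python
# Sketch: the agent never holds credentials. It names a tool; the runtime
# resolves secrets at execution time, so there is nothing in the prompt or
# agent config for an injection to exfiltrate. `fetch_from_vault` stands in
# for your secret manager's client (AWS Secrets Manager, Vault, etc.).
import os

def fetch_from_vault(secret_name: str) -> str:
    # placeholder lookup; swap in a real secret-manager client call
    return os.environ[secret_name]

def run_tool(tool_name: str, args: dict) -> dict:
    if tool_name == "initiate_wire":               # hypothetical tool name
        token = fetch_from_vault("WIRE_API_TOKEN") # resolved here, not in the agent
        # call the wire API with `token`; never echo it into the transcript
        return {"status": "submitted", "ref": args.get("ref")}
    raise ValueError(f"unknown tool: {tool_name}")
```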
Mitigated by
Publish Hallucinated Entities
Customer-facing advisory chat that hallucinates pricing terms, regulatory deadlines, or specific product features creates direct compliance liability under FINRA Rule 2210 and the SEC Marketing Rule. Hallucinated advice acted on by retail customers generates actionable claims; hallucinated regulatory citations in internal use generate audit findings.
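A grounding gate illustrates one mitigation: hold any reply whose quoted rates do not appear verbatim in approved disclosure material. The approved-facts store and pattern below are illustrative:

```python
# Sketch: before a customer sees advisory text, require that every concrete
# rate it quotes appears in approved source material. The facts set is a
# stand-in for an export from your vetted disclosure store.
import re

APPROVED_FACTS = {"5.25% apy", "2.9% intro apr"}  # hypothetical vetted facts

RATE_PATTERN = re.compile(r"\d[\d.]*\s?%\s?(?:apy|apr)?", re.I)

def grounded(reply: str) -> bool:
    """Release the reply only if all quoted rates match approved material."""
    for claim in RATE_PATTERN.findall(reply):
        if claim.strip().lower() not in APPROVED_FACTS:
            return False  # hold for human review instead of sending
    return True
```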
Mitigated by
Assess your AI against these threats
An AISS assessment scores your organization on the AISS sub-controls that mitigate each ATLAS technique in this profile — and shows you the gaps, with audit-grade transparency.
Or browse other verticals at /threats
