Ayliea — AI Security Assessment & Compliance Consulting

THREAT PROFILE · FINANCIAL SERVICES

AI Threats Targeting Financial Services

MITRE ATLAS v5.6.0 techniques targeting financial-services AI — fraud-detection evasion, trading-model extraction, agentic credential harvesting, supply-chain compromise of underwriting models — mapped to the AISS sub-controls that mitigate each.

10 ATLAS techniques · Mitigated via AISS

Why Financial Services AI is a Distinct Threat Surface

Financial-services AI exists at the intersection of three accelerating threat vectors: high-value targets (every model touches money, sensitive PII, or material non-public information), commoditized attacker tooling (adversarial-input crafting and model-extraction services are now openly sold), and a regulatory regime that punishes both the breach and the failure to anticipate it (FINRA, SEC, OCC, state banking regulators, NYDFS Part 500, EU DORA).

The threats below are not generic LLM concerns. Fraud-detection evasion at major card networks has been independently demonstrated in 2024-2026 research; trading-model extraction via API has been documented as a competitive-intelligence technique; agentic back-office automation has been shown to leak service-account credentials under prompt injection. Threat actors in FinServ are well-funded, patient, and rapidly adopting AI offensively.

AISS maps every FinServ-relevant ATLAS technique to specific sub-controls — most commonly in AC-4 (Access Control), AC-5 (Supply Chain), AC-10 (Model Security), and AC-6 (Output Validation). Running an AISS assessment against your FinServ AI surfaces the gaps in audit-grade detail, with crosswalks to the SOC 2 and ISO 27001 surfaces your auditors already care about.

ATLAS TECHNIQUES

Top AI threats in this vertical

Drawn from MITRE ATLAS v5.6.0, contextualized for the threat surface this vertical actually faces. Each entry lists the AISS sub-controls that mitigate it — so the assessment-to-mitigation path is auditable end-to-end.

AML.T0015 · Defense Evasion

Evade AI Model

Adversaries probe fraud-detection and AML-transaction-monitoring models with synthetic transactions until they find input patterns the model classifies as benign. This is documented at scale against major card networks and is now sold as a service on illicit marketplaces. Once a working evasion is found, it can be replayed at scale: each additional fraudulent transaction costs the attacker nothing new.

Mitigated by

AC-10.1 · AC-10.2 · AC-10.3
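To make the probing loop concrete, the sketch below perturbs a synthetic transaction until it scores benign. `fraud_score` is a stand-in toy model invented for illustration, not any real network's scorer, and the thresholds are arbitrary:

```python
import random

def fraud_score(txn):
    """Stand-in for a black-box fraud model (hypothetical rule:
    flag large round-dollar amounts). Real attacks probe a live API."""
    return 0.9 if txn["amount"] >= 1000 and txn["amount"] % 100 == 0 else 0.1

def probe_for_evasion(txn, budget=500, threshold=0.5):
    """Randomly perturb the amount until the model scores it benign,
    mirroring the synthetic-transaction probing described above."""
    random.seed(0)  # deterministic for the sketch
    for _ in range(budget):
        candidate = dict(txn, amount=txn["amount"] + random.randint(-99, 99))
        if fraud_score(candidate) < threshold:
            return candidate  # a working evasion, replayable at scale
    return None

evasion = probe_for_evasion({"amount": 5000})
```

The point of the sketch is the query economics: a few hundred cheap probes buy a reusable evasion, which is why rate limiting and probing detection (the AC-10 controls above) target the search loop rather than any single transaction.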
AML.T0043 · AI Attack Staging

Craft Adversarial Data

Trading-strategy models can be probed for inputs that elicit signals the attacker can profit from. Even with rate-limited API access, an adversary can craft inputs that nudge the model into producing exploitable signals, then place trades on the implied direction faster than the defender can detect the pattern.

Mitigated by

AC-10.1 · AC-10.2 · AC-4.2
AML.T0024 · Exfiltration

Exfiltration via AI Inference API

Customer-facing FinServ LLMs handle PII, account balances, and transaction history on every inference call. A compromised API key, or an insider with read access to the inference layer, can extract sensitive data through normal queries: a path that bypasses traditional DLP because it looks like ordinary product use.

Mitigated by

AC-3.5 · AC-3.6 · AC-4.2 · AC-4.4
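One mitigating pattern is per-key monitoring at the inference layer. The sketch below flags API keys with anomalous query volume or PII-field touch rates; the field names and thresholds are illustrative, not AISS-mandated parameters:

```python
from collections import Counter

def flag_exfil_suspects(query_log, pii_fields, volume_limit=100):
    """Flag API keys whose query volume or PII-touch count exceeds
    illustrative limits. `query_log` records are hypothetical dicts
    with an 'api_key' and the 'fields_returned' per response."""
    volume = Counter(rec["api_key"] for rec in query_log)
    pii_hits = Counter(
        rec["api_key"] for rec in query_log
        if any(f in rec["fields_returned"] for f in pii_fields)
    )
    return {
        key for key in volume
        if volume[key] > volume_limit or pii_hits[key] > volume_limit // 2
    }
```

Because exfiltration through the inference API looks like product use query by query, only aggregate behavior per credential distinguishes it, which is what the AC-4 access-control sub-controls instrument.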
AML.T0020 · Resource Development

Poison Training Data

Credit-underwriting and fraud-classification models are retrained on rolling transaction windows. An adversary with access to upstream data pipelines — or to the alternative-data providers many FinServ orgs ingest from — can introduce subtly biased labels that flip a small but profitable fraction of future decisions.

Mitigated by

AC-3.2 · AC-3.3 · AC-5.1 · AC-5.4
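A lightweight tripwire for this class of poisoning is comparing label rates across retraining windows. A minimal sketch, assuming binary fraud labels and an illustrative 2% tolerance (real pipelines would use a statistical test, not a fixed cutoff):

```python
def label_drift(prev_window, new_window, tolerance=0.02):
    """Return True if the positive-label rate shifted more than
    `tolerance` between consecutive retraining windows. A sudden
    shift can indicate biased labels injected upstream."""
    prev_rate = sum(prev_window) / len(prev_window)
    new_rate = sum(new_window) / len(new_window)
    return abs(new_rate - prev_rate) > tolerance
```

This catches only crude poisoning; subtle label flips that stay within normal variance need the provenance and vendor-vetting controls in AC-5 rather than statistical monitoring alone.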
AML.T0010 · Initial Access

AI Supply Chain Compromise

Trading floors, robo-advisors, and customer-chat platforms rarely train base models in-house. Foundation models pulled from public marketplaces (Hugging Face, AWS Bedrock, Azure OpenAI Service) can contain pre-positioned backdoors triggered by specific market-condition inputs. Vendor SOC 2 reports do not address model integrity.

Mitigated by

AC-5.1 · AC-5.2 · AC-5.3 · AC-5.4
AML.T0058 · Resource Development

Publish Poisoned Models

Public model hubs host fine-tunes of foundation models marketed for specific FinServ use cases ("trained on SEC filings", "optimized for credit scoring"). Adversaries publish convincing poisoned variants that organizations download and deploy. Hash-verification of model weights is rare in production pipelines.

Mitigated by

AC-5.2 · AC-5.3 · AC-5.4
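Pinning and verifying weight hashes closes much of this gap. A minimal Python sketch, assuming the vetted artifact's SHA-256 was recorded at review time (the filename and pin below are placeholders):

```python
import hashlib

# Hashes recorded when the artifact was vetted (values are placeholders)
PINNED_WEIGHTS = {
    "credit-scorer-v3.bin": "<sha256 recorded at vetting time>",
}

def verify_weights(path, expected_sha256, chunk=1 << 20):
    """Stream the weight file and compare its SHA-256 against the
    pinned value before the model is loaded into the serving stack."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest() == expected_sha256
```

Gating deployment on this check means a republished or tampered fine-tune fails closed, regardless of how convincing its model card is.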
AML.T0054 · Privilege Escalation

LLM Jailbreak

Customer-service LLMs enforce policy boundaries — "don't quote unverified pricing", "don't disclose other customers' balances." Multi-turn prompt-engineering attacks have been shown to bypass these controls, leading to documented disclosures of competitor pricing, internal compensation discussions, and other customers' transaction summaries.

Mitigated by

AC-4.1 · AC-6.4 · AC-10.5
AML.T0040 · AI Model Access

AI Model Inference API Access

Proprietary FinServ models — credit scorecards, risk-tier classifiers, churn predictors — are competitive IP. Adversaries with inference-API access run extraction queries to clone the model's decision boundary into a substitute, which they then probe offline at no cost and use to game subsequent submissions.

Mitigated by

AC-4.2 · AC-4.3 · AC-8.2
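The simplest instance of this attack is cloning a single hidden score cutoff from accept/decline responses. The sketch below binary-searches a hypothetical lender policy; a real extraction fits a substitute model over many features, but the query economics are the same:

```python
def extract_threshold(api_decision, lo=0.0, hi=1000.0, queries=40):
    """Recover a hidden approve/decline cutoff by binary search over
    accept/decline responses. `api_decision` stands in for a live
    inference API returning only the decision, not the score."""
    for _ in range(queries):
        mid = (lo + hi) / 2
        if api_decision(mid):
            hi = mid   # approved: cutoff is at or below mid
        else:
            lo = mid   # declined: cutoff is above mid
    return (lo + hi) / 2

# Hidden policy (hypothetical): approve incomes above 642.5
cloned_cutoff = extract_threshold(lambda income: income > 642.5)
```

Forty yes/no queries pin the cutoff to better than one part in 10^11 of the search range, which is why the mitigations above focus on query budgets and extraction-pattern detection rather than hiding the score alone.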
AML.T0098 · Credential Access

AI Agent Tool Credential Harvesting

Agentic FinServ workflows chain high-privilege tools: brokerage execution, wire-transfer initiation, account-update APIs. Prompt-injection attacks against the agent runtime have been shown to extract service-account credentials embedded in agent configuration, opening an abuse path that carries more trust than any ordinary user session.

Mitigated by

AC-4.3 · AC-10.5 · AC-10.6
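One mitigation is scrubbing credential-shaped strings before any text reaches the agent's context window, so an injected prompt cannot echo them back. A minimal sketch with two illustrative patterns; production scanners use far broader rule sets:

```python
import re

# Illustrative secret shapes only -- not a complete rule set
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access-key-ID shape
    re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),  # bearer-token shape
]

def redact_secrets(text, mask="[REDACTED]"):
    """Replace credential-shaped substrings before `text` enters an
    LLM context window or tool-call transcript."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(mask, text)
    return text
```

Redaction is a second line of defense; the primary fix is keeping credentials out of agent-readable configuration entirely and injecting them at the tool boundary instead.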
AML.T0060 · Resource Development

Publish Hallucinated Entities

Customer-facing advisory chat that hallucinates pricing terms, regulatory deadlines, or specific product features creates direct compliance liability under FINRA Rule 2210 and the SEC Marketing Rule. Hallucinated advice acted on by retail customers generates actionable claims; hallucinated regulatory citations in internal use generate audit findings.

Mitigated by

AC-6.1 · AC-6.2 · AC-6.3 · AC-9.1
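Output validation here can be as narrow as checking quoted terms against a source-of-truth catalog before a reply ships. A sketch with a hypothetical product catalog and a deliberately simple APR matcher:

```python
import re

# Source-of-truth product catalog (values are hypothetical)
PRODUCT_TERMS = {"apr": "6.99%", "annual_fee": "$95"}

def validate_quoted_apr(reply):
    """Reject chat replies quoting any APR that does not match the
    catalog -- one narrow instance of pre-send output validation."""
    quoted = re.findall(r"\d+\.\d+%", reply)
    return all(q == PRODUCT_TERMS["apr"] for q in quoted)
```

A production gate would cover every regulated claim type (fees, deadlines, rule citations), but the shape is the same: the model's output is checked against data the model did not generate.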

Assess your AI against these threats

An AISS assessment scores your organization on the AISS sub-controls that mitigate each ATLAS technique in this profile — and shows you the gaps, with audit-grade transparency.

Or browse other verticals at /threats