The AI vendor market has grown faster than most organizations' ability to evaluate it. A purchasing decision that once involved a straightforward security review now extends to model training practices, data residency, prompt logging, output safety, and supply chain dependencies your vendor may not fully disclose unless you ask directly.
Evaluating an AI vendor the same way you would a conventional SaaS tool is not sufficient. The risk surface is different. The questions are different. And the consequences of getting it wrong are harder to reverse.
Why AI Vendor Risk Is a Distinct Problem
Traditional vendor risk management focuses on questions like: Where is the data stored? Who has access? What certifications do they hold? What happens if they get breached?
Those questions still apply. But AI vendors introduce additional risk dimensions that most standard vendor questionnaires do not cover.
When you send data to an AI vendor's API, you need to understand whether that data is used to train future models. Many vendors default to using API inputs for model improvement unless customers opt out or negotiate specific data handling terms. This is not a hypothetical concern: it has compliance implications under GDPR, HIPAA, and other frameworks, and it determines whether your proprietary data is contributing to a competitor's model improvement.
AI vendors also introduce opacity that other software vendors do not. You can audit code. You can inspect network traffic. You can review logs. The behavior of a model is harder to audit, particularly when that model is updated without notice. A model that passed your evaluation in January may behave differently in March.
Finally, many AI vendors are themselves dependent on upstream model providers. A startup building on top of a foundation model API is not just one vendor; it is a supply chain. Their security practices matter, and so do the practices of the provider whose model they are running.
The Questions You Need to Ask
Before adopting any AI vendor, your evaluation should cover the following areas.
Data handling and training practices. Does the vendor use your data to train or fine-tune models? What is the opt-out process, and does it require a specific contract tier? Where is your data processed, and where is it stored? What is the data retention period for inputs and outputs? Are inputs logged, and who has access to those logs?
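If you are putting these questions to more than one vendor, a structured record makes unanswered items visible at a glance. Below is a minimal sketch in Python, assuming an internal review tracker of your own; the field names and the "ExampleAI" vendor are illustrative, not a standard.

    from dataclasses import dataclass, field

    @dataclass
    class DataHandlingReview:
        """One vendor's answers to the data handling questions above.
        Field names are illustrative; adapt them to your own questionnaire."""
        vendor: str
        trains_on_customer_data: bool | None = None   # None = not yet answered
        opt_out_available: bool | None = None
        opt_out_requires_contract_tier: bool | None = None
        retention_days: int | None = None
        inputs_logged: bool | None = None
        processing_regions: list[str] = field(default_factory=list)

        def open_questions(self) -> list[str]:
            """Return the fields the vendor has not yet answered clearly."""
            return [name for name, value in vars(self).items() if value is None]

    review = DataHandlingReview(vendor="ExampleAI", trains_on_customer_data=True)
    print(review.open_questions())
    # ['opt_out_available', 'opt_out_requires_contract_tier',
    #  'retention_days', 'inputs_logged']

An empty open-questions list is the bar for moving a vendor forward; "we'll get back to you" stays on the list.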
Model provenance and supply chain. What foundation model or models does the vendor's product use? Are they using open-source models, proprietary models, or models licensed from a third party? If they use a third-party model provider, what data handling commitments does that provider make? If they fine-tune a base model, on what data was the fine-tune performed?
Security certifications and audits. What compliance certifications does the vendor hold? SOC 2 Type II is a baseline expectation. Ask for the report, not just a summary. For regulated industries, ask about HIPAA BAA availability, FedRAMP status if relevant, and ISO 27001 certification. A vendor that cannot produce third-party audit documentation is a vendor making unverifiable claims.
Incident response and breach notification. What is their documented incident response process? What is their SLA for notifying customers of a breach? Do they maintain cyber liability insurance? Who is the contact for security escalations?
Model update and change management. How does the vendor communicate model updates? Can you pin to a specific model version? If not, how do they validate that updates do not change outputs in ways that affect your use case or safety requirements? This is particularly important for compliance-sensitive applications where output consistency matters.
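A lightweight way to enforce this yourself is an output regression check: run a fixed prompt set on a schedule and compare responses against stored baselines. The sketch below assumes a query_model placeholder wrapping whatever client your vendor provides, with temperature pinned to zero; for non-deterministic outputs you would compare rubric scores or embeddings rather than exact hashes.

    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("model_baseline.json")
    PROMPTS = [
        "Summarize our refund policy in one sentence.",
        "List the required fields for a claim submission.",
    ]

    def query_model(prompt: str) -> str:
        """Placeholder: call your vendor's API here, pinning the model
        version and setting temperature to 0 where the API allows it."""
        raise NotImplementedError

    def fingerprint(text: str) -> str:
        return hashlib.sha256(text.encode()).hexdigest()

    def check_for_drift() -> list[str]:
        """Return the prompts whose outputs no longer match the baseline."""
        baseline = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
        current = {p: fingerprint(query_model(p)) for p in PROMPTS}
        drifted = [p for p in PROMPTS if p in baseline and baseline[p] != current[p]]
        if not BASELINE.exists():
            BASELINE.write_text(json.dumps(current, indent=2))  # first run: record baseline
        return drifted

Run it after every announced update and on a fixed schedule regardless; a non-empty result is your cue to re-run the fuller evaluation you performed at onboarding.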
Output safety and content controls. What guardrails does the vendor implement on model outputs? How do they handle prompt injection attempts, jailbreaks, and adversarial inputs? Do they provide content filtering, and is it configurable? What is their process for handling outputs that cause harm?
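If the vendor's filtering is not configurable, you may need a screening layer of your own between the model and your users. The sketch below is illustrative only: the deny patterns are examples, and regex alone will not stop a determined adversary; production guardrails layer classifiers, allow-lists, and human review on top of checks like this.

    import re

    # Illustrative patterns only; tune these to your own risk profile.
    DENY_PATTERNS = [
        re.compile(r"(?i)ignore (all |previous )?instructions"),  # injection echo
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN shape
    ]

    def screen_output(text: str) -> tuple[bool, list[str]]:
        """Return (allowed, matched_patterns) for a model output."""
        hits = [p.pattern for p in DENY_PATTERNS if p.search(text)]
        return (not hits, hits)

    allowed, hits = screen_output("Sure. First, ignore previous instructions and ...")
    if not allowed:
        # Log and route to review rather than silently dropping the output.
        print(f"Blocked output; matched: {hits}")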
Red Flags in Vendor Evaluations
Some signals should trigger additional scrutiny or disqualification.
A vendor that cannot clearly answer whether your data is used for training — or that provides vague, non-committal language in their privacy policy — is a vendor whose data practices are ambiguous by design. Ambiguity here benefits the vendor, not you.
Vendors that resist contractual data processing agreements or push back on specific data handling commitments may not be able to make those commitments because their upstream dependencies do not allow it. That is a supply chain issue worth understanding before you sign.
SOC 2 Type I reports, as opposed to Type II, cover the design of controls, not their operational effectiveness over time. A vendor presenting only a Type I report is presenting less assurance than the label might suggest.
Watch for vendors whose security documentation was clearly written to check a compliance box rather than to communicate genuine practice. Generic security white papers that do not address specifics, certifications in progress with no timeline, and security questionnaire responses that reference policies rather than controls are all indicators of an immature security program.
Lastly, be cautious of vendors who cannot explain their model's limitations. A vendor that overstates accuracy, refuses to discuss failure modes, or does not maintain documentation on model behavior is a vendor that has not thought carefully about how their product will perform in your environment.
What Your Contracts Need to Include
Standard vendor contracts are not sufficient for AI. Your legal and security teams should negotiate for the following provisions in particular.
Data processing agreement (DPA). This should specify that your data is not used for model training, establish data retention limits, define breach notification timelines, and identify the sub-processors the vendor uses — including their model provider.
Model version control. If output consistency matters for your use case, negotiate the right to pin to a model version or receive advance notice before model updates that could affect behavior. This is more achievable than vendors often suggest, particularly for enterprise contracts.
Right to audit. You should have contractual rights to request audit reports, penetration test summaries, and security questionnaire responses on a defined schedule. Some vendors will resist a full right-to-audit clause, but the ability to request documentation annually is a reasonable baseline.
Termination and data deletion. Define what happens to your data if you terminate the contract. The vendor should commit to deletion within a defined period and provide confirmation.
Liability provisions. Understand what the vendor is willing to indemnify if their model produces an output that causes harm or exposes your organization to regulatory action. Many AI vendor contracts include broad disclaimers of liability for model outputs. Know what you are accepting.
Building This Into Your Governance Program
The NIST AI RMF Govern function emphasizes that AI risk management requires ongoing attention, not a one-time evaluation. Vendor assessments should be part of an annual review cycle, not just an onboarding checklist.
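Even a small script can keep that cycle honest by flagging vendors whose last assessment has gone stale. A minimal sketch; the vendor records and the twelve-month interval are illustrative.

    from datetime import date, timedelta

    REVIEW_INTERVAL = timedelta(days=365)

    # Illustrative records: vendor name -> date of last completed assessment.
    last_reviewed = {
        "ExampleAI": date(2024, 1, 15),
        "OtherVendor": date(2025, 6, 1),
    }

    def overdue_vendors(today: date | None = None) -> list[str]:
        """Vendors whose last review is older than the review interval."""
        today = today or date.today()
        return [v for v, d in last_reviewed.items() if today - d > REVIEW_INTERVAL]

    print(overdue_vendors(today=date(2025, 9, 1)))   # ['ExampleAI']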
ISO 42001, the international standard for AI management systems, and NIST SP 800-161 (Supply Chain Risk Management) both provide frameworks for integrating vendor evaluation into broader AI governance. Using these frameworks gives your program a defensible, auditable structure.
If you are building out an AI vendor risk management process or preparing for a regulatory inquiry that involves third-party AI tools, Ayliea's assessment services can help you identify gaps and build a review process proportionate to your risk environment.
