AI Vendor Risk Assessment: Evaluating AI Tools for Enterprise Use
April 17, 2026
AI tools introduce vendor risks that traditional assessment frameworks were not designed to catch. When employees use ChatGPT, GitHub Copilot, or Google Gemini, they may be sending proprietary code, customer data, or strategic plans to third-party models — often without IT's knowledge. A 2025 survey found that 68% of knowledge workers use AI tools at work, but only 24% of organizations have formal AI vendor assessment processes. The gap between adoption and governance creates data exposure, compliance violations, and intellectual property risks that standard security questionnaires do not address.
Unique risks of AI vendors
AI vendors present risk categories that do not map cleanly to traditional TPRM frameworks.
Data training risk. Some AI providers use customer inputs to train future models. This means proprietary data entered into the tool may influence model outputs for other users. OpenAI's default ChatGPT terms allow training on inputs unless the user opts out or uses the API with data usage controls. Enterprise agreements (ChatGPT Enterprise, Copilot for Business) typically include contractual no-training commitments — but the default consumer tiers do not.
Prompt injection and data extraction. Adversarial prompts can cause AI models to reveal training data, bypass safety filters, or execute unintended actions. For AI tools integrated into workflows (Copilot in IDE, AI in CRM), prompt injection could expose data from connected systems.
Model hallucination and accuracy risk. AI models generate plausible but incorrect outputs. In legal, financial, or compliance contexts, hallucinated facts create liability. Vendors differ in how they address hallucination — some offer grounding, citation, or confidence scoring; others do not.
Shadow AI. Employees adopting free-tier AI tools without IT approval is the most immediate risk. Shadow AI bypasses procurement review, data classification, and access controls. For more on AI data usage risk, see our dedicated guide.
What to evaluate in an AI vendor assessment
Your AI vendor assessment should cover these categories beyond standard security checks:
Data handling and training policies. Does the vendor train on customer inputs? Under which product tiers? Can training be disabled? Is there a data processing agreement (DPA) with explicit AI-specific terms? Review the vendor's terms of service carefully — marketing claims about privacy may not match contractual language.
Data residency and model hosting. Where are prompts processed? Are they logged? For how long? Some vendors route requests through multiple regions. If you handle data subject to GDPR, HIPAA, or data sovereignty requirements, model hosting location matters.
Output ownership and IP. Who owns AI-generated outputs? Can outputs be used commercially? Are there indemnification provisions for IP infringement claims related to AI-generated content?
Security architecture. How is the API secured? What authentication mechanisms are available? Does the vendor support SSO, SCIM provisioning, and audit logging? Can administrators control which features are available to users?
Subprocessor transparency. AI vendors often rely on cloud infrastructure providers, data labeling services, and third-party model components. A clear subprocessor list is essential for understanding your fourth-party AI risk.
Trying to verify a vendor's compliance right now?
ThirdProof runs the investigation in an average of 7 minutes — 27 sources, audit-ready PDF, and 133 security questions auto-filled.
Run a Free Investigation →AI vendor comparison: data usage policies
The major AI vendors differ significantly in their data handling:
[OpenAI](/vendors/openai) (ChatGPT) — Free and Plus tiers use inputs for training by default. ChatGPT Enterprise and API (with zero data retention) do not train on inputs. SOC 2 Type II certified for Enterprise tier.
[Anthropic](/vendors/anthropic) (Claude) — Does not train on customer inputs via API or business tiers. Consumer conversations may be used for safety research with personally identifiable information removed. SOC 2 Type II certified.
[Google](/vendors/google-gemini) (Gemini) — Workspace versions (Gemini for Google Workspace) do not use inputs for training. Free Gemini tier data policies are less restrictive. Backed by Google Cloud's extensive compliance portfolio.
[Microsoft](/vendors/microsoft-copilot) (Copilot) — Microsoft 365 Copilot processes data within the Microsoft 365 compliance boundary. Does not train foundation models on customer data. Inherits Microsoft's compliance certifications.
[Perplexity](/vendors/perplexity) — Enterprise tier includes no-training provisions. Searches the web in real-time, introducing additional data flow considerations.
[Cursor](/vendors/cursor) — AI code editor. Privacy mode available to prevent code from being stored or used for training. Enterprise agreements include additional data protections.
Shadow AI: the risk you cannot assess
The biggest AI vendor risk may be the tools your organization does not know about. Shadow AI — employees using personal accounts on free AI tools — bypasses every control in your TPRM program. An employee pasting a customer list into ChatGPT's free tier, uploading a contract to an AI summarizer, or using an AI coding assistant without IT approval creates unmonitored data exposure.
Detection approaches: Network monitoring for known AI tool domains, browser extension audits, CASB (Cloud Access Security Broker) integration, and regular employee surveys about tool usage. Governance approaches: Provide approved AI tools with appropriate data protections so employees do not seek unauthorized alternatives. Create clear acceptable use policies that acknowledge AI tools specifically. Make the approval process for new AI tools fast enough that employees do not route around it.
For a broader view of how autonomous assessment fits into your TPRM program, ThirdProof can assess AI vendors alongside your entire vendor portfolio — the same evidence-based approach applied to this emerging risk category.
Building an AI vendor assessment framework
Start with these steps to formalize AI vendor governance:
1. Inventory existing AI usage. Survey departments, review expense reports for AI subscriptions, and check network logs for AI tool domains. You cannot assess what you have not identified.
2. Classify AI tools by data exposure. Tier 1: AI tools that process regulated data (PHI, PII, financial data). Tier 2: AI tools that process proprietary data (code, strategy, internal communications). Tier 3: AI tools with minimal data exposure (design tools, scheduling assistants).
3. Assess each tool. Use ThirdProof for the independent evidence layer — sanctions screening, security posture, compliance verification, and adverse media — then supplement with AI-specific checks on data training policies, output ownership, and model governance.
4. Establish ongoing monitoring. AI vendor policies change frequently. Set quarterly reviews for data handling terms and monitor for policy changes, security incidents, and regulatory actions targeting AI companies.
Frequently asked questions
Does ChatGPT train on my company's data?+
How do I assess shadow AI risk?+
Should AI vendors be assessed differently than other SaaS?+
Which AI vendors have SOC 2 certification?+
Can ThirdProof assess AI vendors?+
Stop chasing vendors for questionnaires.
ThirdProof delivers a complete vendor risk report and pre-filled security questionnaire in minutes, not months — without contacting the vendor. Try it free with 5 investigations.
Start Free Trial →No credit card required