
AI Vendor Risk Assessment: Evaluating AI Tools for Enterprise Use

April 17, 2026

AI tools introduce vendor risks that traditional assessment frameworks were not designed to catch. When employees use ChatGPT, GitHub Copilot, or Google Gemini, they may be sending proprietary code, customer data, or strategic plans to third-party models — often without IT's knowledge. A 2025 survey found that 68% of knowledge workers use AI tools at work, but only 24% of organizations have formal AI vendor assessment processes. The gap between adoption and governance creates data exposure, compliance violations, and intellectual property risks that standard security questionnaires do not address.

Unique risks of AI vendors

AI vendors present risk categories that do not map cleanly to traditional TPRM frameworks.

Data training risk. Some AI providers use customer inputs to train future models. This means proprietary data entered into the tool may influence model outputs for other users. OpenAI's default ChatGPT terms allow training on inputs unless the user opts out or uses the API with data usage controls. Enterprise agreements (ChatGPT Enterprise, Copilot for Business) typically include contractual no-training commitments — but the default consumer tiers do not.

Prompt injection and data extraction. Adversarial prompts can cause AI models to reveal training data, bypass safety filters, or execute unintended actions. For AI tools integrated into workflows (Copilot in IDE, AI in CRM), prompt injection could expose data from connected systems.

Model hallucination and accuracy risk. AI models generate plausible but incorrect outputs. In legal, financial, or compliance contexts, hallucinated facts create liability. Vendors differ in how they address hallucination — some offer grounding, citation, or confidence scoring; others do not.

Shadow AI. Employee adoption of free-tier AI tools without IT approval is the most immediate risk. Shadow AI bypasses procurement review, data classification, and access controls. For more on AI data usage risk, see our dedicated guide.

What to evaluate in an AI vendor assessment

Your AI vendor assessment should cover these categories beyond standard security checks:

Data handling and training policies. Does the vendor train on customer inputs? Under which product tiers? Can training be disabled? Is there a data processing agreement (DPA) with explicit AI-specific terms? Review the vendor's terms of service carefully — marketing claims about privacy may not match contractual language.

Data residency and model hosting. Where are prompts processed? Are they logged? For how long? Some vendors route requests through multiple regions. If you handle data subject to GDPR, HIPAA, or data sovereignty requirements, model hosting location matters.

Output ownership and IP. Who owns AI-generated outputs? Can outputs be used commercially? Are there indemnification provisions for IP infringement claims related to AI-generated content?

Security architecture. How is the API secured? What authentication mechanisms are available? Does the vendor support SSO, SCIM provisioning, and audit logging? Can administrators control which features are available to users?

Subprocessor transparency. AI vendors often rely on cloud infrastructure providers, data labeling services, and third-party model components. A clear subprocessor list is essential for understanding your fourth-party AI risk.

Trying to verify a vendor's compliance right now?

ThirdProof runs the investigation in an average of 7 minutes — 27 sources, audit-ready PDF, and 133 security questions auto-filled.

Run a Free Investigation →

AI vendor comparison: data usage policies

The major AI vendors differ significantly in their data handling:

[OpenAI](/vendors/openai) (ChatGPT) — Free and Plus tiers use inputs for training by default. ChatGPT Enterprise and API (with zero data retention) do not train on inputs. SOC 2 Type II certified for Enterprise tier.

[Anthropic](/vendors/anthropic) (Claude) — Does not train on customer inputs via API or business tiers. Consumer conversations may be used for safety research with personally identifiable information removed. SOC 2 Type II certified.

[Google](/vendors/google-gemini) (Gemini) — Workspace versions (Gemini for Google Workspace) do not use inputs for training. Free Gemini tier data policies are less restrictive. Backed by Google Cloud's extensive compliance portfolio.

[Microsoft](/vendors/microsoft-copilot) (Copilot) — Microsoft 365 Copilot processes data within the Microsoft 365 compliance boundary. Does not train foundation models on customer data. Inherits Microsoft's compliance certifications.

[Perplexity](/vendors/perplexity) — Enterprise tier includes no-training provisions. Searches the web in real-time, introducing additional data flow considerations.

[Cursor](/vendors/cursor) — AI code editor. Privacy mode available to prevent code from being stored or used for training. Enterprise agreements include additional data protections.

Shadow AI: the risk you cannot assess

The biggest AI vendor risk may be the tools your organization does not know about. Shadow AI — employees using personal accounts on free AI tools — bypasses every control in your TPRM program. An employee pasting a customer list into ChatGPT's free tier, uploading a contract to an AI summarizer, or using an AI coding assistant without IT approval creates unmonitored data exposure.

Detection approaches: network monitoring for known AI tool domains, browser extension audits, CASB (Cloud Access Security Broker) integration, and regular employee surveys about tool usage.

Governance approaches: provide approved AI tools with appropriate data protections so employees do not seek unauthorized alternatives, create clear acceptable use policies that address AI tools specifically, and make the approval process for new AI tools fast enough that employees do not route around it.
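As a minimal sketch of the network-monitoring approach: the domain list below is illustrative, not exhaustive, and the log format is an assumption — adapt both to your own proxy or DNS export.

```python
# Sketch: flag proxy/DNS log entries that hit known AI tool domains.
# The domain list is illustrative, not exhaustive -- maintain your own.
AI_TOOL_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com",
    "perplexity.ai", "cursor.com",
}

def flag_ai_traffic(log_lines):
    """Return (user, domain) pairs for requests to known AI domains.

    Assumes whitespace-separated log lines of the form
    "<timestamp> <user> <domain> ..." -- adapt to your proxy's format.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2].lower()
        # Match the listed domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in AI_TOOL_DOMAINS):
            hits.append((user, domain))
    return hits
```

Running this against a day of proxy logs gives you a per-user view of which AI services are actually in use — the raw input for the inventory step of a shadow-AI program.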

For a broader view, ThirdProof can assess AI vendors alongside your entire vendor portfolio, applying the same evidence-based approach to this emerging risk category as to the rest of your TPRM program.

Building an AI vendor assessment framework

Start with these steps to formalize AI vendor governance:

1. Inventory existing AI usage. Survey departments, review expense reports for AI subscriptions, and check network logs for AI tool domains. You cannot assess what you have not identified.

2. Classify AI tools by data exposure. Tier 1: AI tools that process regulated data (PHI, PII, financial data). Tier 2: AI tools that process proprietary data (code, strategy, internal communications). Tier 3: AI tools with minimal data exposure (design tools, scheduling assistants).

3. Assess each tool. Use ThirdProof for the independent evidence layer — sanctions screening, security posture, compliance verification, and adverse media — then supplement with AI-specific checks on data training policies, output ownership, and model governance.

4. Establish ongoing monitoring. AI vendor policies change frequently. Set quarterly reviews for data handling terms and monitor for policy changes, security incidents, and regulatory actions targeting AI companies.
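Steps 2 and 4 above can be sketched in code: a tier classifier driven by the data classes a tool touches, and a lightweight check that flags when a vendor's published terms change between reviews. The data-class labels and the local state file are hypothetical — map them to your own classification scheme and tooling, and fetching the policy page (e.g. with your HTTP client of choice) is left out.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical data-class labels; map these to your own scheme.
REGULATED = {"phi", "pii", "financial"}          # Tier 1 triggers
PROPRIETARY = {"code", "strategy", "internal"}   # Tier 2 triggers

def classify_tool(data_classes):
    """Return the highest-risk tier implied by the data classes a tool processes."""
    classes = {c.lower() for c in data_classes}
    if classes & REGULATED:
        return 1  # regulated data: full assessment required
    if classes & PROPRIETARY:
        return 2  # proprietary data: standard AI vendor assessment
    return 3      # minimal exposure: lightweight review

STATE_FILE = Path("policy_hashes.json")  # hypothetical local state file

def policy_changed(vendor, policy_text, state_file=STATE_FILE):
    """Return True if the vendor's policy text differs from the last stored hash."""
    digest = hashlib.sha256(policy_text.encode("utf-8")).hexdigest()
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    changed = state.get(vendor) != digest
    state[vendor] = digest  # record the current version for the next review
    state_file.write_text(json.dumps(state))
    return changed
```

A tool touching any regulated class lands in Tier 1 even if it also handles lower-risk data, and the first sighting of a policy page counts as a change so new vendors get reviewed. A hash comparison only tells you *that* the terms moved, not *what* moved — it is a trigger for a human review, not a substitute for one.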

Frequently asked questions

Does ChatGPT train on my company's data?

It depends on the tier. Free and Plus ChatGPT use inputs to improve models by default (users can opt out). ChatGPT Enterprise and API with zero data retention do not train on inputs. Always review the specific terms for your product tier — marketing claims may differ from contractual language.

How do I assess shadow AI risk?

Combine technical detection (network monitoring for AI tool domains, CASB integration, browser extension audits) with organizational measures (employee surveys, clear acceptable use policies, fast approval processes for new AI tools). The goal is visibility into what tools employees are actually using, not just what IT has approved.

Should AI vendors be assessed differently than other SaaS?

Yes. Standard TPRM frameworks miss AI-specific risks: data training on inputs, prompt injection vulnerabilities, model hallucination, output IP ownership, and rapidly changing terms of service. Layer AI-specific assessment criteria on top of your standard vendor risk assessment process.

Which AI vendors have SOC 2 certification?

OpenAI (Enterprise tier), Anthropic, Microsoft (Copilot via Microsoft 365), and Google (Gemini via Google Cloud) hold SOC 2 Type II certifications. However, SOC 2 does not specifically cover AI-related risks like data training or model safety — it covers the infrastructure and operational controls around the service.

Can ThirdProof assess AI vendors?

Yes. ThirdProof assesses AI vendors the same way it assesses any SaaS vendor — 27 intelligence sources covering sanctions, security posture, compliance verification, breach history, and business legitimacy. For AI-specific risks like data training policies, the assessment flags what is discoverable from public evidence and identifies gaps requiring direct vendor engagement.

Stop chasing vendors for questionnaires.

ThirdProof delivers a complete vendor risk report and pre-filled security questionnaire in minutes, not months — without contacting the vendor. Try it free with 5 investigations.

Start Free Trial →

No credit card required