From Model Licence to Product ToS: What You Must Reflect
How to Build Your Product Terms of Service Around AI Model Licence Restrictions
The licence you accept from Meta, Google, OpenAI, or Mistral shapes every clause in your own Terms of Service — from prohibited uses and export controls to your liability exposure and data processing obligations. This guide maps the connection between model licences and the product-facing legal documents that depend on them.
Introduction — Your Terms of Service Are Downstream of Your Model Licence
Most AI product founders treat Terms of Service and Data Processing Agreements as generic legal documents to be templated and adapted. When the product's core capability is delivered by an AI model running under a third-party licence, that approach creates a structural gap: the model licence imposes obligations that must flow into the product's legal framework, and every section of the ToS that touches what users can do with the product is, at its legal foundation, a restatement of the model provider's terms.
This is not a theoretical risk. When OpenAI's API terms prohibit specific use categories, your ToS must prohibit the same categories — and if a user breaches your ToS in a way that also breaches the OpenAI API terms, your product is the contractual breach point between OpenAI and the violating user. The same applies to Llama 3's competitor restriction, Gemma's Prohibited Use Policy, and Mistral API's acceptable use standards.
Three Gaps That Create Legal Exposure
Model restrictions not reflected in product ToS
If your model provider prohibits a category of use and your ToS does not, a user who engages in that use is not in breach of your ToS — but your product is still in breach of the model provider's terms. You absorb the compliance risk without a contractual mechanism to recover it.
Liability exposure not capped at the model layer
Model provider agreements typically limit the provider's liability to you. If your ToS does not pass through appropriate limitations to your customers, you face unlimited liability to customers for harms ultimately caused by the model's outputs — with no equivalent indemnity from the model provider. The asymmetry is your problem to fix contractually.
DPA data flows inconsistent with model provider terms
If your product involves personal data processed through a model API, the DPA you sign with your customers must be consistent with the sub-processor terms your model provider requires. Inconsistencies between your customer-facing DPA and your model provider's data processing terms create GDPR and CCPA compliance exposure that cannot be fixed without model provider cooperation.
The Licence Cascade — How Model Obligations Flow to Your Customers
OpenAI API Terms, Llama 3 Community License, Gemma Terms of Use, or Mistral API Terms establish use-case restrictions, data handling requirements, export control compliance obligations, and liability limitations that bind your product as the licensee.
Your ToS, DPA, and acceptable use policy must reflect all model-level restrictions. You cannot grant users rights you do not have. Every permission your ToS gives customers is constrained by the permissions your model licence gives you. Every obligation your model provider places on you must be mirrored in what you require of your customers.
Your customers' permitted uses are a subset of your permitted uses under the model licence. If you fail to pass through obligations correctly, customers may operate outside model licence terms without breaching your ToS — leaving your product as the sole point of contractual liability between the model provider and the end user.
Four Dimensions Where Model Licence Directly Shapes Product Legal Documents
Prohibited use categories
Model AUP/PUP categories must be reproduced or exceeded in your product ToS and acceptable use policy
Export controls
Model provider export compliance obligations bind your product globally — your ToS must restrict use in sanctioned territories
Liability and indemnity
Provider-to-you liability caps must be reflected downstream; gaps between layers create asymmetric exposure for your product
Data processing obligations
Model provider sub-processor terms, data retention limits, and training consent controls shape your customer-facing DPA commitments
Note on IP and licensing: The question of what rights your product holds in the outputs generated by a licensed model — and how those rights interact with customer IP warranties in your ToS — is part of a broader AI IP ownership framework. For background on how model licence choice affects IP ownership and investment structuring, see AI IP Ownership — wcr.legal.
Section 1 — Translating Model Licence Restrictions into Your Product ToS
Every model provider's acceptable use policy (AUP) or prohibited use policy (PUP) is a list of use cases you are not permitted to enable. When your product delivers capabilities powered by that model, the same list must appear — at minimum — in your own product Terms of Service. Failing to include it means your product absorbs liability for prohibited uses without a contractual mechanism to enforce compliance or pass back responsibility to the user.
Meta — Llama 3 Community License AUP
Competitor restriction + standard prohibited uses + 700M MAU commercial threshold
The Llama 3 licence prohibits use in products that directly compete with Meta's core businesses. If your product is a messaging application, social networking platform, or AI assistant in those categories, the AUP restriction must be reflected in your ToS as a category your customers cannot use your product for — not just as an internal operational limit. Additionally, the 700M MAU clause is a commercial condition that must be disclosed in your ToS if your growth projections approach that threshold, since it would affect service continuity for your customers.
Google — Gemma Prohibited Use Policy (18 categories)
Flow-down obligation requires contractual pass-through to every downstream user
Gemma's Terms of Use include an explicit flow-down requirement: every product built on Gemma must ensure its users are contractually prohibited from the same 18 categories of use Google prohibits. This means your ToS cannot simply be silent on these categories — they must be explicitly reproduced or incorporated by reference in your product's acceptable use policy, and your enterprise customer contracts must include equivalent restrictions. For B2B SaaS products, this translates to a required amendment to standard MSA templates before any Gemma-powered feature can be delivered.
OpenAI — API Terms & Usage Policies
Usage policies bind every product built on the API; violations by users create API Terms risk
OpenAI's API Terms of Service require that developers who build on the API take responsibility for ensuring downstream uses comply with OpenAI's usage policies. This is a direct pass-through obligation: you are responsible for your users' compliance, not just your own. Products that allow users to generate content using the OpenAI API must include OpenAI's prohibited use categories in their ToS, implement usage monitoring, and take action against users who violate those policies. OpenAI retains the right to suspend API access if usage patterns indicate policy violations by any user of a platform built on the API.
Mistral AI — API Terms & Acceptable Use Policy
Permissive AUP with no competitor restriction or flow-down obligation
Mistral's API acceptable use policy is narrower than OpenAI's or Gemma's — it covers clearly illegal and harmful uses but does not include the competitor restriction found in Llama 3 or the 18-category PUP found in Gemma. For products using Mistral's API, the ToS pass-through obligation is lighter: a standard prohibited use clause covering illegal content, violence, and CSAM satisfies the Mistral API terms. For open-weight Mistral models used under Apache-2.0, the position is even cleaner — no AUP pass-through obligation exists in the licence itself, though best practice recommends maintaining a standard product AUP regardless.
Export Control — How Model Provider Obligations Flow into Your ToS
US EAR, OFAC sanctions, and EU dual-use regulations create ToS requirements that often go unaddressed
What model providers require
OpenAI, Meta (Llama 3), and Google (Gemma) all require licensees to comply with applicable export control laws — specifically the US Export Administration Regulations (EAR) and OFAC sanctions programmes. Meta's Llama 3 licence explicitly states that the model may not be used, exported, or re-exported to sanctioned countries or restricted parties.
This means if your product makes a Llama 3 or Gemma model accessible via API, you are contractually required to ensure no end user in a sanctioned jurisdiction accesses it. That obligation does not automatically flow to your customers unless your ToS says so.
What your ToS must include
Your product ToS should include a dedicated export control clause that: (a) restricts use to users not located in or acting on behalf of sanctioned countries (currently including Cuba, Iran, North Korea, Russia, Syria, and specific Crimea/Donetsk regions); (b) requires users to warrant they are not on restricted party lists (SDN, Denied Persons, Entity List); and (c) places compliance responsibility on the user while preserving your right to terminate access on suspicion of violation.
For B2B enterprise contracts, the export compliance clause must also include a representation by the enterprise customer that their sub-users are equally compliant — a critical pass-through for AI products with multi-tenant access.
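The operational counterpart to clauses (a)–(c) is a gating check at sign-up or API-access time. The sketch below is a minimal illustration, not a compliance tool: the `check_export_compliance` helper and the country list are hypothetical, and any production screening must use current OFAC/EAR data and a proper restricted-party screening service.

```python
# Illustrative embargoed-country codes only (ISO 3166-1 alpha-2).
# NOT an authoritative list: consult current OFAC/EAR guidance.
EMBARGOED_COUNTRIES = {"CU", "IR", "KP", "SY"}

def check_export_compliance(country_code: str, on_restricted_list: bool) -> bool:
    """Return True if access may proceed under the ToS export clause.

    country_code:       user's declared or geolocated country (alpha-2)
    on_restricted_list: result of upstream screening against SDN /
                        Denied Persons / Entity List data
    """
    if country_code.upper() in EMBARGOED_COUNTRIES:
        return False  # clause (a): sanctioned territory
    if on_restricted_list:
        return False  # clause (b): restricted-party warranty fails
    return True       # clause (c): residual compliance duty sits with the user

# Examples
assert check_export_compliance("DE", on_restricted_list=False)
assert not check_export_compliance("IR", on_restricted_list=False)
assert not check_export_compliance("FR", on_restricted_list=True)
```

The check deliberately fails closed: either trigger (territory or party list) blocks access, mirroring the termination right the ToS clause reserves.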
Model Restriction → ToS Clause Mapping
Section 2 — Your Liability Towards the Model Provider vs Your Liability Towards Your Customers
AI products operate within a three-layer liability structure: the model provider (who sets the limits of what the model can be used for), the product (which sits between the provider and the end user), and the customer (who relies on the product to perform as described and within applicable legal limits). Each layer owes obligations to the next, and the critical commercial risk is the gap between what the model provider limits your liability to and what your customers can hold you liable for.
The asymmetry between layers is structural: model providers typically cap their liability to you at the fees you paid in recent months, while your customers may hold you liable for losses that are orders of magnitude larger. Without deliberate ToS design, your product bears the full exposure in the middle of that stack.
Three Critical Liability Gaps Between the Model Layer and Your ToS
Gap 1 — Service availability and API dependency
Your model provider can suspend your API access for any breach of their terms — including breaches committed by your customers. Your ToS must include a force majeure or service dependency clause that explicitly excludes liability for downtime or degradation caused by third-party AI service provider actions. Without this clause, you are liable to customers for outages you cannot prevent or predict.
Gap 2 — Output accuracy and reliance
Model providers disclaim all warranties on the accuracy or reliability of model outputs. Your ToS must include equivalent disclaimers covering AI-generated content: that outputs are not professional advice, that customers must independently verify outputs before acting on them, and that the product does not warrant the accuracy of any AI-generated information. Absence of these disclaimers makes your product liable for customer losses arising from incorrect model outputs — regardless of the model's technical performance.
Gap 3 — IP indemnity and model output ownership
Some model providers (OpenAI) offer limited copyright indemnity for outputs generated by their API used within their policies. Others (Llama 3, Gemma) do not. Your customer-facing IP indemnity clause must be scoped to match what your model provider actually covers — offering broader IP indemnity than your provider backs creates unhedged exposure. Enterprise customers frequently request IP indemnity for AI-generated content; your response must be calibrated to your provider's actual indemnity position.
Liability Allocation by Type — Provider, Product, Customer
Section 3 — ToS and DPA Sections That Directly Depend on the Underlying Model
Not every clause in your product Terms of Service is model-dependent. Subscription fees, invoicing, jurisdiction, and dispute resolution can be drafted independently of the AI model powering the product. But four categories of ToS and DPA content are directly constrained by the underlying model licence and provider agreement: the acceptable use policy, the data processing agreement (particularly sub-processor disclosures), the data retention and deletion terms, and the IP ownership and output warranty provisions.
Switching AI model providers after launch — from OpenAI API to Mistral API, or from Gemma to Llama 3 — may require updating all four of these document sections. Organisations that treat their ToS and DPA as static post-launch documents expose themselves to gaps that emerge whenever the underlying model relationship changes.
Acceptable Use Policy — Driven by model AUP/PUP
Must reproduce all model-level prohibited categories and include enforcement mechanism
Your product's Acceptable Use Policy (AUP) is the primary contractual mechanism through which model-level prohibited uses flow to your customers. Its content is directly constrained by whichever model provider's AUP is most restrictive in your stack. If you use OpenAI API for one feature and Gemma for another, your product AUP must cover the prohibited use categories of both providers, since a user violating either provider's terms creates compliance risk for your product.
The AUP must also include an enforcement mechanism: a right to suspend or terminate customer access on detection or reasonable suspicion of prohibited use, without liability for wrongful suspension. This is required by OpenAI's API Terms (developer responsibility for user compliance) and is best practice for all custom-licence model deployments.
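Where a stack combines providers, the product AUP is in effect the union of each provider's prohibited categories, never the intersection. The category labels below are abbreviated stand-ins for the providers' actual lists, which must be taken from their current policy documents:

```python
# Hypothetical, abbreviated category labels: stand-ins for the
# providers' actual prohibited-use lists.
OPENAI_PROHIBITED = {"illegal_activity", "csam", "weapons_development"}
GEMMA_PROHIBITED = {"illegal_activity", "csam", "harassment", "deceptive_impersonation"}

# The product AUP must cover every category prohibited by ANY
# provider in the stack: the set union.
PRODUCT_AUP = OPENAI_PROHIBITED | GEMMA_PROHIBITED

def violates_product_aup(use_category: str) -> bool:
    return use_category in PRODUCT_AUP

# A category restricted by only one provider is still banned product-wide.
assert violates_product_aup("harassment")
assert violates_product_aup("weapons_development")
assert not violates_product_aup("customer_support")
```

The same union logic applies when a provider is added or swapped post-launch: the product AUP must be recomputed, and any newly covered categories must flow into customer-facing terms before the feature ships.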
Data Processing Agreement — Sub-processor disclosures and data handling
Model API providers must be listed as sub-processors; their DPA terms constrain your customer commitments
Under GDPR Article 28 and equivalent data protection laws, a data controller who uses a processor (your product) must be informed of every sub-processor the processor engages. If your product transmits personal data to an OpenAI, Mistral, or Anthropic API during processing, those providers are sub-processors and must be listed in your DPA. Failure to disclose them — and to obtain the necessary consent from your customers — is a GDPR compliance breach regardless of whether any data incident occurs.
The constraints run in both directions: your DPA commitments to customers are bounded by the DPA terms you can actually obtain from your model provider. If OpenAI's data processing addendum limits their commitments to GDPR Standard Contractual Clauses without additional safeguards, and your customer's DPA requires binding corporate rules or equivalent protections, you face a gap that cannot be closed without model provider cooperation or a change in model provider.
Data Retention and Training Use — Shaped by model provider data policies
Whether prompts and outputs are retained or used for model training varies by provider and must be disclosed
A critical question that enterprise customers ask about AI products is: "Does our data get used to train your AI model?" The answer depends entirely on your model provider's policies, which differ significantly. OpenAI's API (with a data processing agreement) does not use customer data to train models by default. Gemma fine-tuning pipelines may use customer data depending on how the product is architected. Mistral's API terms allow customers to opt out of data use for model improvement. Your product's data retention section must accurately reflect which provider-specific policies apply — and enterprise customer DPAs will frequently require this to be represented as a warranty.
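One way to keep the ToS data-retention section tied to the provider actually in use is a per-provider policy record consulted whenever customer-facing disclosures are generated. The values below merely restate the positions described above and are assumptions to be verified against each provider's current terms before relying on them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderDataPolicy:
    trains_on_customer_data_by_default: bool
    opt_out_available: bool
    notes: str

# Assumed positions -- verify against each provider's current DPA and
# API terms; these records restate the article's summary, nothing more.
PROVIDER_POLICIES = {
    "openai_api": ProviderDataPolicy(
        False, True,
        "API data not used for model training by default under the DPA"),
    "mistral_api": ProviderDataPolicy(
        True, True,
        "Customers may opt out of data use for model improvement"),
}

def training_disclosure(provider: str) -> str:
    """Generate the customer-facing training-use statement."""
    policy = PROVIDER_POLICIES[provider]
    if not policy.trains_on_customer_data_by_default:
        return "Customer data is not used to train models by default."
    if policy.opt_out_available:
        return "Customer data may be used for model improvement unless opted out."
    return "Customer data may be used for model improvement."

assert "not used" in training_disclosure("openai_api")
```

Keeping the disclosure generated from a single record makes a provider switch a one-line change that is impossible to forget in the DPA, which matters once the statement is represented as a warranty.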
IP Ownership of Outputs — Determined by model licence and copyright law
Output ownership representations in your ToS must reflect what the model licence actually permits you to claim
Most enterprise customers expect their product ToS to confirm that AI-generated outputs produced within the platform belong to them. The extent to which you can make that warranty depends on: (a) what the model licence says about output ownership; (b) what copyright law in the relevant jurisdiction grants to the customer as the prompter; and (c) whether the model's outputs might reproduce third-party copyrighted material.
OpenAI's API terms state that outputs belong to the customer, subject to usage policy compliance. Llama 3's licence confirms that outputs belong to the user, but does not provide any copyright indemnity. Gemma's Terms of Use do not explicitly address output ownership in the same way. Your ToS clause must match the actual legal position — representing outputs as the customer's property when the legal foundation for that representation is uncertain creates IP warranty exposure.
DPA Obligations by Model Provider — Summary Matrix
Section 4 — Audit Trail: How to Demonstrate Compliance with the Model Licence
Model licence compliance is not a one-time activity at product launch. It is an ongoing operational obligation that must be demonstrable — to model providers if queried, to enterprise customers performing vendor due diligence, to investors and acquirers reviewing the product's IP and legal risk stack, and in the event of a dispute or regulatory inquiry. Building a compliance audit trail is the operational infrastructure that converts your ToS and DPA commitments into evidence.
The audit trail requirement has become more acute as enterprise procurement teams develop AI-specific vendor questionnaires and as regulators in the EU (AI Act), UK, and US begin to formalise requirements around AI system documentation. A product that can demonstrate model licence compliance with contemporaneous records is in a materially stronger position than one that relies on assertions without documentation.
Four Pillars of a Model Licence Compliance Audit Trail
Pillar 1 — Documentation
Static records of licence terms, provenance, and ToS decisions
Compliance documentation establishes the legal baseline: what terms you accepted, when you accepted them, and what product decisions those terms informed.
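A model dependency register, one of the core artefacts of this pillar, can be as simple as one structured record per model relationship, each mapping the accepted terms to the ToS clauses they shaped. The field values below are illustrative placeholders, not real records:

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class ModelDependency:
    """One entry in the model dependency register (pillar 1)."""
    model_name: str              # identifier of the licensed model
    licence_name: str            # governing licence / terms document
    licence_accepted_on: datetime.date
    licence_version_ref: str     # pointer to the archived copy of the terms
    tos_clauses_affected: list = field(default_factory=list)

# Hypothetical entry -- values are illustrative only.
register = [
    ModelDependency(
        model_name="example-hosted-model",
        licence_name="Provider API Terms of Service",
        licence_accepted_on=datetime.date(2024, 1, 15),
        licence_version_ref="archive/provider-terms-2024-01-15.pdf",
        tos_clauses_affected=[
            "AUP s.3", "Export control s.9", "DPA sub-processor schedule"],
    ),
]

# Every dependency should map to at least one product-side clause.
assert all(entry.tos_clauses_affected for entry in register)
```

The archived-copy reference matters: providers update their terms, so the register must record which version was accepted and when, not merely that acceptance occurred.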
Pillar 2 — Technical controls
System-level enforcement of use restrictions with logged evidence
Technical controls convert policy commitments into enforceable system behaviours, generating audit logs as a by-product of normal operation.
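A control that generates its own evidence might wrap every enforcement decision in a structured, timestamped log entry. The sketch below uses a hypothetical `enforce_and_log` wrapper and an in-memory list standing in for an append-only store (file, object storage, or SIEM) in production:

```python
import json
import datetime

def enforce_and_log(user_id: str, use_category: str,
                    prohibited: set, log: list) -> bool:
    """Apply an AUP check and append a contemporaneous audit record.

    Returns True if the request is allowed. Each decision, allow or
    block, produces a JSON-lines record as a by-product.
    """
    allowed = use_category not in prohibited
    log.append(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "category": use_category,
        "decision": "allow" if allowed else "block",
    }))
    return allowed

audit_log: list = []
assert not enforce_and_log("u1", "csam", {"csam"}, audit_log)
assert enforce_and_log("u2", "drafting", {"csam"}, audit_log)
assert json.loads(audit_log[0])["decision"] == "block"
```

Because allowed requests are logged as well as blocked ones, the record demonstrates that the control ran continuously, not only when a violation was caught.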
Pillar 3 — Process records
Ongoing operational evidence of compliance monitoring and enforcement
Process records demonstrate that compliance is actively managed — not just documented at launch and left static.
Pillar 4 — Periodic review
Scheduled reassessment of model licence, ToS alignment, and risk exposure
Model licences are not static. Providers update their terms, expand AUP categories, and change sub-processor data policies. A scheduled review cycle converts periodic checks into audit evidence.
What Auditors and Due Diligence Teams Will Ask — Evidence Mapping
Your ToS Is Only as Strong as the Model Licence It Reflects
Terms of Service and Data Processing Agreements for AI products cannot be treated as boilerplate legal documents with AI-themed language added on top. Every section that touches what users can do, what data flows where, who owns outputs, and who is liable for what is directly shaped by the model licence — and must be kept consistent with it as both the product and the licence evolve.
The practical framework is a three-part programme: first, translate every model restriction into an equivalent ToS obligation before the product goes live; second, close the liability gaps between what your model provider covers and what your customers can hold you responsible for; third, build the documentation, technical controls, and review processes that convert your commitments into contemporaneous evidence.
Organisations that complete this programme are not just legally protected — they are commercially differentiated. Enterprise procurement teams have become significantly more sophisticated about AI vendor due diligence. A product that can produce a complete, dated compliance file — model dependency register, AUP alignment map, sub-processor schedule, violation incident log, and review records — shortens procurement cycles and removes the most common objections to approving AI tools in regulated industries.
For the broader framework on how model licence choice affects IP ownership and investment structuring, see AI IP Ownership — wcr.legal.


