How to Choose an AI Model for Your Product: Legal and Business Risks of Different Licence Types

🤖 AI Licensing · Legal Risk · 2024–2025

Before you integrate an AI model into your product, you are making a legal decision — not just a technical one. The licence type determines what you can build, who owns the outputs, and who is liable when things go wrong.

6 licence types covered · ~8 min read
📋 In This Guide
1. Why model licence type is a legal decision — IP, liability, and compliance before you write a line of code
2. Proprietary / closed-source API models — OpenAI, Anthropic, Google: what you get and what you give up
3. Open-weight models with usage restrictions — the "open" that is not fully open: Llama, Mistral, and conditions
4. Truly open-source models (Apache 2.0, MIT) — full freedom, and why "free" is not the same as "risk-free"
5. RAIL and responsible AI licences — use restrictions, enforcement, and the new compliance layer
6. Decision framework: matching licence to your product — 5 questions every product team must answer before choosing a model
⚖️ Section 1

Why Model Licence Type Is a Legal Decision

Most product teams evaluate AI models on performance benchmarks, cost per token, and latency. The licence is treated as a legal formality — something to scan quickly before moving on. That is the wrong approach, and it creates real exposure.

The three legal dimensions of any AI model licence
Before you build

When you integrate an AI model into your product — whether through an API, a self-hosted deployment, or a fine-tuned derivative — you are entering a legal relationship that governs your product's IP, your liability exposure, and your regulatory compliance obligations. The licence is the contract. Understanding it before you build is not optional.

🏛️
IP ownership
Who owns the outputs your product generates? Can you patent improvements? Can you use outputs commercially without restriction? The licence determines this.
⚠️
Liability allocation
Who is responsible when the model produces harmful, inaccurate, or discriminatory outputs? Proprietary licences typically disclaim all liability; open-source licences disclaim it too, with no warranty of any kind, so the exposure sits with you either way.
🌐
Regulatory compliance
The EU AI Act, GDPR, and sector-specific rules create compliance obligations that vary by model type. Your licence choice affects which rules apply and what documentation you must maintain.

The regulatory dimension is becoming increasingly concrete. Under the EU AI Act's risk-based framework, products built on high-risk AI systems — regardless of whether the underlying model is proprietary or open-weight — carry specific obligations around transparency, human oversight, and technical documentation. The licence type you choose directly affects how you can satisfy those obligations. See the guide to AI risk and liability frameworks for the interaction between model licensing and product liability under the Act.

Cross-border products face an additional layer. A model licensed under US terms, deployed in an EU product, accessed by users in multiple jurisdictions creates a three-way regulatory intersection that is easy to underestimate at the build stage. The cross-border AI compliance guide covers the interaction between licence jurisdiction and applicable law in more detail.

The four main licence categories in active use today — proprietary API, open-weight with restrictions, truly open-source, and RAIL/responsible AI — each create a different legal profile for your product. Here is what that profile actually means for your business.
🔓 Section 3

Open-Weight Models with Usage Restrictions

The term "open" applied to AI models covers a wide spectrum. Understanding exactly what is open — and what is not — is essential before you build a product on an open-weight model.

⚠️
"Open weights" ≠ open source. Access to model weights does not mean unrestricted commercial use. The licence — not the availability of weights — determines what you can actually do. Read the licence before you deploy.
📂
What "open-weight" actually means
The technical reality

An open-weight model is one where the trained model weights — the numerical parameters that encode the model's capabilities — are made available for download and local deployment. This is distinct from true open-source, which requires the training code, training data, and model architecture to also be available under an open licence.

Open-weight availability gives you something significant: you can deploy the model on your own infrastructure, customise it through fine-tuning, and avoid per-token API costs. You also have more control over data residency, latency, and regulatory compliance — all of which matter for GDPR and EU AI Act compliance documentation.

What open-weight does not give you is freedom from the licence. Llama 3, Mistral, Falcon, and most other prominent open-weight models are released under custom licences — not Apache 2.0 or MIT. Those licences contain material restrictions on commercial use, redistribution, and permitted applications.

📋
The commercial conditions you must read
Where most teams get caught

The most commercially significant restriction in current open-weight licences is the monthly active user (MAU) threshold. Meta's Llama 3 licence, for example, requires a separate commercial licence agreement from Meta if your product exceeds 700 million monthly active users. Below that threshold, commercial use is permitted — but the licence contains other meaningful restrictions.

Use-case prohibitions are the second major restriction category. Most open-weight licences prohibit use for specific application types — weapons development, surveillance systems, voter suppression, and similar high-risk categories. These prohibitions mirror the EU AI Act's prohibited use categories, but are not always coextensive with them. You need to check both the licence and the Act independently.

Attribution and branding requirements are a third area where teams are frequently unprepared. Some open-weight licences require attribution notices in your product's documentation or user-facing content. Failing to comply voids your licence and creates IP infringement exposure.

Model               | Commercial use | Fine-tuning | MAU cap
Llama 3 (Meta)      | Conditional    | Yes         | 700M
Mistral 7B          | Apache 2.0     | Yes         | None
Gemma 2 (Google)    | Conditional    | Yes         | Prohibited uses list
Phi-3 (Microsoft)   | MIT            | Yes         | None
Falcon 180B (TII)   | Custom licence | Conditional | Revenue threshold
💡
Compliance documentation for EU AI Act
If you deploy an open-weight model for a high-risk AI application under the EU AI Act, you carry the full technical documentation and conformity assessment obligations — regardless of whether the model developer has provided any documentation. The model developer's release of weights does not transfer their compliance obligations to you. It transfers the model. The compliance work is yours. See the AI risk and liability framework for what this means in practice.
🆓 Section 4

Truly Open-Source Models: Apache 2.0, MIT, and BSD

A small but growing category of AI models is released under standard open-source licences — not custom model licences, not conditional commercial permissions. Genuine permissive open-source gives you the most freedom, but freedom is not the same as risk-free. The cross-border AI compliance framework still applies regardless of your licence type.

📄
Apache 2.0
Most common permissive licence for AI models
Commercial ✓

Apache 2.0 is the closest thing to an industry standard for permissive commercial open-source AI. It permits unrestricted commercial use, modification, redistribution, and sublicensing, subject to straightforward conditions: attribution and NOTICE files must be preserved in distributed copies, and modified files must carry a notice stating that they were changed.

Apache 2.0 also includes an explicit patent licence grant from contributors, which is meaningfully stronger IP protection than MIT — particularly relevant for AI models where training procedures or architectures may be subject to patent claims.

  • Commercial use: Unrestricted, including SaaS products and derivative models
  • Fine-tuning and modification: Permitted — modified versions can be relicensed under different terms
  • Patent grant: Explicit patent licence from contributors — stronger protection than MIT
  • !
    Attribution required: Must preserve NOTICE file and copyright statements in distributed products
Notable Apache 2.0 models: Mistral 7B, Falcon 7B/40B. (BLOOM and StarCoder are sometimes grouped here, but they are in fact RAIL-licensed; see Section 5.)
🔖
MIT Licence
Maximum permissiveness — minimal conditions
Commercial ✓

MIT is the most permissive mainstream open-source licence. It permits doing almost anything with the software — use, copy, modify, merge, publish, distribute, sublicense, and sell — subject to a single condition: the copyright notice and permission notice must be included in all copies or substantial portions of the software.

MIT has one notable weakness compared to Apache 2.0 for AI: it does not include an explicit patent licence grant. If a contributor holds patents covering training techniques or model architecture elements, MIT does not automatically grant you a licence to those patents. In practice this risk is low but non-zero, particularly for models with novel architectural features.

  • Commercial use: Unrestricted — no revenue caps, no MAU thresholds
  • Derivative works: Can be relicensed under any terms, including proprietary
  • !
    No patent grant: Patent rights not explicitly licensed — check model origin for patent exposure
  • !
    No warranty: Model provided "as is" — no representation on output accuracy or fitness for purpose
Notable MIT models: Phi-3 Mini/Small (Microsoft), some BERT variants, DistilBERT family
⚠️
Why "Free" ≠ "Risk-Free"
The residual risks no licence removes
Read carefully

A permissive open-source licence resolves the commercial use question. It does not resolve your product's broader legal obligations — and this is where many teams make a critical planning error.

The EU AI Act's requirements apply based on what your product does — not which licence your underlying model uses. If your product is a high-risk AI system under Article 6, you need a conformity assessment, technical documentation, and a quality management system whether your model is GPT-4 or Apache 2.0 open-source. The licence exempts nothing.

  • !
    Training data liability: Open-source licences do not warrant that training data was lawfully obtained. Copyright infringement claims from training data exposure remain a residual risk regardless of your product's downstream licence.
  • !
    Output liability: No open-source licence disclaims your product's liability for harmful, defamatory, or inaccurate model outputs. That exposure is yours entirely.
  • !
    EU AI Act compliance: Risk classification under the Act follows the use case — a truly open-source model used for employment screening is still a high-risk AI system requiring full compliance documentation.
Key takeaway: Permissive licence = freedom to build. It does not = freedom from regulatory and product liability obligations. The latter follow your product — not your model's licence.
💡
The practical advantage
Truly open-source models — particularly Apache 2.0 releases — give you the clearest legal starting point: no vendor lock-in, no MAU thresholds, no permitted-use prohibitions, no ToS amendments. For teams building products that need predictable long-term IP and commercial terms, a genuine permissive open-source model eliminates the licence risk category entirely. What remains — EU AI Act compliance, output liability, and data protection obligations — is determined by what you build and what it does, not by the model licence. See AI risk and liability frameworks for the product-side obligations.
🛡️ Section 5

RAIL and Responsible AI Licences: The New Compliance Layer

Responsible AI Licences (RAIL) represent a new category of licence that sits between traditional open-source and proprietary terms. They are proliferating rapidly — and they introduce a type of downstream use restriction that most legal teams are not yet equipped to evaluate.

What RAIL licences are and what they actually restrict
Use restrictions + enforcement
1
What RAIL is The framework

Responsible AI Licences were developed by BigScience (initially for the BLOOM model) and subsequently adopted by a growing number of AI developers. They are open-weight licences — the weights are available — but they add a "use restrictions" addendum that explicitly prohibits specific application categories.

The key structural innovation is that RAIL restrictions are designed to propagate downstream. When you fine-tune a RAIL-licensed model and distribute the derivative, your distribution must carry the same use restrictions. This "viral" restriction mechanism is conceptually similar to copyleft in software licensing — but applied to permitted use categories rather than to source code availability.

Notable RAIL-licensed models: BLOOM (BigScience), Stable Diffusion (CreativeML OpenRAIL), StarCoder (OpenRAIL-M), InstructBLIP, LLaVA variants
2
The use restriction categories What is prohibited

RAIL licences typically include a use restrictions schedule derived from the Responsible AI Licences framework. The specific list varies by version and model, but common prohibited uses include:

  • Law enforcement, criminal justice risk assessment, or recidivism prediction
  • Autonomous lethal weapons systems or military surveillance
  • Generating or distributing disinformation for political manipulation
  • Targeted advertising based on protected characteristics (race, religion, sexual orientation)
  • Tracking or surveillance of individuals without their informed consent
  • Medical diagnosis or treatment without qualified human oversight
  • Generating child sexual abuse material (CSAM) — absolute prohibition

The overlap with the EU AI Act's prohibited practices list (Article 5) is substantial — but not complete. RAIL restrictions may be narrower or broader than the Act in specific categories. You must check both independently for each intended use case.

3
Enforcement mechanisms Licence termination risk

RAIL licences are enforceable contracts. Breach of the use restrictions schedule constitutes a licence violation — which means your right to use, distribute, or build on the model is automatically terminated without notice in most RAIL variants. Termination does not require court action: the licence simply ceases to apply, and continued use becomes IP infringement.

The practical enforcement question is how this is monitored for downstream users. Currently, RAIL enforcement is primarily complaint-driven — a user or competitor reports a violation to the model developer, who can then pursue termination. As AI governance frameworks mature and as regulatory bodies develop monitoring capabilities, this may become more systematic.

  • Licence automatically terminates on violation — no grace period in most RAIL variants
  • Downstream obligations bind your customers if you distribute RAIL-licensed models or derivatives
  • You may need to implement user-facing terms that carry RAIL restrictions through to your end users
4
Your compliance burden What you must do

Building a product on a RAIL-licensed model creates two distinct compliance obligations. First, you must ensure your own product does not fall within any of the prohibited use categories in the licence. This requires a careful mapping of your product's actual functionality — not just its intended use — against the restrictions schedule.

Second, if you distribute the model or a fine-tuned derivative to third parties — as part of an API, a white-label product, or an open release — you must pass the RAIL restrictions through to those parties. This means your ToS or licence agreement with downstream users must prohibit the same use categories. Failing to include this pass-through makes you liable for downstream violations as a distributor.

  • Map product functionality against restrictions — not just marketing intent
  • If distributing derivatives: include RAIL pass-through obligations in your downstream ToS
  • Document the mapping as part of your EU AI Act technical file if applicable
  • Review restrictions on every major licence version update — RAIL versions evolve
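The functionality-mapping step above can be made concrete as a cross-check of product feature tags against a restriction schedule. The categories and keyword tags below are simplified assumptions for illustration; the real exercise works from the licence's actual use-restrictions attachment.

```python
# Illustrative cross-check of product features against a use-restrictions
# schedule. Categories and tags are simplified assumptions, not the text
# of any actual RAIL schedule.
RAIL_RESTRICTIONS = {
    "law_enforcement": {"recidivism scoring", "predictive policing"},
    "surveillance": {"face tracking", "covert location tracking"},
    "medical": {"diagnosis without clinician review"},
}


def map_functionality(features: set) -> dict:
    """Return the restriction categories touched by the product's actual features."""
    hits = {}
    for category, restricted in RAIL_RESTRICTIONS.items():
        overlap = features & restricted
        if overlap:
            hits[category] = overlap
    return hits


flagged = map_functionality({"chat summarisation", "face tracking"})
print(flagged)  # → {'surveillance': {'face tracking'}}
```

The output of a mapping like this — empty or not — is exactly the kind of documented analysis that belongs in an EU AI Act technical file, re-run on every licence version update.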
⚠️
Ambiguous use cases — get a legal opinion
The most dangerous RAIL compliance situation is an ambiguous use case — a product that might or might not fall within a prohibited category depending on interpretation. Healthcare decision support. Hiring screening tools. Content moderation systems. Fraud detection. These categories sit near RAIL restriction boundaries and have been interpreted differently across different model developers' guidance. Do not rely on your own interpretation alone. See the AI risk and liability framework for how to assess borderline use cases against both RAIL restrictions and EU AI Act risk classification.
🧭 Section 6

Decision Framework: Matching Licence to Your Product

The right licence type for your product follows from five questions about your business model, use case, market, and risk tolerance. Answer them honestly and the decision narrows quickly. See also cross-border AI compliance for the additional layer when your product operates across multiple jurisdictions.

📊 Licence Type Comparison at a Glance
Licence type                 | Commercial use | Output ownership | Vendor lock-in | Use restrictions | IP indemnification
Proprietary API              | ✓ Yes          | Conditional      | High           | ToS dependent    | Enterprise only
Open-weight (custom licence) | Conditional    | ✓ Yes            | None           | Model-specific   | None
Apache 2.0 open-source       | ✓ Yes          | ✓ Yes            | None           | None             | Patent grant only
MIT open-source              | ✓ Yes          | ✓ Yes            | None           | None             | None
RAIL / OpenRAIL              | Conditional    | ✓ Yes            | None           | Use list applies | None
1
Does your use case fall within any prohibited category in the relevant licence?

This is the threshold question. Before any other evaluation, confirm that your product's actual functionality — not just its intended purpose — is permitted under the licence you are considering. RAIL restrictions, custom open-weight licences, and proprietary ToS all contain prohibited use lists. Check each one independently against your product specification.

Clear use case
Any licence type may work — proceed to questions 2–5 to narrow the choice
Ambiguous use case
Prefer Apache 2.0 / MIT open-source or enterprise proprietary with negotiated use terms — do not build on RAIL or custom open-weight licences for borderline use cases without legal review
2
Do you need to deploy the model on your own infrastructure?

Data residency requirements, GDPR compliance, latency requirements, and cost structure may all require self-hosted deployment rather than API access. Self-hosting requires access to model weights — which eliminates proprietary API models entirely and requires either open-weight or open-source options.

API is fine
Proprietary API models are viable — evaluate ToS, data processing terms, and DPA availability
Self-hosted required
Open-weight or open-source only — Apache 2.0 for clearest commercial terms, custom licence for specific models
3
How important is long-term licence stability to your business model?

Proprietary API licences can change unilaterally. Custom open-weight licences (like Llama) can be updated by the model developer. Only permissive open-source licences under stable frameworks (Apache 2.0, MIT) offer genuine licence stability — once a version is released under these terms, that version's licence cannot be retroactively changed.

Tolerance for ToS changes
Proprietary API works — build in model abstraction layers so you can switch providers if terms change materially
Licence stability critical
Apache 2.0 or MIT open-source — pin to a specific model version; that version's licence terms are permanent
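The abstraction-layer advice can be sketched as a thin provider interface: product code depends on a small protocol, and each provider sits behind an adapter, so a unilateral ToS change means swapping one adapter rather than rewriting callers. The class and method names here are illustrative, not any vendor's real SDK.

```python
# Minimal provider-abstraction sketch. Adapters are stubbed; in a real
# product each would wrap a vendor SDK or a local inference runtime.
from typing import Protocol


class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...


class HostedApiModel:
    """Adapter for a proprietary API provider (stubbed here)."""
    def generate(self, prompt: str) -> str:
        return f"[api] {prompt}"


class SelfHostedModel:
    """Adapter for a locally deployed open-weight model (stubbed here)."""
    def generate(self, prompt: str) -> str:
        return f"[local] {prompt}"


def summarise(model: TextModel, document: str) -> str:
    # Product code depends only on the TextModel protocol,
    # never on a vendor SDK directly.
    return model.generate(f"Summarise: {document}")


print(summarise(HostedApiModel(), "quarterly report"))
print(summarise(SelfHostedModel(), "quarterly report"))
```

The design choice is the point: the switching cost when licence terms change materially is concentrated in one adapter class instead of spread across the codebase.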
4
Does your product distribute the model or derivatives to third parties?

If your product includes a model as a component — for example, an embedded AI feature, a white-label API, or an open-source release — you are distributing a derivative. This triggers pass-through obligations under RAIL licences and attribution requirements under Apache 2.0 and MIT. Proprietary API models generally prohibit redistribution of the underlying model weights entirely.

No distribution
Any licence type is viable from a distribution perspective — internal-only or end-user API products do not trigger pass-through obligations
Distributing derivatives
Avoid RAIL licences unless you are prepared to pass use restrictions through to all downstream users. Apache 2.0 / MIT simplest for derivative distribution.
5
What is your EU AI Act risk classification?

If your product falls under the EU AI Act as a high-risk AI system, you need technical documentation that may be easier to produce for self-hosted models where you have direct access to model specifications, training documentation, and evaluation results. For proprietary API models, you are dependent on provider documentation — which varies significantly in completeness. See AI risk and liability frameworks for the full documentation requirements.

Minimal/limited risk
Proprietary API is straightforward — less documentation burden and no need for self-hosted infrastructure compliance
High risk
Open-weight or open-source preferred — you control the technical file and conformity assessment documentation; do not rely on provider documentation alone
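Questions 2 through 4 lend themselves to a mechanical first pass before legal review. The sketch below filters the comparison-table categories by deployment needs; the property flags are simplified assumptions, and questions 1 and 5 (prohibited uses and EU AI Act classification) still need case-by-case analysis.

```python
# Illustrative shortlist filter over the comparison table above.
# Property flags are simplified assumptions; confirm against licence text.
LICENCE_TYPES = {
    "proprietary_api":    {"self_host": False, "stable_terms": False, "redistribute": False},
    "open_weight_custom": {"self_host": True,  "stable_terms": False, "redistribute": True},
    "apache_2":           {"self_host": True,  "stable_terms": True,  "redistribute": True},
    "mit":                {"self_host": True,  "stable_terms": True,  "redistribute": True},
    "rail":               {"self_host": True,  "stable_terms": True,  "redistribute": True},  # with pass-through
}


def shortlist(need_self_host: bool, need_stable_terms: bool, will_distribute: bool) -> list:
    """Narrow licence categories by the deployment questions (2-4 above)."""
    out = []
    for name, props in LICENCE_TYPES.items():
        if need_self_host and not props["self_host"]:
            continue
        if need_stable_terms and not props["stable_terms"]:
            continue
        if will_distribute and not props["redistribute"]:
            continue
        out.append(name)
    return out


# Self-hosted, stable terms, distributing derivatives:
print(shortlist(True, True, True))  # → ['apache_2', 'mit', 'rail']
```

Note that "rail" survives the filter only because redistribution is permitted with pass-through obligations; whether those obligations are acceptable is a business decision the filter cannot make for you.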
📌
The bottom line
There is no universally correct AI model licence. The right choice follows from your specific use case, deployment model, distribution plans, regulatory risk profile, and tolerance for vendor dependency. What is avoidable: discovering the licence constraints after you have built the product, signed commercial agreements, and launched. Licence review is pre-build work — not post-launch legal remediation. For cross-border products, see the cross-border AI compliance guide for the additional layer of applicable law analysis.
Need a Licence Review for Your AI Product?

WCR Legal advises AI product teams on model licence risk, EU AI Act compliance, IP ownership structures, and cross-border regulatory obligations. We review your specific model selection and use case — not generic advice.

No commitment required · Confidential initial consultation · Response within 1 business day

Oleg Prosin is the Managing Partner at WCR Legal, focusing on international business structuring, regulatory frameworks for FinTech companies, digital assets, and licensing regimes across various jurisdictions. Works with founders and investment firms on compliance, operating models, and cross-border expansion strategies.