# Which model for which level of sovereignty?
The best 2026 open-weights models are trained in three jurisdictional zones with very different implications for an EU CISO. Here's how to decide honestly.
## The quality × jurisdiction × audit trilemma
### European / French
The only option that requires no sovereignty trade-off. French vendor (Mistral), trained in the EU, native GDPR jurisdiction, direct legal recourse.
- Zero non-EU sub-processing
- Compatible with attorney-client privilege, HDS, critical-infra
- Training corpus auditable
### American
Excellent quality / accessibility ratio. Global adoption, mature ecosystem, but US jurisdiction (CLOUD Act, sanctions, Patriot Act). Acceptable for most standard mid-market customers.
- High and stable quality
- Extensive documentation and support
- ⚠ CLOUD Act applies, US sanctions possible
### Chinese
Champions of 2026 open-weights benchmarks (Kimi K2.6, DeepSeek V4 Pro). Frontier quality but opaque training data, non-auditable alignment, real geopolitical risk.
- Quality comparable to proprietary frontier models
- ⚠ Opaque training corpus provenance
- ⚠ Risk of US/EU sanctions on certain weights
## Quality × sovereignty × cost
Stars are not absolute scores — they're relative comparisons within this panel. A model's quality depends on the use case: a 4-star Mistral can beat a 5-star Kimi on nuanced French legal reasoning.
| Model | Origin | Quality | Sovereignty | Cost | Hardware |
|---|---|---|---|---|---|
| Claude Sonnet 4.5 | US (Anthropic) | ★★★★★ | ★★ | ★★ | Bedrock EU API |
| Mistral Large 2 | FR (Mistral) | ★★★★ | ★★★★★ | ★★★★ | LMbox M / L / XL |
| Codestral 22B | FR (Mistral) | ★★★ | ★★★★★ | ★★★★★ | LMbox M / L / XL |
| Gemma 4 31B | US (Google) | ★★★ | ★★★ | ★★★★★ | LMbox M / L / XL |
| Llama 3 70B | US (Meta) | ★★★ | ★★★ | ★★★★ | LMbox L / XL |
| Kimi K2.6 | CN (Moonshot) | ★★★★★ | ★ | ★★★★ | LMbox XL only |
| DeepSeek V4 Pro | CN (DeepSeek) | ★★★★★ | ★ | ★★★★★ | LMbox XL only |
| Qwen 3.6 32B | CN (Alibaba) | ★★★★ | ★★ | ★★★★★ | LMbox M / L / XL |
| GLM 5.1 | CN (Zhipu AI) | ★★★★ | ★★ | ★★★★★ | LMbox L / XL |
| MiMo V2.5 Pro | CN (Xiaomi) | ★★★★ | ★ | ★★★★ | LMbox XL only |
★ = relative to the panel. Cost includes inference and required hardware. Sovereignty weighs weights provenance, vendor jurisdiction, and corpus auditability.
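Once you pick weights for the three star columns, the panel collapses into a single ranking. Here is a minimal sketch of that arithmetic; the weights are purely illustrative assumptions, not our scoring methodology:

```python
# Illustrative weighted scoring over the star ratings from the table above.
# The (quality, sovereignty, cost) weights are hypothetical — tune them
# to your own risk posture before drawing any conclusion.

MODELS = {
    # name: (quality, sovereignty, cost) stars, copied from the table
    "Claude Sonnet 4.5": (5, 2, 2),
    "Mistral Large 2":   (4, 5, 4),
    "Codestral 22B":     (3, 5, 5),
    "Gemma 4 31B":       (3, 3, 5),
    "Llama 3 70B":       (3, 3, 4),
    "Kimi K2.6":         (5, 1, 4),
    "DeepSeek V4 Pro":   (5, 1, 5),
    "Qwen 3.6 32B":      (4, 2, 5),
    "GLM 5.1":           (4, 2, 5),
    "MiMo V2.5 Pro":     (4, 1, 4),
}

def score(stars, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of (quality, sovereignty, cost) stars, on a 0-5 scale."""
    return sum(s * w for s, w in zip(stars, weights))

ranked = sorted(MODELS.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, stars in ranked:
    print(f"{name:18s} {score(stars):.1f}")
```

With sovereignty weighted as heavily as quality, the EU tier tops the ranking; shift the weight toward raw quality and the CN tier climbs — which is exactly the trilemma in numbers.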
## Which model in your Box, by sector
Five typical profiles we meet among EU mid-market customers. Each recommendation accounts for regulatory constraints, minimum acceptable quality, and budget.
- **Law firm** — No client matter may leave the firm or transit through a vendor whose jurisdiction allows foreign reach. The CN tier is excluded; the US tier is debatable under the firm's deontological framework.
- **Healthcare (HDS)** — HDS hosting requires a fully HDS-certified processing chain. Mistral is currently the only vendor able to certify the entire chain under French jurisdiction.
- **Defence / critical infrastructure** — No outbound connection, updates physically delivered, training-corpus audit mandatory. Only the EU tier is politically defensible, and even that choice requires a deep documentation review.
- **Tech / SaaS** — Absolute sovereignty constraints don't apply; the priority is agentic quality and marginal cost. Bedrock EU is acceptable, and so is the US tier deployed locally.
- **Industry / manufacturing** — Heavy PDF / Office volume, with little strategic-leak risk beyond supplier specs. US-trained local models are perfectly adequate.
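The tier-eligibility logic behind these profiles can be sketched as a simple rule filter. Profile keys, tier assignments, and the panel subset below are illustrative assumptions, not a compliance tool:

```python
# Hypothetical sketch: filter the model panel by which origin tiers
# a sector profile can accept. Profile names are assumptions.

ACCEPTABLE_TIERS = {
    "legal":      {"EU"},        # no foreign-reach jurisdiction allowed
    "health_hds": {"EU"},        # full HDS-certified chain required
    "defense":    {"EU"},        # air-gapped, corpus audit mandatory
    "tech":       {"EU", "US"},  # quality and marginal cost prioritised
    "industry":   {"EU", "US"},  # low strategic-leak risk
}

def eligible(models, profile):
    """Return the models whose origin tier is acceptable for a profile."""
    allowed = ACCEPTABLE_TIERS[profile]
    return [name for name, tier in models if tier in allowed]

PANEL = [
    ("Mistral Large 2", "EU"), ("Codestral 22B", "EU"),
    ("Llama 3 70B", "US"), ("Gemma 4 31B", "US"),
    ("Kimi K2.6", "CN"), ("DeepSeek V4 Pro", "CN"),
]

print(eligible(PANEL, "legal"))  # EU tier only
print(eligible(PANEL, "tech"))   # EU and US tiers
```

In practice the filter runs first and the quality/cost ranking second: there is no point benchmarking a model your DPO will veto.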
## What CISOs actually ask
- How do you audit a Chinese model's training corpus?
- What is the risk of a hidden backdoor in the weights?
- What is the risk of US or EU sanctions on Chinese open-weights?
- How do I justify the model choice to my DPO?
- If a model is removed from Hugging Face, do I lose everything?
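On the last two questions, a common mitigation is to mirror the weights on your own storage and record checksums at download time, so you can prove the files you serve are the files you audited, and you keep them if the upstream repo disappears. A minimal sketch (paths and file layout are hypothetical):

```python
# Sketch: stream-hash locally mirrored weight shards and write a
# checksum manifest. Re-verify the manifest after any re-download.
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-GB shards never load into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(model_dir: Path, manifest: Path) -> None:
    """Record one checksum line per file under the model directory."""
    lines = [
        f"{sha256_file(p)}  {p.relative_to(model_dir)}"
        for p in sorted(model_dir.rglob("*")) if p.is_file()
    ]
    manifest.write_text("\n".join(lines) + "\n")
```

A manifest does not answer the backdoor question — that requires behavioural evaluation — but it does guarantee the weights you evaluated are the weights in production.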
## We help you decide
Our team can deliver a customised decision matrix based on your sector, compliance framework, and budget. No commitment; delivered within 48 business hours.