
Which model for which level of sovereignty?

The best 2026 open-weights models are trained in three jurisdictional zones with very different implications for an EU CISO. Here's how to decide honestly.

Three zones, three trade-offs

The quality × jurisdiction × audit trilemma

EU / FR Sovereignty ★★★★★

European / French

The only option that requires no sovereignty trade-off. French vendor (Mistral), trained in the EU, native GDPR jurisdiction, direct legal recourse.

  • Zero non-EU sub-processing
  • Compatible with attorney-client privilege, HDS, critical-infra
  • Training corpus auditable
Models in this tier
Mistral Large 2 · Codestral · Mistral Small 3
US Sovereignty ★★★

American

Excellent quality-to-accessibility ratio. Global adoption and a mature ecosystem, but US jurisdiction (CLOUD Act, sanctions, Patriot Act). Acceptable for most standard mid-market customers.

  • High and stable quality
  • Extensive documentation and support
  • CLOUD Act applies, US sanctions possible
Models in this tier
Gemma 4 · Llama 3 · Whisper
CN Sovereignty ★

Chinese

Champions of 2026 open-weights benchmarks (Kimi K2.6, DeepSeek V4 Pro). Frontier quality but opaque training data, non-auditable alignment, real geopolitical risk.

  • Quality comparable to proprietary frontier models
  • Opaque training corpus provenance
  • Risk of US/EU sanctions on certain weights
Models in this tier
Kimi K2.6 · DeepSeek V4 Pro · Qwen 3.6 · GLM 5.1
Full matrix

Quality × sovereignty × cost

Stars are not absolute scores — they're relative comparisons within this panel. A model's quality depends on the use case: a 4-star Mistral can beat a 5-star Kimi on nuanced French legal reasoning.

Model · Origin · Quality · Sovereignty · Cost · Hardware
Claude Sonnet 4.5 · US (Anthropic) · ★★★★★ · ★★ · ★★ · Bedrock EU API
Mistral Large 2 · FR (Mistral) · ★★★★ · ★★★★★ · ★★★★ · LMbox M / L / XL
Codestral 22B · FR (Mistral) · ★★★ · ★★★★★ · ★★★★★ · LMbox M / L / XL
Gemma 4 31B · US (Google) · ★★★ · ★★★ · ★★★★★ · LMbox M / L / XL
Llama 3 70B · US (Meta) · ★★★ · ★★★ · ★★★★ · LMbox L / XL
Kimi K2.6 · CN (Moonshot) · ★★★★★ · ★ · ★★★★ · LMbox XL only
DeepSeek V4 Pro · CN (DeepSeek) · ★★★★★ · ★ · ★★★★★ · LMbox XL only
Qwen 3.6 32B · CN (Alibaba) · ★★★★ · ★★ · ★★★★★ · LMbox M / L / XL
GLM 5.1 · CN (Zhipu AI) · ★★★★ · ★★ · ★★★★★ · LMbox L / XL
MiMo V2.5 Pro · CN (Xiaomi) · ★★★★ · ★ · ★★★★ · LMbox XL only

★ = relative to the panel. Cost includes inference and the required hardware. The sovereignty score weighs the provenance of the weights, the vendor's jurisdiction, and corpus auditability.
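One way to read the matrix is to weight the three columns differently per profile and compare. A minimal sketch in Python, with purely illustrative weights (our assumption, not LMbox's scoring methodology); tune them to your own compliance frame:

```python
# Illustrative only: turn the star ratings above into a per-profile score.
# The weights are assumptions for the sake of the example, not LMbox's
# official methodology.

MODELS = {
    # model: (quality, sovereignty, cost) stars from the matrix above
    "Mistral Large 2": (4, 5, 4),
    "Gemma 4 31B":     (3, 3, 5),
    "Kimi K2.6":       (5, 1, 4),
}

def score(stars: tuple[int, int, int], weights: dict[str, float]) -> float:
    """Weighted sum of the star ratings; higher is better."""
    quality, sovereignty, cost = stars
    return (weights["quality"] * quality
            + weights["sovereignty"] * sovereignty
            + weights["cost"] * cost)

# A law firm weighs sovereignty far more heavily than a standard tech SME.
LAW_FIRM = {"quality": 0.3, "sovereignty": 0.6, "cost": 0.1}
TECH_SME = {"quality": 0.5, "sovereignty": 0.2, "cost": 0.3}

for name, stars in MODELS.items():
    print(f"{name}: law firm {score(stars, LAW_FIRM):.1f} · tech SME {score(stars, TECH_SME):.1f}")
```

With the law-firm weighting, Mistral Large 2 scores 4.6 against 2.5 for Kimi K2.6; with the tech-SME weighting the gap narrows to 4.2 vs 3.9, which is the whole point of the per-sector recommendations below.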

Per-customer recommendation

Which model in your Box, by sector

Five typical profiles we see among EU mid-market customers. Each recommendation accounts for regulatory constraints + minimum acceptable quality + budget.

Law firm · Attorney-client privilege (CNB Article 5)

No client matter can leave the firm or transit through a vendor whose jurisdiction allows foreign reach. The CN tier is excluded; the US tier is debatable under the firm's deontological framework.

Our reco: LMbox M or L · Mistral Large 2 + Codestral · Pattern A toward Bedrock EU when needed for hard cases
Healthcare / HDS · Patient data, GDPR Article 9

HDS hosting requires a fully HDS-certified processing chain. Mistral is currently the only vendor able to certify the entire chain in French jurisdiction.

Our reco: LMbox M in `lmbox-health` mode · Mistral Large 2 · no cloud calls, air-gap mode recommended
Critical-infra / Defence · PSSIE, classification, air-gap

No outbound connection, updates physically delivered, training-corpus audit mandatory. Only the EU tier is politically defensible, and even that choice requires deep documentation review.

Our reco: LMbox L or XL air-gap · Mistral Large 2 fp16 · corpus audit + ANSSI report
Standard tech mid-market · Dev productivity, code review, RAG

Absolute sovereignty constraints don't apply. The priority is agentic quality + marginal cost. Bedrock EU is acceptable, and so are US-tier models run locally.

Our reco: LMbox M · Gemma 4 31B + Mistral Large 2 + cascade to Claude Sonnet via Bedrock EU (see the cascade sketch after these profiles)
Industry / Manufacturing · Quality procedures, ISO, suppliers

Heavy PDF / Office volume, low strategic-leak risk beyond supplier specs. US-trained local models are perfectly adequate.

Our reco: LMbox M · Gemma 4 31B + Codestral for Code Reviewer + Whisper for meetings
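The "cascade to Claude Sonnet via Bedrock EU" pattern in the tech mid-market profile is simple in principle: answer locally by default and escalate only the cases you explicitly accept sending to Bedrock EU. A minimal sketch, assuming the Box exposes an OpenAI-compatible endpoint; the URL, model names and Bedrock model ID below are placeholders, not the LMbox API:

```python
# Minimal sketch of the "local first, Bedrock EU for hard cases" cascade.
# Assumptions (not LMbox's actual API): the Box exposes an OpenAI-compatible
# chat endpoint at LOCAL_URL, and escalation is an explicit flag set by the
# caller. Swap in your own routing rule and real model identifiers.
import boto3
import requests

LOCAL_URL = "http://lmbox.internal:8000/v1/chat/completions"  # placeholder
LOCAL_MODEL = "mistral-large-2"                               # placeholder
BEDROCK_MODEL_ID = "eu.anthropic.claude-sonnet-4-5"           # placeholder ID

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-3")  # Paris region

def ask_local(prompt: str) -> str:
    """Query the on-box model through its OpenAI-compatible endpoint."""
    resp = requests.post(LOCAL_URL, json={
        "model": LOCAL_MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def ask_bedrock(prompt: str) -> str:
    """Escalate a hard case to Claude on Bedrock EU via the Converse API."""
    resp = bedrock.converse(
        modelId=BEDROCK_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

def cascade(prompt: str, hard: bool = False) -> str:
    """Route to the local model by default; escalate only flagged hard cases."""
    return ask_bedrock(prompt) if hard else ask_local(prompt)
```

The escalation flag is deliberately explicit here: whatever routing rule you end up using, it is the line your DPIA has to document.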
Geopolitical risks

What CISOs actually ask

How do you audit a Chinese model's training corpus?
You can't. None of the Chinese models (Kimi, DeepSeek, Qwen, GLM, MiMo) publishes its full training corpus or accepts an external audit. Mistral at least publishes the composition by major category. For a law firm or a critical-infrastructure operator, this opacity is disqualifying.
Risk of a hidden backdoor in the weights?
Theoretically detectable through intensive red-teaming, but in practice extremely hard to prove. No French team has published a recognised audit protocol. The cautious stance: avoid models with opaque provenance for anything touching security, defence, justice or health.
Risk of US or EU sanctions on Chinese open-weights?
The US Export Administration Regulations (EAR) already cover certain AI models, and the European Union has signalled via the AI Act that it could extend sectoral restrictions. Expect targeted restrictions on certain weights or licences on a 12-24 month horizon.
How do I justify the model choice to my DPO?
With provenance documentation, a model-specific DPIA, and the traceability of LMbox audit logs showing zero customer-data egress. Mistral is the easiest to defend before a DPO. Gemma / Llama are manageable with a well-built DPIA. Kimi / DeepSeek require a dedicated risk dossier.
If a model is removed from Hugging Face, do I lose everything?
No, provided you've already downloaded it onto your Box. The weights are local; you keep the binary. Future updates become unavailable, but the model keeps working. That's a strong argument for air-gap mode: your LMbox keeps the weights indefinitely.
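One way to do that in practice, sketched with huggingface_hub (our assumption; the repo ID, revision and target path are illustrative):

```python
# Sketch: pin the weights locally so a later removal from Hugging Face has no
# effect on your Box. Repo ID, revision and target path are illustrative.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="mistralai/Mistral-Large-Instruct-2407",  # example repo, verify yours
    revision="main",                                   # pin a commit hash in production
    local_dir="/srv/lmbox/models/mistral-large-2",
)
print(f"Weights stored at {local_path}; inference no longer depends on the Hub.")
```

Pinning `revision` to a specific commit hash rather than `main` also protects you against a silent upstream change to the weights.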

We help you decide

Our team can deliver a customised decision matrix based on your sector, compliance framework and budget. No commitment; delivered within 48 business hours.