LMbox
Your data stays with you.

Which LLM in your box?

All supported models are open-source or shipped under a clear commercial licence. You can run several in parallel and switch any time.
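Running several models side by side can be sketched as a simple fan-out over one chat function. This is an illustrative sketch only: the model names are placeholders, and the `send` callable stands in for however your deployment exposes each model (for example an OpenAI-compatible HTTP endpoint); it is not a documented LMbox API.

```python
from concurrent.futures import ThreadPoolExecutor

def make_client(send):
    """Wrap a transport: send(model, prompt) -> str.
    Inject a real HTTP call in production, or a stub for testing."""
    def ask(model, prompt):
        return {"model": model, "answer": send(model, prompt)}
    return ask

def fan_out(ask, models, prompt):
    """Send the same prompt to several models in parallel.
    Results come back in the same order as `models`."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        return list(pool.map(lambda m: ask(m, prompt), models))

if __name__ == "__main__":
    # Stub transport: echoes the prompt instead of calling a real endpoint.
    stub = lambda model, prompt: f"[{model}] echo: {prompt}"
    ask = make_client(stub)
    for r in fan_out(ask, ["mistral-large-2", "gemma-4-31b"], "Summarise this contract."):
        print(r["model"], "->", r["answer"])
```

Because each model sits behind the same `ask` interface, switching models is a one-line change in the `models` list rather than a redeployment.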

Weights provenance:
EU/FR · maximum sovereignty, solid quality
US · high quality, US jurisdiction
CN · frontier quality, CN jurisdiction

All these models are open-weights, but they were trained by teams in different jurisdictions. For a CISO, provenance matters as much as the licence: sanctions exposure, possible audits, and alignment compatible with your legal framework.

★ EU-sovereign FR
Mistral Large 2
Mistral AI

High-end French-trained model. The best option for EU sovereignty with solid quality on agentic coding.

Parameters: 123B · Context: 128k · Licence: MRL
Strengths: Quality writing · reasoning · 128k context · trained in France
FR
Mistral Small 3
Mistral AI

A good performance/cost trade-off when you don't need the Large. Apache 2.0 means no usage constraints.

Parameters: 24B · Context: 32k · Licence: Apache 2.0
Strengths: Fast · solid French · Apache 2.0 · made in France
FR
Codestral 22B
Mistral AI

Code-specialised, made by Mistral. Powers the Code Reviewer module for tech teams.

Parameters: 22B · Context: 32k · Licence: MNPL
Strengths: Code (80+ languages) · refactoring · auto-documentation · EU-sovereign
★ Recommended US
Gemma 4 31B
Google DeepMind

Excellent quality/size ratio on the M and L. Solid reasoning and writing across many languages.

Parameters: 31B · Context: 128k · Licence: Gemma
Strengths: Multilingual (135+ languages) · reasoning · summarisation · 128k context
US
Gemma 4 9B
Google DeepMind

The smaller sibling: fits on the S, solid performance for everyday tasks.

Parameters: 9B · Context: 128k · Licence: Gemma
Strengths: Lightweight · fast · very usable in any major language
US
Llama 3 70B
Meta

Meta's reference model: massive global adoption, mature ecosystem, but US jurisdiction.

Parameters: 70B · Context: 128k · Licence: Llama
Strengths: Excellent English · solid French · strong public benchmarks
US
Llama 3 8B
Meta

Compact Llama 3: fits on the S, great for high-volume usage.

Parameters: 8B · Context: 128k · Licence: Llama
Strengths: Fast · stable · widely integrated with third-party tooling
US
Whisper Large v3
OpenAI (open weights)

Multilingual audio transcription. Powers the Meeting Summarizer module.

Parameters: 1.5B · Context: — · Licence: MIT
Strengths: Accurate transcription · French + 99 other languages · streaming
Frontier open-weights CN
Kimi K2.6
Moonshot AI

2026 open-weights champion on LiveCodeBench. MoE architecture with sub-agent parallelism, requires the LMbox XL.

Parameters: ~1T (32B active, MoE) · Context: 200k · Licence: Modified
Strengths: Top agentic coding · reasoning · 200k context · sub-agents
Frontier open-weights CN
DeepSeek V4 Pro
DeepSeek

The most serious rival to Claude Sonnet on SWE-bench. Permissive MIT licence, MoE 671B / 37B active.

Parameters: ~671B (37B active, MoE) · Context: 128k · Licence: MIT
Strengths: Frontier coding · MIT · deep reasoning · cost-efficient inference
Frontier open-weights CN
Qwen 3.6 Max
Alibaba Cloud

Alibaba's 2026 flagship. Solid multi-file refactoring, 256k context.

Parameters: ~110B · Context: 256k · Licence: Tongyi
Strengths: Multi-file refactoring · 256k context · multilingual
CN
Qwen 3.6 32B
Alibaba Cloud

The Qwen 3.6 sweet spot that fits on an LMbox L. An excellent alternative to Llama 3 70B with less RAM.

Parameters: 32B · Context: 128k · Licence: Tongyi
Strengths: Fits on Mac Studio 96 GB · multilingual · agentic coding
CN
MiMo V2.5 Pro
Xiaomi

Ultra-long-context specialist: 1 million tokens of input. Ingest an entire codebase in a single query.

Parameters: ~70B · Context: 1M · Licence: Apache 2.0
Strengths: 1M-token context · massive ingestion · Apache 2.0
CN
GLM 5.1
Zhipu AI

Excellent option for fine-tuning on your own data: MIT licence, no commercial restrictions.

Parameters: 32B · Context: 128k · Licence: MIT
Strengths: MIT · fine-tunable · solid reasoning · 32B dense
Pick with full visibility

Which model family for which customer?

We built a quality × sovereignty × cost matrix to make the decision explicit. From law firms to critical-infra datacenters, every profile has its answer.

See the decision matrix
Model management

You stay in control of what runs

Pick at install time
During initial deployment we configure the 2 or 3 main models for your use cases. Others are downloadable on demand.
Swap any time
Admin can enable or disable a model in a few clicks. No restart. No data migration.
Benchmark in-house
Admin console with automated tests on your own prompts. Compare Gemma vs Mistral vs Llama on your real cases.
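The in-house benchmark idea can be sketched as a small scoring harness: run your own prompts against each model and count how many expected keywords appear in each answer. This is a generic sketch, not the LMbox admin console's actual test runner; the `ask` callable is a placeholder for however your deployment exposes each model, stubbed here with canned answers.

```python
def keyword_score(answer, expected_keywords):
    """Fraction of expected keywords found in the answer (case-insensitive)."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer.lower())
    return hits / len(expected_keywords)

def benchmark(ask, models, cases):
    """Run every (prompt, expected_keywords) case against every model.
    Returns {model: average score in [0, 1]}."""
    scores = {}
    for model in models:
        per_case = [keyword_score(ask(model, prompt), kws) for prompt, kws in cases]
        scores[model] = sum(per_case) / len(per_case)
    return scores

if __name__ == "__main__":
    # Canned answers stand in for real calls to each deployed model.
    canned = {
        "gemma-4-31b": "The notice period is three months per clause 4.",
        "mistral-large-2": "Clause 4: a notice period of three months, in writing.",
    }
    ask = lambda model, prompt: canned[model]
    cases = [("What is the notice period in this contract?",
              ["three months", "clause 4"])]
    print(benchmark(ask, list(canned), cases))
```

Keyword matching is deliberately crude but fully automatable on your real prompts; a judge model or human review can refine the ranking once the harness has narrowed the field.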
Custom model

A proprietary model to integrate?

Have you fine-tuned an internal model, or do you want to integrate one that is not on the list (Falcon, Phi, etc.)? We add it to your box. Typical lead time: two weeks.

Discuss your model