Challenge

Apertus

Learn about the new foundation model from the Swiss AI Initiative.


An LLM built for the public good

A Large Language Model will be launched in time for Swiss {ai} Weeks 2025, developed by ETH Zurich, EPFL, CERN, CSCS and many other contributors. The project focuses on transparency, data privacy, and linguistic diversity. It will not initially be multi-modal or a drop-in replacement for existing models. Nevertheless, it will be an advanced product with unique features. We will try to gather details here as they become available.

Thanks to Michael J. Baumann (effektiv.ch) for the summary and charts that are adapted further below.

Technical Report

Key characteristics

The model is part of the Swiss AI Initiative, which started in late 2023 and serves as a platform for over 80 data science projects, including the development of this LLM.

Key highlights of the LLM project, as announced in July:

  • Multilingualism: Trained on more than 15 trillion tokens across 1,500+ languages, 40% non-English - equal usage cost across languages - see @epfml
  • Performance: This is a large model (8 billion and 70 billion parameter variants), trained on an extensive token budget, and it will continue to be actively optimized.
  • Open & Transparent: Published under Apache-2.0 license - including source code, weights, and open training data.
  • Data Privacy: Compliant with GDPR, EU AI Act, and Swiss data protection laws - see Fan et al 2025
  • Infrastructure: Developed on the new Alps supercomputer at CSCS with over 10,000 NVIDIA GH200 Grace-Hopper chips
  • Global Reach: Designed with research and borderless applications in mind, for sovereign and international public-interest AI.

The Swiss LLM project is currently led by:

Tech specs

The Swiss LLM is trained on the Alps supercomputer, which has been operational at CSCS since September 2024.

The Swiss LLM was trained on approximately 15 trillion tokens. Particularly noteworthy is the high proportion of non-English data (40%) and coverage of over 1,500 languages, including rare ones like Romansh or Zulu. The data was ethically sourced - without illegal scraping, respecting robots.txt and copyright requirements. While this limits access to certain specialized information, CSCS emphasizes: «For general tasks, this doesn't lead to measurable performance losses.»
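
For illustration only (this is not the project's actual ingestion pipeline), the kind of robots.txt check that such ethical sourcing implies can be sketched with Python's standard library:

# Illustrative sketch, not the Apertus data pipeline: consult a site's
# robots.txt before fetching a page, so crawl opt-outs are respected.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()

url = "https://example.org/some/article"
user_agent = "ExampleCrawler/1.0"  # hypothetical crawler name
if rp.can_fetch(user_agent, url):
    print("Allowed to fetch:", url)
else:
    print("Disallowed by robots.txt, skipping:", url)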

For more technical references, see the links below.

Initial benchmarks

See the Evaluation section of the Apertus Model Card, and Section 5 of the Tech Report for more data. This is an initial independent evaluation, and we expect more to come soon:

| Model | MMLU (Knowledge) | Global-MMLU (Multilingual) | GSM8K (Math) | HumanEval (Code) | RULER @32k (Long Context) |
|---|---|---|---|---|---|
| Claude 3.5 Sonnet | 88.7% | n/a | 96.4% | 92.0% | n/a |
| Llama 3.1 70B | 83.6% | n/a | 95.1% | 80.5% | n/a |
| Apertus-70B | 69.6% | 62.7% | 77.6% | 73.0% | 80.6% |
| Apertus-8B | 60.9% | 55.7% | 62.9% | 67.0% | 69.5% |

"Notes on Comparability: The prompt setups differ between models (shot numbers and chain-of-thought configurations). Global-MMLU and RULER values are not available in the official documentation for the comparison models. The 70B variant convinces in general knowledge and multilingual tasks, but remains behind the top models in mathematics and programming."

Source: effektiv.ch, lifearchitect.ai

Performance comparison

| Model | Parameters | Openness | Language Coverage | Training Hardware | Strengths |
|---|---|---|---|---|---|
| Swiss LLM | 8B / 70B | Open Source, Weights, Data | >1,500 | Alps: 10,752 GH200 GPUs | Linguistic diversity, data privacy, transparency |
| GPT-4.5 | ~2T (estimated) | Proprietary | ~80-120 | Azure: ~25,000 A100 GPUs | Creativity, natural conversation, agentic planning |
| Claude 4 | Not published | Proprietary | ? | Anthropic: internal clusters | Adaptive reasoning, coding |
| Llama 4 | 109B / 400B | Open Weight | 12, with 200+ in training | Meta: ~20,000 H100 GPUs | Multimodality, large community, agentic tasks |
| Grok 4 | ~1.8T MoE | Proprietary | ? | Colossus: 200,000 H100 GPUs | Reasoning, real-time data, humor |

Source: effektiv.ch

Sources

💡 Find links to tools and benchmarks in our Resources area

For further information:

Apertus


Table of Contents

  1. Model Summary
  2. How to use
  3. Evaluation
  4. Training
  5. Limitations
  6. Legal Aspects

Model Summary

Apertus is a language model available in 70B and 8B parameter variants, designed to push the boundaries of fully open, multilingual, and transparent models. The model supports over 1,000 languages and long contexts, uses only fully compliant and open training data, and achieves performance comparable to models trained behind closed doors.


The model is a decoder-only transformer, pretrained on 15T tokens with a staged curriculum of web, code and math data. The model uses a new xIELU activation function and is trained from scratch with the AdEMAMix optimizer. Post-training included supervised fine-tuning and alignment via QRPO.

Key features

  • Fully open model: open weights + open data + full training details including all data and training recipes
  • Massively Multilingual: 1811 natively supported languages
  • Compliant: Apertus is trained while respecting the opt-out consent of data owners (even retrospectively) and avoiding memorization of training data

For more details, refer to our technical report.

How to use

The modeling code for Apertus is available in transformers v4.56.0, so make sure to upgrade your transformers version. You can also load the model with the latest vLLM, which uses transformers as a backend.

pip install -U transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "swiss-ai/Apertus-70B-Instruct-2509"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
).to(device)

# prepare the model input
prompt = "Give me a brief explanation of gravity in simple terms."
messages_think = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages_think,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)

# Generate the output
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)

# Get and decode the output
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :]
print(tokenizer.decode(output_ids, skip_special_tokens=True))

[!TIP] We recommend setting temperature=0.8 and top_p=0.9 in the sampling parameters.
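
As a minimal sketch of how these values could be applied with the generate() call above (do_sample=True is needed for temperature and top_p to take effect; the max_new_tokens value here is only an example):

# Sampling with the recommended parameters; do_sample=True enables
# temperature/top_p sampling in transformers.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,   # example value, adjust to your use case
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)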

Long context processing

Apertus supports a context length of up to 65,536 tokens by default.
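
A minimal sketch of guarding a long input against the default 65,536-token window, reusing the tokenizer and model loaded above (the input file name is hypothetical):

# Keep the prompt within Apertus's default 65,536-token context window.
MAX_CONTEXT = 65536

with open("long_document.txt", encoding="utf-8") as f:  # hypothetical input file
    long_prompt = f.read()

input_ids = tokenizer(long_prompt, return_tensors="pt").input_ids
if input_ids.shape[1] > MAX_CONTEXT:
    input_ids = input_ids[:, -MAX_CONTEXT:]  # keep only the most recent tokens

outputs = model.generate(input_ids.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))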

Agentic Usage

Apertus supports tool use; a sketch of passing tools through the chat template is shown below.
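
Recent transformers releases accept Python functions via the tools argument of apply_chat_template; whether the Apertus chat template renders them exactly this way is our assumption, so treat the following as a sketch rather than the official recipe.

# Sketch of tool use through the transformers chat-template API.
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22 °C"  # placeholder implementation

messages = [{"role": "user", "content": "What is the weather in Bern?"}]
text = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],          # function signature/docstring become the tool schema
    add_generation_prompt=True,
    tokenize=False,
)
inputs = tokenizer([text], return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))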

Deployment

Deployment of the models is directly supported by the newest versions of Transformers, vLLM, and SGLang, and they can also run on-device with MLX. A vLLM sketch is shown below.
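
As one possible route (a sketch under our own assumptions, not the official deployment guide), the instruct model can be run with vLLM's offline Python API; tensor_parallel_size must match the GPUs you actually have, and the smaller 8B variant may be more practical on a single GPU.

# vLLM offline-inference sketch; adjust model id and parallelism to your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="swiss-ai/Apertus-70B-Instruct-2509",
    tensor_parallel_size=4,   # assumption: four GPUs; use 1 for small setups/models
)
params = SamplingParams(temperature=0.8, top_p=0.9, max_tokens=512)

outputs = llm.chat(
    [{"role": "user", "content": "Give me a brief explanation of gravity in simple terms."}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)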

Evaluation

Pretraining Evaluation: Performance (%) of Apertus models on general language understanding tasks (higher is better) compared to other pretrained models.

| Model | Avg | ARC | HellaSwag | WinoGrande | XNLI | XCOPA | PIQA |
|---|---|---|---|---|---|---|---|
| Fully Open Models | | | | | | | |
| Apertus-8B | 65.8 | 72.7 | 59.8 | 70.6 | 45.2 | 66.5 | 79.8 |
| Apertus-70B | 67.5 | 70.6 | 64.0 | 73.3 | 45.3 | 69.8 | 81.9 |
| OLMo2-7B | 64.0 | 72.9 | 60.4 | 74.5 | 40.4 | 55.2 | 80.9 |
| OLMo2-32B | 67.7 | 76.2 | 66.7 | 78.6 | 42.9 | 60.1 | 82.1 |
| EuroLLM-1.7B | 54.8 | 57.2 | 44.9 | 58.1 | 40.7 | 55.7 | 72.4 |
| EuroLLM-9B | 62.8 | 67.9 | 57.9 | 68.8 | 41.5 | 61.1 | 79.6 |
| SmolLM2-1.7B | 58.5 | 66.1 | 52.4 | 65.6 | 37.6 | 52.3 | 77.0 |
| SmolLM3-3B | 61.6 | 68.6 | 56.4 | 68.1 | 40.5 | 58.2 | 77.7 |
| Poro-34B | 61.7 | 65.7 | 57.9 | 70.6 | 41.6 | 56.0 | 78.5 |
| Open-Weight Models | | | | | | | |
| Llama3.1-8B | 65.4 | 71.6 | 60.0 | 73.4 | 45.3 | 61.8 | 80.1 |
| Llama3.1-70B | 67.3 | 74.4 | 56.5 | 79.4 | 44.3 | 66.7 | 82.3 |
| Qwen2.5-7B | 64.4 | 69.6 | 60.1 | 72.8 | 43.3 | 61.7 | 78.7 |
| Qwen2.5-72B | 69.8 | 76.2 | 67.5 | 78.0 | 46.9 | 68.2 | 82.0 |
| Qwen3-32B | 67.8 | 75.6 | 64.0 | 73.8 | 44.4 | 67.9 | 80.9 |
| Llama4-Scout-16x17B | 67.9 | 74.7 | 66.8 | 73.2 | 43.5 | 67.7 | 81.2 |
| GPT-OSS-20B | 58.1 | 67.0 | 41.5 | 66.5 | 37.4 | 60.4 | 75.6 |

Many additional benchmark evaluations, covering the pretraining and post-training phases, multilingual evaluations in around one hundred languages, and long-context evaluations are provided in Section 5 of the Apertus_Tech_Report.pdf.

Training

Model

  • Architecture: Transformer decoder
  • Pretraining tokens: 15T
  • Precision: bfloat16

Software & hardware

Open resources

All elements used in the training process are made openly available:

  • Training data reconstruction scripts: github.com/swiss-ai/pretrain-data
  • Intermediate training checkpoints are available on the different branches of this same repository (see the sketch below for loading a specific branch)
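
As a sketch, an intermediate checkpoint can be loaded by passing a branch name as the revision argument of from_pretrained; the branch name below is a placeholder, so check the repository's branch list for the real revision names.

# Load an intermediate checkpoint from a specific branch of the model repository.
from transformers import AutoModelForCausalLM

ckpt = AutoModelForCausalLM.from_pretrained(
    "swiss-ai/Apertus-70B-2509",
    revision="intermediate-checkpoint-branch",  # placeholder branch name
)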

Limitations

Apertus can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

Legal Aspects

EU AI Act Transparency Documentation and Code of Practice

Data Protection and Copyright Requests

For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or contact us directly.

Output Filter for PII

  • Currently no output filter is provided.
  • Please check this site regularly for an output filter that can be used on top of the Apertus LLM. The filter reflects data protection deletion requests which have been addressed to us as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from this site every six months.
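
Once such a filter is published, applying it on top of generated text could look roughly like the sketch below; the file name and the one-term-per-line format are our assumptions, not the official specification.

# Hypothetical post-processing filter: redact terms from a deletion list.
import re

def load_filter_terms(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def apply_output_filter(text: str, terms: list[str]) -> str:
    for term in terms:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

model_output = "..."  # text generated by Apertus
print(apply_output_filter(model_output, load_filter_terms("apertus_output_filter.txt")))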

Contact

To contact us, please send an email to llm-requests@swiss-ai.org

Citation

@misc{swissai2025apertus,
  title={{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
  author={Apertus Team},
  year={2025},
  howpublished={\url{https://huggingface.co/swiss-ai/Apertus-70B-2509}}
}