dots.llm1 Technical Report

rednote-hilab
https://huggingface.co/rednote-hilab

2025-06-06

Abstract

Mixture of Experts (MoE) models have emerged as a promising paradigm for scaling language models efficiently by activating only a subset of parameters for each input token. In this report, we present dots.llm1, a large-scale MoE model that activates 14 billion parameters out of a total of 142 billion parameters, delivering performance on par with state-of-the-art models while reducing training and inference costs. Leveraging our meticulously crafted and efficient data processing pipeline, dots.llm1 achieves performance comparable to Qwen2.5-72B after pretraining on 11.2T high-quality tokens and post-training to fully unlock its capabilities. Notably, no synthetic data is used during pretraining. To foster further research, we open-source intermediate training checkpoints at every one trillion tokens, providing valuable insights into the learning dynamics of large language models.
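To make the sparse-activation idea concrete, below is a minimal, illustrative sketch of top-k expert routing in NumPy. The router, expert sizes (8 toy experts, 2 active per token), and ReLU feed-forward experts are assumptions chosen for illustration and do not describe the dots.llm1 architecture; the point is only that each token touches top_k / n_experts of the expert parameters, which is the sense in which an MoE model "activates" a subset of its weights.

```python
# Illustrative top-k MoE routing sketch (assumed softmax router, toy sizes);
# not the dots.llm1 implementation.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 64, 256        # toy hidden and expert sizes (assumed)
n_experts, top_k = 8, 2        # route each token to 2 of 8 experts (assumed)

# Expert feed-forward weights and the router projection.
W_in = rng.standard_normal((n_experts, d_model, d_ff)) / np.sqrt(d_model)
W_out = rng.standard_normal((n_experts, d_ff, d_model)) / np.sqrt(d_ff)
W_router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ W_router                          # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)          # softmax gate scores
    top = np.argsort(-probs, axis=-1)[:, :top_k]   # indices of chosen experts

    y = np.zeros_like(x)
    for t, token in enumerate(x):
        gates = probs[t, top[t]]
        gates = gates / gates.sum()                # renormalize over chosen experts
        for e, g in zip(top[t], gates):
            h = np.maximum(token @ W_in[e], 0.0)   # expert FFN with ReLU
            y[t] += g * (h @ W_out[e])
    return y

tokens = rng.standard_normal((4, d_model))
out = moe_layer(tokens)

# Only top_k / n_experts of the expert parameters are used per token.
expert_params = W_in.size + W_out.size
active = expert_params * top_k / n_experts
print(out.shape, f"active expert params per token: {active:.0f} of {expert_params}")
```

In this toy setting only a quarter of the expert weights participate in each token's forward pass, mirroring (at much smaller scale) the 14B-of-142B activation ratio reported above.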
[Figure 1: scatter plot of Performance (% MMLU-Pro) versus Cost (billion active parameters) for dots.llm1, Qwen2.5-7B/14B/32B/72B, Llama3-70B, DeepSeek-V2, DeepSeek-V3, Qwen2-57B-A14B, and Qwen3-235B-A22B, with the best performance/cost ratio indicated.]

Figure 1: Performance and cost comparison of open MoE and dense language models. Circles denote dense models, while diamonds denote MoE models. We benchmark model capabilities using MMLU-Pro, showing that dots.llm1 achieves comparable accuracy to leading models.

1 Introduction

Large Language Models (LLMs) have undergone rapid advancements in recent years, moving closer to the goal of Artificial General Intelligence (AGI), as evidenced by substantial progress (OpenAI, 2025a;b; Anthropic, 2025; xAI, 2025). Parallel to these proprietary developments, the open-source community is also achieving remarkable breakthroughs (Qwen, 2024a; DeepSeek-AI et al., 2024; Mist