Enhancing the Multilingual Capabilities of Large Language Models
Yuan Fei (袁飞), Shanghai Artificial Intelligence Laboratory

Roadmap
- Background
- Vocabulary
- Data
- Training
- Opportunities and challenges

Large language models (LLMs)
Model structure
- Built on the Transformer architecture.
- Decoder-only: the model contains only the decoder stack.
- Large-scale parameters: modern LLMs have extremely large parameter counts.
Loss function
- Next-token prediction: the training objective is to predict the next token x_t from the input text (x_1, x_2, ..., x_{t-1}), i.e., to model P(x_t | x_1, x_2, ..., x_{t-1}).
[Figure: decoder-only Transformer stack with input embedding plus positional embedding, Transformer layers 1 through L, and an output embedding mapping the input sequence to next-token predictions.]
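To make the next-token-prediction objective concrete, the sketch below computes it with PyTorch for a toy stand-in model; the tensor sizes and the embedding-plus-linear "model" are illustrative assumptions (a real LLM would put causal Transformer layers in between), not the training code of any particular system.

```python
# Minimal sketch of the next-token-prediction loss: score every position,
# shift the targets by one, and apply cross-entropy.
import torch
import torch.nn.functional as F

vocab_size, d_model, seq_len, batch = 1000, 64, 16, 2   # toy sizes (assumed)

embed = torch.nn.Embedding(vocab_size, d_model)          # stand-in for the decoder stack
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # x_1 ... x_T
logits = lm_head(embed(tokens))                          # (batch, seq_len, vocab_size)

# The logits at position t-1 predict token x_t, so drop the last position
# from the logits and the first token from the targets.
shift_logits = logits[:, :-1, :]
shift_labels = tokens[:, 1:]

loss = F.cross_entropy(
    shift_logits.reshape(-1, vocab_size),
    shift_labels.reshape(-1),
)
print(loss)  # average of -log P(x_t | x_1, ..., x_{t-1})
```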
LLMs show strong general-purpose capabilities
- Everyday uses include translation, polishing e-mails, summarization, answering daily questions, math, and code.
Wei et al., Emergent Abilities of Large Language Models, TMLR 2022. Chen et al., MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks, arXiv 2024.

English-centric LLMs perform poorly in other languages
- There is a large performance gap between English and non-English languages.
[Figure: reasoning-task performance across languages.]
Shi et al., Language Models Are Multilingual Chain-of-Thought Reasoners, ICLR 2023. Zhu et al., Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis, Findings of NAACL 2024.

LLaMA3 translation quality
- Chinese input: 妈妈总是说生活就像一盒巧克力,你永远不知道你会得到什么。
- Chinese to English: "Mom always says life is like a box of chocolates, you never know what you're gonna get."
- English to Chinese: 妈妈总是说生活就像一盒巧克力,你永远不知道你会得到什么。 (faithful to the original)
- Chinese to Nepali: [Nepali output lost in extraction]
- Nepali to Chinese: 我妈妈总是说你玩了凡妮莎的巧克力,你不知道我们给的是什么。 (roughly "My mom always says you played Vanessa's chocolates; you don't know what we gave"; the meaning is garbled)
LLaMA3 handles the Chinese-English pair well but breaks down in both Nepali directions.
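For concreteness, the snippet below shows one way such translations can be elicited from an instruction-tuned LLaMA-3 checkpoint with Hugging Face transformers; the checkpoint name, prompt wording, and decoding settings are illustrative assumptions, not the exact setup behind the examples above.

```python
# Sketch: prompt an instruction-tuned LLM for translation via its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def translate(text: str, src: str, tgt: str) -> str:
    """Ask the model for a direct src -> tgt translation and return the reply."""
    messages = [{
        "role": "user",
        "content": f"Translate the following {src} sentence into {tgt}. "
                   f"Reply with the translation only.\n{text}",
    }]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(translate("妈妈总是说生活就像一盒巧克力,你永远不知道你会得到什么。", "Chinese", "English"))
```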
The necessity of multilingual enhancement
- Is it necessary to enhance multilingual ability, or is being good at English enough? No!
[Figure: Nepali-XX and XX-Nepali translation scores for LLaMA3, LLaMA3-pivot, LLaMAX, and LLaMAX-pivot (reported values: 7.96, 3.83, 10.29, 7.08, 14.49, 15.47, 16.16, 16.86); the multilingually enhanced LLaMAX scores far above LLaMA3 in both directions, even when LLaMA3 pivots through English.]
Lu, Yinquan, et al. LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages. Findings of EMNLP 2024.
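The "-pivot" settings in the figure translate through English as an intermediate step instead of translating directly; a minimal sketch of that pivoting scheme is below, assuming a generic direct-translation callable (the pivot_translate helper and the dummy backend are illustrative, not code from LLaMAX).

```python
# Sketch of pivot translation: source -> English -> target via two direct calls.
from typing import Callable

def pivot_translate(
    text: str,
    src: str,
    tgt: str,
    translate: Callable[[str, str, str], str],
    pivot: str = "English",
) -> str:
    """Translate src -> pivot -> tgt using any direct-translation backend."""
    intermediate = translate(text, src, pivot)
    return translate(intermediate, pivot, tgt)

# Dummy backend that only tags the text, to show the call flow.
dummy = lambda text, src, tgt: f"[{src}->{tgt}] {text}"
print(pivot_translate("例句", "Chinese", "Nepali", translate=dummy))
# [English->Nepali] [Chinese->English] 例句
```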
Reason 1 for poor multilingual performance: the vocabulary
- The pretrained vocabulary was not built to fit multilingual needs.
- Tokenization Ratio = length of the LLaMA tokenization / length of the word-level tokenization
Yuan, Fei, et al. How Vocabulary Sharing Facilitates Multilingualism in LLaMA? Findings of the Association for Computationa