Session 5: Scaling Large Language Model Training Using Hybrid GPU-Based Compression in MVAPICH


Scaling Large Language Model Training Using Hybrid GPU-Based Compression in MVAPICH
Aamir Shafi, Research Scientist
Lang Xu, Ph.D. Student
Network Based Computing Laboratory, The Ohio State University
http://nowlab.cse.ohio-state.edu/
2024 OFA Virtual Workshop

Presentation Outline
- Introduction & Background
- Motivation & Challenges
- Hybrid Compression Design
- Performance Evaluation
- Conclusion

Training Large Language Models
- Large Language Models (LLaMA2, GPT-4, Claude 3) are powerful in various areas (dialogue systems, knowledge bases, ...)
- Model capability scales with the number of parameters (from 100-million-parameter BERT to 500-billion-parameter Megatron-Turing NLG)
- Training billion-parameter models requires:
  - Parallelism strategies (scaling up to thousands of GPUs)
  - Memory optimization (fitting models within GPU memory)
  - Efficient communication (reducing interconnect bandwidth pressure)

Parallelism Strategies
- Data Parallelism (DP): maintains a full model replica on each DP rank and takes a mini-batch as input
  - Data-intensive gradient synchronization using Allreduce (see the sketch after this list)
- Pipeline Parallelism (PP): shards model layers across devices and executes them in pipeline order
  - Point-to-point communication passes activations and gradients between stages

- Tensor Parallelism (TP): distributes matrix multiplications over different devices
  - Frequent Allreduce and Allgather communication ensures correctness
- 3D Parallelism combines DP + PP + TP (Megatron-LM)
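To make the DP gradient-synchronization step concrete, here is a minimal sketch using PyTorch's torch.distributed API (the slides show no code; the script name, tensor sizes, and the gloo backend are illustrative assumptions, and on a real GPU cluster the process group would typically use NCCL or an MPI library such as MVAPICH). Each rank computes gradients on its own shard of the mini-batch, then all ranks average them with an Allreduce; this is the bandwidth-heavy collective that GPU-based compression targets.

```python
# dp_allreduce.py -- minimal data-parallel gradient sync (illustrative).
# Launch with e.g.: torchrun --nproc_per_node=4 dp_allreduce.py
import torch
import torch.distributed as dist

def sync_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all DP ranks after the local backward pass."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients from every rank, then divide: each rank ends up
            # with the gradient of the full (global) mini-batch.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad.div_(world_size)

def main() -> None:
    dist.init_process_group(backend="gloo")  # "nccl"/"mpi" on GPU clusters
    model = torch.nn.Linear(1024, 1024)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(32, 1024)        # this rank's shard of the mini-batch
    loss = model(x).square().mean()
    loss.backward()

    sync_gradients(model)            # the Allreduce step named on the slide
    optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, frameworks bucket parameters into large flat tensors and overlap these Allreduce calls with the backward pass rather than issuing one call per parameter, which is why the collective's bandwidth cost dominates at scale.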

Memory Optimization
- DeepSpeed ZeRO Optimizer: a novel memory-optimization technology for large-scale distributed deep learning
- Enables training models with billions of parameters across GPUs
- Each GPU only updates its own partition of the model states
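The last bullet is the heart of ZeRO Stage 1: optimizer states are partitioned so that each rank stores and updates state for only its 1/world_size slice of the parameters, and an Allgather rebuilds the full parameter copy afterwards. Below is a toy sketch of that idea under stated assumptions (torch.distributed, a single flat parameter tensor evenly divisible across ranks, and a simple momentum buffer standing in for Adam's state); it is not DeepSpeed's actual implementation.

```python
# zero1_sketch.py -- toy ZeRO Stage-1 partitioning (illustrative only;
# DeepSpeed also handles uneven shards, Adam state, mixed precision,
# and overlapped communication).
import torch
import torch.distributed as dist

def zero1_step(params: torch.Tensor, grads: torch.Tensor,
               momentum: torch.Tensor, lr: float = 1e-3) -> None:
    rank, world = dist.get_rank(), dist.get_world_size()
    shard = params.numel() // world            # assume even divisibility
    lo, hi = rank * shard, (rank + 1) * shard

    # Average gradients. (A reduce-scatter of just the local shard would
    # move less data; a plain Allreduce keeps the sketch short.)
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads.div_(world)

    # Each rank keeps optimizer state (here: a momentum buffer) for only
    # its 1/world slice, and updates only that slice of the parameters.
    momentum.mul_(0.9).add_(grads[lo:hi])
    params[lo:hi].add_(momentum, alpha=-lr)

    # Allgather the updated shards so every rank holds the full parameters.
    shards = [torch.empty(shard) for _ in range(world)]
    dist.all_gather(shards, params[lo:hi].clone())
    params.copy_(torch.cat(shards))

def main() -> None:
    dist.init_process_group(backend="gloo")
    world = dist.get_world_size()
    n = 1024 * world
    params = torch.zeros(n)
    momentum = torch.zeros(n // world)         # state for this shard only
    grads = torch.randn(n)                     # stands in for backward()
    zero1_step(params, grads, momentum)
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The memory saving comes from the optimizer state existing for only 1/world_size of the parameters on each GPU; in real ZeRO with Adam, that partitioned state includes the two moment estimates and the FP32 master weights, which dominate training memory.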
