NVIDIA: 2024 Best Practices for Using and Optimizing the Full-Stack LLM Solution (English version, 35 pages) (PDF)


NVIDIA LLM Full-Stack Solution: Best Practices for Usage and Optimization
Guofeng Zhou (Chandler), Technical R&D Manager, NVIDIA
GTC 2024 China AI Day, Mar. 19, 2024

Agenda
- NVIDIA Full-Stack Solution for LLM
- Best Practices of NVIDIA Megatron-Core for LLM Training
- Best Practices of NVIDIA TensorRT-LLM for LLM Inference
- Best Practices of NVIDIA Triton Inference Server for LLM Deployment
- Conclusion and Prospect

NVIDIA Full-Stack Solution for LLM: From Training and Inference to Deployment
- NVIDIA Megatron-Core (M-core) for LLM training: an open-source library of GPU-optimized techniques for LLM training, intended for customers building their own custom LLM frameworks.
- NVIDIA TensorRT-LLM for LLM inference: an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs).
- NVIDIA Triton Inference Server for LLM deployment: open-source software that standardizes AI model deployment and execution across every workload.
- TensorRT-LLM + Triton Inference Server: the suggested way to deploy LLM-based services on the NVIDIA AI platform, with SOTA performance and rich functionality. The TensorRT-LLM backend is the Triton backend for TensorRT-LLM, including in-flight batching, paged KV cache, and more.

Overview of NVIDIA's Large Language Model Offerings for Training: Solutions at Each Level of the Tech Stack
- Transformer Engine: a dedicated acceleration library for Hopper-accelerated Transformer models, including FP8 on Hopper.
- Megatron-LM: a lightweight reference framework for using Megatron-Core to build your own LLM framework.
- NeMo Framework: an easy-to-use, out-of-the-box framework with a large model collection for enterprise users to experiment, train, and deploy.
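The paged KV cache mentioned above can be illustrated with a minimal sketch. This is not the TensorRT-LLM API, just a conceptual model (class and method names are hypothetical): the attention key/value cache is carved into fixed-size blocks, each sequence holds a table of block ids rather than one contiguous region, so memory is allocated on demand and blocks from finished requests are immediately reusable by others.

```python
# Conceptual sketch of a paged KV cache (not the TensorRT-LLM API).
# Blocks are fixed-size; a sequence's cache is a list of block ids.

class PagedKVCache:
    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size            # tokens stored per block
        self.free_blocks = list(range(num_blocks))
        self.block_table = {}                   # seq_id -> [block ids]

    def append_token(self, seq_id: str, tokens_so_far: int) -> int:
        """Reserve a new block when a sequence crosses a block boundary;
        return the id of the block holding the new token."""
        table = self.block_table.setdefault(seq_id, [])
        if tokens_so_far % self.block_size == 0:    # current blocks are full
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        return table[-1]

    def release(self, seq_id: str) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_table.pop(seq_id, []))


cache = PagedKVCache(num_blocks=4, block_size=2)
for t in range(3):                    # 3 tokens with block_size=2 ...
    cache.append_token("req-0", t)
print(len(cache.block_table["req-0"]))  # ... occupy 2 blocks
cache.release("req-0")
print(len(cache.free_blocks))           # all 4 blocks free again
```

The design choice this illustrates: because allocation happens per block rather than per maximum sequence length, fragmentation is bounded by one block per sequence, which is what lets an in-flight batching scheduler pack many concurrent requests into a fixed memory budget.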
