Learning to Prune and Low-Rank Adaptation for Compact Language Model Deployment
Authors: Asmer Hamid Ali (aali115@asu.edu), Fan Zhang, Li Yang, Deliang Fan
Efficient, Secure and Intelligent Computing (ESIC) Laboratory (https://faculty.engineering.asu.edu/dfan/)
Arizona State
University
Copyright 2025 Arizona Board of Regents

Outline
1. Motivation and Problem Statement: Challenges in deploying large pre-trained models; limitations of existing methods.
2. Key Contributions: Overview of the proposed approach and its significance.
3. Parameter-Efficient Fine-Tuning and Model Pruning: Background on PEFT techniques; importance of structured pruning for efficiency.
4. Methodology Overview: Trainable pruning masks; integration with low-rank adaptation.
5. Efficient Pruning and Low-Rank Adaptation: Detailed explanation with equations and benefits.
6. Experimental Setup: Models, datasets, and evaluation metrics.
7. Results: Performance analysis and comparison with baselines.
8. Conclusion: Summary of contributions and future directions.
Motivation and Problem Statement
- Growing computational demands of large pre-trained models (LPMs).
- PEFT techniques address training overhead but fail to optimize inference efficiency (see the sketch after this slide).
- Need for a compact and efficient, deployment-ready solution.
Figure 1: Chart showing the growth in model size over time, with annotations on memory usage and hardware limits (Source: LLM: The Rise of Data).
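To make the inference-efficiency gap concrete, below is a minimal PyTorch sketch (an illustration under assumed names such as LoRALinear, rank, and mask_score, not this work's actual implementation). It shows that a LoRA-style update trains only a small fraction of the parameters, yet the merged weight evaluated at inference stays dense; a learnable structured mask over output channels is what would let rows of the deployed weight be removed.

```python
# Minimal sketch (not the authors' code): a frozen linear weight with a
# LoRA-style low-rank update and a learnable per-channel mask score.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        # Frozen pre-trained weight: never updated, but still evaluated at inference.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Trainable low-rank update: W + B @ A (only these small factors are learned).
        self.A = nn.Parameter(torch.zeros(rank, in_features))
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        # Learnable score per output channel, thresholded into a 0/1 structured mask.
        # (Training such a hard mask in practice needs a straight-through estimator;
        # here it only illustrates the deployment-time size argument.)
        self.mask_score = nn.Parameter(torch.ones(out_features))

    def merged_weight(self, threshold=0.5):
        mask = (torch.sigmoid(self.mask_score) > threshold).float()
        # Zeroed rows could be physically removed from the deployed weight.
        return mask.unsqueeze(1) * (self.weight + self.B @ self.A)

    def forward(self, x):
        return x @ self.merged_weight().t()


layer = LoRALinear(4096, 4096, rank=8)
total = sum(p.numel() for p in layer.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable: {trainable:,} of {total:,} parameters")  # roughly 0.4% trainable
# Without the mask, the merged weight is still a dense 4096 x 4096 matrix at
# inference, so LoRA alone does not reduce deployment cost.
```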
Figure 2: Table comparing LLaMA-7B models with various PEFT methods, showing parameter reductions and accur