Dynamic Co-Optimization Compiler: Leveraging Multi-Agent Reinforcement Learning for Enhanced DNN Accelerator Performance

Arya Fayyazi, Mehdi Kamal, Massoud Pedram
{afayyazi, mehdi.kamal, pedram}@usc.edu
University of Southern California, Los Angeles, California, USA
ASP-DAC, Tuesday, January 21, 2025

Motivation
- Increasing complexity of neural network models: advanced architectures and large-scale workloads demand more than mere software tweaks.
- Limitations of existing auto-tuners: traditional frameworks (e.g., TVM; Chen et al., 2018) focus primarily on software optimizations, leaving the hardware optimization potential largely untapped.
- Need for hardware-software synergy: jointly optimizing both layers is critical for peak performance but remains vastly underexplored.
[Image credit: "Software and Hardware," Altium Resources, Altium]

Related Work
- AutoTVM (Chen et al., 2018): uses machine-learning-based cost models to optimize DNN configurations, but focuses primarily on software parameters.
- CHAMELEON (Ahn et al., 2020): employs reinforcement learning for adaptive exploration of the solution space, but does not effectively integrate hardware parameter optimization.
- MetaTune (Ryu et al., 2021): leverages meta-learning for faster adaptation to new optimization spaces, but lacks a holistic hardware-software co-design approach.
- PRIME (Kumar et al., 2021): performs data-driven offline optimization for hardware design, but operates outside of reinforcement learning frameworks, leading to slower compilation times.
- NaaS (Zhou et al., 2022): jointly optimizes neural architectures and hardware accelerators, but its unified search space is extremely large, making exploration costly.

Shortcomings of Existing Approaches
- Manual tuning overhead: hand-optimized kernels are difficult to design and generally do not scale.
- Lack of HW-SW co-design: existing frameworks fail to co-optimize hardware and software (CHAMELEON, NaaS).
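As a hedged illustration of the kind of search these auto-tuners perform, the sketch below frames joint hardware-software tuning as an epsilon-greedy bandit over a toy configuration space. The knobs (tile size, PE count), the synthetic cost model, and all function names are hypothetical stand-ins, not taken from any of the cited frameworks.

```python
import itertools
import random

TILE_SIZES = [8, 16, 32, 64]   # software knob (e.g., loop tiling)
PE_COUNTS = [16, 32, 64]       # hardware knob (e.g., processing elements)

def measured_latency(tile, pes):
    """Synthetic cost model; a real tuner would measure on hardware."""
    return abs(tile - 32) * 2 + abs(pes - 64) + 1

def tune(episodes=200, eps=0.2, seed=0):
    """Epsilon-greedy search over the joint HW-SW configuration space."""
    rng = random.Random(seed)
    configs = list(itertools.product(TILE_SIZES, PE_COUNTS))
    # Initialize value estimates with one measurement per configuration.
    q = {c: -measured_latency(*c) for c in configs}
    for _ in range(episodes):
        # Explore a random config with probability eps, else exploit the best.
        cfg = rng.choice(configs) if rng.random() < eps else max(q, key=q.get)
        reward = -measured_latency(*cfg)    # higher reward = lower latency
        q[cfg] += 0.5 * (reward - q[cfg])   # incremental value update
    return max(q, key=q.get)

print(tune())  # -> (32, 64), the latency minimum of this toy cost model
```

A single agent suffices for this toy space; the talk's premise is that realistic joint spaces are far larger, motivating multi-agent RL that splits hardware and software knobs across cooperating agents.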