China Automotive Multimodal Interaction Development Research Report, 2024

Multimodal interaction research: AI foundation models deeply integrate into the cockpit, helping perceptual intelligence evolve into cognitive intelligence.

China Automotive Multimodal Interaction Development Research Report, 2024, released by ResearchInChina, combs through the interaction modes of mainstream cockpits, the application of those modes in key vehicle models launched in 2024, and the cockpit interaction solutions of OEMs/suppliers, and summarizes the development trends of cockpit multimodal interaction fusion.

1. Voice recognition dominates cockpit interaction and fuses with other modes to create a new interaction experience.

Among current cockpit interaction applications, voice interaction is the most widely and most frequently used in intelligent cockpits. According to the latest statistics from ResearchInChina, from January to August 2024, automotive voice systems were installed in about 11 million vehicles, a year-on-year increase of 10.9%, for an installation rate of 83%. Li Tao, General Manager of Baidu Apollo's intelligent cockpit business, pointed out that the frequency of people using voice in the cockpit has increased from 3-5 times a day initially to double digits today, and has even approached three digits on some models with leading voice interaction technology.

The frequent use of voice recognition not only greatly improves the user's interactive experience, but also drives the trend of fusion with other interaction modes such as touch and face recognition. For example, the full-cabin memory function of NIO Banyan 2.4.0 is based on face recognition: NOMI actively greets occupants whose information has been recorded (e.g., "Good morning, Doudou"); Zeekr 7X integrate