Disclosures & Disclaimer: This report must be read with the disclosures and the analyst certifications in the Disclosure appendix, and with the Disclaimer, which forms part of it. Issuer of report: The Hongkong and Shanghai Banking Corporation Limited. View HSBC Global Research at: https:/

25-27 March 2025. Find out more: HSBC Global Investment Summit

- Figure AI's Helix Vision-Language-Action (VLA) model greatly increases training efficiency vs traditional imitation training
- Drives & motors, reducers, ball screws and bearings account for 55% of the humanoid BOM, followed by sensors and AI chips
- The lack of pure-play listed humanoid names drives market interest in supply chain stocks; we identify all related companies

Model innovation offers cheaper and quicker training: In Exhibit 1, we weigh the cost reduction potential in training. In traditional imitation training, c.500 hours of training data are needed for a robot to learn a single task (for example, folding clothes). The same process must be repeated for every new task, making it hard to scale. However, Figure AI's "Helix" VLA (Vision-Language-Action) model could offer more efficient training. Helix can pick up any small household object it has never encountered before using only one pre-trained neural network weighting, without task-specific fine-tuning. The entire Helix model is trained on only c.500 hours of supervised data, similar to the training time previously required for a single task. Helix achieves this by bridging previously separate models for 1) decision making (using a Visual Language Model, VLM) and 2) robot action (through imitation learning). We believe the new Helix VLA model helps humanoids learn more tasks faster.

Humanoids achieve a longer battery life by offloading computation to the cloud, but this increases latency. On-board computing reduce