Frontiers in Perceptual AI: First-Person Video and Multimodal Perception
Kristen Grauman
University of Texas at Austin
FAIR, Meta AI

The third-person Web perceptual experience
Caltech 101 (2004), Caltech 256 (2006), PASCAL (2007-12), ImageNet (2009), LabelMe (2007), MS COCO (2014), SUN (2010), Places (2014), BSD (2001), Visual Genome (2016), AVA (2018), Kinetics (2017), ActivityNet (2015)
A curated, “disembodied” moment in time from a spectator's perspective.

First-person “egocentric” perceptual experience
An uncurated, long-form video stream driven by the agent's goals, interactions, and attention.

First-person perception and learning
Status quo: learning and inference with “disembodied” images and videos.
On the horizon: visual learning in the context of agent goals, interaction, and multi-sensory observations.

Why egocentric video?
Robot learning. Augmented reality.
Existing first-person video datasets
They inspire our effort, but call for greater scale, content, and diversity:

Dataset         Reference            #People   #Hours   Scenes
EPIC Kitchens   Damen et al. 2020    45        100      kitchens only
UT Ego          Lee et al. 2012      4         17       daily life, indoors/outdoors
EGTEA Gaze+     Li et al. 2018       32        28       kitchens only
Charades-Ego    Sigurdsson 2018      71        34       indoor
ADL             Pirsiavash 2012      20        10       apartment

[Chart: #Hours vs. #Participants for the datasets above, including EPIC-Kitchens-100, compared against the Ego4D goal of far greater #people, #hours, and #scenes. Caption truncated: “Combining all prior egocentric ...”]
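To make the “combining all prior” comparison concrete, the minimal Python sketch below totals the #People and #Hours columns of the table. The names and numbers come from the slide; the plain sums are an illustrative assumption and ignore any participant overlap across datasets.

datasets = {
    # name: (people, hours, scenes) -- values copied from the table above
    "EPIC Kitchens (Damen et al. 2020)": (45, 100, "kitchens only"),
    "UT Ego (Lee et al. 2012)": (4, 17, "daily life, indoors/outdoors"),
    "EGTEA Gaze+ (Li et al. 2018)": (32, 28, "kitchens only"),
    "Charades-Ego (Sigurdsson 2018)": (71, 34, "indoor"),
    "ADL (Pirsiavash 2012)": (20, 10, "apartment"),
}

total_people = sum(p for p, _, _ in datasets.values())  # 172
total_hours = sum(h for _, h, _ in datasets.values())   # 189
print(f"All prior datasets combined: ~{total_people} people, ~{total_hours} hours")

Even taken together, the prior datasets amount to under 200 hours from under 200 people, which underscores the call for greater scale behind the Ego4D goal.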