How does high-order information accelerate neural network training?
Xunpeng Huang
Bytedance AI Lab, ML NLC
Nov 2020

Outline
- Preliminaries
- Accelerate neural network training with second moments
- Approximate Hessian with a low computational complexity
- Summary
- References

Preliminaries
A brief introduction to optimizers and their applications

Figure: Logistic regression and SVM classification

Most optimization problems are solved by different optimizers.
Many tasks in CV and NLP can be formulated as optimization problems.

A brief introduction to optimizers and their applications

Optimization problems
- Stochastic problems: $\arg\min_x F(x) = \mathbb{E}_{\xi} F(x, \xi)$
- Finite-sum problems: $\arg\min_x F(x) = \frac{1}{n} \sum_i f_i(x)$

where $x$ and $F(x)$ denote the parameters and the objective function, respectively.
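As a concrete instance of the finite-sum formulation above, the sketch below (illustrative only, not from the slides; the data and names are made up) writes binary logistic regression as $\frac{1}{n}\sum_i f_i(x)$ and compares the full objective with a single-sample stochastic estimate.

```python
import numpy as np

# Toy data for binary logistic regression: n samples, d features (illustrative).
rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))              # feature matrix
b = rng.integers(0, 2, size=n) * 2 - 1   # labels in {-1, +1}

def f_i(x, i):
    """Per-sample loss f_i(x) = log(1 + exp(-b_i * a_i^T x))."""
    return np.log1p(np.exp(-b[i] * (A[i] @ x)))

def F(x):
    """Finite-sum objective F(x) = (1/n) * sum_i f_i(x)."""
    return np.mean([f_i(x, i) for i in range(n)])

x = np.zeros(d)
i = rng.integers(n)
print("full objective F(x):        ", F(x))
print("single-sample estimate f_i: ", f_i(x, i))  # unbiased estimate of F(x)
```

The single-sample loss is exactly what a stochastic optimizer queries at each step, while the full average is what the deterministic optimizers in the next tables use.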
Optimizers          Update rules
GD                  $x_{t+1} = x_t - \eta \nabla F(x_t)$
Newton method       $x_{t+1} = x_t - \nabla^2 F(x_t)^{-1} \nabla F(x_t)$
Table: Deterministic optimizers

Optimizers          Update rules
SGD                 $x_{t+1} = x_t - \eta \nabla f_{i_t}(x_t)$
SVRG                $x_t^{m+1} = x_t^m - \eta \left( \nabla f_{i_m}(x_t^m) - \nabla f_{i_m}(x_{t,0}) + \nabla F(x_{t,0}) \right)$
Stochastic Newton   $x_{t+1} = x_t - \eta \, \nabla^2 f_{i_t}(x_t)^{-1} \nabla f_{i_t}(x_t)$
Table: Stochastic optimizers
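To make the deterministic update rules concrete, here is a minimal sketch (not from the slides; the quadratic objective, step size, and iteration count are illustrative choices) that runs the GD and Newton updates from the table above. Because the objective is quadratic, its Hessian is constant and a single Newton step lands on the minimizer, while GD needs many small steps.

```python
import numpy as np

# Quadratic objective F(x) = 0.5 * x^T Q x - c^T x with known gradient and Hessian.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
c = np.array([1.0, -2.0])

grad = lambda x: Q @ x - c               # \nabla F(x)
hess = lambda x: Q                       # \nabla^2 F(x), constant for a quadratic

# GD: x_{t+1} = x_t - eta * \nabla F(x_t)
x_gd, eta = np.zeros(2), 0.1
for _ in range(200):
    x_gd = x_gd - eta * grad(x_gd)

# Newton: x_{t+1} = x_t - \nabla^2 F(x_t)^{-1} \nabla F(x_t); one step suffices here
x_nt = np.zeros(2)
x_nt = x_nt - np.linalg.solve(hess(x_nt), grad(x_nt))

print("GD iterate:    ", x_gd)
print("Newton iterate:", x_nt)
print("exact solution:", np.linalg.solve(Q, c))
```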
High-order optimization in convex settings

Core problem
Why can we accelerate the convergence via high-order information?

In most first-order optimizers, the step size depends on the gradient Lipschitz continuity constant $L$ of the objective function, which is defined by: $\forall x, y$ we have
$$F(x) \le F(y) + \nabla F(y)^\top (x - y) + \frac{L}{2} \|x - y\|^2.$$
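The derivation below is not written out on the slide, but it is the standard reason why $L$ dictates the first-order step size: setting $y = x_t$ and minimizing the right-hand side of the smoothness bound over $x$ recovers gradient descent with step size $1/L$,
$$x_{t+1} = \arg\min_x \; F(x_t) + \nabla F(x_t)^\top (x - x_t) + \frac{L}{2}\|x - x_t\|^2 = x_t - \frac{1}{L} \nabla F(x_t),$$
and plugging this $x_{t+1}$ back into the bound gives the guaranteed decrease
$$F(x_{t+1}) \le F(x_t) - \frac{1}{2L}\|\nabla F(x_t)\|^2.$$
So a first-order method is tied to the single global scalar $L$, whereas second-order information replaces it with local curvature, as the next slide shows.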
High-order optimization in convex settings

Answer to the core problem
High-order information provides adaptive step sizes during the iterations.

For GD:
$$x_{t+1} = x_t - \eta_t \nabla f(x_t) \;\Longleftrightarrow\; x_{t+1} = \arg\min_x \; f(x_t) + \nabla f(x_t)^\top (x - x_t) + \frac{1}{2\eta_t}\|x - x_t\|^2$$

For the Newton method:
$$x_{t+1} = x_t - \nabla^2 f(x_t)^{-1} \nabla f(x_t) \;\Longleftrightarrow\; x_{t+1} = \arg\min_x \; f(x_t) + \nabla f(x_t)^\top (x - x_t) + \frac{1}{2}(x - x_t)^\top \nabla^2 f(x_t)(x - x_t)$$
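A small numerical check of the equivalences above (illustrative only, not part of the slides; the test function and the use of scipy are my choices): the closed-form GD and Newton updates coincide with the minimizers of their respective local models, so the Hessian term acts as a per-direction, adaptive step size in place of the single scalar $1/\eta_t$.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative smooth convex function f(x) = 0.25 * ||x||^4 with closed-form derivatives.
f = lambda x: 0.25 * float(x @ x) ** 2
grad = lambda x: (x @ x) * x                                        # \nabla f(x)
hess = lambda x: (x @ x) * np.eye(len(x)) + 2.0 * np.outer(x, x)    # \nabla^2 f(x)

x_t, eta = np.array([1.0, -2.0]), 0.1

# Closed-form updates from the slide.
gd_update = x_t - eta * grad(x_t)
newton_update = x_t - np.linalg.solve(hess(x_t), grad(x_t))

# The corresponding local models, minimized numerically over x.
gd_model = lambda x: f(x_t) + grad(x_t) @ (x - x_t) + (x - x_t) @ (x - x_t) / (2 * eta)
newton_model = lambda x: f(x_t) + grad(x_t) @ (x - x_t) + 0.5 * (x - x_t) @ hess(x_t) @ (x - x_t)

print(np.allclose(gd_update, minimize(gd_model, x_t).x, atol=1e-4))       # expected: True
print(np.allclose(newton_update, minimize(newton_model, x_t).x, atol=1e-4))  # expected: True
```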