M. Gürbüzbalaban, A. Ozdaglar, P. Parrilo
Gürbüzbalaban, M., Ozdaglar, A. & Parrilo, P. Math. Program. (2015) 151: 283.

Motivated by machine learning problems over large data sets and by distributed optimization over networks, we develop and analyze a new method, the incremental Newton (IN) method, for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent under a variable stepsize rule. We further show that, under a gradient growth condition, the convergence rate is linear for both variable and constant stepsize rules. By means of an example, we show that without the gradient growth condition the incremental Newton method cannot achieve linear convergence. Our analysis extends to other incremental methods: in particular, we obtain a linear convergence rate result for the incremental Gauss–Newton algorithm under a variable stepsize rule.
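To make the setting concrete, the sketch below illustrates the general flavor of an incremental Newton-type pass over strongly convex components with a diminishing (variable) stepsize. The abstract does not specify the paper's exact update rule, Hessian bookkeeping, or stepsize constants, so the particular choices here (per-cycle Hessian accumulation, stepsize α_k = 1/k, quadratic components) are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hedged sketch of an incremental Newton-style iteration for
# minimizing f(x) = sum_i f_i(x) with strongly convex quadratic
# components f_i(x) = 0.5 * (x - b_i)^T A_i (x - b_i).
# All specifics below are assumptions for illustration only.

rng = np.random.default_rng(0)
d, m = 5, 20

# Random strongly convex components: each A_i is positive definite.
As, bs = [], []
for _ in range(m):
    M = rng.standard_normal((d, d))
    As.append(M @ M.T + np.eye(d))
    bs.append(rng.standard_normal(d))

# Exact minimizer of the sum, for reference.
x_star = np.linalg.solve(sum(As), sum(A @ b for A, b in zip(As, bs)))

x = np.zeros(d)
for k in range(1, 51):                 # outer cycles over the data
    alpha = 1.0 / k                    # variable (diminishing) stepsize rule
    H = np.zeros((d, d))               # running sum of component Hessians
    for i in range(m):                 # one incremental pass: one f_i at a time
        H += As[i]                     # accumulate curvature seen so far
        g = As[i] @ (x - bs[i])        # gradient of the i-th component at x
        x -= alpha * np.linalg.solve(H, g)  # scaled Newton-type step

print("distance to minimizer:", np.linalg.norm(x - x_star))
```

As in incremental gradient methods, each inner step uses only one component's gradient, but here it is preconditioned by the accumulated curvature, which is what distinguishes Newton-type incremental schemes; the diminishing stepsize compensates for the fact that individual component gradients need not vanish at the minimizer of the sum.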