
Affine-Invariant Online Optimization

Roi Livni
NIPS (2017) (to appear)

Abstract

We present a new affine-invariant optimization algorithm called \emph{Online Lazy Newton}. The algorithm is a modification of the Online Newton Step algorithm, but its convergence rate is independent of the problem's conditioning: instead, its performance depends on the best possible preconditioning of the problem in retrospect and on its \emph{intrinsic} dimensionality. As an application, we show how Online Lazy Newton achieves the optimal $\widetilde{\Theta}(\sqrt{rT})$ regret for the low rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by \citet{hazan2016online}. Our rate also applies to the \emph{supervised low rank experts} setting proposed by \citet{foster2017zigzag}, which previously obtained a regret of $O(\sqrt{rT}+\log N\log r)$.
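
For orientation, the sketch below illustrates the classical Online Newton Step update (Hazan, Agarwal, and Kale), which the abstract says Online Lazy Newton modifies; it is not the paper's algorithm. All names, the step size, and the unconstrained setting are illustrative assumptions, and the usual projection onto the feasible set with respect to the norm induced by $A_t$ is omitted.

```python
import numpy as np

def online_newton_step(grad_fn, d, T, eps=1.0, eta=0.5):
    """Minimal sketch of the classical Online Newton Step update.

    grad_fn(x, t) is assumed to return the gradient of the t-th loss at x.
    Parameter names and values here are illustrative, not the paper's.
    """
    x = np.zeros(d)            # current iterate
    A = eps * np.eye(d)        # running second-moment matrix A_t
    for t in range(T):
        g = grad_fn(x, t)      # gradient of the t-th loss at the current iterate
        A += np.outer(g, g)    # rank-one update: A_t = A_{t-1} + g_t g_t^T
        x = x - eta * np.linalg.solve(A, g)  # Newton-style step (unprojected)
        # In the constrained case, a projection onto the feasible set in the
        # norm induced by A would follow here; omitted in this sketch.
    return x
```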