School of Technology and Computer Science Seminars

Optimization in Very High Dimensions

by Mr. Gugan Thoppe (School of Technology and Computer Science, TIFR)

Friday, November 1, 2013 (Asia/Kolkata)
at Colaba Campus, D-405 (D-Block Seminar Room)
Description
Optimization problems where the number of variables $n$ is $10^4$ or more are now ubiquitous in big data applications such as machine learning, social network science, and signal processing. For such problems, even basic methods such as steepest descent and conjugate gradient are not favored because their per-iteration cost is $O(n^2)$ or more. Inspired by the Kaczmarz method, this talk proposes a simple framework for developing randomized descent methods for optimization in very high dimensions. In each iteration, the new estimate is the optimum of the objective function restricted to a random $k$-dimensional affine space passing through the old estimate. If these affine spaces are chosen cleverly, one can simultaneously have a) cheap iterations, b) fast convergence, and c) the ability to trade off between the two by varying $k$. To demonstrate the efficacy of this framework, we present a novel algorithm to solve quadratic programs. The per-iteration cost of this algorithm is $O(nk) + O(k^3)$ and its expected convergence rate is $1 - \Omega(\frac{k}{n})$.
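The abstract does not specify how the random affine spaces are chosen. As an illustration only, the sketch below assumes they are spanned by $k$ randomly chosen coordinate directions through the current iterate, which is one way to realize the stated $O(nk) + O(k^3)$ per-iteration cost for the quadratic objective $f(x) = \frac{1}{2}x^{T}Ax - b^{T}x$ with $A$ symmetric positive definite. The function name and parameters are hypothetical, not taken from the talk.

```python
import numpy as np

def random_subspace_qp(A, b, k, num_iters, rng=None):
    """Minimize f(x) = 0.5 * x^T A x - b^T x (A symmetric positive definite).

    Each iteration exactly minimizes f over the affine space through the
    current iterate spanned by k randomly chosen coordinate directions.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    x = np.zeros(n)
    grad = A @ x - b                                 # gradient at x, maintained incrementally
    for _ in range(num_iters):
        idx = rng.choice(n, size=k, replace=False)   # random k coordinate directions
        A_kk = A[np.ix_(idx, idx)]                   # k x k principal submatrix, O(k^2)
        y = np.linalg.solve(A_kk, -grad[idx])        # exact minimizer in the subspace, O(k^3)
        x[idx] += y                                  # update only the chosen coordinates
        grad += A[:, idx] @ y                        # rank-k gradient update, O(nk)
    return x

# Small demonstration on a synthetic well-conditioned quadratic program.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                      # symmetric positive definite
    b = rng.standard_normal(n)
    x_hat = random_subspace_qp(A, b, k=20, num_iters=500, rng=rng)
    print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```

In this coordinate-aligned variant, no full matrix-vector product with $A$ is ever formed: only a $k \times k$ submatrix and $k$ columns of $A$ are touched per iteration, which is what keeps each step at $O(nk) + O(k^3)$ rather than $O(n^2)$.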