School of Technology and Computer Science Seminars

Lower Bounds for Best Arm Identification and Regret Minimization for Bandit Strategies

by Shubhada Agrawal (STCS, TIFR)

Friday, September 7, 2018 (Asia/Kolkata)
at A-201 STCS Seminar Room
Description
Abstract: The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. The problem has been studied in a number of settings. In 1985, Lai and Robbins established a lower bound on the regret incurred by any strategy that aims to minimize cumulative regret. The idea behind their proof can be used to prove lower bounds for other bandit problems as well. In 2016, Kaufmann et al. encapsulated this main idea in an inequality that can be used directly to prove such lower bounds. In this talk, we will look at this inequality and derive lower bounds for the best arm identification problem and the regret minimization setting.
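
For context, the following LaTeX sketch states the change-of-measure inequality of Kaufmann et al. (2016) and the form of the Lai-Robbins regret lower bound it recovers. The notation (N_a(T) for the number of pulls of arm a up to time T, mu_a for the mean of arm a, Delta_a for its gap to the best mean, kl for the binary relative entropy) is our own convention and not taken from the talk; the exact statements in the seminar may differ.

% Change-of-measure inequality (Kaufmann et al., 2016): for two bandit models
% \nu and \nu' and any event \mathcal{E} measurable with respect to the
% observations up to time T,
\sum_{a=1}^{K} \mathbb{E}_{\nu}[N_a(T)]\,\mathrm{KL}(\nu_a,\nu'_a)
  \;\ge\; \mathrm{kl}\bigl(\mathbb{P}_{\nu}(\mathcal{E}),\mathbb{P}_{\nu'}(\mathcal{E})\bigr),
\qquad
\mathrm{kl}(x,y) = x\log\frac{x}{y} + (1-x)\log\frac{1-x}{1-y}.

% Choosing \nu' so that a suboptimal arm a becomes optimal, and an event
% \mathcal{E} that a reasonable strategy assigns very different probabilities
% under \nu and \nu', yields (for one-parameter exponential family rewards,
% e.g. Bernoulli, and any uniformly efficient strategy)
\liminf_{T\to\infty} \frac{\mathbb{E}_{\nu}[N_a(T)]}{\log T}
  \;\ge\; \frac{1}{\mathrm{kl}(\mu_a,\mu^{*})}
\quad\Longrightarrow\quad
\liminf_{T\to\infty} \frac{R_T}{\log T}
  \;\ge\; \sum_{a:\Delta_a>0} \frac{\Delta_a}{\mathrm{kl}(\mu_a,\mu^{*})}.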