School of Technology and Computer Science Seminars

Statistical Inference Based on a Parametric Family of Divergences

by Mr. M. Ashok Kumar (Indian Statistical Institute, Bangalore)

Monday, March 2, 2015 (Asia/Kolkata)
at D-405 (D-Block Seminar Room)
Description
We study minimization problems with respect to a one-parameter family of generalized divergences, denoted I_{\alpha}(P,Q). These I_{\alpha}-divergences generalize the Kullback-Leibler divergence (KL-divergence). Like the KL-divergence, they behave like squared Euclidean distance and satisfy a Pythagorean property. This talk is about the usefulness of these geometric properties in robust statistics. The talk is organized in three parts.
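
The abstract leaves the divergence unspecified; one form consistent with the literature on relative \alpha-entropy, quoted here purely as a working assumption for the illustrations below, is

I_{\alpha}(P,Q) = \frac{\alpha}{1-\alpha}\log\sum_x p(x)\,q(x)^{\alpha-1} - \frac{1}{1-\alpha}\log\sum_x p(x)^{\alpha} + \log\sum_x q(x)^{\alpha},

which recovers the KL-divergence D(P\|Q) in the limit \alpha \to 1.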

In the first part, we study the minimization of I_{\alpha}(P,Q) as the first argument varies over a family of probability distributions satisfying linear statistical constraints. Such a constraint set is called a linear family. This minimization problem generalizes the maximum Rényi or Tsallis entropy principle of statistical physics. The structure of the minimizing probability distribution naturally suggests a statistical model of power-law probability distributions, which we call an \alpha-power-law family. This is analogous to the exponential family that arises when relative entropy is minimized subject to the same linear statistical constraints.
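
As a rough numerical illustration (not from the talk), the Python sketch below minimizes the assumed I_{\alpha} form above over the first argument, subject to a single moment constraint on a finite alphabet. The alphabet, reference distribution, and constraint value are all invented for the example.

    # Sketch: forward I_alpha-projection of a reference pmf Q onto a linear
    # family {P : E_P[X] = c} on a finite alphabet, via constrained optimization.
    # The I_alpha formula below is an assumption (see the form quoted earlier).
    import numpy as np
    from scipy.optimize import minimize

    def I_alpha(p, q, a):
        return (a / (1 - a)) * np.log(np.sum(p * q ** (a - 1))) \
               - (1 / (1 - a)) * np.log(np.sum(p ** a)) \
               + np.log(np.sum(q ** a))

    n = 10
    x = np.arange(1, n + 1)              # alphabet {1, ..., 10}
    q = np.full(n, 1.0 / n)              # reference distribution Q: uniform
    alpha, c = 0.7, 3.0                  # linear constraint: E_P[X] = 3

    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
            {"type": "eq", "fun": lambda p: p @ x - c}]
    res = minimize(lambda p: I_alpha(p, q, alpha), q, method="SLSQP",
                   bounds=[(1e-9, 1.0)] * n, constraints=cons)
    p_star = res.x
    # For alpha != 1 the minimizer is expected to decay as a power law in x
    # (the abstract's alpha-power-law family), versus the exponential decay
    # of the classical maximum-entropy solution recovered as alpha -> 1.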

In the second part, we study the minimization of I_{\alpha}(P,Q) over the second argument. This minimization is typically carried out over a parametric family, such as the exponential family or an \alpha-power-law family, and is of interest in robust statistical estimation.
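
To make the robustness angle concrete, here is a hypothetical sketch, again assuming the divergence form quoted above: it fits a truncated geometric model to contaminated count data by minimizing I_{\alpha} over the model parameter, with \alpha near 1 serving as a stand-in for maximum likelihood. The data-generating setup, and the claim that \alpha > 1 is the robust regime, are assumptions made for illustration.

    # Sketch: robust fitting by minimizing I_alpha(P_hat, Q_theta) over theta,
    # where P_hat is the empirical pmf of contaminated data and Q_theta is a
    # truncated geometric model. Divergence form is assumed, as before.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def I_alpha(p, q, a):
        m = p > 0                                  # skip zero-probability cells
        return (a / (1 - a)) * np.log(np.sum(p[m] * q[m] ** (a - 1))) \
               - (1 / (1 - a)) * np.log(np.sum(p[m] ** a)) \
               + np.log(np.sum(q ** a))

    rng = np.random.default_rng(0)
    K = 30
    data = rng.geometric(0.5, size=200) - 1        # clean samples on {0, 1, ...}
    data = np.concatenate([data, np.full(10, 25)]) # inject gross outliers
    p_hat = np.bincount(np.clip(data, 0, K - 1), minlength=K) / len(data)

    def q_theta(t):                                # truncated geometric pmf
        q = t * (1 - t) ** np.arange(K)
        return q / q.sum()

    def fit(a):
        return minimize_scalar(lambda t: I_alpha(p_hat, q_theta(t), a),
                               bounds=(0.01, 0.99), method="bounded").x

    # alpha near 1 mimics the MLE and is pulled by the outliers; alpha > 1
    # is expected to down-weight them (an assumption for this illustration).
    print(fit(1.01), fit(1.5))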

In the third part, we show an orthogonality relationship between an \alpha-power-law family and an associated linear family. As a consequence, the minimization of I_{\alpha} over the second argument on an \alpha-power-law family can be shown to be equivalent to a minimization of I_{\alpha} over the first argument on a linear family. The latter turns out to be the simpler problem of minimizing a quasi-convex objective function subject to linear constraints. (This is joint work with Rajesh Sundaresan.)
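
A minimal sketch of the geometric fact behind this equivalence, assuming the Pythagorean property mentioned above carries over to linear families exactly as in the KL case: if P^* minimizes I_{\alpha}(\cdot, Q) over a linear family L, then

I_{\alpha}(P,Q) = I_{\alpha}(P,P^*) + I_{\alpha}(P^*,Q) \quad \text{for all } P \in L,

so P^* plays the role of the foot of a perpendicular dropped from Q onto L. The orthogonality in the talk would then pair L with the \alpha-power-law family containing P^* and Q, which is what lets the projection on one family be traded for the projection on the other.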