Title:
Non-asymptotic bounds for prediction problems and density estimation.

dc.contributor.advisor Koltchinskii, Vladimir
dc.contributor.author Minsker, Stanislav en_US
dc.contributor.committeeMember Bakhtin, Yuri
dc.contributor.committeeMember Balcan, Maria-Florina
dc.contributor.committeeMember Houdre, Christian
dc.contributor.committeeMember Romberg, Justin
dc.contributor.department Mathematics en_US
dc.date.accessioned 2012-09-20T18:20:29Z
dc.date.available 2012-09-20T18:20:29Z
dc.date.issued 2012-07-05 en_US
dc.description.abstract This dissertation investigates learning scenarios in which a high-dimensional parameter must be estimated from a sample of fixed size, often smaller than the dimension of the problem. The first part answers several open questions about the binary classification problem in the framework of active learning. Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict the label Y based on the observation X. A prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request the label Y for any instance X that it considers most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods; second, we propose an active learning algorithm that attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem. The second part of this thesis concerns sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on a penalized empirical risk minimization algorithm, and we show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance. en_US
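The abstract's informal description of active learning (on each iteration, request the label of the instance the algorithm considers most informative) can be illustrated with a generic pool-based uncertainty-sampling loop. This is a textbook sketch only, not the adaptive algorithm analyzed in the dissertation; the function names and the "probability closest to 1/2" query criterion are illustrative assumptions.

```python
import numpy as np

def uncertainty_sampling(pool_X, predict_proba, n_queries, query_label):
    """Generic pool-based active learning loop (illustration only; NOT the
    adaptive algorithm analyzed in the dissertation).

    pool_X        : 1-D array of unlabeled instances
    predict_proba : callable giving an estimate of P(Y=1 | X) per instance
    n_queries     : label budget
    query_label   : oracle returning the label Y for a chosen instance X
    """
    labeled = []                              # (x, y) pairs acquired so far
    unlabeled = list(range(len(pool_X)))      # indices still in the pool
    for _ in range(n_queries):
        # Request the label of the instance the current model is least
        # certain about: estimated P(Y=1 | X) closest to 1/2.
        probs = predict_proba(pool_X[unlabeled])
        idx = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        labeled.append((pool_X[idx], query_label(pool_X[idx])))
        unlabeled.remove(idx)
    return labeled
```

With a budget of three queries on a one-dimensional pool, the loop concentrates its label requests near the decision boundary rather than spending them uniformly, which is the intuition behind the label-complexity savings that the dissertation's lower and upper bounds quantify.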
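The penalized empirical risk minimization idea from the second part can be sketched, under simplifying assumptions, as l1-penalized least squares over a finite dictionary solved by coordinate descent. The dissertation treats a general (possibly infinite) dictionary H and a different penalty analysis; the quadratic loss, finite dictionary, and solver below are illustrative choices only.

```python
import numpy as np

def l1_penalized_erm(Phi, y, lam, n_iter=200):
    """Coordinate descent for the l1-penalized least-squares problem
        min_w  (1/2n) * ||y - Phi w||^2 + lam * ||w||_1,
    where column j of Phi holds the dictionary function h_j evaluated on
    the sample.  A finite-dictionary sketch of penalized ERM; the
    dissertation's estimator and penalty may differ.
    """
    n, p = Phi.shape
    w = np.zeros(p)
    col_sq = (Phi ** 2).sum(axis=0) / n       # (1/n) * ||phi_j||^2
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = y - Phi @ w + Phi[:, j] * w[j]
            rho = Phi[:, j] @ r / n
            # Soft-thresholding: small correlations are set exactly to
            # zero, which is what produces a sparse prediction rule.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

When the target truly depends on only a few dictionary elements, the soft-thresholding step zeroes out the irrelevant coordinates, mirroring the abstract's claim that the estimator "takes advantage of the possible sparse structure of the problem".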
dc.description.degree PhD en_US
dc.identifier.uri http://hdl.handle.net/1853/44808
dc.publisher Georgia Institute of Technology en_US
dc.subject Active learning en_US
dc.subject Sparse recovery en_US
dc.subject Oracle inequality en_US
dc.subject Confidence bands en_US
dc.subject Infinite dictionary en_US
dc.subject.lcsh Estimation theory -- Asymptotic theory
dc.subject.lcsh Estimation theory
dc.subject.lcsh Distribution (Probability theory)
dc.subject.lcsh Prediction theory
dc.subject.lcsh Active learning
dc.subject.lcsh Algorithms
dc.subject.lcsh Mathematical optimization
dc.subject.lcsh Chebyshev approximation
dc.title Non-asymptotic bounds for prediction problems and density estimation. en_US
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.advisor Koltchinskii, Vladimir
local.contributor.corporatename College of Sciences
local.contributor.corporatename School of Mathematics
relation.isAdvisorOfPublication 343bf98c-e255-48f7-aa23-3efbbf0ef175
relation.isOrgUnitOfPublication 85042be6-2d68-4e07-b384-e1f908fae48a
relation.isOrgUnitOfPublication 84e5d930-8c17-4e24-96cc-63f5ab63da69
Files
Original bundle
Name: minsker_stanislav_201208_phd.pdf
Size: 1.03 MB
Format: Adobe Portable Document Format