Title: Syntactic foundations for machine learning

dc.contributor.advisor Gray, Alexander G.
dc.contributor.advisor Isbell, Charles L.
dc.contributor.author Bhat, Sooraj en_US
dc.contributor.committeeMember Agarwal, Ashish
dc.contributor.committeeMember Shan, Chung-chieh
dc.contributor.committeeMember Vuduc, Richard
dc.contributor.department Computer Science en_US
dc.date.accessioned 2013-06-15T02:58:24Z
dc.date.available 2013-06-15T02:58:24Z
dc.date.issued 2013-04-08 en_US
dc.description.abstract Machine learning has risen in importance across science, engineering, and business in recent years. Domain experts have begun to understand how their data analysis problems can be solved in a principled and efficient manner using methods from machine learning, with its simultaneous focus on statistical and computational concerns. Moreover, the data in many of these application domains has exploded in availability and scale, further underscoring the need for algorithms that find patterns and trends quickly and correctly. However, most people actually analyzing data today operate far from the expert level. Available statistical libraries and even textbooks contain only a finite sample of the possibilities afforded by the underlying mathematical principles. Ideally, practitioners should be able to do what machine learning experts can do: employ the fundamental principles to experiment with the practically infinite number of possible customized statistical models, as well as alternative algorithms for solving them, including advanced techniques for handling massive datasets. This would lead to more accurate models, the ability in some cases to analyze data that was previously intractable, and, if the experimentation can be greatly accelerated, huge gains in human productivity. Fixing this state of affairs requires mechanizing and automating these statistical and algorithmic principles. This task has received little attention because we lack a suitable syntactic representation capable of specifying machine learning problems and solutions; without such a representation, there is no way to encode the principles in question, which are themselves a mapping between problem and solution. This work provides the foundational layer for enabling this vision, with the thesis that such a representation is possible. We demonstrate the thesis by defining a syntactic representation of machine learning that is expressive, promotes correctness, and enables the mechanization of a wide variety of useful solution principles. en_US
dc.description.degree PhD en_US
dc.identifier.uri http://hdl.handle.net/1853/47700
dc.publisher Georgia Institute of Technology en_US
dc.subject Probabilistic programming en_US
dc.subject Type theory en_US
dc.subject Formal languages en_US
dc.subject Probability en_US
dc.subject Optimization en_US
dc.subject.lcsh Semantics
dc.subject.lcsh Machine learning
dc.subject.lcsh Stochastic models
dc.subject.lcsh Computer programming
dc.title Syntactic foundations for machine learning en_US
dc.type Text
dc.type.genre Dissertation
dspace.entity.type Publication
local.contributor.advisor Isbell, Charles L.
local.contributor.corporatename College of Computing
local.contributor.corporatename School of Computer Science
relation.isAdvisorOfPublication 3f357176-4c4b-402c-8b61-ec18ffb083a6
relation.isOrgUnitOfPublication c8892b3c-8db6-4b7b-a33a-1b67f7db2021
relation.isOrgUnitOfPublication 6b42174a-e0e1-40e3-a581-47bed0470a1e
Files
Original bundle
Name: bhat_sooraj_b_201305_phd.pdf
Size: 761.17 KB
Format: Adobe Portable Document Format