Person: Theodorou, Evangelos A.


Publication Search Results

  • Item
    The Science of Autonomy: A "Happy" Symbiosis Among Control, Learning and Physics
    (2018-03-28) Theodorou, Evangelos A.
    In this talk I will present an information-theoretic approach to stochastic optimal control and inference that has advantages over classical methodologies and theories for decision making under uncertainty. The main idea is that there are certain connections between optimality principles in control and information-theoretic inequalities in statistical physics that allow us to solve hard decision-making problems in robotics, autonomous systems, and beyond (a representative duality of this kind is sketched after this list). There are essentially two different points of view of the same "thing," and these points of view overlap for a fairly general class of dynamical systems subject to stochastic effects. I will also present a holistic view of autonomy that collapses planning, perception, and control into one computational engine, and ask questions such as how organization and structure relate to computation and performance. The last part of my talk covers computational frameworks for uncertainty representation and suggests ways to incorporate these representations within learning and control.
  • Item
    Stochastic Control: From Theory to Parallel Computation and Applications
    (2016-02-24) Theodorou, Evangelos A.
    For autonomous systems to operate in stochastic environments, they have to be equipped with fast decision-making processes to reason about the best possible action. Grounded in first principles of stochastic optimal control theory and statistical physics, the path integral framework provides a mathematically sound methodology for decision making under uncertainty. It also creates opportunities for the development of novel sampling-based planning and control algorithms that are highly parallelizable (a minimal sampling-based update of this kind is sketched after this list). In this talk, I will present results in the area of sampling-based control that go beyond classical formulations and show applications to robotics and autonomous systems for tasks such as manipulation, grasping, and high-speed navigation. In addition to sampling-based stochastic control, I will present alternative methods that rely on uncertainty propagation using stochastic variational integrators and polynomial chaos theory, and demonstrate their implications for trajectory optimization and state estimation. At the end of the talk, towards closing the gap between high-level reasoning/decision making and low-level organization/computation, I will highlight the interdependencies between theory, algorithms, and forms of computation, and discuss future computational technologies in the area of autonomy and robotics.
  • Item
    Information-Theoretic Stochastic Optimal Control via Incremental Sampling-based Algorithms
    (Georgia Institute of Technology, 2014-12) Arslan, Oktay; Theodorou, Evangelos A.; Tsiotras, Panagiotis
    This paper considers optimal control of dynamical systems represented by nonlinear stochastic differential equations. It is well known that the optimal control policy for this problem can be obtained as a function of a value function that satisfies a nonlinear partial differential equation, namely the Hamilton-Jacobi-Bellman equation. This nonlinear PDE must be solved backwards in time, and the computation is intractable for large-scale systems. Under certain assumptions, and after applying a logarithmic transformation, an alternative characterization of the optimal policy can be given in terms of a path integral. Path Integral (PI) based control methods have recently been shown to provide elegant solutions to a broad class of stochastic optimal control problems. One of the implementation challenges with this formalism is the computation of the expectation of a cost functional over the trajectories of the unforced dynamics. Computing such an expectation over uniformly sampled trajectories may induce numerical instabilities due to the exponentiation of the cost. Sampling of low-cost trajectories is therefore essential for the practical implementation of PI-based methods (the importance-sampling identity behind this point is sketched after this list). In this paper, we use incremental sampling-based algorithms to sample useful trajectories from the unforced system dynamics and make a novel connection between Rapidly-exploring Random Trees (RRTs) and information-theoretic stochastic optimal control. We show results from the numerical implementation of the proposed approach on several examples.
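
A note on the first abstract's claim about connections between control and statistical physics: a standard identity of this kind (a sketch of the general idea, not necessarily the exact inequality used in the talk) is the free-energy/relative-entropy duality. For a trajectory cost J(tau), a temperature lambda > 0, and the path measure p of the uncontrolled stochastic dynamics,

    -\lambda \log \mathbb{E}_{\tau \sim p}\!\left[ e^{-J(\tau)/\lambda} \right]
        \;=\; \min_{q} \left\{ \mathbb{E}_{\tau \sim q}\!\left[ J(\tau) \right] + \lambda \, \mathrm{KL}\!\left( q \,\|\, p \right) \right\},

where the minimum is over path measures q absolutely continuous with respect to p. The left-hand side is a free energy from statistical physics; the right-hand side is a KL-regularized stochastic optimal control objective, which is the sense in which the two points of view describe the same object.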
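
For the second abstract, the highly parallelizable sampling-based control it describes can be illustrated with a minimal path-integral-style update in Python/NumPy. This is an illustrative sketch under assumed names (pi_control_update, dynamics, cost are hypothetical), not the speaker's exact algorithm: sample control perturbations, roll out the dynamics independently (hence the parallelism), and average the perturbations with exponentiated-cost weights.

    # Minimal sketch of a path-integral-style, sampling-based control update.
    import numpy as np

    def pi_control_update(x0, u_nom, dynamics, cost, lam=1.0, sigma=0.5, K=1024, rng=None):
        """One update of a nominal control sequence u_nom (T x m) starting from state x0."""
        rng = np.random.default_rng() if rng is None else rng
        T, m = u_nom.shape
        eps = rng.normal(scale=sigma, size=(K, T, m))   # K sampled perturbation sequences
        S = np.zeros(K)                                  # accumulated trajectory costs
        for k in range(K):                               # rollouts are independent -> easy to parallelize
            x = np.array(x0, dtype=float)
            for t in range(T):
                u = u_nom[t] + eps[k, t]
                S[k] += cost(x, u)
                x = dynamics(x, u)
            S[k] += cost(x, np.zeros(m))                 # terminal cost on the final state
        S -= S.min()                                     # baseline subtraction avoids underflow in the exponent
        w = np.exp(-S / lam)
        w /= w.sum()                                     # normalized exponentiated-cost weights
        return u_nom + np.einsum('k,ktm->tm', w, eps)    # weighted average of the perturbations

    # Toy usage: a point mass pushed toward the origin (illustrative only).
    dyn = lambda x, u: x + 0.1 * u
    cst = lambda x, u: float(x @ x + 0.01 * (u @ u))
    u_new = pi_control_update(np.array([1.0, -1.0]), np.zeros((20, 2)), dyn, cst)

The K rollouts share no state, so on a GPU or multicore machine they can be evaluated in parallel, which is the source of the parallelism the abstract highlights.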
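
For the third abstract, the path-integral characterization it relies on (in assumed notation; the paper's own symbols may differ) writes the value function through a logarithmic transformation of a desirability function,

    \Psi(x, t) = \mathbb{E}_{\tau \sim p}\!\left[ \exp\!\left( -S(\tau)/\lambda \right) \right],
    \qquad V(x, t) = -\lambda \log \Psi(x, t),

where p is the path measure of the unforced dynamics and S(tau) is the trajectory cost. Rewriting the expectation under any proposal measure q via the likelihood ratio dp/dq,

    \Psi(x, t) = \mathbb{E}_{\tau \sim q}\!\left[ \exp\!\left( -S(\tau)/\lambda \right) \frac{dp}{dq}(\tau) \right],

shows why sampling the unforced dynamics uniformly is fragile: the exponentiation of the cost lets a handful of low-cost trajectories dominate the Monte Carlo estimate, so a proposal q that concentrates on low-cost trajectories (in the paper, one grown by an incremental sampling-based algorithm such as an RRT) yields a much lower-variance estimate.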