Organizational Unit: Daniel Guggenheim School of Aerospace Engineering


Publication Search Results

Now showing 1 - 2 of 2
  • Item
    Stability, dissipativity, and optimal control of discontinuous dynamical systems
    (Georgia Institute of Technology, 2015-04-06) Sadikhov, Teymur
    Discontinuous dynamical systems and multiagent systems are encountered in numerous engineering applications. This dissertation develops stability and dissipativity results for nonlinear dynamical systems with discontinuous right-hand sides, optimality of discontinuous feedback controllers for Filippov dynamical systems, almost consensus protocols for multiagent systems with inaccurate sensor measurements, and adaptive estimation algorithms using multiagent network identifiers. In particular, we present stability results for discontinuous dynamical systems using nonsmooth Lyapunov theory. Then, we develop a constructive feedback control law for discontinuous dynamical systems based on the existence of a nonsmooth control Lyapunov function defined in the sense of generalized Clarke gradients and set-valued Lie derivatives. Furthermore, we develop dissipativity notions and extended Kalman-Yakubovich-Popov conditions and apply these results to develop feedback interconnection stability results for discontinuous systems. In addition, we derive guaranteed gain, sector, and disk margins for nonlinear optimal and inverse optimal discontinuous feedback regulators that minimize a nonlinear-nonquadratic performance functional for Filippov dynamical systems. Then, we provide connections between dissipativity and optimality of nonlinear discontinuous controllers for Filippov dynamical systems. Furthermore, we address the consensus problem for a group of agent robots with uncertain interagent measurement data, and show that the agents reach an almost consensus state and converge to a set centered at the centroid of the agents' initial locations. Finally, we develop an adaptive estimation framework predicated on multiagent network identifiers with undirected and directed graph topologies that identifies the system state and plant parameters online.
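    The consensus behavior described in this abstract can be illustrated with a standard idealized sketch (not taken from the dissertation, which treats inaccurate measurements and "almost consensus"): under the nominal protocol x_i' = Σ_j (x_j - x_i) on a complete graph, the centroid of the agent states is invariant, so all agents converge to the average of their initial locations.

    ```python
    import numpy as np

    def simulate_consensus(x0, dt=0.01, steps=2000):
        """Forward-Euler simulation of the nominal consensus protocol
        x_i' = sum_j (x_j - x_i) on a complete graph.

        Illustrative sketch only: the dissertation's setting adds
        uncertain interagent measurements, which this omits.
        """
        x = np.array(x0, dtype=float)
        n = len(x)
        for _ in range(steps):
            # sum_j (x_j - x_i) = (sum of all states) - n * x_i
            x = x + dt * (x.sum() - n * x)
        return x

    x0 = [1.0, -2.0, 4.0, 0.5]
    xf = simulate_consensus(x0)
    # the protocol preserves the centroid, so every agent approaches
    # the average of the initial states, here (1 - 2 + 4 + 0.5)/4 = 0.875
    ```

    Note that the Euler update leaves the state sum unchanged at every step, which is exactly why the agreement value is the centroid of the initial conditions.
    
    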
  • Item
    Finite-time partial stability, stabilization, semistabilization, and optimal feedback control
    (Georgia Institute of Technology, 2015-04-03) L'afflitto, Andrea
    Asymptotic stability is a key notion of system stability for controlled dynamical systems as it guarantees that the system trajectories are bounded in a neighborhood of a given isolated equilibrium point and converge to this equilibrium over the infinite horizon. In some applications, however, asymptotic stability is not the appropriate notion of stability. For example, for systems with a continuum of equilibria, every neighborhood of an equilibrium contains another equilibrium, and a nonisolated equilibrium cannot be asymptotically stable. Alternatively, in stabilization of spacecraft dynamics via gimballed gyroscopes, it is desirable to find state- and output-feedback control laws that guarantee partial-state stability of the closed-loop system, that is, stability with respect to part of the system state. Furthermore, we may additionally require finite-time stability of the closed-loop system, that is, convergence of the system's trajectories to a Lyapunov stable equilibrium in finite time. The Hamilton-Jacobi-Bellman optimal control framework provides necessary and sufficient conditions for the existence of state-feedback controllers that minimize a given performance measure and guarantee asymptotic stability of the closed-loop system. In this research, we provide extensions of the Hamilton-Jacobi-Bellman optimal control theory to develop state-feedback control laws that minimize nonlinear-nonquadratic performance criteria and guarantee semistability, partial-state stability, finite-time stability, and finite-time partial-state stability of the closed-loop system.
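    The finite-time stability notion contrasted with asymptotic stability in this abstract can be illustrated with a textbook scalar example (not drawn from the dissertation itself): the non-Lipschitz feedback x' = -sign(x)·√|x| drives the state to the origin in the finite time T = 2√|x0|, whereas the linear law x' = -x only converges asymptotically.

    ```python
    import math

    def finite_time_trajectory(x0, dt=1e-4, t_max=5.0):
        """Euler simulation of x' = -sign(x) * sqrt(|x|).

        Standard illustrative example of finite-time convergence:
        the analytic solution reaches the origin at T = 2*sqrt(|x0|),
        while an asymptotically stable law like x' = -x never does.
        """
        x, t = x0, 0.0
        while t < t_max and abs(x) > 1e-6:
            # sign(x) * sqrt(|x|), written with copysign for robustness
            x -= dt * math.copysign(math.sqrt(abs(x)), x)
            t += dt
        return x, t

    xf, t_hit = finite_time_trajectory(1.0)
    # for x0 = 1 the analytic settling time is T = 2*sqrt(1) = 2.0,
    # so the simulated hitting time should be close to 2
    ```

    The key feature is that the vector field is continuous but not Lipschitz at the origin, which is precisely what permits trajectories to reach the equilibrium in finite time rather than only in the limit.
    
    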