Proprioceptive State Estimation for Legged Robots with Probabilistic and Hybrid Kinodynamics

Author(s)
Agrawal, Varun
Organizational Unit
School of Interactive Computing
Abstract
Legged robots hold immense promise for navigation in both human-centric and unstructured environments: their ability to perform dynamic motions suits them to widespread use across a myriad of applications. To be effective, however, legged robot controllers require estimates of the robot's state at high frequencies in order to plan and execute dynamic motions. Exteroceptive sensors, such as cameras and LiDAR, cannot provide measurements at the desired rates and thus become a limiting factor. Proprioceptive sensors, on the other hand, such as the Inertial Measurement Unit (IMU), can provide measurements at very high frequencies and are unaffected by changing environmental conditions. However, low-cost versions of these sensors suffer from significant and varied noise, yielding poor results when used directly. In this thesis, we propose algorithms and techniques for improving state estimation of legged robots using only proprioceptive sensing. We leverage and improve upon existing work on legged robot state estimation and probabilistic modeling to tackle various sources of noise and inaccuracy, improving the accuracy and robustness of the state estimates. Our first contribution is a novel factor graph model for smoothing-based Maximum A Posteriori estimation that takes advantage of the legged robot form factor. Building upon IMU preintegration theory and the use of environmental contact during leg stance phases to constrain sensor noise, we probabilistically model the kinematic chain of each leg of the robot, compensating for nonlinear deformations and improving accuracy. Furthermore, we show how leveraging M-estimators allows for automatic slip rejection, providing our estimator with greater robustness. We then introduce a new group-theoretic metric for measuring the performance of state estimators.
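The slip-rejection idea above can be illustrated with a minimal sketch. This is not the thesis implementation; it assumes a hypothetical Huber M-estimator applied to scalar contact residuals, showing how the iteratively-reweighted-least-squares (IRLS) weight automatically attenuates the large residual produced by a slipping foot:

```python
def huber_weight(residual: float, delta: float = 1.0) -> float:
    """IRLS weight for the Huber M-estimator: quadratic cost (weight 1)
    for residuals within delta, linear cost (weight delta/|r|) beyond it,
    so outliers such as foot-slip events are downweighted rather than
    dominating the least-squares fit."""
    r = abs(residual)
    return 1.0 if r <= delta else delta / r

# Small residuals from legs in stable stance keep full weight; the
# large residual from a slipping leg (last entry) is attenuated.
residuals = [0.05, -0.10, 0.02, 3.0]
weights = [huber_weight(r) for r in residuals]
# weights → [1.0, 1.0, 1.0, 0.333...]
```

The appeal of this formulation is that no explicit slip detector is needed: the robust loss itself decides, per residual and per iteration, how much each contact measurement should influence the estimate.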
Our metric combines the evaluation of the pose and the unobservable linear velocity of the robot into a single measure, providing a unified way to assess state estimator performance. This contrasts with existing metrics, which measure the accuracy of the pose and velocity estimates individually, leading to potential trade-offs and, consequently, subpar results. We further demonstrate the use of Chebyshev polynomial-based differentiation-via-interpolation to compute the true velocity from ground-truth pose values, alleviating the need for expensive systems that measure the ground-truth linear velocity directly. Our third contribution is a hybrid factor graph framework for modeling discrete and continuous estimation problems simultaneously, together with a novel variable elimination algorithm for converting the hybrid factor graph into a hybrid Bayesian network. The proposed elimination algorithm yields exact posterior probabilities, overcoming the drawbacks of existing approximation-based approaches. We further propose novel methods for constraining the computational and runtime complexity, making large-scale deployment of our framework viable. We demonstrate this through the development of a hybrid legged robot state estimator that simultaneously estimates continuous robot states and discrete leg contact events using only kinematic information and measurements. We present results on both simulated robots and real-world hardware, showcasing the efficacy of our hybrid factor graph framework.
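The differentiation-via-interpolation idea can be sketched with NumPy's Chebyshev module. This is a toy example on a synthetic 1-D trajectory, not the thesis pipeline: we fit a Chebyshev polynomial to position samples and differentiate the fitted polynomial to recover velocity, avoiding the noise amplification of finite differences:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic ground-truth x-positions sampled over time: x(t) = sin(t),
# whose true velocity is cos(t).
t = np.linspace(0.0, 2.0, 50)
x = np.sin(t)

# Fit a Chebyshev polynomial to the position samples, then
# differentiate the fitted polynomial analytically to get velocity.
coeffs = C.chebfit(t, x, deg=10)
dcoeffs = C.chebder(coeffs)
v = C.chebval(t, dcoeffs)

# The interpolated derivative closely matches the true velocity cos(t).
err = np.max(np.abs(v - np.cos(t)))
```

Because the derivative is taken on the smooth interpolant rather than on the raw samples, this approach yields a usable velocity reference from motion-capture-style pose data alone.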
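The exact-posterior claim for the hybrid framework can be illustrated at its smallest scale. The following is a hypothetical toy example (not the thesis estimator): one binary contact variable with two Gaussian measurement hypotheses, where the discrete posterior is computed exactly by normalizing over both modes instead of approximating:

```python
import math

def gaussian(x: float, mu: float, sigma: float) -> float:
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Measured foot speed (m/s). Stance (contact) predicts near-zero speed
# with tight variance; swing predicts a broad, nonzero-mean distribution.
z = 0.03
prior_contact = 0.5

lik_contact = gaussian(z, mu=0.0, sigma=0.05)  # stance hypothesis
lik_swing = gaussian(z, mu=0.5, sigma=0.30)    # swing hypothesis

# Exact discrete posterior: marginalize over both modes and normalize.
post_contact = prior_contact * lik_contact / (
    prior_contact * lik_contact + (1.0 - prior_contact) * lik_swing)
```

With many legs and time steps, the number of discrete mode sequences grows combinatorially, which is why bounding the computational complexity, as the thesis proposes, matters for making exact hybrid inference deployable.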
Date
2025-12
Resource Type
Text
Resource Subtype
Dissertation (PhD)