Spiking neural networks enabled circuits and systems for edge robots

Author(s)
Lele, Ashwin Sanjay
Abstract
Robotic computing at the edge must meet tight constraints on power and form factor while delivering the performance required by power-hungry neural network kernels. This work proposes spiking neural network (SNN) alternatives and augmentations to the algorithms and circuits of edge robots: SNN-driven locomotion for power-constrained hexapod robots, SNN-augmented target tracking for high-speed aerial robots, and SNN-assisted visual navigation for size-critical micro-robots.

The first part of the work extends the rhythmic leg movement of insects into an SNN-based gait generator and demonstrates an online reward-based training method for autonomously learning to walk. We then use an event-based vision sensor as the sensory front end for hexapod locomotion, yielding the first spike-only closed-loop robotic platform.

In the second part, we observe that the SNN and the event camera form a sensor-processor pair well suited to high-speed processing, while the frame camera paired with a convolutional neural network (CNN) suits applications with high accuracy requirements. This trade-off between accuracy and latency arises from the fine temporal resolution captured by event cameras and the fine spatial resolution captured by frame cameras. We exploit these complementary strengths to build a high-speed target identification and tracking system in which the SNN provides fast but noisy target estimates and the CNN restores the lost accuracy through reliable periodic anchors. We build a heterogeneous SoC in which low-power RRAM compute-in-memory maps the CNN and high-speed SRAM compute-near-memory accelerates the SNN. We also extend this framework of fused event and frame processing to optical flow, generalizing it beyond target tracking applications.
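The event/frame fusion described above can be illustrated with a minimal sketch: a high-rate, noisy tracker (standing in for the SNN) runs every step, while an accurate detector (standing in for the CNN) periodically anchors the estimate. All names, rates, and noise levels here are illustrative assumptions, not the dissertation's implementation.

```python
# Illustrative sketch of SNN/CNN fusion for target tracking.
# The "SNN" produces fast but noisy estimates; the "CNN" supplies
# accurate anchors at a much lower frame rate. Rates and noise
# levels are hypothetical.
import random

SNN_RATE_HZ = 1000   # event-driven SNN updates (fast, noisy)
CNN_RATE_HZ = 30     # frame-based CNN detections (slow, accurate)

def fuse_tracks(true_path, snn_noise=0.05, alpha=0.8):
    """Blend high-rate SNN estimates with periodic CNN anchors.

    true_path: ground-truth target positions, one per SNN step.
    alpha: how strongly a CNN anchor pulls the estimate back.
    """
    steps_per_frame = SNN_RATE_HZ // CNN_RATE_HZ
    fused = []
    for t, target in enumerate(true_path):
        # High-rate SNN estimate: true position plus tracking noise.
        estimate = target + random.gauss(0.0, snn_noise)
        # Periodic CNN anchor corrects accumulated SNN error.
        if t % steps_per_frame == 0:
            cnn_anchor = target  # assume the CNN detection is accurate
            estimate = alpha * cnn_anchor + (1 - alpha) * estimate
        fused.append(estimate)
    return fused
```

The design point this sketch captures is that the fused track runs at the SNN's rate, so latency stays low, while the error is bounded by the CNN's anchoring interval rather than growing with the SNN's noise.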
The final part of the work generalizes the multi-modal processing of the previous chapters, dividing the robotic computing workload between a CNN perception front end and an SNN localization back end. Our SoC uses RRAM compute-near-memory kernels to accelerate CNN-based perception, while SRAM compute-in-memory carries out SNN-based localization for micro-robots. To summarize, this work substitutes and augments compute-constrained robotic computing with SNNs for energy savings and performance improvements.
Date
2023-07-30
Resource Type
Text
Resource Subtype
Dissertation