Enabling Edge-Intelligence in Resource-Constrained Autonomous Systems
Author(s)
Anwar, Malik Aqeel
Abstract
The objective of this research is to shift machine learning (ML) algorithms from resource-intensive servers/cloud to compute-limited edge nodes by designing energy-efficient ML systems. Multiple sub-areas of research in this domain are explored for the application of autonomous drone navigation. Our principal goal is to enable a UAV to navigate autonomously using Reinforcement Learning (RL) without incurring any additional hardware or sensor cost. Most lightweight UAVs are limited in their resources, such as compute capability and onboard energy, and conventional state-of-the-art ML algorithms cannot be implemented on them directly. This research addresses this issue by devising energy-efficient ML algorithms, modifying existing ML algorithms, designing energy-efficient ML accelerators, and leveraging hardware-algorithm co-design. RL is notorious for being data-hungry and requires trial and error to converge; hence, it cannot be implemented directly on real drones until the issues of safety, data limitations, and reward generation are addressed. Instead of learning a task from scratch, RL algorithms, much like humans, can benefit from prior knowledge, which helps them converge to their goals in less time and with less energy. Multiple drones can collectively help each other by sharing their locally learned knowledge. Such distributed systems help agents learn their respective local tasks faster, but they may become vulnerable to attacks in the presence of adversarial agents, which needs to be addressed. Finally, the improvement in the energy efficiency of RL-based systems achievable through algorithmic approaches is limited by the underlying hardware and computing architectures. These therefore need to be redesigned in an application-specific way, exploring and exploiting the nature of the most-used ML operators. This can be done by exploring new computing devices and by considering the data reuse and dataflow of ML operators within the architectural design. This research addresses these issues and presents better alternatives. It is concluded that energy consumption needs to be addressed at multiple levels of the hierarchy through algorithmic, hardware-based, and algorithm-hardware co-design approaches.
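The abstract's idea of warm-starting RL from prior knowledge, rather than learning from scratch, can be illustrated with a minimal sketch. This is not the dissertation's actual method; the network shape, the checkpoint file name pretrained_policy.pt, and the layer-freezing choice are illustrative assumptions, shown here in PyTorch.

```python
import torch
import torch.nn as nn

# Hypothetical policy network for a navigation agent: maps a state
# vector (e.g., depth-image features) to values over discrete actions.
class PolicyNet(nn.Module):
    def __init__(self, state_dim: int = 128, num_actions: int = 4):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.layers(state)

# Warm start: initialize the new agent's network from weights learned
# previously (e.g., in simulation or by another drone) and fine-tune
# only the final layer on the new local task.
policy = PolicyNet()
prior = torch.load("pretrained_policy.pt")  # hypothetical checkpoint path
policy.load_state_dict(prior)

for name, param in policy.named_parameters():
    # Freeze earlier layers; only the last Linear layer (index 2) adapts.
    param.requires_grad = name.startswith("layers.2")

optimizer = torch.optim.Adam(
    (p for p in policy.parameters() if p.requires_grad), lr=1e-4
)
# ...an RL fine-tuning loop (environment interaction, loss, optimizer.step())
# would follow, converging in fewer trials than training from scratch.
```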
Date
2021-06-21
Resource Type
Text
Resource Subtype
Dissertation