Organizational Unit:
Aerospace Design Group


Publication Search Results

Now showing 1 - 6 of 6
  • Item
    Visual Marker Detection In The Presence Of Colored Noise for Unmanned Aerial Vehicles
    (Georgia Institute of Technology, 2010-04) Shah, Syed Irtiza Ali ; Wu, Allen D. ; Johnson, Eric N.
    This paper develops a vision-based algorithm to detect a visual marker in real time and in the presence of excessive colored noise for unmanned aerial vehicles. Using various image analysis techniques, including color histograms, filtering techniques, and color space analyses, typical pixel-based characteristics of the visual marker were established. It was found that not only were various color-space-based characteristics significant, but relationships between channels across different color spaces were also of great consequence. A block-based search algorithm was then used to search for these established characteristics in the real-time image data stream from a color camera. A low-cost noise and interference filter was also devised to handle the excessive noise encountered during flight tests. The specific implementation scenario is the detection of a blue LED for Georgia Tech's entry in the International Aerial Robotics Competition. The final algorithm, implemented on the GTAR Lama aircraft, used both multiple thresholding and linear confidence-level calculations and was employed successfully in the 2009 competition.
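    The block-based search with multiple thresholds and a linear confidence level might be sketched as follows. This is a hypothetical illustration only: the channel thresholds, block size, and confidence weighting are assumptions, not the values from the paper.

    ```python
    # Hypothetical sketch of a block-based color search for a blue marker.
    # Thresholds and block size are illustrative assumptions, not the paper's values.

    def block_confidence(block):
        """Score a block of (r, g, b) pixels for 'blue-LED-like' content."""
        hits = 0
        for r, g, b in block:
            # Multiple thresholds: a strong blue channel plus cross-channel
            # relationships (blue dominating red and green).
            if b > 200 and b - r > 80 and b - g > 60:
                hits += 1
        return hits / len(block)  # linear confidence level in [0, 1]

    def find_marker(image, block_size=4, min_confidence=0.5):
        """Scan non-overlapping blocks; return the best-scoring block origin."""
        best, best_score = None, min_confidence
        h, w = len(image), len(image[0])
        for y in range(0, h - block_size + 1, block_size):
            for x in range(0, w - block_size + 1, block_size):
                block = [image[y + dy][x + dx]
                         for dy in range(block_size) for dx in range(block_size)]
                score = block_confidence(block)
                if score > best_score:
                    best, best_score = (x, y), score
        return best
    ```

    Scanning fixed blocks rather than every pixel neighborhood is one way such a search keeps its per-frame cost low enough for onboard real-time use.
    
    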
  • Item
    Methods for Localization and Mapping Using Vision and Inertial Sensors
    (Georgia Institute of Technology, 2008-08) Wu, Allen D. ; Johnson, Eric N.
    The problems of vision-based localization and mapping are currently highly active areas of research for aerial systems. With a wealth of information available in each image, vision sensors allow vehicles to gather data about their surrounding environment in addition to inferring own-ship information. However, algorithms for processing camera images are often cumbersome for the limited computational power available onboard many unmanned aerial systems. This paper therefore investigates a method for incorporating an inertial measurement unit together with a monocular vision sensor to aid in the extraction of information from camera images, and hence reduce the computational burden for this class of platforms. Feature points are detected in each image using a Harris corner detector, and these feature measurements are statistically corresponded across each captured image using knowledge of the vehicle's pose. The investigated methods employ an Extended Kalman Filter framework for estimation. Real-time hardware results are presented using a baseline configuration in which a manufactured target is used for generating salient feature points, and vehicle pose information is provided by a high precision motion capture system for comparison purposes.
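    The Harris corner detector used for feature extraction can be sketched with NumPy as below. The window size and the sensitivity constant `k` are the textbook defaults, assumed here rather than taken from the paper; the EKF correspondence step is omitted.

    ```python
    import numpy as np

    def harris_response(img, k=0.04):
        """Harris corner response for a grayscale float image (minimal sketch)."""
        # Central-difference image gradients (axis 0 = rows/y, axis 1 = cols/x).
        Iy, Ix = np.gradient(img)
        Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

        # Sum the gradient products over a 3x3 window with a simple box filter.
        def box(a):
            out = np.zeros_like(a)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            return out

        Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
        # R = det(M) - k * trace(M)^2 for the 2x2 structure tensor M at each pixel;
        # R is large and positive at corners, negative on edges, near zero when flat.
        return (Sxx * Syy - Sxy * Sxy) - k * (Sxx + Syy) ** 2
    ```

    Feature points would be taken as local maxima of this response above a threshold, then corresponded across frames using the vehicle pose estimate as described in the abstract.
    
    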
  • Item
    Guidance, Navigation, Control, and Operator Interfaces for Small Rapid Response Unmanned Helicopters
    (Georgia Institute of Technology, 2008-04) Christmann, Hans Claus ; Christophersen, Henrik B. ; Wu, Allen D. ; Johnson, Eric N.
    This paper focuses on the development of small rapid response reconnaissance unmanned helicopters (1 to 3 kg, electric), for use by the military in urban areas and by civilian first responders, in terms of system architecture, automation (including navigation, flight control, and guidance), and operator interface designs. Design objectives include an effective user interface, a vehicle capable of smooth and precise motion control, an ability to display clear images to an operator, and a vehicle that is capable of safe and stable flight.
  • Item
    Flight-Test Results of Autonomous Airplane Transitions Between Steady-Level and Hovering Flight
    (Georgia Institute of Technology, 2008-03) Johnson, Eric N. ; Wu, Allen D. ; Neidhoefer, James C. ; Kannan, Suresh K. ; Turbe, Michael A.
    Linear systems can be used to adequately model and control an aircraft in either ideal steady-level flight or in ideal hovering flight. However, constructing a single unified system capable of adequately modeling or controlling an airplane in steady-level flight and in hovering flight, as well as during the highly nonlinear transitions between the two, requires the use of more complex systems, such as scheduled-linear, nonlinear, or stable adaptive systems. This paper discusses the use of dynamic inversion with real-time neural network adaptation as a means to provide a single adaptive controller capable of controlling a fixed-wing unmanned aircraft system in all three flight phases: steady-level flight, hovering flight, and the transitions between them. Having a single controller that can achieve and transition between steady-level and hovering flight allows utilization of the entire low-speed flight envelope, even beyond stall conditions. This method is applied to the GTEdge, an eight-foot wingspan, fixed-wing unmanned aircraft system that has been fully instrumented for autonomous flight. This paper presents data from actual flight-test experiments in which the airplane transitions from high-speed, steady-level flight into a hovering condition and then back again.
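    The core dynamic-inversion idea can be illustrated on a scalar model. The dynamics `f`, control effectiveness `G`, and gain `k` below are invented for illustration, and the neural-network adaptive element that cancels inversion error in the actual controller is omitted.

    ```python
    # Illustrative sketch of dynamic inversion on a scalar plant x' = f(x) + G*u.
    # f, G, and the gain k are assumed values, not from the paper; the real-time
    # neural-network adaptation described in the abstract is not modeled here.

    def f(x):
        return -0.5 * x + 0.2 * x ** 3   # assumed nonlinear plant dynamics

    G = 2.0  # assumed control effectiveness

    def dynamic_inversion(x, x_ref, k=4.0):
        """Choose u so the closed loop behaves like x' = k * (x_ref - x)."""
        xdot_des = k * (x_ref - x)       # desired (pseudo-control) dynamics
        return (xdot_des - f(x)) / G     # invert the model to obtain the control

    def simulate(x0, x_ref, dt=0.01, steps=500):
        """Euler-integrate the closed loop; with a perfect model, x -> x_ref."""
        x = x0
        for _ in range(steps):
            u = dynamic_inversion(x, x_ref)
            x += dt * (f(x) + G * u)
        return x
    ```

    Because the inversion cancels `f(x)` exactly here, the nonlinearity never appears in the closed loop; in practice the model is imperfect, which is the role of the adaptive element in the paper's controller.
    
    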
  • Item
    Flight Results of Autonomous Fixed-Wing UAV Transitions to and from Stationary Hover
    (Georgia Institute of Technology, 2006-08) Johnson, Eric N. ; Turbe, Michael A. ; Wu, Allen D. ; Kannan, Suresh K. ; Neidhoefer, James C.
    Fixed-wing unmanned aerial vehicles (UAVs) with the ability to hover have significant potential for applications in urban or other constrained environments, where the combination of fast speed, endurance, and stable hovering flight can provide strategic advantages. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller capable of transitioning a fixed-wing UAV to and from hovering flight in a nearly stationary position. This approach allows utilization of the entire low-speed flight envelope, even beyond stall conditions. The method is applied to the GTEdge, an 8.75-foot-wingspan fixed-wing aerobatic UAV that has been fully instrumented for autonomous flight. Results are presented from actual flight-test experiments in which the airplane transitions from high-speed steady flight into a stationary hover and then back.
  • Item
    Vision-Aided Inertial Navigation for Flight Control
    (Georgia Institute of Technology, 2005-09) Wu, Allen D. ; Johnson, Eric N. ; Proctor, Alison A.
    Many onboard navigation systems use the Global Positioning System to bound the errors that result from integrating inertial sensors over time. Global Positioning System information, however, is not always accessible since it relies on external satellite signals. To this end, a vision sensor is explored as an alternative for inertial navigation in the context of an Extended Kalman Filter used in the closed-loop control of an unmanned aerial vehicle. The filter employs an onboard image processor that uses camera images to provide information about the size and position of a known target, thereby allowing the flight computer to derive the target's pose. Assuming that the position and orientation of the target are known a priori, vehicle position and attitude can be determined from the fusion of this information with inertial and heading measurements. Simulation and flight test results verify filter performance in the closed-loop control of an unmanned rotorcraft.
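    The fusion pattern described above — inertial integration for prediction, a vision-derived position fix for correction — reduces, in one dimension, to the familiar predict/update cycle sketched below. The noise values are illustrative assumptions; the paper's filter is a full Extended Kalman Filter over vehicle position and attitude.

    ```python
    # Minimal 1-D sketch of vision/inertial fusion: inertial data propagates the
    # state, and a vision-derived position measurement corrects it.
    # Process noise q and measurement noise r are assumed values, not the paper's.

    def predict(x, P, accel, v, dt, q=0.05):
        """Propagate position by integrating inertial data; inflate covariance."""
        x_new = x + v * dt + 0.5 * accel * dt ** 2
        v_new = v + accel * dt
        return x_new, v_new, P + q

    def update(x, P, z, r=0.5):
        """Correct with a vision-derived position measurement z (model H = 1)."""
        K = P / (P + r)                      # Kalman gain
        return x + K * (z - x), (1 - K) * P  # corrected state and covariance
    ```

    The prediction step drifts without bound as inertial errors accumulate, which is exactly why the periodic vision (or GPS) correction is needed to keep the covariance bounded.
    
    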