Person:
Kira, Zsolt

Publication Search Results

  • Item
    A Design Process for Robot Capabilities and Missions Applied to Microautonomous Platforms
    (Georgia Institute of Technology, 2010) Kira, Zsolt ; Arkin, Ronald C. ; Collins, Thomas R.
    As part of our research for the ARL MAST CTA (Collaborative Technology Alliance) [1], we present an integrated architecture that facilitates the design of microautonomous robot platforms and missions, from initial design conception through actual deployment. The framework consists of four major components: design tools, a mission-specification system (MissionLab), a case-based reasoning system (CBR Expert), and a simulation environment (USARSim). The designer begins by using the design tools to generate a space of missions, taking broad mission-specific objectives into account. For example, in a multi-robot reconnaissance task, the parameters varied include the number of robots used, mobility capabilities (e.g., maximum speeds), and sensor capabilities. The design tools are used to intelligently carve out the space of all possible parameter combinations to produce a smaller set of mission configurations. Quantitative assessment of this design space is then performed in simulation to determine which particular configuration would yield an effective team before actual deployment. MissionLab, a mission-specification platform, is used to incorporate the input parameters, generate the underlying robot missions, and control the robots in simulation. It also provides logging mechanisms to measure a range of quantitative performance metrics, such as mission completion rates, resource utilization, and time to completion, which are then used to determine the best configuration for a particular mission. These metrics can also provide guidance for the refinement of the entire design process. Finally, a case-based reasoning system allows users to maximize successful deployment of the robots by retrieving proven configurations and determining the robot capabilities necessary for success in a particular mission. [An illustrative sketch of this configuration sweep appears after this list.]
  • Item
    Mission Specification and Control for Unmanned Aerial and Ground Vehicles for Indoor Target Discovery and Tracking
    (Georgia Institute of Technology, 2010) Ulam, Patrick D. ; Kira, Zsolt ; Arkin, Ronald C. ; Collins, Thomas R.
    This paper describes ongoing research by Georgia Tech into the challenges of tasking and controlling heterogeneous teams of unmanned vehicles in mixed indoor/outdoor reconnaissance scenarios. We outline the tools and techniques necessary for an operator to specify, execute, and monitor such missions. The mission-specification framework used for intelligence gathering during mission execution is first demonstrated in simulations involving a team of a single autonomous rotorcraft and three ground-based robotic platforms. Preliminary results incorporating robotic hardware in the loop are also provided.
  • Item
    Transferring Embodied Concepts Between Perceptually Heterogeneous Robots
    (Georgia Institute of Technology, 2009) Kira, Zsolt
    This paper explores methods and representations that allow two perceptually heterogeneous robots, each of which represents concepts via grounded properties, to transfer knowledge despite their differences. This is an important issue: as robots become more ubiquitous, they will increasingly need to communicate and effectively share knowledge in order to speed up learning. We use Gärdenfors' conceptual spaces to represent objects as a fuzzy combination of properties such as color and texture, where the properties themselves are represented as Gaussian Mixture Models in a metric space. We then use confusion matrices, built from instances obtained by each robot in a shared context, to learn mappings between the properties of each robot. These mappings are then used to transfer a concept from one robot to another, where the receiving robot was not previously trained on instances of the objects. We show in a 3D simulation environment that these models can be successfully learned and that concepts can be transferred between a ground robot and an aerial quadrotor robot. [An illustrative sketch of this confusion-matrix mapping, which also underlies the next item, appears after this list.]
  • Item
    Mapping Grounded Object Properties Across Perceptually Heterogeneous Embodiments
    (Georgia Institute of Technology, 2009) Kira, Zsolt
    As robots become more common, it becomes increasingly useful for them to communicate and effectively share knowledge that they have learned through their individual experiences. Learning from experiences, however, is oftentimes embodiment-specific; that is, the knowledge learned is grounded in the robot’s unique sensors and actuators. This type of learning raises questions as to how communication and knowledge exchange via social interaction can occur, as properties of the world can be grounded differently in different robots. This is especially true when the robots are heterogeneous, with different sensors and perceptual features used to define the properties. In this paper, we present methods and representations that allow heterogeneous robots to learn grounded property representations, such as that of color categories, and then build models of their similarities and differences in order to map their respective representations. We use a conceptual space representation, where object properties are learned and represented as regions in a metric space, implemented via supervised learning of Gaussian Mixture Models. We then propose to use confusion matrices that are built using instances from each robot, obtained in a shared context, in order to learn mappings between the properties of each robot. Results are demonstrated using two perceptually heterogeneous Pioneer robots, one with a web camera and another with a camcorder.
  • Item
    Modeling Cross-Sensory and Sensorimotor Correlations to Detect and Localize Faults in Mobile Robots
    (Georgia Institute of Technology, 2007) Kira, Zsolt
    We present a novel framework for learning cross-sensory and sensorimotor correlations in order to detect and localize faults in mobile robots. Unlike traditional fault detection and identification schemes, we do not use a priori models of fault states or system dynamics. Instead, we utilize an additional source of information and redundancy that mobile robots have available to them, namely a hierarchical graph representing stages of sensory processing at multiple levels of abstraction, together with their outputs. We learn statistical models of the correlations between elements in this hierarchy, as well as the control signals, and use them to detect and identify changes in the capabilities of the robot. The framework is instantiated using Self-Organizing Maps, a simple unsupervised learning algorithm. Results indicate that the system can detect sensory and motor faults in a mobile robot and identify their cause, without using a priori models of the robot or its fault states. [A simplified sketch of this SOM-based correlation monitoring appears after this list.]
  • Item
    Modeling Robot Differences by Leveraging a Physically Shared Context
    (Georgia Institute of Technology, 2007) Kira, Zsolt ; Long, Kathryn
    Knowledge sharing, either implicit or explicit, is crucial during development, as evidenced by many studies into the transfer of knowledge by teachers via gaze following and learning by imitation. In the future, the teacher of one robot may be a more experienced robot. There are many new difficulties, however, with regard to knowledge transfer among robots that develop embodiment-specific knowledge through individual interaction with the world. This is especially true for heterogeneous robots, where perceptual and motor capabilities may differ. In this paper, we propose to leverage similarity, in the form of a physically shared context, to learn models of the differences between two robots. Our second contribution is an analysis of the cost and accuracy of several methods for establishing the physically shared context with respect to such modeling. We demonstrate the efficacy of the proposed methods in a simulated domain involving shared attention to an object.
  • Item
    Spatio-Temporal Case-Based Reasoning for Efficient Reactive Robot Navigation
    (Georgia Institute of Technology, 2005) Likhachev, Maxim ; Kaess, Michael ; Kira, Zsolt ; Arkin, Ronald C.
    This paper presents an approach to the automatic selection and modification of behavioral assemblage parameters for autonomous navigation tasks. The goal of this research is to make obsolete the task of manually configuring behavioral parameters, which often requires significant knowledge of robot behavior and extensive experimentation, and to increase the efficiency of robot navigation by automatically choosing and fine-tuning, in real time, the parameters that fit the robot's task-environment well. The method is based on the Case-Based Reasoning paradigm. The approach computes spatial features of the environment derived from incoming sensor data, and then computes temporal features of the environment based on the robot's performance. Both sets of features are used to select and fine-tune a set of parameters for the active behavioral assemblage. By continuously monitoring the sensor data and the performance of the robot, the method reselects these parameters as necessary. While a mapping from environmental features onto behavioral parameters, i.e., the cases, can be hard-coded, a method for learning new cases and optimizing existing ones is also presented. This completely automates the process of behavioral parameterization. The system was integrated within a hybrid robot architecture and extensively evaluated in simulation and in indoor and outdoor real-world robotic experiments across multiple environments and sensor modalities, clearly demonstrating the benefits of the approach. [A sketch of the case-retrieval step appears after this list.]
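
The design process described in "A Design Process for Robot Capabilities and Missions Applied to Microautonomous Platforms" above can be pictured as a sweep over candidate mission configurations, each scored by simulated performance metrics. The following is a minimal sketch of that loop, assuming hypothetical parameter names, a placeholder evaluate_in_simulation() stub, and invented scoring weights; it is not the MissionLab/USARSim pipeline itself.

```python
# Hypothetical sketch of the configuration sweep; parameter names and the
# placeholder simulation model are assumptions, not the published framework.
from itertools import product

design_space = {
    "num_robots": [2, 3, 4],
    "max_speed_mps": [0.5, 1.0, 2.0],
    "sensor_range_m": [5.0, 10.0],
}

def evaluate_in_simulation(config):
    """Stand-in for a simulation run; real metrics would come from logged data."""
    # Placeholder model: more robots and longer sensor range help; speed reduces mission time.
    completion_rate = min(1.0, 0.5 + 0.1 * config["num_robots"] + 0.01 * config["sensor_range_m"])
    time_to_completion_s = 600.0 / (config["num_robots"] * config["max_speed_mps"])
    return {"completion_rate": completion_rate, "time_to_completion_s": time_to_completion_s}

def score(metrics):
    # Simple weighted trade-off between success rate and mission time (assumed weights).
    return metrics["completion_rate"] - 0.001 * metrics["time_to_completion_s"]

configs = [dict(zip(design_space, values)) for values in product(*design_space.values())]
best = max(configs, key=lambda config: score(evaluate_in_simulation(config)))
print("Best configuration:", best)
```

In the published framework, the design tools prune the combination space before simulation and the mission-specification system supplies the logged metrics; the brute-force product and the toy metric model here merely stand in for those components.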
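
The two conceptual-spaces papers above ("Transferring Embodied Concepts Between Perceptually Heterogeneous Robots" and "Mapping Grounded Object Properties Across Perceptually Heterogeneous Embodiments") share a core mechanism: each robot models object properties as Gaussian Mixture Models over its own features, and a confusion matrix built from instances observed in a shared context yields a mapping between the two robots' properties. The sketch below illustrates that mechanism on fabricated two-dimensional features; the synthetic data, the feature dimensionality, and the single-component GMMs are simplifying assumptions, not the papers' actual setup.

```python
# Hypothetical sketch: per-robot GMM property models plus a confusion matrix
# built from a shared context; all data and dimensions are fabricated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
properties = ["red", "green", "blue"]

def make_robot_models(offset):
    """Fit one GMM per property on synthetic per-robot features."""
    models = {}
    for i, prop in enumerate(properties):
        X = rng.normal(loc=i + offset, scale=0.3, size=(200, 2))
        models[prop] = GaussianMixture(n_components=1).fit(X)
    return models

robot_a = make_robot_models(offset=0.0)   # e.g. one camera's feature space
robot_b = make_robot_models(offset=0.5)   # e.g. a different camera's feature space

def classify(models, x):
    """Return the property whose GMM assigns the highest likelihood to x."""
    return max(models, key=lambda prop: models[prop].score_samples(x[None, :])[0])

# Shared context: both robots observe the same objects at the same time.
confusion = np.zeros((len(properties), len(properties)))
for i in range(len(properties)):
    shared_a = rng.normal(loc=i, scale=0.3, size=(50, 2))        # robot A's view
    shared_b = rng.normal(loc=i + 0.5, scale=0.3, size=(50, 2))  # robot B's view of the same objects
    for xa, xb in zip(shared_a, shared_b):
        row = properties.index(classify(robot_a, xa))
        col = properties.index(classify(robot_b, xb))
        confusion[row, col] += 1

# Map each of robot A's properties to robot B's most frequently co-activated one.
mapping = {properties[i]: properties[int(np.argmax(confusion[i]))] for i in range(len(properties))}
print(mapping)
```

With the fabricated offsets used here the learned mapping comes out as the identity (red to red, and so on); with genuinely different embodiments, the off-diagonal structure of the confusion matrix is what carries the mapping.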
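
For "Modeling Cross-Sensory and Sensorimotor Correlations to Detect and Localize Faults in Mobile Robots", the sketch below captures the core idea with a deliberately small, hand-rolled Self-Organizing Map: learn the normal joint behavior of a control signal and a sensor reading during fault-free operation, then flag a fault when new joint samples lie far from the learned map. The one-dimensional SOM, the commanded-versus-measured-speed pairing, and the quantization-error threshold are illustrative assumptions; the paper's instantiation operates over a full hierarchy of sensory processing stages.

```python
# Minimal sketch (assumed data, simplified 1-D SOM) of learning normal
# sensorimotor correlations and flagging deviations as faults.
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=8, iters=2000, lr0=0.5, sigma0=2.0):
    """Train a 1-D SOM with `grid` nodes on 2-D joint samples."""
    nodes = data[rng.integers(len(data), size=grid)].astype(float)
    idx = np.arange(grid)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))   # neighborhood function
        nodes += lr * h[:, None] * (x - nodes)
    return nodes

def quantization_error(nodes, x):
    """Distance from a sample to its best-matching SOM node."""
    return np.min(np.linalg.norm(nodes - x, axis=1))

# Nominal behavior: measured wheel speed tracks the commanded speed.
cmd = rng.uniform(0, 1, size=1000)
nominal = np.column_stack([cmd, cmd + rng.normal(0, 0.02, size=1000)])
som = train_som(nominal)
threshold = np.quantile([quantization_error(som, x) for x in nominal], 0.99)

# Faulty behavior (e.g. a stalled wheel): measured speed no longer follows the command.
faulty_sample = np.array([0.8, 0.05])
print("fault detected:", quantization_error(som, faulty_sample) > threshold)
```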
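
The case-retrieval step of "Spatio-Temporal Case-Based Reasoning for Efficient Reactive Robot Navigation" can be sketched as a nearest-neighbor lookup: stored cases are indexed by environment features (spatial and temporal), the case closest to the current situation is retrieved, and its behavioral parameters are applied. The feature names, case values, and distance metric below are invented for illustration, and the case learning, fine-tuning, and continuous re-selection described in the paper are omitted.

```python
# Hypothetical sketch of case retrieval for behavioral parameter selection;
# features, parameter names, and case values are invented for illustration.
import numpy as np

# Each case: (indexing features [obstacle_density, clutter, progress_rate], parameters).
case_library = [
    (np.array([0.1, 0.2, 0.9]), {"obstacle_gain": 0.5, "goal_gain": 1.5, "noise": 0.05}),
    (np.array([0.7, 0.8, 0.3]), {"obstacle_gain": 1.5, "goal_gain": 0.8, "noise": 0.30}),
    (np.array([0.4, 0.5, 0.6]), {"obstacle_gain": 1.0, "goal_gain": 1.0, "noise": 0.15}),
]

def select_case(features):
    """Retrieve the case whose indexing features best match the current ones."""
    return min(case_library, key=lambda case: np.linalg.norm(case[0] - features))

def control_step(spatial_features, temporal_features):
    features = np.concatenate([spatial_features, temporal_features])
    _, params = select_case(features)
    return params  # would be fed to the active behavioral assemblage

# Example: cluttered surroundings and poor recent progress retrieve the cautious case.
print(control_step(np.array([0.65, 0.75]), np.array([0.35])))
```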