Organizational Unit:
Institute for Robotics and Intelligent Machines (IRIM)


Publication Search Results

Showing 1–5 of 5
  • Item
    Vulnerabilities in SNMPv3
    (Georgia Institute of Technology, 2012-07-10) Lawrence, Nigel Rhea
    Network monitoring is a necessity for both reducing downtime and ensuring rapid response in the case of software or hardware failure. Unfortunately, one of the most widely used protocols for monitoring networks, the Simple Network Management Protocol version 3 (SNMPv3), does not offer an acceptable level of confidentiality or integrity for these services. In this paper, we demonstrate two attacks against the most current and secure version of the protocol with authentication and encryption enabled. In particular, we demonstrate that under reasonable conditions, we can read encrypted requests and forge messages between the network monitor and the hosts it observes. Such attacks are made possible by an insecure discovery mechanism, which allows an adversary capable of compromising a single network host to set the keys used by the security functions. Our attacks show that SNMPv3 places too much trust in the underlying network, and that this misplaced trust introduces vulnerabilities that can be exploited.
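    The discovery weakness the abstract describes can be illustrated with SNMPv3's USM key localization (RFC 3414): each user password is converted into a key bound to the authoritative engine's engineID, which the manager learns through an unauthenticated discovery exchange. A minimal sketch in Python, not the authors' attack code; the password and engineID values below are hypothetical:

    ```python
    import hashlib

    def password_to_key(password: bytes) -> bytes:
        """RFC 3414 (A.2.1): MD5 over 1 MiB of the password repeated."""
        stream = (password * (1024 * 1024 // len(password) + 1))[:1024 * 1024]
        return hashlib.md5(stream).digest()

    def localize_key(password: bytes, engine_id: bytes) -> bytes:
        """Bind the master key to one engine: Kul = MD5(Ku || engineID || Ku)."""
        ku = password_to_key(password)
        return hashlib.md5(ku + engine_id + ku).digest()

    # The engineID arrives in an unauthenticated discovery reply, so whoever
    # answers discovery chooses which localized key the manager derives.
    # Both engineID values below are hypothetical examples.
    print(localize_key(b"s3cret-pass", bytes.fromhex("80001f888059dc486145a263aa")).hex())
    print(localize_key(b"s3cret-pass", bytes.fromhex("80001f888059dc486145a263bb")).hex())
    ```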
  • Item
    Navigation among movable obstacles in unknown environments
    (Georgia Institute of Technology, 2011-04-05) Levihn, Martin
    This work presents a new class of algorithms that extend the domain of Navigation Among Movable Obstacles (NAMO) to unknown environments. Efficient real-time algorithms for solving NAMO problems even when no initial environment information is available to the robot are presented and validated. The algorithms yield optimal solutions and are evaluated for real-time performance on a series of simulated domains with more than 70 obstacles. In contrast to previous NAMO algorithms that required a pre-specified environment model, this work considers the realistic domain where the robot is limited by its sensor range and must navigate to a goal position in an environment of static and movable objects. The robot can move objects if the goal cannot be reached or if moving an object significantly shortens the path, and it gains information about the world by bringing distant objects into its sensor range. The first practical planner for this exponentially complex domain is presented. The planner reduces the search space through a collection of techniques, such as upper-bound calculations and the maintenance of sorted lists with underestimates. Further, the algorithm only considers manipulation actions that create a new opening in the environment. In addition to the evaluation of the planner itself, each of these techniques is validated independently.
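    The search-space reduction described above, a frontier ordered by optimistic underestimates plus pruning against the best-known upper bound, is a classic branch-and-bound pattern. A minimal generic sketch under that assumption; the toy graph, costs, and heuristic are hypothetical stand-ins, not the thesis's NAMO domain:

    ```python
    import heapq
    import itertools

    def branch_and_bound(start, is_goal, successors, heuristic):
        """Best-first search with upper-bound pruning: the frontier is sorted by
        an optimistic underestimate (cost so far + heuristic), and any node whose
        underestimate cannot beat the best solution found so far is discarded."""
        best_cost = float("inf")
        tie = itertools.count()                     # tie-breaker for the heap
        frontier = [(heuristic(start), next(tie), 0.0, start)]
        while frontier:
            estimate, _, cost, node = heapq.heappop(frontier)
            if estimate >= best_cost:               # cannot improve: prune
                continue
            if is_goal(node):
                best_cost = min(best_cost, cost)    # tighten the upper bound
                continue
            for nxt, step_cost in successors(node):
                new_cost = cost + step_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt), next(tie), new_cost, nxt))
        return best_cost

    # Hypothetical 1-D toy: walk from 0 to 5 in steps of 1 or 2 (cost = step size).
    print(branch_and_bound(0, lambda n: n == 5,
                           lambda n: [(n + 1, 1.0), (n + 2, 2.0)],
                           lambda n: max(0, 5 - n)))
    ```

    With an admissible underestimate the first goal popped is already optimal; keeping the explicit upper bound additionally lets a planner prune safely even when the expansion order is only approximately best-first.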
  • Item
    Vision-based place categorization
    (Georgia Institute of Technology, 2010-11-18) Bormann, Richard Klaus Eduard
    In this thesis we investigate visual place categorization by combining successful global image descriptors with a method of visual attention in order to automatically detect meaningful objects for places. The idea behind this is to incorporate information about typical objects into place categorization without the need for tedious labelling of important objects. Instead, the applied attention mechanism is intended to find the objects a human observer would focus on first, so that the algorithm can use their discriminative power to infer the place category. Besides this object-based place categorization approach, we employ the Gist and Centrist descriptors as holistic image descriptors. To harness the power of all these descriptors, we employ SVM-DAS (a discriminative accumulation scheme) for cue integration and furthermore smooth the output trajectory with a delayed Hidden Markov Model. For the classification of the variety of descriptors, we present and evaluate several classification methods. Among them are a joint probability modelling approach with two approximations, as well as a modified KNN classifier, AdaBoost, and SVM. The latter two classifiers are extended for multi-class use with a probabilistic computation scheme that treats the individual classifiers as peers rather than as a hierarchical sequence. We examine and tune the different descriptors and classifiers in extensive tests, mainly with a dataset of six homes. After these experiments, we extend the basic algorithm with further filtering and tracking methods and evaluate their influence on performance. Finally, we also test our algorithm in a university environment and on a real robot in a home environment.
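    Discriminative accumulation, as used above for cue integration, combines the per-class decision margins of one SVM per cue before taking the argmax. A minimal sketch with scikit-learn; the random stand-in features, the fixed weights, and the class count are all hypothetical:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical stand-in features for two cues (e.g. Gist and Centrist) over
    # three place categories; real descriptors would replace these.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, 120)
    X_gist = rng.normal(size=(120, 16)) + y[:, None]
    X_centrist = rng.normal(size=(120, 32)) + 0.5 * y[:, None]

    svm_gist = SVC(decision_function_shape="ovr").fit(X_gist, y)
    svm_centrist = SVC(decision_function_shape="ovr").fit(X_centrist, y)

    def das_predict(f_gist, f_centrist, w=(0.6, 0.4)):
        """Accumulate per-cue SVM margins with fixed weights, then take argmax."""
        margins = (w[0] * svm_gist.decision_function(f_gist)
                   + w[1] * svm_centrist.decision_function(f_centrist))
        return margins.argmax(axis=1)

    print(das_predict(X_gist[:5], X_centrist[:5]))
    ```

    In SVM-DAS proper, the combination function is itself learned by a second SVM over the stacked margins rather than given by fixed weights; fixed weights keep the sketch short.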
  • Item
    Joint attention in human-robot interaction
    (Georgia Institute of Technology, 2010-07-07) Huang, Chien-Ming
    Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn considerable attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots for the purposes of achieving natural human-robot interaction and facilitating social learning. Most previous work on the realization of joint attention in the robotics community has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to manipulate another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responders have shifted their attention. However, to the best of our knowledge, there is no work explicitly addressing the ability of a robot to ensure that joint attention is reached by interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the developmental timeline of humans. Infants start with the skill of following a caregiver's gaze, and then they exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as infants age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to check whether the caregiver is attending to the referent. We conducted two experiments to investigate joint attention in human-robot interaction. The first experiment explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring joint attention behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
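    The three-part model above can be read as a simple behavior loop in which initiating is followed by an explicit ensuring phase. A minimal sketch; the motor and perception calls (look_at, point_at, responder_attends_to) are hypothetical stubs, not an API from the thesis:

    ```python
    import random

    # Hypothetical motor and perception stubs standing in for a robot platform.
    def look_at(target):
        print(f"robot gazes at {target}")

    def point_at(target):
        print(f"robot points at {target}")

    def responder_attends_to(target):
        return random.random() < 0.5      # placeholder for gaze-tracking perception

    def ensure_joint_attention(referent, max_checks=3):
        """Initiate joint attention, then alternate gaze between the responder
        and the referent until shared attention is confirmed or attempts run out."""
        point_at(referent)                # initiating joint attention
        for _ in range(max_checks):
            look_at("the human")          # ensuring: check the responder's gaze
            if responder_attends_to(referent):
                look_at(referent)         # attention is shared; resume the task
                return True
            point_at(referent)            # re-initiate and check again
        return False

    print(ensure_joint_attention("the red cup"))
    ```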
  • Item
    Task transparency in learning by demonstration: gaze, pointing, and dialog
    (Georgia Institute of Technology, 2010-07-07) dePalma, Nicholas Brian
    This body of work explores an emerging aspect of human-robot interaction: transparency. Socially guided machine learning has shown that highly interactive robot behaviors yield better performance and shorter training times than less interactive behaviors. While other work explores transparency in learning by demonstration using non-verbal cues to point out the importance of, or users' preference for, particular behaviors, my work follows this argument and extends it by offering cues to the robot's internal task representation. I show that task transparency, the ability to connect and discuss the task in a fluent way, enables the user to shape and correct the learned goal in ways that may be impossible with other present-day learning by demonstration methods. Additionally, some participants are shown to prefer task-transparent robots, which appear to have a capacity for "introspection" by which the learned goal can be modified by methods other than demonstration alone.