Person: Turk, Greg
Associated Organization(s): School of Interactive Computing (school established in 2007)
Publication Search Results
Now showing 1 - 10 of 14
-
Item: Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing (Georgia Institute of Technology, 2019-05-24)
Erickson, Zackory ; Clever, Henry M. ; Gangaram, Vamsee ; Turk, Greg ; Liu, C. Karen ; Kemp, Charles C.
Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted to a robot’s end effector. A neural network model estimates the position of the closest point on a person’s limb and the orientation of the limb’s central axis relative to the sensor’s frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants’ right arms while tracking human motion. When assisting with bathing, the robot moved a soft wet washcloth to follow the contours of able-bodied participants’ limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction.
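
The sensing pipeline above maps raw multi-electrode capacitance readings to a local limb pose (the closest point on the limb plus the orientation of its central axis) with a neural network. The sketch below illustrates that kind of regression model in Python/PyTorch; the six-input, five-output layout, the hidden-layer sizes, and the (yaw, pitch) axis encoding are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class CapacitivePoseNet(nn.Module):
        """Regress limb pose from raw capacitance readings (assumed layout)."""

        def __init__(self, n_electrodes=6, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_electrodes, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 5),  # (x, y, z) of closest point + (yaw, pitch) of axis
            )

        def forward(self, capacitance):
            return self.net(capacitance)

    model = CapacitivePoseNet()
    readings = torch.randn(1, 6)        # one frame of six-electrode measurements
    pose_estimate = model(readings)     # would feed a feedback controller in practice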
-
Item: Haptic Simulation for Robot-Assisted Dressing (Georgia Institute of Technology, 2017)
Yu, Wenhao ; Kapusta, Ariel ; Tan, Jie ; Kemp, Charles C. ; Turk, Greg ; Liu, C. Karen
There is a considerable need for assistive dressing among people with disabilities, and robots have the potential to fulfill this need. However, training such a robot would require extensive trials in order to learn the skills of assistive dressing. Such training would be time-consuming and require considerable effort to recruit participants and conduct trials. In addition, for some cases that might cause injury to the person being dressed, it is impractical and unethical to perform such trials. In this work, we focus on a representative dressing task of pulling the sleeve of a hospital gown onto a person’s arm. We present a system that learns a haptic classifier for the outcome of the task given only a few (2-3) real-world trials with one person. Our system first optimizes the parameters of a physics simulator using real-world data. Using the optimized simulator, the system then simulates more haptic sensory data with noise models that account for randomness in the experiment. We then train hidden Markov models (HMMs) on the simulated haptic data. The trained HMMs can then be used to classify and predict the outcome of the assistive dressing task based on haptic signals measured by a real robot’s end effector. This system achieves 92.83% accuracy in classifying the outcome of the robot-assisted dressing task with people not included in simulation optimization. We compare our classifiers to those trained on real-world data. We show that the classifiers from our system can categorize the dressing task outcomes more accurately than classifiers trained on ten times more real data.
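
The core idea above is to calibrate a physics simulator against a small number of real trials before generating training data. The following sketch shows the calibration step only: treat the simulator parameters as a vector and minimize the discrepancy between a simulated force trace and a recorded one. simulate_force_trace is a hypothetical stand-in for the cloth/arm simulator, and Nelder-Mead is simply a convenient derivative-free optimizer, not necessarily the method used in the paper.

    import numpy as np
    from scipy.optimize import minimize

    def simulate_force_trace(params, n_steps=200):
        """Hypothetical stand-in for the cloth/arm physics simulator.

        `params` might encode quantities such as fabric stiffness and friction;
        the returned array plays the role of a simulated end-effector force trace.
        """
        stiffness, friction = params
        t = np.linspace(0.0, 1.0, n_steps)
        return stiffness * t + friction * np.sin(8.0 * np.pi * t)  # placeholder dynamics

    def calibration_loss(params, real_trace):
        # Discrepancy between the simulated and measured forces for one trial.
        sim = simulate_force_trace(params, n_steps=len(real_trace))
        return np.mean((sim - real_trace) ** 2)

    # Stand-in for a recorded real-world dressing trial.
    real_trace = simulate_force_trace([1.3, 0.25]) + 0.05 * np.random.randn(200)
    result = minimize(calibration_loss, x0=[1.0, 0.1], args=(real_trace,),
                      method="Nelder-Mead")
    print("calibrated simulator parameters:", result.x)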
-
Item: Data-Driven Haptic Perception for Robot-Assisted Dressing (Georgia Institute of Technology, 2016-08)
Kapusta, Ariel ; Yu, Wenhao ; Bhattacharjee, Tapomayukh ; Liu, C. Karen ; Turk, Greg ; Kemp, Charles C.
Dressing is an important activity of daily living (ADL) with which many people require assistance due to impairments. Robots have the potential to provide dressing assistance, but physical interactions between clothing and the human body can be complex and difficult to visually observe. We provide evidence that data-driven haptic perception can be used to infer relationships between clothing and the human body during robot-assisted dressing. We conducted a carefully controlled experiment with 12 human participants during which a robot pulled a hospital gown along the length of each person’s forearm 30 times. This representative task resulted in one of the following three outcomes: the hand missed the opening to the sleeve; the hand or forearm became caught on the sleeve; or the full forearm successfully entered the sleeve. We found that hidden Markov models (HMMs) using only forces measured at the robot’s end effector classified these outcomes with high accuracy. The HMMs’ performance generalized well to participants (98.61% accuracy) and velocities (98.61% accuracy) outside of the training data. They also performed well when we limited the force applied by the robot (95.8% accuracy with a 2 N threshold), and could predict the outcome early in the process. Despite the lightweight hospital gown, HMMs that used forces in the direction of gravity substantially outperformed those that did not. The best performing HMMs used forces in the direction of motion and the direction of gravity.
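
One way to realize the classification scheme described above is to fit one hidden Markov model per task outcome on force sequences and label a new trial by the model with the highest log-likelihood. The sketch below uses hmmlearn as a convenient stand-in; the outcome names, number of hidden states, and feature layout are assumptions for illustration, not the paper's exact setup.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_outcome_models(sequences_by_outcome, n_states=5):
        """Fit one Gaussian-emission HMM per outcome.

        sequences_by_outcome maps an outcome label (e.g. "missed", "caught",
        "success") to a list of (timesteps, n_force_dims) arrays.
        """
        models = {}
        for outcome, seqs in sequences_by_outcome.items():
            X = np.concatenate(seqs)             # stack all sequences for this outcome
            lengths = [len(s) for s in seqs]     # so the HMM knows sequence boundaries
            models[outcome] = GaussianHMM(n_components=n_states).fit(X, lengths)
        return models

    def classify(models, force_sequence):
        """Label a new trial by the HMM with the highest log-likelihood."""
        return max(models, key=lambda outcome: models[outcome].score(force_sequence))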
-
Item: EasyZoom: Zoom-in-Context Views for Exploring Large Collections of Images (Georgia Institute of Technology, 2013)
Chen, Jiajian ; Xu, Yan ; Turk, Greg ; Stasko, John T.
Image browsing and searching are some of the most common tasks in daily computer use. Zooming techniques are important for searching and browsing a large collection of thumbnail images on a single screen. In this paper we investigate the design and usability of different zoom-in-context views for image browsing and searching. We present two new zoom-in-context views, sliding and expanding views, that can help users explore a large collection of images more efficiently and enjoyably. In the sliding view, the zoomed image moves its neighbors away vertically and horizontally. In the expanding view, the nearby images are pushed away in all directions, and this method uses a Voronoi diagram to compute the positions of the neighbors. We also present the results of a user study that compared the usability of the two zoom-in-context views and an overlapping, non-context zoom in the tasks of searching to match an image or a text description, and the task of brochure making. Although the task completion times were not significantly different, users expressed a preference for the zoom-in-context methods over the standard non-context zoom for text-matching image search and for image browsing tasks.
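
As a rough illustration of the expanding view, the sketch below pushes the other thumbnail centers radially away from the zoomed image; the paper derives neighbor positions from a Voronoi diagram, which is computed here only to show how per-thumbnail cells could be obtained. The displacement rule and parameters are assumptions, not the published layout algorithm.

    import numpy as np
    from scipy.spatial import Voronoi

    def expand_layout(centers, zoom_index, extra_radius):
        """Return new thumbnail centers after zooming one image in place."""
        centers = np.asarray(centers, dtype=float)
        offsets = centers - centers[zoom_index]
        dist = np.linalg.norm(offsets, axis=1)
        dist[zoom_index] = 1.0                         # avoid division by zero
        push = offsets / dist[:, None] * extra_radius  # move neighbors radially outward
        push[zoom_index] = 0.0                         # the zoomed image stays put
        return centers + push

    centers = np.array([[x, y] for y in range(4) for x in range(5)], dtype=float)
    cells = Voronoi(centers)      # per-thumbnail cells (not used further in this toy)
    new_centers = expand_layout(centers, zoom_index=7, extra_radius=0.6)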
-
Item: Vessel Segmentation Using a Shape Driven Flow (Georgia Institute of Technology, 2004-09)
Nain, Delphine ; Yezzi, Anthony ; Turk, Greg
We present a segmentation method for vessels using an implicit deformable model with a soft shape prior. Blood vessels are challenging structures to segment due to their branching and thinning geometry, as well as the decrease in image contrast from the root of the vessel to its thin branches. Using image intensity alone to deform a model for the task of segmentation often results in leakages in areas where the image information is ambiguous. To address this problem, we combine image statistics and shape information to derive a region-based active contour that segments tubular structures and penalizes leakages. We present results on synthetic and real 2D and 3D datasets.
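
For readers unfamiliar with region-based active contours, the sketch below shows one explicit update step of a generic Chan-Vese-style level set, which evolves a contour using inside/outside intensity statistics plus a curvature smoothing term. It omits the paper's soft shape prior that penalizes leakage, and all constants and the toy image are illustrative.

    import numpy as np

    def chan_vese_step(phi, img, mu=0.2, dt=0.5, eps=1.0):
        """One explicit update of a region-based level set (inside where phi > 0)."""
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0      # mean intensity inside
        c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean intensity outside

        # Smoothed Dirac delta concentrates the update near the contour.
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))

        # Curvature term: divergence of the normalized gradient of phi.
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        ny_dy, _ = np.gradient(gy / norm)
        _, nx_dx = np.gradient(gx / norm)
        curvature = nx_dx + ny_dy

        force = mu * curvature - (img - c1) ** 2 + (img - c2) ** 2
        return phi + dt * delta * force

    img = np.zeros((64, 64)); img[20:44, 28:36] = 1.0          # toy bright tubular region
    yy, xx = np.mgrid[:64, :64]
    phi = 10.0 - np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2)  # initial contour: a disc
    for _ in range(100):
        phi = chan_vese_step(phi, img)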
-
Item: Vector Field Design on Surfaces (Georgia Institute of Technology, 2004)
Zhang, Eugene ; Mischaikow, Konstantin Michael ; Turk, Greg
Vector field design on surfaces is necessary for many graphics applications: example-based texture synthesis, non-photorealistic rendering, and fluid simulation. A vector field design system should allow a user to create a large variety of complex vector fields with relatively little effort. In this paper, we present a vector field design system for surfaces that allows the user to control the number of singularities in the vector field and their placement. Our system combines basis vector fields to make an initial vector field that meets the user's specifications. The initial vector field often contains unwanted singularities. Such singularities cannot always be eliminated, due to the Poincaré-Hopf index theorem. To reduce the effect caused by these singularities, our system allows a user to move a singularity to a more favorable location or to cancel a pair of singularities. These operations provide topological guarantees for the vector field in that they only affect the user-specified singularities. Other editing operations are also provided so that the user may change the topological and geometric characteristics of the vector field. We demonstrate our vector field design system for several applications: example-based texture synthesis, painterly rendering of images, and pencil sketch illustrations of smooth surfaces.
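
A minimal way to picture the basis-field idea in the plane: sum a few elementary fields (source, sink, vortex), each centered at a user-placed singularity and attenuated away from it. The Gaussian falloff and the 2x2 element matrices below are illustrative choices, not the paper's exact basis functions, and this sketch ignores the surface setting entirely.

    import numpy as np

    ELEMENT = {                          # 2x2 matrices defining each singularity type
        "source": np.array([[1.0, 0.0], [0.0, 1.0]]),
        "sink":   np.array([[-1.0, 0.0], [0.0, -1.0]]),
        "vortex": np.array([[0.0, -1.0], [1.0, 0.0]]),
    }

    def design_field(points, elements, falloff=4.0):
        """points: (N, 2) sample positions; elements: list of (kind, center) pairs."""
        field = np.zeros_like(points)
        for kind, center in elements:
            offset = points - np.asarray(center, dtype=float)
            weight = np.exp(-falloff * np.sum(offset ** 2, axis=1, keepdims=True))
            field += weight * (offset @ ELEMENT[kind].T)   # weighted basis field
        return field

    xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 30), np.linspace(0.0, 1.0, 30))
    points = np.stack([xs.ravel(), ys.ravel()], axis=1)
    V = design_field(points, [("source", (0.25, 0.5)), ("vortex", (0.75, 0.5))])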
-
Item: Feature-Based Surface Parameterization and Texture Mapping (Georgia Institute of Technology, 2003)
Zhang, Eugene ; Mischaikow, Konstantin Michael ; Turk, Greg
Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and pre-computation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this paper, we propose an automatic parameterization method that segments a surface into patches that are then flattened with little stretch. We observe that many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Therefore, we propose a three-stage feature-based patch creation method for manifold mesh surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based Morse functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points. To reduce the stretch during patch unfolding, we notice that the stretch is a 2x2 tensor which in ideal situations is the identity. Therefore, we propose to use the Green-Lagrange tensor to measure stretch and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding scaffold triangles. We demonstrate our feature identification and patch unfolding methods for several textured models. Finally, to evaluate the quality of a given parameterization, we propose an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and visibility.
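
The stretch measure mentioned above can be written per triangle: if J is the linear map from a triangle's (u, v) coordinates to its 3D vertices, the Green-Lagrange tensor E = (J^T J - I)/2 vanishes exactly when the flattening is an isometry. The sketch below computes E and uses its Frobenius norm as a stretch penalty; treating that norm as the optimization objective is an assumption for illustration, not the paper's exact formulation.

    import numpy as np

    def green_lagrange(tri3d, tri2d):
        """tri3d: (3, 3) triangle vertices in 3D; tri2d: (3, 2) their (u, v) coords."""
        P = np.stack([tri3d[1] - tri3d[0], tri3d[2] - tri3d[0]], axis=1)  # 3 x 2
        Q = np.stack([tri2d[1] - tri2d[0], tri2d[2] - tri2d[0]], axis=1)  # 2 x 2
        J = P @ np.linalg.inv(Q)            # maps (u, v) displacements to 3D
        return 0.5 * (J.T @ J - np.eye(2))

    def stretch_penalty(tri3d, tri2d):
        # Zero only for an isometric flattening of this triangle.
        return np.linalg.norm(green_lagrange(tri3d, tri2d), ord="fro") ** 2

    tri3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
    tri2d = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(stretch_penalty(tri3d, tri2d))   # > 0: this flattening stretches the triangle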
-
Item: Reconstructing Surfaces by Volumetric Regularization (Georgia Institute of Technology, 2000)
Dinh, Huong Quynh ; Turk, Greg ; Slabaugh, Gregory G.
We present a new method of surface reconstruction that generates smooth and seamless models from sparse, noisy, and non-uniform range data. Data acquisition techniques from computer vision, such as stereo range images and space carving, produce three-dimensional point sets that are imprecise and non-uniform when compared to laser or optical range scanners. Traditional reconstruction algorithms designed for dense and precise data cannot be used on stereo range images and space-carved volumes. Our method constructs a three-dimensional implicit surface, formulated as a summation of weighted radial basis functions. We achieve three primary advantages over existing algorithms: (1) the implicit functions we construct estimate the surface well in regions where there is little data; (2) the reconstructed surface is insensitive to noise in data acquisition because we can allow the surface to approximate, rather than exactly interpolate, the data; and (3) the reconstructed surface is locally detailed, yet globally smooth, because we use radial basis functions that achieve multiple orders of smoothness.
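
The implicit-surface formulation above can be illustrated with a small radial-basis-function interpolation: constrain the function to zero at surface samples and to a negative value at interior offset points, then solve a linear system for the weights. The Gaussian basis, the crude offset points, and exact interpolation in this sketch are simplifications; the paper uses basis functions with multiple orders of smoothness and allows approximation of noisy data.

    import numpy as np

    def rbf(r, sigma=0.2):
        # Gaussian basis, used here only because its interpolation matrix is easy
        # to solve; the paper's basis functions have multiple orders of smoothness.
        return np.exp(-(r / sigma) ** 2)

    def fit_weights(centers, values):
        """Exact interpolation: solve A w = v with A_ij = rbf(||c_i - c_j||)."""
        dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        return np.linalg.solve(rbf(dists), values)

    def implicit_value(x, centers, weights):
        """f(x); the reconstructed surface is the zero level set of f."""
        return rbf(np.linalg.norm(centers - x, axis=-1)) @ weights

    # Hypothetical constraints: 0 at surface samples, -1 at crude interior offsets.
    surface_pts = np.random.rand(100, 3)
    centers = np.vstack([surface_pts, surface_pts - 0.05])
    values = np.concatenate([np.zeros(100), -np.ones(100)])
    weights = fit_weights(centers, values)
    print(implicit_value(np.array([0.5, 0.5, 0.5]), centers, weights))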
-
Item: Image-Driven Mesh Optimization (Georgia Institute of Technology, 2000)
Lindstrom, Peter ; Turk, Greg
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method: the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
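
The optimization loop described above can be sketched as follows: repeatedly perturb a vertex of the coarse mesh with the downhill simplex (Nelder-Mead) method so that a rendering of the coarse mesh better matches a rendering of the original. The render_silhouette placeholder below just splats projected vertices into a grid; it stands in for a real renderer and image metric, which are the parts the paper actually relies on, and the random point sets stand in for meshes.

    import numpy as np
    from scipy.optimize import minimize

    def render_silhouette(vertices, res=64):
        """Placeholder 'renderer': splat the xy-projection of vertices into a grid."""
        img = np.zeros((res, res))
        ij = np.clip(((vertices[:, :2] * 0.5 + 0.5) * (res - 1)).astype(int), 0, res - 1)
        img[ij[:, 1], ij[:, 0]] = 1.0
        return img

    def image_error(coarse_verts, reference_image):
        return np.sum((render_silhouette(coarse_verts) - reference_image) ** 2)

    def optimize_vertex(coarse_verts, v, reference_image):
        """Move one vertex with downhill simplex to reduce the image difference."""
        def cost(p):
            trial = coarse_verts.copy()
            trial[v] = p
            return image_error(trial, reference_image)
        best = minimize(cost, coarse_verts[v], method="Nelder-Mead")
        coarse_verts[v] = best.x
        return coarse_verts

    original = np.random.randn(500, 3) * 0.3   # stand-in for the detailed input mesh
    coarse = original[::10].copy()             # stand-in for the low vertex count mesh
    reference = render_silhouette(original)
    coarse = optimize_vertex(coarse, v=0, reference_image=reference)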
-
Item: Simplification and Repair of Polygonal Models Using Volumetric Techniques (Georgia Institute of Technology, 1999)
Nooruddin, Fakir S. ; Turk, Greg
Two important tools for manipulating polygonal models are simplification and repair, and we present voxel-based methods for performing both of these tasks. We describe a method for converting polygonal models to a volumetric representation in a way that handles models with holes, double walls, and intersecting parts. This allows us to perform polygon model repair simply by converting a model to and from the volumetric domain. We also describe a new topology-altering simplification method that is based on 3D morphological operators. Visually unimportant features such as tubes and holes may be eliminated from a model by the open and close morphological operators. Our simplification approach accepts polygonal models as input, scan-converts these to create a volumetric description, performs topology modification, and then converts the results back to polygons. We then apply a topology-preserving polygon simplification technique to produce a final model. Our simplification method produces results that are everywhere manifold.
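
The morphological step described above is easy to prototype on a boolean voxel grid: opening removes thin protrusions such as tubes, and closing fills small holes, after which the volume would be converted back to polygons (for example with marching cubes, not shown). The kernel size and the toy model below are illustrative assumptions, not the paper's settings.

    import numpy as np
    from scipy import ndimage

    def simplify_topology(voxels, feature_size=3):
        """voxels: boolean 3D occupancy grid of the scan-converted model."""
        structure = np.ones((feature_size,) * 3, dtype=bool)
        opened = ndimage.binary_opening(voxels, structure=structure)  # remove thin tubes
        closed = ndimage.binary_closing(opened, structure=structure)  # fill small holes
        return closed

    grid = np.zeros((64, 64, 64), dtype=bool)
    grid[16:48, 16:48, 16:48] = True        # hypothetical solid block
    grid[31:33, 31:33, :] = True            # a 2-voxel-thick tube that opening removes
    cleaned = simplify_topology(grid)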