Organizational Unit: School of Psychology

Publication Search Results

Now showing 1 - 10 of 21
  • Item
    Emotion and motion: age-related differences in recognizing virtual agent facial expressions
    (Georgia Institute of Technology, 2011-10-05) Smarr, Cory-Ann
Technological advances will allow virtual agents to increasingly help individuals with daily activities. As such, virtual agents will interact with users of various ages and experience levels. Facial expressions are often used to facilitate social interaction between agents and humans. However, older and younger adults do not label human or virtual agent facial expressions in the same way, with older adults commonly mislabeling certain expressions. The dynamic formation of facial expression, or motion, may provide additional facial information, potentially making emotions less ambiguous. This study examined how motion affects younger and older adults in recognizing various intensities of emotion displayed by a virtual agent. Contrary to the dynamic advantage found in emotion recognition for human faces, older adults had higher emotion recognition for static virtual agent faces than for dynamic ones. Motion condition did not influence younger adults' emotion recognition. Younger adults had higher emotion recognition than older adults for the emotions of anger, disgust, fear, happiness, and sadness. Low intensities of expression had lower emotion recognition than medium to high expression intensities.
  • Item
    Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition
    (Georgia Institute of Technology, 2010-04-08) Beer, Jenay Michelle
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This research study examined whether emotion recognition of facial expressions differed between different types of on-screen agents, and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (age 28-28) and 42 older (age 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, then the virtual agent with the lowest proportion match. Both the human and synthetic human faces resulted in age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults showing higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, with younger adults showing higher proportion match. The data analysis and interpretation of the present study differed from previous work by utilizing two unique approaches to understanding emotion recognition. First, misattributions participants made when identifying emotion were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences extend beyond human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
  • Item
    The manipulation of user expectancies: effects on reliance, compliance, and trust using an automated system
    (Georgia Institute of Technology, 2008-03-31) Mayer, Andrew K.
As automated technologies continue to advance, they will be perceived more as collaborative team members and less as simply helpful machines. Expectations of the likely performance of others play an important role in how their actual performance is judged (Stephan, 1985). Although user expectations have been described as important for human-automation interaction, this factor has not been systematically investigated. The purpose of the current study was to examine the effect that older and younger adults' expectations of likely automation performance have on human-automation interaction. In addition, this study investigated the effect of different automation errors (false alarms and misses) on dependence, reliance, compliance, and trust in an automated system. Findings suggest that expectancy effects are relatively short-lived, significantly affecting reliance and compliance only through the first experimental block. The effects of type of automation error indicate that participants in a false alarm condition increased reliance and decreased compliance, while participants in a miss condition did not change their behavior. The results are important because expectancies must be considered when designing training for human-automation interaction. In addition, understanding the effects of the type of automation error is crucial for the design of automated systems. For example, if the automation is designed for diverse and dynamic environments where automation performance may fluctuate, then a deeper understanding of automation functioning may be needed by users.
  • Item
    Effects of mental model quality on collaborative system performance
    (Georgia Institute of Technology, 2008-03-31) Wilkison, Bart D.
As the tasks humans perform become more complicated and the technology manufactured to support those tasks becomes more adaptive, the relationship between humans and automation transforms into a collaborative system. In this system each member depends on the input of the other to reach a predetermined goal beneficial to both parties. Studying the human/automation dynamic as a social team provides a new set of variables affecting performance previously unstudied by automation researchers. One such variable is the shared mental model (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000). This study examined the relationship between mental model quality and collaborative system performance within the domain of a navigation task. Participants navigated through a simulated city with the help of a navigational system performing at two levels of accuracy: 70% and 100%. Participants with robust mental models of the task environment identified automation errors when they occurred and navigated optimally to destinations. Conversely, users with vague mental models were less likely to identify automation errors, and chose inefficient routes to destinations. Thus, mental model quality proved to be an effective predictor of navigation performance. Additionally, participants with no mental model performed as well as participants with vague mental models; the difference between these groups was in the number and type of errors committed. This research is important because it supports previous assertions that humans and automated systems can work as teammates and perform teamwork (Nass, Fogg, & Moon, 2000). Thus, other variables found to impact human/human team performance might also affect human/automation team performance, just as this study explored the effects of a primarily human/human team performance variable, the mental model. Additionally, this research suggests that a training program creating a weak, inaccurate, or incomplete mental model in the user is equivalent, in terms of performance, to no training program. Finally, through a qualitative model, this study proposes that mental model quality affects the constructs of user self-confidence and trust in automation. These two constructs are thought to ultimately determine automation usage (Lee & Moray, 1994). To validate the model, a follow-on study is proposed to measure automation usage as mental model quality changes.
  • Item
    Toward an understanding of optimal performance within a human-automation collaborative system: Effects of error and verification costs
    (Georgia Institute of Technology, 2006-11-20) Ezer, Neta
Automated products, especially automated decision aids, have the potential to improve the lives of older adults by supporting their daily needs. Although automation seems promising in this arena, there is evidence that humans, in general, tend to have difficulty optimizing their behavior with a decision aid, and older adults even more so. In a human-automation collaborative system, the ability to balance the costs involved in relying on the automation and those involved in verifying the automation is essential for optimal performance and error minimization. Thus, this study was conducted to better understand the processes associated with balancing these costs and also to examine age differences in these processes. Cost of reliance on automation was evaluated using an object counting task. Participants were required to indicate the number of circles on a display, with support coming from a computer-estimate decision aid. They were instructed to rely on the aid if they believed its answer, or to verify the aid by manually counting the circles on the screen if they did not believe the aid to be correct. Manipulations in this task were the cost of a wrong answer (-5, -10, -25, or -50 points) and the cost of verification (high or low). It was expected that participants would develop a general pattern of appropriate reliance across the cost conditions, but would not change their reliance behavior enough to reach optimality. Older adults were expected to rely on the decision aid to a lesser extent than younger adults in all conditions, yet to rate the automation as more reliable. It was found that older and younger adults did not show large differences in reliance, although older adults tended to be more resistant than younger adults to changing their reliance in response to costs. Both age groups significantly underutilized the computer estimate, yet overestimated its reliability. The results are important because it may be necessary to design automated devices and training programs differently for older adults than for younger adults, to direct them towards an optimal strategy of reliance.
  • Item
    Privacy Perceptions of Visual Sensing Devices: Effects of Users' Ability and Type of Sensing Device
    (Georgia Institute of Technology, 2006-07-17) Caine, Kelly E.
Homes that can collaborate with their residents rather than simply provide shelter are becoming a reality. These homes, such as Georgia Tech's Aware Home and MIT's house_n, can potentially provide support to their residents. Because aging adults may be faced with increasing mental and/or physical limitation(s), they may stand to benefit, in particular, from supports provided by these homes if they utilize the technologies they offer. However, the advanced technology in these aware homes often makes use of sensing devices that capture some kind of image-based information. Image-based information capture has previously been shown to elicit privacy concerns among users, and even lead to disuse of the system. The purpose of this study was to explore the privacy concerns that older adults had about a home equipped with visual sensing devices. Using a scenario-based structured interview approach, I investigated how the type of images the home captures, as well as the physical and mental health of the residents of the home, affected privacy concerns and perceived benefits. In addition, responses to non-scenario-based, open-ended structured interview questions were used to gain an understanding of the characteristics of the influential variables. Results suggest that although most older adults express some concerns about using a visual sensing device in their home, the potential benefits of having such a device in specific circumstances outweigh their concerns. These findings have implications for privacy and technology acceptance theory as well as for designers of home-based visual monitoring systems.
  • Item
    Perceived Product Hazard Norms in Younger and Older Adults
    (Georgia Institute of Technology, 2004-12-02) Bowles, C. Travis (Christopher Travis)
Designers and researchers have often assumed that individuals rely to some degree on their own perceptions of a product's hazard when interacting with warning systems that accompany the product. However, few investigations have been made to determine precisely what these perceptions are, and how they may differ across diverse populations (such as age groups). Younger and older adults were tested for perceived product hazards over a diverse group of products using a Battig and Montague (1969) style procedure. Participants were presented with a total of 78 products and asked to list the first hazards that came to mind for each (up to 7 per product). Comparisons revealed age-related differences between the most commonly perceived hazards for 28 of the products, with many of the age-related differences not predicted prior to data collection. The resulting data additionally form a tool for designing warning systems and research stimuli based on the products or classes of products represented in this sample.
  • Item
    Type of automation failure: the effects on trust and reliance in automation
    (Georgia Institute of Technology, 2004-12-01) Johnson, Jason D.
Past automation research has focused primarily on machine-related factors (e.g., automation reliability) and human-related factors (e.g., accountability). Other machine-related factors, such as the type of automation error (misses or false alarms), have been noticeably overlooked. These two automation errors correspond to potential operator errors, omission (misses) and commission (false alarms), which have proven to directly affect operators' trust in automation. This research examined how the type of automation error affects operator trust in, reliance on, and perceived reliability of automated decision aids. The present research confirmed that perceived reliability is often lower than actual system reliability and that false alarms reduced operator trust in the automation significantly more than misses did. In addition, this study found that there does not appear to be an age-based effect on the level of subjective trust within each experimental condition (i.e., type of automation error). There does, however, appear to be a significant difference in reliance on automation between older and younger adult participants, attributed to differences in perceived workload.
  • Item
    The attention attraction characteristics of signal words under division of attention
    (Georgia Institute of Technology, 2002-05) Lin, Chao-Chung