Organizational Unit: School of Psychology

Publication Search Results

  • Social responses to virtual humans: the effect of human-like characteristics
    (Georgia Institute of Technology, 2009-07-07) Park, Sung Jun
    A framework for understanding social responses to virtual humans suggests that human-like characteristics (e.g., facial expressions, voice, expression of emotion) act as cues that lead a person to place the agent in the category "human" and thus elicit social responses. Given this framework, this research was designed to answer two outstanding questions raised in the research community (Moon & Nass, 2000): 1) If a virtual human has more human-like characteristics, will it elicit stronger social responses from people? 2) How do the human-like characteristics interact in terms of the strength of social responses? Two social psychology experiments (social facilitation and politeness norm) were conducted to answer these questions. The first experiment investigated whether virtual humans can evoke a social facilitation response, and how strong that response is, when participants are given cognitive tasks (e.g., anagrams, mazes, modular arithmetic) that vary in difficulty. Participants did the tasks alone, in the company of another person, or in the company of a virtual human that varied in its human-like features. The second experiment investigated whether people apply politeness norms to virtual humans. Participants were tutored and quizzed either by a virtual human tutor that varied in its features or by a human tutor, and then evaluated the tutor's performance either directly to the tutor or indirectly via a paper-and-pencil questionnaire. Results indicate that virtual humans can produce social facilitation not only through facial appearance but also through voice recordings. In addition, a voice-synced facial appearance appears to elicit stronger social facilitation than voice alone or face alone: performance in its presence did not differ statistically from performance in the presence of a real human. Similar findings were observed in the politeness norm experiment. Participants who evaluated their tutor directly rated the tutor's performance more favorably than participants who evaluated their tutor indirectly, and this favorability toward the voice-synced facial appearance did not differ statistically from that toward the human tutor. The results suggest that designers of virtual humans should be mindful of the social nature of virtual humans.
  • Two stage process model of learning from multimedia: guidelines for design
    (Georgia Institute of Technology, 2008-03-31) Zolna, Jesse S.
    Theories of learning from multimedia suggest that when media include two modal forms (e.g., visual and auditory), learning is improved by activating modally segregated working memory subsystems, thereby expanding the total cognitive resource available for learning (Mayer, 2001; Sweller, 1999). However, a recent meta-analysis suggests that the typical modality effect (use of narrations and diagrams [i.e., multimodal] leads to better learning than use of text and diagrams [i.e., unimodal]) might be limited to situations in which presentations are matched to the time it takes for the narration to play (Ginns, 2005). This caveat can be accounted for by differences in the ways people process unimodal and multimodal information, but not by the expansion-of-working-memory explanation for modality effects (Tabbers, 2002). In this paper, I propose a framework for conceptualizing how people interact with multimedia instructional materials. According to this approach, learning from multimedia requires (1) creating mental codes to represent to-be-learned information and (2) forming a network of associations among these mental codes to characterize how this information is related. The present research confirms predictions from this model in two between-subjects experiments in which presentation pace and verbal presentation modality were manipulated to accompany static (Experiment 1) and animated (Experiment 2) diagrams. That is, the data suggest that learning from unimodal presentations improved as presentation pace was slowed, whereas learning from multimodal presentations did not change as presentation pace was slowed. A third experiment confirmed predicted patterns of eye movement behavior, demonstrating increasing dwell time on pictures and more switches between media as pace was slowed for unimodal presentations but not multimodal presentations. It is concluded that the parallel patterns of learning outcomes and eye movement behavior support the proposed model and are not predicted by other models of learning from multimedia instruction. This improvement in predicting the effects of design manipulations (e.g., presentation pace and verbal presentation modality) on learning can help designers as they consider what combination of resources (e.g., classroom time or equipment for multimodal presentation) to devote to instructional design.