Organizational Unit:
Undergraduate Research Opportunities Program

Publication Search Results

  • Item
    Examinator: Detecting Exam Cheating Via Comparison of Question Answering Timings
    (Georgia Institute of Technology, 2022-05) Sonubi, Imanuel Oluseyi
    Cheating is an issue that affects more than just the student doing it, no matter the format of the assessment being cheated on. Take-home exams provide more flexibility for the instructor and student than regular proctored exams, but it is that lack of proctoring during the exam that makes cheating trickier to detect -- students may meet up outside of the classroom and inappropriately collaborate on these tests even though they are to be done individually. Examinator aims to detect cheating on Canvas take-home exams by examining the times at which students view questions and comparing them with other students' times to find any exam attempts that have suspiciously similar timestamps for each question.
  • Item
    CopyCat: Leveraging American Sign Language Recognition in Educational Games for Deaf Children
    (Georgia Institute of Technology, 2022-05) Ravi, Prerna
    Deaf children born to hearing parents lack continuous access to language, leading to weaker working memory compared to hearing children and deaf children born to deaf parents. CopyCat is a game where children communicate with the computer via American Sign Language (ASL), and it has been shown to improve language skills and working memory. Previously, CopyCat depended on unscalable hardware such as custom gloves for sign verification, but modern 4K cameras and pose estimators present new opportunities. This thesis focuses on the current version of the CopyCat game using off-the-shelf hardware, as well as the state-of-the-art sign language recognition system we have developed to augment game play. Using Hidden Markov Models (HMMs), user-independent word accuracies were 90.6%, 90.5%, and 90.4% for AlphaPose, Kinect, and MediaPipe, respectively. Transformers, a state-of-the-art model in natural language processing, performed 17.0% worse on average. Given these results, we believe our current HMM-based recognizer can be successfully adapted to verify children’s signing while playing CopyCat.
  • Item
    A Personalized American Sign Language Game to Improve Short-Term Memory for Deaf Children
    (Georgia Institute of Technology, 2022-05) Agrawal, Pranay
    95% of deaf children are born to hearing parents and lack continuous exposure to language, which often inhibits learning. We are developing Adaptive CopyCat, an educational game where Deaf children communicate with the computer via American Sign Language (ASL) in order to improve their language skills and working memory. While previous versions of CopyCat relied on custom hardware such as colored gloves with accelerometers for sign verification, our current version of the game utilizes off-the-shelf 4K RGB depth cameras and pose estimators. Before re-creating the game for Deaf children, we evaluate the efficacy of our current CopyCat ASL recognition system with 12 adults. Average user-independent sentence and word accuracies were 85.1% and 95.4%, respectively. To improve the accuracy when new users are introduced, we developed a progressive training model that can adapt to a new user's signing as they play the game. This approach produced a 5% absolute increase in sentence accuracy. To test for generality, a 13th user was recruited six months after the initial experiment and achieved similarly high accuracies. These promising results suggest that our recognizer will be sufficiently accurate for verifying children's signing while playing Adaptive CopyCat.
  • Item
    Evaluating Off-Center Head-Worn Display
    (Georgia Institute of Technology, 2019-12) Ramakrishnan, Rohan
    Several studies have highlighted the advantages of using mobile augmented reality systems to assist with various tasks over traditional paper-based methods. However, these interfaces are often located in users’ primary field of view, which interferes with users’ vision and introduces several disruptions. In this paper, a new “off-center” display type is prototyped and compared against other display types using a coloring task. Metrics such as completion time, errors, and workload are collected and used to identify tradeoffs between the different display types and determine their feasibility.
  • Item
    Teaching American Sign Language to Hearing Parents of Deaf Children with Games
    (Georgia Institute of Technology, 2019-05) Goebel, Madeleine Elizabeth
    More than 95% of deaf children in the United States are born to hearing parents (Mitchell & Karchmer 2004). With the majority of hearing parents having little to no exposure to American Sign Language (ASL) prior to the birth of their deaf child, many struggle to learn sign language while also beginning to use it to communicate with their new infant. The language deprivation experienced by deaf children as a result of their parents’ inability to communicate delays their development (Kusche 1984). With the advent of smartphones and the rising popularity of movements such as BabySign, many different portable ASL lessons have been developed. It has been shown that these lessons are more effective at teaching vocabulary than classroom lessons (Lu 2008). However, these lessons struggle with a high attrition rate of students after a few weeks (Summet 2010). Recent developments in student-centered education indicate that incorporating achievement goals leads to a lower attrition rate in language classes (Oberg & Daniels 2013). To reduce the rate of attrition, I used a popular, multi-level game as a framework for the lessons and incorporated ASL phrases into the game play.
  • Item
    Deception Detection Tests Along with Brain-Computer Interface
    (Georgia Institute of Technology, 2018-08) Talati, Aatmay S
    Polygraphs (also known as lie detectors) are the primary tools used in deception detection tests (DDTs) to uncover hidden, crucial information associated with a crime or incident. Often, other factors, such as nervousness, fear, and anger, negatively affect polygraph data. In some cases, the subject may be highly trained to intentionally falsify DDT results through practices such as yoga and meditation. Measuring a few event-related potentials (ERPs) alongside the usual polygraph test parameters would be helpful, since an ERP component is elicited during decision making. The failure rate of the polygraph is 60%, as recorded by the U.S. Customs and Border Protection (CBP) pre-employment polygraph screening program. This work will help investigators augment polygraph sensing to determine deception.
  • Item
    Synchronous Interfaces for Wearable Computers
    (Georgia Institute of Technology, 2018-05) Wu, Jason
    Synchronous interfaces provide a new input modality for wearable devices, requiring minimal user learning and calibration. We present SeeSaw, a synchronous gesture interface for commodity smartwatches that supports rapid, one-handed input with no additional hardware. Our algorithm introduces methods for minimizing false-trigger events while facilitating fast and expressive input. Results from a live evaluation of the system as a one-handed notification response gesture show speed and accuracy comparable to two-handed touch-based interfaces on smartwatches. SeeSaw is also evaluated as an input interface for smartwatches and head-worn display systems, where it enables rapid and accurate interaction. Thus, we find that the SeeSaw synchronous gesture offers a compelling alternative to existing input methods on wearable computers. Finally, a suite of demo applications is presented to show SeeSaw's support for binary, multi-target, and activation input.
  • Item
    Wearable Gesture Recognition with Heterogeneous Cameras
    (Georgia Institute of Technology, 2016-08) Labean, Tyler J.
    The purpose of this research was to create a wearable system that recognizes the user's gestures, allowing interaction through hand gestures. The user wears a hat mounted with a regular optical camera and a thermal camera. The combination of these two heterogeneous video streams was used to recognize the user’s gestures across many conditions and environments. First, corners were detected in contrast-stretched images using the Shi-Tomasi method. The movement of these corners was then tracked using Lucas-Kanade optical flow analysis. Groups of corners that moved together were identified using hierarchical cluster linkage analysis, and a connected-components analysis determined how these groups moved over time. Each motion path was reduced to its cardinal and semi-cardinal vector components to encode the motion vector. Subsequently, this data was used to train hidden Markov models for each gesture and each camera. After evaluating gesture priority over all hidden Markov models, principal components analysis was performed on the gesture-prioritized set to train a one-vs-one multiclass recognizer. Finally, a confusion matrix was generated, indicating a recognition success rate of 87%. The robustness of the algorithm was analyzed under various luminance, heat, and image-variance conditions. The contribution of combining the optical and thermal video streams, versus using either stream alone, was evaluated and found to be a significant advantage. Additionally, a video database of gestures was created and will be released so that other researchers can compare algorithms and benchmarks using the same dataset.
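
The timing-comparison idea in the Examinator abstract above can be illustrated with a small sketch. This is a hypothetical illustration, not the thesis's actual algorithm: given each student's per-question first-view timestamps, compute the mean absolute difference for every pair of attempts and flag pairs that fall below a threshold as suspiciously similar.

```python
from itertools import combinations

def flag_similar_attempts(timings, threshold=5.0):
    """Flag pairs of exam attempts whose per-question view times are
    suspiciously close. `timings` maps student -> list of seconds into
    the exam at which each question was first viewed.
    (Illustrative only; not Examinator's actual method.)"""
    flagged = []
    for (a, ta), (b, tb) in combinations(timings.items(), 2):
        # Mean absolute difference across matching question timestamps
        diff = sum(abs(x - y) for x, y in zip(ta, tb)) / len(ta)
        if diff < threshold:
            flagged.append((a, b, diff))
    return flagged

timings = {
    "s1": [10, 95, 180, 260],
    "s2": [12, 94, 183, 258],   # tracks s1 closely
    "s3": [40, 150, 300, 420],  # independent pacing
}
suspects = flag_similar_attempts(timings)
```

Here only the `s1`/`s2` pair is flagged; real attempts would need a threshold calibrated against normal pacing variation.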
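The HMM-based word recognition described in the two CopyCat abstracts can be sketched in miniature. The code below is a toy discrete-emission Viterbi scorer with made-up parameters; the actual recognizers use continuous pose features from AlphaPose, Kinect, or MediaPipe, and the word models here are hypothetical.

```python
import math

def viterbi_log_likelihood(obs, start, trans, emit):
    """Log-probability of the best state path for a discrete-emission
    HMM (toy parameters; real sign models use continuous features)."""
    v = [math.log(start[s]) + math.log(emit[s][obs[0]])
         for s in range(len(start))]
    for o in obs[1:]:
        # Standard Viterbi recursion: best predecessor, then emission
        v = [max(v[p] + math.log(trans[p][s]) for p in range(len(v)))
             + math.log(emit[s][o]) for s in range(len(v))]
    return max(v)

# Two hypothetical word models over a 2-symbol feature alphabet:
# (start probabilities, transition matrix, emission matrix)
models = {
    "word_a": ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]],
               [[0.9, 0.1], [0.1, 0.9]]),
    "word_b": ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]],
               [[0.1, 0.9], [0.9, 0.1]]),
}
obs = [0, 0, 1, 1]  # observed feature sequence
best = max(models, key=lambda w: viterbi_log_likelihood(obs, *models[w]))
```

Recognition then amounts to scoring the observation sequence against every word model and taking the highest-likelihood word, which is how isolated-sign verification against an expected sentence can be framed.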
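Synchronous gesture interfaces like SeeSaw generally work by correlating the user's motion with a known on-screen rhythm. The SeeSaw abstract does not detail its algorithm, so the sketch below is only a minimal stand-in: Pearson correlation between a sensed signal and the target oscillation, with a threshold to suppress false triggers.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def is_synchronous(sensor, reference, threshold=0.8):
    """Trigger only when the sensed motion closely tracks the displayed
    oscillation; the threshold limits false-trigger events."""
    return pearson(sensor, reference) >= threshold

reference = [math.sin(2 * math.pi * t / 20) for t in range(40)]
in_sync = [0.9 * r + 0.05 for r in reference]  # follows the rhythm
out_of_phase = [math.cos(2 * math.pi * t / 20) for t in range(40)]  # 90° off
```

Note that a quarter-period phase shift already fails the check, which is the property that lets a synchronous interface reject ordinary arm motion.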
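The first two stages of the gesture pipeline above (Shi-Tomasi corner detection followed by Lucas-Kanade tracking) can be sketched with OpenCV; this is a generic illustration of those standard algorithms, not the thesis's implementation, and the synthetic frames are invented for the example.

```python
import numpy as np
import cv2

def track_corners(prev_gray, next_gray, max_corners=50):
    """Detect Shi-Tomasi corners in the first frame and track them into
    the second frame with Lucas-Kanade optical flow."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                   corners, None)
    ok = status.ravel() == 1  # keep only successfully tracked points
    return corners[ok].reshape(-1, 2), moved[ok].reshape(-1, 2)

# Synthetic example: a bright square shifted 3 pixels to the right.
frame1 = np.zeros((64, 64), np.uint8)
frame1[20:40, 20:40] = 255
frame2 = np.roll(frame1, 3, axis=1)
src, dst = track_corners(frame1, frame2)
mean_flow = (dst - src).mean(axis=0)  # should be close to (3, 0)
```

The per-corner displacements `dst - src` are the raw material for the later stages the abstract describes: clustering co-moving corners and quantizing their paths into cardinal and semi-cardinal direction components.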