Person:
Clements, Mark A.

Publication Search Results

Automated Assessment of Surgical Skills Using Frequency Analysis

2015. Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L.; Clements, Mark A.; Essa, Irfan

We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. We introduce video analysis techniques that extract motion quality via frequency coefficients. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the information that differentiates the surgeons' skill levels. The results show significant performance improvements using discrete Fourier transform (DFT) and discrete cosine transform (DCT) coefficients over known state-of-the-art techniques.
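
The core idea, summarizing each motion time series by its low-order DFT and DCT coefficients before classification, can be sketched in a few lines of Python. The sketch below is illustrative only; the function name, coefficient count, and toy signals are assumptions rather than the authors' pipeline.

    import numpy as np
    from scipy.fft import fft, dct

    def frequency_features(motion, n_coeffs=20):
        """Summarize a 1-D motion time series (e.g., one coordinate of a
        tracked tool or hand over time) by its low-order frequency content."""
        x = np.asarray(motion, dtype=float)
        x = x - x.mean()                            # drop the DC offset
        dft_mag = np.abs(fft(x))[:n_coeffs]         # magnitudes of the leading DFT bins
        dct_coef = dct(x, norm="ortho")[:n_coeffs]  # leading DCT coefficients
        return np.concatenate([dft_mag, dct_coef])

    # A smooth trajectory vs. a jittery one: the jittery signal spreads more
    # energy into higher-frequency bins, the kind of cue a classifier can use
    # to separate skill levels.
    t = np.linspace(0, 4 * np.pi, 256)
    smooth = np.sin(t)
    jittery = smooth + 0.4 * np.random.default_rng(0).standard_normal(t.size)
    print(frequency_features(smooth)[:5])
    print(frequency_features(jittery)[:5])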

Decoding Children’s Social Behavior

2013-06. Rehg, James M.; Abowd, Gregory D.; Rozga, Agata; Romero, Mario; Clements, Mark A.; Sclaroff, Stan; Essa, Irfan; Ousley, Opal Y.; Li, Yin; Kim, Chanho; Rao, Hrishikesh; Kim, Jonathan C.; Presti, Liliana Lo; Zhang, Jianming; Lantsman, Denis; Bidwell, Jonathan; Ye, Zhefan

We introduce a new problem domain for activity recognition: the analysis of children's social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1–2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly available dataset containing over 160 sessions of 3–5 minute child-adult interactions. In each session, the adult examiner followed a semi-structured play interaction protocol designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and we show preliminary results for multi-modal activity recognition.
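
As a rough illustration of what multi-modal activity recognition on such data can look like, the sketch below fuses per-session video and audio feature vectors by concatenation and trains a single classifier. Everything here (the feature dimensions, the random stand-in data, the logistic-regression model) is an assumption for illustration, not the paper's method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy stand-ins for per-session descriptors; in practice these would be
    # computed from the video and audio streams of each recorded session.
    rng = np.random.default_rng(0)
    n_sessions = 160
    video_feats = rng.standard_normal((n_sessions, 32))
    audio_feats = rng.standard_normal((n_sessions, 16))
    labels = rng.integers(0, 2, n_sessions)  # e.g., target behavior present or absent

    # Simple feature-level fusion: concatenate the modalities and train one model.
    fused = np.hstack([video_feats, audio_feats])
    clf = LogisticRegression(max_iter=1000).fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))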