Dissertations, Theses, and Capstone Projects
Date of Degree
2-2014
Document Type
Dissertation
Degree Name
Ph.D.
Program
Computer Science
Advisor
Matt Huenerfauth
Committee Members
Vicki Hanson
Liang Huang
Andrew Rosenberg
Subject Categories
Computer Sciences
Abstract
Techniques for producing realistic and understandable animations of American Sign Language (ASL) have accessibility benefits for signers with lower levels of written language literacy. Previous research in sign language animation did not address the specific linguistic issue of space use and verb inflection, due to a lack of sufficiently detailed and linguistically annotated ASL corpora, which are necessary for modern data-driven approaches. In this dissertation, a high-quality ASL motion capture corpus with ASL-specific linguistic structures is collected, annotated, and evaluated using carefully designed protocols and well-calibrated motion capture equipment. In addition, ASL animations are modeled, synthesized, and evaluated based on samples of ASL signs collected from native-signer animators or from signers recorded using motion capture equipment.
Part I of this dissertation focuses on how an ASL corpus is collected, including unscripted ASL passages and ASL inflecting verbs, signs in which the location and orientation of the hands are influenced by the arrangement of locations in 3D space that represent entities under discussion. Native signers are recorded in a studio with motion capture equipment: cyber-gloves, body suit, head tracker, hand tracker, and eye tracker. Part II describes how ASL animation is synthesized using our corpus of ASL inflecting verbs. Specifically, mathematical models of hand movement are trained on animation data of signs produced by a native signer.
This dissertation work demonstrates that mathematical models can be trained and built using movement data collected from humans. The evaluation studies with deaf native-signer participants show that the verb animations synthesized from our models achieve subjective-rating and comprehension-question scores similar to those of animations produced by a human animator or animations driven by a human's motion capture data. The modeling techniques in this dissertation are applicable to other types of ASL signs and to other sign languages used internationally. These models' parameterization of sign animations can increase the repertoire of generation systems and can automate the work of humans using sign language scripting systems.
Recommended Citation
Lu, Pengfei, "Data-driven Synthesis of Animations of Spatially Inflected American Sign Language Verbs Using Human Data" (2014). CUNY Academic Works.
https://academicworks.cuny.edu/gc_etds/1418