Abstract: Described is a system, method, and computer program product that substantially advances the art of animating Lip Sync in 3D computer-animated characters by automatically producing data from a Phoneme Transcription of a dialog audio file, data that results in Lip Sync animation that is more realistic, smooth, and aesthetically pleasing than that produced by current Phoneme-Target Lip Sync systems. This Invention works by converting a Phoneme Transcription of a recorded dialog audio file into KeyFrame Data that dynamically controls 16 independent animation Parameters, each associated with a different part of the animated character's mouth, and then algorithmically modifying that data so that it conforms to the previously unknown, complex, subtle, and context-specific relationships between audible phonemes and visible mouth movements.
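The pipeline the abstract describes — a timed Phoneme Transcription converted into per-parameter KeyFrame Data, then algorithmically modified to reflect context between neighboring phonemes — can be sketched as follows. This is a minimal illustration only: the abstract does not disclose the 16 Parameters, the phoneme-to-pose mapping, or the actual modification rules, so the parameter names (`jaw_open`, `lip_round`), the pose table, and the neighbor-blending step are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class KeyFrame:
    time: float   # seconds into the dialog audio
    values: dict  # animation Parameter name -> pose value in [0, 1]

# Hypothetical phoneme-to-pose table covering two of a possible
# 16 independent mouth Parameters (values are illustrative only).
POSES = {
    "AA": {"jaw_open": 0.9, "lip_round": 0.1},
    "M":  {"jaw_open": 0.0, "lip_round": 0.3},
    "UW": {"jaw_open": 0.3, "lip_round": 0.9},
}

def transcription_to_keyframes(transcription):
    """Convert (phoneme, start_time) pairs into raw KeyFrame Data,
    one keyframe per phoneme onset."""
    return [KeyFrame(t, dict(POSES[p])) for p, t in transcription]

def smooth(keyframes, weight=0.5):
    """Blend each keyframe's values toward the average of its neighbors.
    This is a simple stand-in for the abstract's context-specific
    modification step, not the patented algorithm itself."""
    out = []
    for i, kf in enumerate(keyframes):
        blended = {}
        for name, v in kf.values.items():
            neighbors = [keyframes[j].values[name]
                         for j in (i - 1, i + 1)
                         if 0 <= j < len(keyframes)]
            target = sum(neighbors) / len(neighbors) if neighbors else v
            blended[name] = (1 - weight) * v + weight * target
        out.append(KeyFrame(kf.time, blended))
    return out

# Usage: a three-phoneme transcription with onset times in seconds.
transcription = [("M", 0.0), ("AA", 0.1), ("UW", 0.3)]
raw = transcription_to_keyframes(transcription)
smoothed = smooth(raw, weight=0.5)
```

The blending step illustrates why post-processing matters: the raw keyframes snap each Parameter directly to its phoneme pose, while the smoothed data moves each pose partway toward its neighbors, softening abrupt mouth transitions.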