Project AVATAR aims to develop 3D animated face models synchronized with speech. The system accepts either speech or text as input. For speech input, a speech recognizer is used, followed by a phoneme-to-viseme converter. For text input, a speech synthesizer generates both the audio (speech) and the sequence of visemes. The 3D face animations, driven by the viseme sequence, must run in a web browser to enable the development of rich, interactive web applications.
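The phoneme-to-viseme conversion step can be sketched as a many-to-one lookup, since several phonemes share the same visible mouth shape. The sketch below is illustrative only: the phoneme symbols, viseme class names, and the mapping itself are hypothetical placeholders, not the table used by Project AVATAR.

```python
# Hypothetical phoneme-to-viseme table: a viseme groups phonemes that
# look alike on the lips, so the mapping is many-to-one.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar",
    "a": "open_vowel",
    "o": "rounded_vowel", "u": "rounded_vowel",
}

def phonemes_to_visemes(phonemes):
    """Map a recognized phoneme sequence to a viseme sequence,
    collapsing consecutive duplicates so the animation does not
    re-trigger the same mouth shape."""
    visemes = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")  # unknown -> rest pose
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["b", "a", "t", "u"]))
# ['bilabial', 'open_vowel', 'alveolar', 'rounded_vowel']
```

In a real system the viseme sequence would also carry timing information from the recognizer or synthesizer, so each mouth shape can be aligned with the audio during animation.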
The project is funded by QREN, with IT participating as a subcontractor.
|Start Date: 01-10-2009|
|End Date: 01-10-2011|
|Team: Luis Alberto da Silva Cruz, Fernando Manuel Santos Perdigao|
|Groups: Multimedia Signal Processing – Co|
|Local Coordinator: Luis António Serralva Vieira de Sá|
|Links: Internal Page|