Talking avatar for web-based interfaces
Nunes, J.; Sá, L. V.; Perdigão, F.
Talking avatar for web-based interfaces, Proc Conf. on Telecommunications - ConfTele, Lisbon, Portugal, Vol. 1, pp. 1 - 4, April, 2011.
Abstract
In this paper we present an approach for creating interactive, speaking avatar models based on standard face images. We start from a 3D human face model that can be adjusted to a particular face. To adjust the 3D model from a 2D image, a new two-step method is presented. First, a process based on Procrustes analysis is applied to find the best match for the input key points, yielding the rotation, translation and scale needed to best fit the model to the photo. Then, using the resulting model, we refine the face mesh by applying a linear transform to each vertex. For visual speech animation, we consider a total of 15 distinct mouth positions, the visemes, to accurately model the articulation of the Portuguese language. For normalization purposes, each viseme is defined relative to the generic neutral face. The animation is rendered by linear interpolation in time, given a sequence of visemes and their instants of occurrence.
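The two core computations described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a 2D similarity Procrustes fit (scale, rotation, translation minimising squared key-point error, with the standard SVD-based solution) and a linear time interpolation between two viseme vertex sets. All function names and the 2D point representation are assumptions for the sketch.

```python
import numpy as np

def procrustes_fit(model_pts, image_pts):
    """Estimate scale s, rotation R and translation t mapping model key
    points onto image key points, minimising the sum of squared
    distances (orthogonal Procrustes with similarity transform).
    Both inputs are (N, 2) arrays of corresponding points."""
    # Centre both point sets
    mu_m = model_pts.mean(axis=0)
    mu_i = image_pts.mean(axis=0)
    A = model_pts - mu_m
    B = image_pts - mu_i
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(A.T @ B)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    s = S.sum() / (A ** 2).sum()      # optimal isotropic scale
    t = mu_i - s * (R @ mu_m)         # translation aligning centroids
    return s, R, t

def interpolate_visemes(v0, v1, t0, t1, t):
    """Linearly blend two viseme vertex sets (same shape) for a time
    t in [t0, t1], as in linear time interpolation of key frames."""
    alpha = (t - t0) / (t1 - t0)
    return (1.0 - alpha) * v0 + alpha * v1
```

A fitted model point `p` would then be placed at `s * (R @ p) + t`; the per-vertex linear refinement described in the paper would be applied afterwards.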