Assessing Facial Expressions in Virtual Reality Environments
Assessing Facial Expressions in Virtual Reality Environments, Proc. International Conference on Computer Vision Theory and Applications (VISAPP), Rome, Italy, Vol. 0, pp. 0 - 0, February, 2015.
Digital Object Identifier:
Humans rely on facial expressions to transmit information, such as mood and intentions, that is usually not conveyed through verbal communication channels. Recent advances in consumer-level Virtual Reality (VR) (Oculus VR 2014) have shifted the way we interact with each other and with digital media. Today, we can enter a virtual environment and communicate through a 3D character. Hence, to reproduce users' facial expressions in VR scenarios, we need on-the-fly animation of the embodied 3D characters. However,
current facial animation approaches based on Motion Capture (MoCap) fail under the persistent partial occlusions
produced by VR headsets. The only available solution for this occlusion problem is not suitable for consumer-level applications, as it depends on complex hardware and calibrations. In this work, we propose consumer-level methods for facial MoCap in VR environments. We start by deploying an occlusion-support method for generic facial MoCap systems. Then, we extract facial features to train Random Forest models that accurately estimate emotions and movements in occluded facial regions. Through our novel
methods, MoCap approaches are able to track non-occluded facial movements and estimate movements in occluded regions, without additional hardware or tedious calibrations. We deliver and validate solutions to facilitate face-to-face communication through facial expressions in VR environments.
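The abstract describes estimating movements in HMD-occluded facial regions from features extracted in the visible regions using Random Forests. A minimal sketch of that idea is shown below; the data, feature dimensions, and train/test split are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: predicting occluded upper-face movements from visible
# lower-face features with a Random Forest regressor.
# All data here is synthetic; dimensions are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Assumed toy data: 500 frames, 8 visible lower-face features
# (e.g. mouth-corner and jaw displacements) predicting 4 occluded
# upper-face movements (e.g. brow raise/lower amplitudes).
n_frames, n_visible, n_occluded = 500, 8, 4
X = rng.normal(size=(n_frames, n_visible))
W = rng.normal(size=(n_visible, n_occluded))  # hidden mapping for the toy data
y = X @ W + 0.1 * rng.normal(size=(n_frames, n_occluded))

# Train on the first 400 frames, estimate the remaining 100.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X[:400], y[:400])

pred = forest.predict(X[400:])  # per-frame estimates for occluded movements
print(pred.shape)
```

In a real system, the input features would come from the MoCap-tracked non-occluded regions, and the targets would be movements recorded (without the headset) during a training phase.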