Creating and sharing knowledge for telecommunications

IT researchers' 3D facial animation technology distinguished at Web3D 2018

on 17-07-2018

Verónica Orvalho and Ozan Cetinaslan, both researchers from IT in Porto, were distinguished with the 2nd Best Paper Award at the 23rd International ACM Conference on 3D Web Technology (Web3D 2018), held in Poznań, Poland, on 20-22 June. With attendees from all over the world, Web3D is a well-known and prestigious conference in the field of computer graphics, with an acceptance rate of around 40%.

Computer-generated characters are ubiquitous in movies, cartoons, advertisements, computer games, medical simulators, and beyond. Facial animation plays a critical role in allowing a digital character to carry emotions and personality. In common practice, rig-based methods are used to create content for facial animation. However, rig-based animation is a challenging task due to the complexity of authoring the rig to create convincing and meaningful facial poses for each key-frame of the animation sequence. Moreover, popular state-of-the-art animation and modeling software packages expose rig structures through external interfaces that are not practical for users.

In their paper “Direct Manipulation of Blendshapes Using a Sketch-Based Interface”, Verónica and Ozan built a framework that allows artists and animators to create their animation sequences by simply sketching on the 3D facial model. “Besides, we optimized the underlying mathematical algorithm for stable and intuitive end results. We tested our framework on many different facial models and obtained the desired key-frame facial poses for animation in a very short time. Except for the most extreme cases, we did not observe any erroneous or singular poses that could surprise animators or digital artists”, explained Ozan Cetinaslan. In building the framework, particular attention was given to simplicity and clarity, making it suitable not just for experienced users but also for novices.
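While the paper's exact formulation is not reproduced here, direct manipulation of blendshapes is commonly posed as a regularized least-squares problem: the user pins a few vertices of the face to target positions, and the system solves for the blendshape weights that best satisfy those constraints while staying close to the current pose. The sketch below is a minimal NumPy illustration of that general idea, with toy random data and a hypothetical regularization weight `alpha`; it is not the authors' algorithm.

```python
import numpy as np

# Toy setup (all data hypothetical): 4 vertices (12 coordinates), 3 blendshapes.
rng = np.random.default_rng(0)
neutral = rng.normal(size=12)      # neutral face, flattened (x, y, z per vertex)
B = rng.normal(size=(12, 3))       # delta blendshape basis (one column per shape)
w0 = np.zeros(3)                   # current blendshape weights

# The artist "pins" vertex 0 (coordinates 0..2) to a new target position.
pinned = np.arange(3)
target = neutral[pinned] + B[pinned] @ np.array([0.7, 0.2, 0.0])

# Regularized least squares: satisfy the pinned-vertex constraint while
# keeping the weights close to their current values, for stability.
alpha = 0.1
A = np.vstack([B[pinned], np.sqrt(alpha) * np.eye(3)])
b = np.concatenate([target - neutral[pinned], np.sqrt(alpha) * w0])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

face = neutral + B @ w             # updated facial pose from the solved weights
```

The regularization term is what keeps interactive edits stable: without it, an under- or over-constrained solve can produce the kind of extreme, singular poses the authors mention.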

Deconstructing the heartbeat to improve automatic cardiac auscultation

on 27-03-2018

A team of IT researchers working on project SmartHeart aims to develop and leverage novel deep learning architectures that can solve fundamental inverse problems in heart sound analysis. The team wants to explore promising research avenues in the application of powerful computational tools that are expected to boost the performance and robustness of current signal processing approaches in automatic cardiac auscultation, with relevant impact on society.

Heart sounds are difficult for a human listener to identify and analyze, with some studies showing that only around 20% of medical interns can effectively perform cardiac auscultation. This motivates exploring the powerful combination of electronic stethoscopes and portable computer technology to create usable point-of-care Computer-Assisted Decision (CAD) systems for auscultation. The practical implementation of CAD systems for auscultation encounters some key challenges related to the processing and analysis of the heart sound signal, i.e., the phonocardiogram (PCG), for various reasons: the presence of different kinds of noise due to uncontrolled auscultation conditions in real environments, the non-stationarity of the signal features, inter- and intra-patient variability of the PCG characteristics, etc. In fact, a relevant part of the information contained in the PCG can only be unlocked if the different components of the heart sound signal are reliably detected and separated.

The inverse problem of separating the components of a heart sound recorded with a single microphone turns out to be extremely challenging, due to the large time-frequency overlap of some components, as well as their similar morphological signatures. More sophisticated processing algorithms for solving inverse problems in heart sound analysis are therefore in order. Recent results have shown that deep learning architectures can be a valuable tool for solving such inverse problems.
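To see why time-frequency overlap is the crux of the problem, consider the classical baseline of short-time Fourier transform (STFT) masking: components that occupy distinct frequency bands can be separated with a simple mask, but components that overlap in both time and frequency cannot, which is where learned models come in. The sketch below is a toy illustration only, using a synthetic signal and an arbitrary 100 Hz cutoff (both assumptions of this example, not the SmartHeart method): it separates a low-frequency burst train, loosely evoking S1/S2 sounds, from a higher-frequency murmur-like tone.

```python
import numpy as np
from scipy import signal

fs = 2000                                   # sampling rate in Hz (toy choice)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "heart sound": low-frequency windowed bursts plus a
# higher-frequency murmur-like tone (illustrative, not real PCG morphology).
bursts = np.zeros_like(t)
for onset in (0.1, 0.45, 0.9, 1.25, 1.7):
    idx = (t >= onset) & (t < onset + 0.06)
    bursts[idx] = np.sin(2 * np.pi * 40 * t[idx]) * np.hanning(idx.sum())
murmur = 0.3 * np.sin(2 * np.pi * 250 * t)
mix = bursts + murmur

# STFT of the single-channel mixture, then a crude binary frequency mask:
# bins below 100 Hz form one source estimate, the rest form the other.
f, tau, Z = signal.stft(mix, fs=fs, nperseg=256)
mask = (f < 100)[:, None]
_, low = signal.istft(Z * mask, fs=fs, nperseg=256)
_, high = signal.istft(Z * (~mask), fs=fs, nperseg=256)
```

This works only because the two synthetic components were built to occupy disjoint bands; real PCG components overlap in time and frequency, which is precisely why fixed masks fail and data-driven separation is an attractive alternative.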

The most crucial hardware requirement for the simulations and experiments is a powerful GPU. Recently, the project received important backing from NVIDIA Corporation, which donated an NVIDIA GeForce Titan Xp. Mounted on a desktop workstation, the Titan Xp GPU is an ideal setup for the SmartHeart team's experiments.

SmartHeart joins three of IT's research groups, which have independently created technology and accumulated knowledge in the field of cardiac sensing and signal processing. The project builds upon original work partially developed by the team within projects such as “DigiScope”, “HeartSafe”, “ICT for Future Health”, “HeartBIT” and “BITalino”.