Acronym: ARoundVision
Main Objective: This project aims to investigate new coding and processing methods for panoramic visual information captured with a 360-degree field of view at ultra-high definition, i.e. All-Around UHD (AA-UHD) video, promoting the development of new applications in this area. The research tasks are based on AA-UHD video content acquired by a system of multiple high-definition cameras, synchronized at frame level, each responsible for capturing part of the 360-degree field of view. The AA-UHD video images are acquired as multiple fields of view and then combined with stitching algorithms to create the all-around image. The project will investigate computational methods for joining adjacent fields of view (stitching), combined with real-time compression, in order to achieve high coding efficiency with low computational complexity, based on machine learning and parallel processing models with distributed data.

Part of the research addressed in the project is the definition, development and evaluation of multi-dimensional scalable formats through efficient processing and coding algorithms, both in the non-compressed and in the compressed domain. In addition to extending the scalability dimensions currently known (e.g. spatial and temporal resolution), this research aims to support scalability dimensions that have not been explored so far, such as content scalability according to the level of user visual attention, the relevance of the visual objects that constitute events in relevant application contexts, and scalability of the field of view. The identification and coding of these different types of regions of interest, using new scalability dimensions applicable to non-compressed and compressed formats, is a significant part of the research work. The project will also investigate new attention models for AA-UHD video, based on models recently developed by the research team. Another important research focus is new computational methods to reduce coding complexity without loss of efficiency when coding the very large amount of data in AA-UHD video.

The main research outcomes will be used to develop, test and evaluate new applications for smart city contexts, where users have flexible and interactive access to 360-degree ultra-high-definition widescreen video content through mobile networks. In smart surveillance and in the monitoring of natural resources and maritime areas, ultra-high-definition video is a fundamental requirement for identifying and recognizing objects and for detecting changes in the environment with high precision.
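As a purely illustrative sketch, and not part of the project description, the Python fragment below shows one way adjacent camera views can be merged into a single panoramic frame using OpenCV's high-level Stitcher API. The four-camera setup, the file names and the choice of OpenCV are assumptions made only for this example and do not describe the project's actual acquisition or stitching pipeline.

    # Illustrative sketch only: stitch frame-synchronized views from
    # several cameras into one panoramic image with OpenCV.
    import cv2

    def stitch_views(frame_paths):
        """Combine adjacent fields of view into a single all-around frame."""
        # One frame per camera; in an AA-UHD setup these would be
        # frame-level synchronized HD captures covering 360 degrees.
        frames = [cv2.imread(p) for p in frame_paths]
        if any(f is None for f in frames):
            raise FileNotFoundError("One or more camera frames could not be read")

        # PANORAMA mode estimates per-camera alignment and blends the seams.
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(frames)
        if status != cv2.Stitcher_OK:
            raise RuntimeError(f"Stitching failed with status code {status}")
        return panorama

    if __name__ == "__main__":
        # Placeholder file names standing in for the multi-camera captures.
        pano = stitch_views(["cam0.png", "cam1.png", "cam2.png", "cam3.png"])
        cv2.imwrite("all_around_frame.png", pano)

In a real AA-UHD pipeline the stitched frame would then feed the real-time compression stage; this sketch covers only the stitching step.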
Reference: CENTRO-01-0145-FEDER-030652
Funding: FCT/POCI, PO Centro |
Start Date: 19-06-2018 |
End Date: 18-06-2022 |
Team: Pedro Antonio Amado Assunção, António Navarro Rodrigues, João Filipe Monteiro Carreira, Jose Nunes dos Santos Filipe, Sérgio Manuel Maciel de Faria |
Groups: Multimedia Signal Processing – Lr, Radio Systems – Av |
Partners: Critical Software, SA |
Local Coordinator: Pedro Antonio Amado Assunção |
|