This is a joint project of IT and Critical Software that aims to investigate new coding and processing methods for panoramic visual information captured with a 360-degree field of view (FoV) and ultra-high definition, i.e., All-aRound Vision (ARoundVision). The project investigates fast computational methods to obtain high compression efficiency with low computational complexity, relying on machine learning and parallel processing models with distributed data. The concept of scalability is pushed towards the development of efficient and flexible coding algorithms that embed multiple 360-degree visual content dimensions into a single scalable stream, which remains partially decodable and/or deliverable through highly constrained channels.
The ongoing research in the ARoundVision project aims at expanding the current scalability dimensions (e.g., spatial and temporal resolution) to support new functionalities in 360-degree video, such as content scalability driven by the level of visual attention or the relevance of visual objects, along with flexible representations of different FoVs. The research results of the ARoundVision project will find application in omnidirectional video acquisition and compression for multimedia, smart surveillance, and environmental monitoring, particularly targeting systems with limited resources.
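The idea of a single scalable stream that is partially decodable across several dimensions can be illustrated with a minimal sketch. The layer structure, the dimension names (spatial, temporal, attention), and the `extract_substream` helper below are all hypothetical simplifications, not the project's actual codec design; the sketch only shows how a constrained receiver could drop enhancement layers and still keep a valid base stream.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    # Hypothetical scalability coordinates for one layer of the stream
    spatial: int        # spatial-resolution level (0 = base layer)
    temporal: int       # frame-rate (temporal) level
    attention: int      # visual-attention / object-relevance level
    payload_bytes: int  # size of this layer's coded data

def extract_substream(layers, max_spatial, max_temporal, max_attention):
    """Keep only the layers a constrained receiver can use.

    Because the stream is scalable, discarding the higher layers
    still leaves a valid, lower-quality decodable sub-stream.
    """
    return [l for l in layers
            if l.spatial <= max_spatial
            and l.temporal <= max_temporal
            and l.attention <= max_attention]

stream = [
    Layer(0, 0, 0, 2000),  # base layer: always required
    Layer(1, 0, 0, 1500),  # spatial enhancement
    Layer(0, 1, 0, 1200),  # temporal enhancement
    Layer(1, 1, 1, 1800),  # attention-driven enhancement
]

# A low-bandwidth device keeps only the base and temporal layers
sub = extract_substream(stream, max_spatial=0, max_temporal=1, max_attention=0)
sub_bytes = sum(l.payload_bytes for l in sub)
```

In this toy setup the constrained receiver retains 2 of the 4 layers (3200 bytes instead of 6500), which is the kind of partial decoding the project targets for highly constrained channels.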