WaveE2VID: Frequency-Aware Event-Based Video Reconstruction

Maqsood, R.; Nunes, P.; Conti, C.; Soares, L. D.

WaveE2VID: Frequency-Aware Event-Based Video Reconstruction, Proc. IEEE International Conference on Image Processing (ICIP), Anchorage, United States, pp. 570-575, September 2025.

Digital Object Identifier: 10.1109/ICIP55913.2025.11084548


Abstract
Event cameras, which detect local brightness changes instead of capturing full-frame images, offer high temporal resolution and low latency. Although existing methods based on convolutional neural networks (CNNs) and transformers have achieved impressive results for event-based video reconstruction, they incur high computational costs due to their dense linear operations: they often require 10M-30M parameters and inference times of 30-110 ms per forward pass at a resolution of 640 × 480 on modern GPUs. Furthermore, to reduce these costs, such methods apply CNN-based downsampling, which sacrifices fine detail. To address these challenges, we propose WaveE2VID, an efficient hybrid model that combines the frequency-domain analysis of the wavelet transform with the spatio-temporal context modeling of a deep convolutional recurrent network. Our model achieves 50% faster inference and lower GPU memory usage than CNN- and transformer-based methods, while maintaining reconstruction quality on par with state-of-the-art approaches across benchmark datasets.
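The paper's implementation is not reproduced here; the following is a minimal PyTorch sketch of the general recipe the abstract describes: a fixed, invertible wavelet transform in place of learned CNN downsampling, a convolutional recurrent unit for spatio-temporal context, and the inverse transform to recover the full-resolution frame. The single-level Haar DWT, the hand-rolled ConvGRU cell, the 5-bin event voxel-grid input, and all names (haar_dwt, ConvGRUCell, WaveletRecurrentSketch) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


def haar_dwt(x):
    """Single-level 2D Haar DWT; x is (B, C, H, W) with even H and W."""
    a = x[:, :, 0::2, 0::2]  # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh


def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt; returns (B, C, 2H, 2W)."""
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    B, C, H, W = ll.shape
    out = ll.new_zeros(B, C, 2 * H, 2 * W)
    out[:, :, 0::2, 0::2] = a
    out[:, :, 0::2, 1::2] = b
    out[:, :, 1::2, 0::2] = c
    out[:, :, 1::2, 1::2] = d
    return out


class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU carrying state across event windows."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        q = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * q


class WaveletRecurrentSketch(nn.Module):
    """Hypothetical model: recur over wavelet subbands, reconstruct via inverse DWT."""

    def __init__(self, num_bins=5, hid_ch=32):
        super().__init__()
        # All four subbands of every voxel-grid channel are stacked as input.
        self.gru = ConvGRUCell(4 * num_bins, hid_ch)
        # Predict the four subbands of the intensity frame at half resolution.
        self.head = nn.Conv2d(hid_ch, 4, 3, padding=1)

    def forward(self, voxel, h=None):
        ll, lh, hl, hh = haar_dwt(voxel)  # half resolution, no learned downsampling
        h = self.gru(torch.cat([ll, lh, hl, hh], 1), h)
        sll, slh, shl, shh = self.head(h).chunk(4, 1)
        frame = torch.sigmoid(haar_idwt(sll, slh, shl, shh))
        return frame, h


model = WaveletRecurrentSketch()
voxel = torch.rand(1, 5, 480, 640)  # one 5-bin event voxel grid at 640 x 480
frame, state = model(voxel)         # frame: (1, 1, 480, 640); state carries context
```

The design choice the sketch illustrates is that the Haar transform is exactly invertible, so the resolution reduction discards no information; the high-frequency subbands explicitly carry the fine detail that strided CNN downsampling would lose, while the recurrent unit operates on quarter-size feature maps, which is where the speed and memory savings come from.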