Long Short-Term Memory with Gate and State Level Fusion for Light Field-Based Face Recognition

Moghaddam, A. ; Etemad, A. ; Pereira, F. ; Correia, P.L.

IEEE Transactions on Information Forensics and Security Vol. -, Nº -, pp. 1 - 1, July, 2020.

ISSN (print): 1556-6013
Journal Impact Factor: 2,230 (in 2008)

Digital Object Identifier: 10.1109/TIFS.2020.3036242

Abstract
Long Short-Term Memory (LSTM) is a prominent recurrent neural network for extracting dependencies from sequential data such as time-series and multi-view data, and has achieved impressive results in different visual recognition tasks. A conventional LSTM network, hereafter referred to simply as an LSTM network, can learn a model from one input sequence and subsequently extract information from it. However, when two or more dependent sequences are acquired simultaneously, LSTM networks can only process them consecutively, failing to exploit the information carried by their mutual dependencies. In this context, this paper proposes two novel LSTM cell architectures that jointly learn from multiple simultaneously acquired sequences, aiming to create richer and more effective models for recognition tasks. The efficacy of the novel LSTM cell architectures is assessed by integrating them into deep learning-based methods for face recognition with multi-view, light field images. The new cell architectures jointly learn the horizontal and vertical scene parallaxes available in a light field image, capturing richer spatio-angular information from both directions. A comprehensive evaluation on the IST-EURECOM LFFD dataset, using three challenging evaluation protocols, shows the advantage of the novel LSTM cell architectures for face recognition over state-of-the-art light field-based methods. These results highlight the added value of the novel cell architectures when learning from correlated input sequences.
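The abstract describes cells that fuse two synchronized view sequences (horizontal and vertical parallax) inside a single LSTM step. The paper's exact fusion equations are not given in the abstract, so the sketch below is only a minimal illustration of the gate-level fusion idea: every gate is driven by both streams and a shared hidden state. All names (`GateFusionLSTMCell`, `step`) and the concatenation-based fusion are assumptions for illustration, not the authors' method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GateFusionLSTMCell:
    """Illustrative LSTM cell fusing two synchronized input streams
    (e.g., horizontal and vertical light field parallax sequences) at
    the gate level. Each gate sees both inputs plus the shared hidden
    state (an assumption for illustration; the paper's actual fusion
    rule is not stated in the abstract)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # Gates are computed from [x_horizontal, x_vertical, h],
        # hence the 2*input_size + hidden_size input width.
        in_dim = 2 * input_size + hidden_size
        self.W = rng.standard_normal((4 * hidden_size, in_dim)) * 0.1
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x_h, x_v, h, c):
        # One fused pre-activation for input/forget/output gates and
        # the candidate cell update, as in a standard LSTM cell.
        z = self.W @ np.concatenate([x_h, x_v, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)   # state update
        h_new = o * np.tanh(c_new)       # bounded output
        return h_new, c_new

# Usage: jointly process the two parallax sequences of one light field.
cell = GateFusionLSTMCell(input_size=8, hidden_size=16)
h, c = np.zeros(16), np.zeros(16)
horiz = np.random.default_rng(1).standard_normal((5, 8))  # 5 horizontal views
vert = np.random.default_rng(2).standard_normal((5, 8))   # 5 vertical views
for x_h, x_v in zip(horiz, vert):
    h, c = cell.step(x_h, x_v, h, c)
```

A consecutive-processing baseline would instead run one standard LSTM over the horizontal views and then over the vertical ones; the point of the fused cell is that each update sees both directions at once, so cross-sequence dependencies can shape the gates directly.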