Recent advances in Deep Neural Networks (DNNs) have produced important breakthroughs in many domains, and DNNs are seen as the likely foundation for the next generation of intelligent systems. This progress has been achieved by relying on deeper and sparser networks, by integrating new layer types and novel activation functions, and through new training methodologies. However, DNNs are characterized by long execution times, a problem that is expected to worsen as networks become more complex and less amenable to GPU acceleration. To overcome this issue, this project will propose a new scalable hardware accelerator, deployed on FPGA technology, to support state-of-the-art DNNs by: alleviating their computational complexity and memory-bandwidth bottleneck; supporting new, complex DNN layers and activation functions; and supporting sparse and non-uniform neural networks. The envisaged solutions will be compared with state-of-the-art off-the-shelf alternatives in terms of performance and energy efficiency.
|Start Date: 12-07-2018|
|End Date: 11-07-2021|
|Team: Gabriel Falcao Paiva Fernandes, Óscar Almeida Ferraz, Nuno Filipe Simões Santos Moraes Neves, Nuno José Matos Pereira, André Filipe Torres Martins, Luís Filipe Barbosa de Almeida Alexandre|
|Groups: Multimedia Signal Processing – Co, Pattern and Image Analysis – Lx, Pattern and Image Analysis – Cv|
|Partners: Babel, Fabristec, INESC-ID, Priberam|
|Local Coordinator: Gabriel Falcao Paiva Fernandes|