
HAnDLE: Hardware Accelerated Deep Learning Framework

on 24-03-2019

Recent advances in Deep Neural Networks (DNNs) have enabled important breakthroughs in many domains (medical diagnosis, autonomous driving, natural language processing), and DNNs are seen as the likely foundation of the next generation of intelligent systems. These advances have been achieved by relying on deeper and sparser networks, by integrating new layer types and novel activation functions, and through new training methodologies. However, DNNs are characterized by long execution times, a problem that is expected to worsen as networks become more complex and less amenable to GPU acceleration. To overcome this issue, the HAnDLE research team is developing a new scalable hardware accelerator, based on FPGA technology, to support the execution of state-of-the-art DNNs. It aims to mitigate the associated computational complexity and memory bandwidth requirements; to support new, complex DNN layers and activation functions; and to handle sparse and non-uniform neural networks. The envisaged solutions will be compared with state-of-the-art off-the-shelf alternatives in terms of performance and energy efficiency.
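To illustrate why sparse networks are harder to accelerate on GPUs, the following sketch (hypothetical, not HAnDLE code; layer sizes and density are assumed for illustration) contrasts a sparse fully-connected layer in compressed sparse row (CSR) form with its dense equivalent:

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)

# Assumed example: a 90%-sparse weight matrix for a 256-input, 128-output layer.
W = sparse_random(128, 256, density=0.1, format="csr", random_state=0)
x = rng.standard_normal(256)

def relu(v):
    return np.maximum(v, 0.0)

# CSR mat-vec touches only the non-zero weights, but the column indices
# are irregular -- unlike the regular, coalesced access of a dense GEMV,
# which is what GPUs are optimized for.
y_sparse = relu(W @ x)

# Dense reference: same result, but reads all 128 * 256 weight entries.
y_dense = relu(W.toarray() @ x)

assert np.allclose(y_sparse, y_dense)
```

The sparse version performs roughly a tenth of the multiply-accumulates, yet its indirect, data-dependent memory accesses often negate that advantage on GPUs; a custom FPGA datapath can exploit the sparsity directly.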

The HAnDLE project is funded by FCT and coordinated by INESC-ID. Its research team includes researchers and professors from Instituto de Telecomunicações/UC (Gabriel Falcão), INESC-ID/IST (Nuno Roma, Leonel Sousa, Pedro Tomás), and IT Lisbon (André Martins, Luís Alexandre), as well as several Portuguese corporations.

Photo: Google’s TPU processor for deep learning