Alexander Ya Polyakov, National University of Science and Technology “MISiS”, Moscow
Date & time: Tuesday, June 4th, 15:00h
Location: Amphitheatre of the Instituto de Telecomunicações - Pólo de Aveiro, Building 19
Deep traps responsible for nonradiative recombination in GaN films and in green, blue, and near-UV (NUV) LEDs, both as-grown and after electron irradiation or long-term operation, will be discussed. Possibilities arising from the formation of nanopillar structures and their interaction with localized surface plasmons will be described. Deep traps in the barriers and buffers of AlGaN/GaN and InAlN/GaN HEMTs, which give rise to long gate- and drain-current transients, will be analyzed, and approaches to treating non-exponential current relaxations originating in variations of local electric field strength and local potential fluctuations will be described.
Since 2014 he has been a Professor at the Department of Semiconductor Electronics and Physics of Semiconductors of the National University of Science and Technology “MISiS” in Moscow and Head of the laboratory "Wide-Bandgap Semiconductors and Devices" at the same university. He is the author or co-author of more than 300 research papers, 2 monographs, multiple invited book chapters on III-V semiconductors, and multiple review articles. His areas of expertise include deep traps in compound semiconductors, properties of heterojunctions and quantum wells, hydrogen passivation effects, studies of radiation defects, and the interaction of III-N LEDs with localized surface plasmons.
Wilker Aziz, University of Amsterdam
Date & time: From 27-29 May, 9:00h – 12:30h (each session)
Location: Room 11.26, 11th floor, North Tower, IST
Neural networks are taking NLP by storm, yet they are mostly applied in contexts where we have complete supervision. Many real-world NLP problems require unsupervised or semi-supervised models, however, because annotated data is hard to obtain. This is where generative models shine: through the use of latent variables, they can be applied in missing-data settings. Furthermore, they can complete missing entries in partially annotated data sets.
This tutorial is about how to use neural networks inside generative models, giving us Deep Generative Models (DGMs). The training method of choice for these models is variational inference (VI). We start out by introducing VI at a basic level. From there we turn to DGMs that employ discrete and/or continuous latent variables. We justify them theoretically and give concrete advice on how to implement them. For continuous latent variables, we review the variational autoencoder and use Gaussian reparametrisation to show how to sample latent values from it. For discrete latent variables, for which no reparametrisation exists, we explain how to use the score-function (REINFORCE) gradient estimator.
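As a taste of the two estimators the tutorial covers, here is a minimal NumPy sketch. The toy objective and all function names are illustrative (not from the tutorial materials): we estimate gradients of an expected reward, once with Gaussian reparametrisation for a continuous latent variable, and once with the score-function (REINFORCE) estimator for a discrete (Bernoulli) latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective for the continuous case: f(z) = -(z - 2)^2.
def f(z):
    return -(z - 2.0) ** 2

# --- Gaussian reparametrisation (continuous latent) ---
# Write z = mu + sigma * eps with eps ~ N(0, 1); the sample is now a
# deterministic function of (mu, sigma), so the gradient passes through it.
def reparam_grad(mu, sigma, n=10_000):
    eps = rng.standard_normal(n)
    z = mu + sigma * eps
    dfdz = -2.0 * (z - 2.0)            # df/dz for the toy objective
    grad_mu = dfdz.mean()              # chain rule: dz/dmu = 1
    grad_sigma = (dfdz * eps).mean()   # chain rule: dz/dsigma = eps
    return grad_mu, grad_sigma

# --- Score-function / REINFORCE estimator (discrete latent) ---
# z ~ Bernoulli(p); no reparametrisation exists, so we use
# grad_p E[f(z)] = E[ f(z) * d log p(z) / dp ].
def score_function_grad(p, n=100_000):
    z = rng.random(n) < p
    fz = np.where(z, 1.0, 0.0)                      # toy reward f(z) = z
    dlogp = np.where(z, 1.0 / p, -1.0 / (1.0 - p))  # score of Bernoulli(p)
    return (fz * dlogp).mean()
```

For the toy settings the estimates can be checked analytically: with mu=0, sigma=1 the true gradients are 4 and -2, and for f(z) = z the true gradient with respect to p is 1. The score-function estimate is noticeably noisier, which is why the tutorial's variance-reduction discussion matters in practice.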