Tzofnat Greenberg presented “Accelerating DNN Applications with Emerging Memory Technologies” at IMVC 2019.
Deep Neural Networks (DNNs) are typically executed on commodity hardware, mostly GPU and FPGA platforms, or on dedicated accelerators such as Google’s TPU. However, because DNN algorithms are both compute- and memory-intensive, conventional von Neumann architectures, where memory and computation are separated, impose significant limitations on performance and energy efficiency. Emerging memory technologies, known as memristors, enable in-place, highly parallel, and energy-efficient analog multiply-accumulate operations. This approach is known as processing-in-memory (PIM). This talk will present the potential and opportunities of integrating memristors into DNN accelerator design.
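As a rough illustration of the analog multiply-accumulate idea, the sketch below numerically models an idealized memristor crossbar: input voltages drive the rows, each cell's conductance encodes a weight, and by Ohm's and Kirchhoff's laws the current summed on each column is a dot product. This is a minimal, hypothetical model (the function name, shapes, and values are illustrative, not from the talk), ignoring device non-idealities such as noise, wire resistance, and limited conductance precision.

```python
import numpy as np

def crossbar_mac(G, v):
    """Ideal memristor crossbar: column currents for a conductance
    matrix G (rows x cols) driven by row voltages v.

    By Ohm's law each cell contributes current G[i, j] * v[i]; by
    Kirchhoff's current law column j sums them:
        i_j = sum_i G[i, j] * v[i]
    i.e. a matrix-vector product computed in place, in the memory array.
    """
    G = np.asarray(G, dtype=float)
    v = np.asarray(v, dtype=float)
    return G.T @ v

# Example: a 3x2 crossbar holding a small weight matrix (conductances
# in arbitrary units), driven by three input voltages.
G = np.array([[1.0, 0.5],
              [0.2, 0.3],
              [0.4, 0.1]])
v = np.array([1.0, 2.0, 3.0])
print(crossbar_mac(G, v))  # one output current per column
```

In a real device the digital inputs would be converted by DACs, and the analog column currents read back through ADCs; the matrix-vector product itself happens in a single analog step inside the array, which is the source of the parallelism and energy efficiency mentioned above.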