Nonvolatile Processors

Overview

In the era of the Internet of Things (IoT), the computing performance demanded by applications such as vision and cognitive tasks must be matched to the available energy supply. In this context, many computing architectures have been proposed to provide significant computing power over a wide range of supply voltages. Techniques include highly energy-efficient cores, minimization of data transfers, and near-threshold and sub-threshold operation of transistors to reach the minimal energy per operation.

While efficient, these solutions still rely on conventional volatile memory technologies, and the energy required to retain stored information is no longer compatible with the needs of the deep IoT.

Emerging nonvolatile memories make it possible to build highly energy-efficient computing architectures that operate over an extremely wide voltage range, i.e., from complete turn-off to the nominal voltage, as well as in near-Vt and sub-Vt conditions.

Bringing nonvolatility to such systems enables instant-on, normally-off operation with multi-context switching, which can tremendously reduce the power footprint of the system.
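To make the normally-off idea concrete, the minimal sketch below (in C) shows the checkpoint/restore pattern such operation relies on: volatile state is copied into nonvolatile memory before power is removed and recovered on wake-up, so execution resumes instead of cold-booting. The memory-mapped region NVM_BASE, the checkpoint_t layout, and the helper functions are hypothetical names for illustration only, not the interface of any specific platform; a nonvolatile processor would typically perform this in hardware.

```c
/* Sketch of instant-on, normally-off operation: volatile state is
 * checkpointed into nonvolatile memory so that power can be cut at
 * (almost) any time and execution resumes on the next wake-up.
 * NVM_BASE, checkpoint_t, and the helpers are assumptions for this
 * illustration, not a real platform API. */
#include <stdint.h>
#include <string.h>

#define NVM_BASE ((volatile uint8_t *)0x40000000u) /* assumed memory-mapped NVM */
#define MAGIC    0xC0FFEEu

typedef struct {
    uint32_t magic;      /* marks a valid checkpoint in NVM */
    uint32_t context_id; /* which of several task contexts was saved */
    uint32_t progress;   /* application state, e.g., a loop index */
} checkpoint_t;

static void checkpoint_save(const checkpoint_t *c) {
    memcpy((void *)NVM_BASE, c, sizeof *c); /* state survives power-off */
}

static int checkpoint_restore(checkpoint_t *c) {
    memcpy(c, (const void *)NVM_BASE, sizeof *c);
    return c->magic == MAGIC; /* nonzero: resume; zero: cold start */
}

int main(void) {
    checkpoint_t c;
    if (!checkpoint_restore(&c)) {          /* first boot: initialize */
        c = (checkpoint_t){MAGIC, 0, 0};
    }
    for (; c.progress < 1000; c.progress++) {
        /* ... one unit of work ... */
        checkpoint_save(&c);                /* safe to lose power here */
    }
    return 0;
}
```

Checkpointing after every unit of work keeps recovery from a power loss cheap at the cost of extra nonvolatile writes; real designs tune this granularity, and multi-context switching generalizes the pattern by keeping several such checkpoints.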

We focus on a holistic approach, augmenting recent efforts toward high-performance, ultra-low-power computing with nonvolatile resistive memories.

Our research requires strong interaction among design, architecture, software, and devices.

This research includes collaborations with Prof. Pierre-Emmanuel Gaillardon (University of Utah), Prof. Pedram Khalili (Northwestern University), and Prof. Josiah Hester (Georgia Tech) and is funded by the United States – Israel Binational Science Foundation and by the National Science Foundation.

Selected Papers

[1] A. Eliahu, R. Ronen, P.-E. Gaillardon, and S. Kvatinsky, “multiPULPly: A Multiplication Engine for Accelerating Neural Networks on Ultra-Low-Power Architectures”, ACM Journal on Emerging Technologies in Computing Systems, Vol. 1, No. 1, Article 1, January 2020

[2] E. Giacomin, T. Greenberg-Toledo, S. Kvatinsky, and P.-E. Gaillardon, “A Robust Digital RRAM-Based Convolutional Block for Low-Power Image Processing and Learning Applications”, IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 66, No. 2, pp. 643-654, February 2019

[3] J. Vieira, E. Giacomin, Y. Qureshi, M. Zapater, X. Tang, S. Kvatinsky, D. Atienza, and P.-E. Gaillardon, “A Product Engine for Energy-Efficient Execution of Binary Neural Networks Using Resistive Memories”, Proceedings of the IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), October 2019

[4] T. Greenberg-Toledo, B. Perach, I. Hubara, D. Soudry, and S. Kvatinsky, “Training of Quantized Deep Neural Networks Using a Magnetic Tunnel Junction-Based Synapse”, Semiconductor Science and Technology, Vol. 36, No. 11, October 2021

[5] W. Wang, B. Hoffer, T. Greenberg-Toledo, Y. Li, E. Herbelin, R. Ronen, X. Xu, Y. Zhao, J. Yang, and S. Kvatinsky, “Efficient Training of the Memristive Deep Belief Net Immune to Non-Idealities of the Synaptic Devices”, Advanced Intelligent Systems, Vol. 4, No. 5, Article 2100249, May 2022

[6] W. Wang, L. Danial, Y. Li, E. Herbelin, E. Pikhay, Y. Roizin, B. Hoffer, Z. Wang, and S. Kvatinsky, “A Memristive Deep Belief Neural Network Based on Silicon Synapses”, Nature Electronics, Vol. 5, pp. 870-880, December 2022