In recent years, the range and number of solutions based on Deep Neural Networks (DNNs) have been increasing. These computational algorithms are inspired by how the brain solves problems, particularly how it learns. Their complexity grows with the size of the network, especially at the training stage, demanding intensive computation and memory access and immense energy consumption. Thus, the performance potential of DNN algorithms is limited by conventional hardware platforms such as CPUs and GPUs.
Using emerging non-volatile memory devices, such as memristors, to implement the synapses, one of the essential components of a DNN, enables DNN computation in an analog manner [1]. Initial analyses show that such accelerators can reduce the training stage from weeks to minutes and improve power consumption by orders of magnitude [2]. However, the non-idealities of memristor devices, such as noise, faults, and variations, may degrade performance and hinder the deployment of DNN algorithms on analog memristive computing units [3]. Moreover, because memristor devices are non-volatile, the stored DNN synapses are exposed to theft attacks [4]: once in possession of the synaptic weights, an adversary can reverse-engineer the well-trained DNN models stored in the devices.
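To illustrate why these non-idealities matter, the following minimal sketch (not taken from the cited works) models device-to-device variation as a multiplicative log-normal perturbation of the stored weights and measures how often a toy single-layer classifier still agrees with its ideal counterpart. The perturbation model, layer dimensions, and sigma value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_weights(w, sigma=0.3):
    # Log-normal multiplicative spread: a common simulation model for
    # memristor conductance variation (sigma is an assumed parameter).
    return w * rng.lognormal(mean=0.0, sigma=sigma, size=w.shape)

# Toy single-layer classifier: compare ideal weights to perturbed ones.
w_ideal = rng.standard_normal((16, 4))      # 16 inputs mapped to 4 classes
x = rng.standard_normal((1000, 16))         # random input batch
labels = np.argmax(x @ w_ideal, axis=1)     # predictions of the ideal device

w_noisy = perturb_weights(w_ideal)
agreement = np.mean(np.argmax(x @ w_noisy, axis=1) == labels)
print(f"agreement with the ideal device: {agreement:.1%}")
```

Increasing sigma in this toy model lowers the agreement rate, which is the kind of accuracy degradation that the mitigation techniques described next aim to prevent.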
This research aims to design hardware accelerators that execute DNN algorithms efficiently, robustly, fault-tolerantly, and securely. At ASIC2, we investigate techniques to mitigate the performance degradation caused by the non-idealities of memristor devices. We also explore novel methods to protect the DNN synapses stored on memristor devices without incurring significant extra power consumption or system latency.
Analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) are ubiquitous components that exist in every data-driven acquisition system and mixed-signal circuit. Data converters are the link between the digital domain of signal processing and the real world of analog transducers. This research uses brain-inspired approaches to design smart ADCs and DACs that can be reconfigured in real time for general-purpose applications [5].
The research combines emerging memory devices (memristors) with conventional CMOS technology and draws on artificial neural network architectures to break through the speed-power-accuracy tradeoff of modern data converters beyond Moore's law.
Novel ADC/DAC configurations calibrated using artificial neural network techniques are designed in the ASIC2 lab and fabricated in collaboration with Tower Semiconductor. The proposed technique is demonstrated on an adaptive, self-calibrating ADC/DAC that can be configured on-chip in real time. These circuits use online supervised machine learning algorithms that fit multiple full-scale voltage ranges and sampling frequencies through iterative synaptic adjustments, while inherently providing mismatch calibration and noise tolerance. The findings constitute a promising milestone toward scalable, data-driven converters based on deep neural networks.
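As a rough illustration of such online supervised calibration, the sketch below trains a hypothetical 4-bit binary-weighted DAC model with a least-mean-squares (LMS) update: synaptic weights that start with fabrication mismatch are iteratively adjusted toward their ideal binary values. The bit width, mismatch level, and learning rate are assumptions chosen for illustration, not parameters of the fabricated design.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                            # DAC resolution in bits (assumed)
ideal = 2.0 ** np.arange(N - 1, -1, -1)          # ideal binary weights: 8, 4, 2, 1
w = ideal * (1 + 0.2 * rng.standard_normal(N))   # weights with fabrication mismatch
eta = 0.01                                       # LMS learning rate (assumed)

for step in range(5000):
    code = rng.integers(0, 2, size=N)    # random digital input word
    target = code @ ideal                # desired analog output level
    out = code @ w                       # actual output of the mismatched DAC
    w += eta * (target - out) * code     # iterative synaptic adjustment (LMS)

print("calibrated weights:", np.round(w, 3))  # converges toward [8, 4, 2, 1]
```

In the same spirit, retraining against a reference for a different full-scale range or sampling frequency drives the weights to a correspondingly rescaled solution, which is what allows one converter to fit multiple operating points.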
[1] W. Wang, L. Danial, Y. Li, E. Herbelin, E. Pikhay, Y. Roizin, B. Hoffer, Z. Wang, and S. Kvatinsky, “A Memristive Deep Belief Neural Network Based on Silicon Synapses”, Nature Electronics, December 2022
[2] T. Greenberg-Toledo, B. Perach, I. Hubara, D. Soudry, and S. Kvatinsky, “Training of Quantized Deep Neural Networks using a Magnetic Tunnel Junction-Based Synapse”, Semiconductor Science and Technology, Vol. 36, No. 11, October 2021
[3] E. Giacomin, T. Greenberg-Toledo, S. Kvatinsky, and P.-E. Gaillardon, “A Robust Digital RRAM-based Convolutional Block for Low-Power Image Processing and Learning Applications”, IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 66, No. 2, pp. 643-654, February 2019
[4] M. Zou, J. Zhou, X. Cui, W. Wang, and S. Kvatinsky, “Enhancing Security of Memristor Computing System Through Secure Weight Mapping”, Proceedings of the IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 182-187, July 2022
[5] L. Danial, K. Sharma, and S. Kvatinsky, “A Pipelined Memristive Neural Network Analog-to-Digital Converter”, Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5, October 2020