Research: Energy-Efficient Architectures

Non-Volatile Processor

In the era of the Internet of Things (IoT), the computing performance demanded by applications (vision, cognitive tasks, etc.) has to be matched to the available energy supply. In this context, many computing architectures have been proposed to provide significant computing power over a large range of supply voltages. Techniques include highly energy-efficient cores, minimization of data transfers, and near-threshold and sub-threshold operation of transistors to reach the minimum energy per operation.

While efficient, these solutions still rely on conventional volatile memory technologies, and the energy required to retain the stored information is no longer compatible with the needs of the deep IoT.

Emerging non-volatile memories make it possible to build highly energy-efficient computing architectures that operate over an extremely wide voltage range, i.e., from complete turn-off to nominal voltage, as well as under near-Vt and sub-Vt conditions.

Bringing non-volatility to such systems enables instant-on/normally-off operation with multi-context switching, which can tremendously reduce the power consumption of such systems.

In this project, we propose a holistic approach that augments recent efforts toward high-performance ultra-low-power computing with non-volatile resistive memories. Several innovations are proposed, both from a memory perspective (design of multi-context non-volatile flip-flops (NVFFs) and of multi-context multi-bank registers) and from a methodology perspective (device/circuit co-optimization of the energy and area of the proposed blocks). Achieving this goal requires strong interaction between design, architecture, software, and devices.
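To make the intended mechanism concrete, below is a minimal behavioral sketch in Python of a multi-context non-volatile flip-flop. It is an idealized illustration under our own naming (the class MultiContextNVFF and its methods are hypothetical, not the project's actual circuit design): the volatile state is shadowed into one of several non-volatile contexts before power-down and restored instantly on power-up.

```python
# Behavioral sketch (illustrative only): a multi-context non-volatile
# flip-flop that checkpoints its volatile state into one of several
# non-volatile (e.g., resistive) storage contexts before power loss
# and restores it instantly on power-up.

class MultiContextNVFF:
    def __init__(self, num_contexts=4):
        self.q = 0                    # volatile output (lost on power-off)
        self.nv = [0] * num_contexts  # non-volatile contexts (retained)
        self.powered = True

    def clock(self, d):
        """Normal volatile operation: capture the input on a clock edge."""
        assert self.powered, "flip-flop is powered off"
        self.q = d

    def backup(self, ctx):
        """Shadow the volatile state into non-volatile context `ctx`."""
        self.nv[ctx] = self.q

    def power_off(self):
        """Complete turn-off: volatile state is lost, NV contexts survive."""
        self.q = 0
        self.powered = False

    def restore(self, ctx):
        """Instant-on: recover the state saved in context `ctx`."""
        self.powered = True
        self.q = self.nv[ctx]


# Example: checkpoint two tasks, power off completely, resume task 0.
ff = MultiContextNVFF()
ff.clock(1)        # task 0 state
ff.backup(ctx=0)   # checkpoint task 0
ff.clock(0)        # task 1 state
ff.backup(ctx=1)   # checkpoint task 1
ff.power_off()     # normally-off period: zero standby power
ff.restore(ctx=0)  # instant-on, back in task 0's context
assert ff.q == 1
```

The multiple contexts are what allow the system to switch between checkpointed tasks rather than merely resuming the last one, which is the multi-context switching referred to above.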

This research is done in collaboration with Prof. Pierre-Emmanuel Gaillardon (University of Utah) and is funded by the United States – Israel Binational Science Foundation.

Deep Neural Network Accelerators

In recent years, the range and number of solutions based on Deep Neural Networks (DNNs) have been growing. Nowadays, the most sophisticated healthcare, finance, and security algorithms are based on DNNs. These computational models are inspired by the way the brain solves problems, and particularly by the way it learns. Working with these models involves two stages: training and inference.
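To make the two stages concrete, here is a minimal NumPy sketch for a single-layer network; it is purely illustrative, and the learning rate, activation, and loss are arbitrary choices rather than the models targeted by this research.

```python
# Minimal sketch of the two DNN stages for a single-layer network.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1    # synaptic weights
x = rng.standard_normal(4)               # input sample
t = np.array([0.0, 1.0, 0.0])            # target output

# Training stage: repeated forward + backward passes update the weights.
for _ in range(100):
    y = np.tanh(x @ W)                   # forward pass
    err = y - t                          # error against the target
    grad = np.outer(x, err * (1 - y**2)) # backprop through tanh
    W -= 0.1 * grad                      # gradient-descent update

# Inference stage: a single forward pass with the trained, fixed weights.
y = np.tanh(x @ W)
```

Training dominates the cost because it repeats the forward pass many times and adds a backward pass and weight update on every iteration, which is why it is the focus of the hardware bottleneck discussed next.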

The size and complexity of the algorithms involved in these models, especially at the training stage, make them extremely demanding in computation and memory accesses, with immense energy consumption. Thus, the performance potential of DNNs is limited by conventional hardware platforms such as CPUs and GPUs.

The goal of this research is to design a hardware accelerator that efficiently executes DNN algorithms. Using the memristor to implement one of the basic components of a DNN, the synapse, enables near-memory computation in an analog manner. Initial analysis shows that such an accelerator can reduce the training stage from weeks to minutes and improve power consumption by orders of magnitude.
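The physical principle behind the memristive synapse is that a crossbar of resistive cells computes a vector-matrix multiplication in the analog domain: by Ohm's law and Kirchhoff's current law, input voltages applied to the rows yield column currents proportional to the dot products of the voltages with the cell conductances. The following is an idealized sketch of this operation, ignoring wire resistance, device variability, and noise; all values are arbitrary.

```python
# Idealized sketch of an analog vector-matrix multiply on a memristor
# crossbar: each cell stores a synaptic weight as a conductance G[i, j];
# applying row voltages V yields column currents I[j] = sum_i V[i] * G[i, j]
# (Ohm's law per cell, Kirchhoff's current law per column) in one step.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # cell conductances (siemens)
V = rng.uniform(0.0, 0.2, size=128)          # row input voltages (volts)

I = V @ G   # column currents: the whole matrix-vector product in parallel

# A digital platform would need 128 * 64 multiply-accumulates and as many
# weight fetches from memory; the crossbar computes in place, which is
# the source of the projected speed and energy gains.
```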
