
Recently, several different nonvolatile memory (NVM) technologies (NAND Flash, PCM, ReRAM, STT-MRAM)
have emerged as promising candidates for digital and analog in-memory computation.
Tower Jazz’s Y-Flash NVM can serve as a building block for many artificial neural network (ANN)
applications.
In this project, you will learn the FPGA environment and use it to build an emulator that functions as an
ideal Y-Flash cell and produces a multilevel output current.
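As a starting point, the ideal cell behavior the emulator must reproduce can be sketched in a few lines. The level count (16) and full-scale read current (1 µA) below are illustrative placeholders, not values from the project specification:

```python
class YFlashCellEmulator:
    """Behavioral sketch of an ideal Y-Flash cell with multilevel readout.

    Assumed (not from the project spec): 16 discrete levels and a
    full-scale read current of 1 uA, linearly spaced.
    """

    def __init__(self, n_levels=16, i_max=1e-6):
        self.n_levels = n_levels
        self.i_max = i_max
        self.level = 0  # erased state

    def program(self, level):
        # Clamp the requested level to the valid range.
        self.level = max(0, min(self.n_levels - 1, level))

    def read_current(self):
        # Ideal cell: output current is linear in the stored level.
        return self.i_max * self.level / (self.n_levels - 1)
```

An FPGA implementation would replace `read_current` with, e.g., a DAC code proportional to the stored level; the point of the sketch is only the level-to-current mapping.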

Recently, several different nonvolatile memory (NVM) technologies, such as NAND Flash, PCM,
ReRAM, and STT-MRAM, have emerged as promising candidates for digital and analog in-memory
computation. Tower’s Y-Flash NVM, fully compatible with a standard CMOS process, can serve as a
building block in many artificial neural network (ANN) applications owing to its multilevel
characteristics.
In this project, you will develop an experimental setup to write the weights of an ANN to a Y-Flash
memory array. This project is part of the NEMO project, in which we aim to implement ultra-low-power
ANN hardware accelerators.
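One way to picture the write flow is a quantization step that maps trained weights to discrete programming levels. The scheme below (per-array scaling, 16 unsigned levels; signed weights would need, e.g., a differential two-cell encoding, omitted here) is a hypothetical sketch, not the project's actual programming procedure:

```python
import numpy as np

def weights_to_levels(w, n_levels=16):
    """Map real-valued ANN weights to discrete programming levels.

    Hypothetical scheme: normalize weight magnitudes by the array's
    maximum and round to the nearest of n_levels levels.
    """
    w = np.asarray(w, dtype=float)
    scale = np.max(np.abs(w)) or 1.0      # avoid division by zero
    normalized = np.abs(w) / scale        # in [0, 1]
    return np.round(normalized * (n_levels - 1)).astype(int)
```

For example, `weights_to_levels([0.0, 0.5, -1.0])` maps to levels `[0, 8, 15]`; the experimental setup would then translate each level into a program-verify pulse sequence for the corresponding cell.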

Spiking neural networks (SNNs) are next-generation neural networks with the ability to perform complex, brain-like computations at very low power. SNNs use discrete ON/OFF signals, called action potentials or spikes, for data communication and processing.
In this project, you will simulate various building blocks of SNNs, including the Hodgkin–Huxley neuron and the leaky integrate-and-fire (LIF) neuron and its variants, using memristors. You will also demonstrate concepts such as spike-timing-dependent plasticity (STDP), long-term potentiation (LTP), and long-term depression (LTD) with memristive synapses.
You will then build feedforward spiking neural networks from these basic components. These circuits will be modeled behaviourally in MATLAB and then implemented in the Cadence Virtuoso circuit simulator.
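As a reference for the behavioral modeling stage, a minimal discrete-time LIF neuron can be sketched as follows (shown here in Python rather than MATLAB; all parameter values are illustrative placeholders, not device values from the project):

```python
import numpy as np

def lif_simulate(i_in, dt=1e-4, tau=20e-3, r=1e7, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron (behavioral sketch).

    Forward-Euler integration of  tau * dv/dt = -v + r * i(t),
    with a spike emitted and the membrane reset when v crosses v_th.
    Returns the membrane-voltage trace and the spike times (step indices).
    """
    v = v_reset
    trace, spikes = [], []
    for k, i in enumerate(i_in):
        # Leaky integration of the input current
        v += dt * (-v + r * i) / tau
        if v >= v_th:          # threshold crossing -> emit a spike
            spikes.append(k)
            v = v_reset        # reset after the spike
        trace.append(v)
    return np.array(trace), spikes
```

With a constant suprathreshold input the neuron fires periodically; a memristive implementation replaces the ideal leak and integration with device dynamics, which is precisely what the Cadence Virtuoso stage then captures.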

Neural networks have become significant and are used in many areas of research and industry. Today’s computer
architectures have a major bottleneck in the transfer of data between memory and computation units, which limits
the performance of the necessary computations. One proposed solution is compute-in-memory
(CIM) architectures, which accelerate these computations. This project aims to design a controller for an ANN accelerator
system and its peripherals.
The main goal of this project is to develop a controller for an ANN accelerator that can process input data, adapt it to
the network structure, and provide all the necessary data and signals for the ANN.
Specific objectives:

Neural networks have become significant computation models in many areas of research and industry (e.g., computer
vision, speech recognition). As a result, there is a need for energy-efficient accelerators for edge applications such as IoT,
which require optimized hardware designs. One of the critical components of an ANN accelerator is the data interface. The
system bus is an internal communication path that allows information to be transferred between the main components
of the system. In our case, the system bus needs to link the controller, the ANN, and other sub-modules. It must
include data, address, and control buses and provide high-speed parallel communication, ensuring optimal throughput
and low latency. This system bus must also handle challenges such as priorities between modules, masking, and
interrupts.
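To make the arbitration challenge concrete, a fixed-priority grant with masking can be sketched behaviorally. The policy below (lowest index wins, masked requesters are skipped) is one illustrative scheme, not the bus's specified protocol:

```python
def priority_arbiter(requests, mask):
    """Fixed-priority bus arbiter sketch (lowest index = highest priority).

    `requests` and `mask` are per-module boolean lists; a masked module
    never receives a grant. Returns the index of the granted module,
    or None if no unmasked module is requesting.
    """
    for i, (req, masked) in enumerate(zip(requests, mask)):
        if req and not masked:
            return i   # grant the highest-priority unmasked requester
    return None
```

In hardware this would be a combinational priority encoder gated by a mask register; fairness-oriented alternatives (e.g., round-robin) rotate the priority order between grants.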
The main goal of this project is to develop a system bus for an ANN accelerator, enabling data exchange between the
control unit, the ANN core, and other sub-modules.
Specific objectives:

Compute-In-Memory (CiM) accelerates Deep Neural Networks (DNNs) by reducing energy-intensive weight
movement and enabling low-energy, high-density computation within memory arrays. While research spans
the CiM stack, most works focus on a single level (device, circuit, architecture, workload, or mapping) or design
point (e.g., one chip). A full-stack modeling tool is needed to evaluate system-level impacts and enable rapid
early-stage co-design.
MIT researchers developed CiMLoop, an open-source tool for modeling diverse CiM systems and exploring
cross-stack decisions. CiMLoop provides:
(1) a flexible specification for mapping workloads to circuits and architectures,
(2) an accurate energy model capturing interactions among DNN data, hardware representations, and
circuit behavior, and
(3) a fast statistical model for rapid design-space exploration.
CiMLoop enables researchers to evaluate, co-design, and fairly compare CiM implementations across all
levels of the stack, and to rapidly explore the design space.
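The core operation CiM hardware accelerates, and which tools like CiMLoop model, is an in-array matrix-vector multiply. An idealized behavioral sketch, ignoring nonidealities such as wire resistance and ADC quantization that CiMLoop's energy models do capture:

```python
import numpy as np

def crossbar_mvm(g, v_in):
    """Idealized analog matrix-vector multiply in a memory crossbar.

    Each stored weight is a conductance g[i][j]; applying input voltages
    v_in on the rows produces, by Kirchhoff's current law, column currents
    i_out[j] = sum_i v_in[i] * g[i][j]. The multiply happens where the
    weights are stored, so no weight movement is needed.
    """
    g = np.asarray(g, dtype=float)        # conductance matrix (S)
    v_in = np.asarray(v_in, dtype=float)  # input voltages (V)
    return v_in @ g                       # column currents (A)
```

A full-stack model layers device variation, peripheral circuits (DACs/ADCs), architecture, and workload mapping on top of this one primitive, which is why cross-stack co-design matters.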
Project Goals:
In this project, the students will get acquainted with the CiMLoop tool, understand its usage and benefits, and
will use it to implement several research designs at varying levels of complexity.
The students will:
● Gain practical experience by installing and working with the CiM tool
● Implement a couple of design examples to evaluate the tool’s usefulness
● Implement and evaluate an ongoing research design (e.g., GNNs/Transformers) using CiMLoop.

Emerging memristors are novel circuit elements, originally described as the “fourth missing circuit element” and considered today as the future of nonvolatile memory. Different memristors have been developed and characterized in simulation by the Technion’s ASIC² research group, headed by Prof. Shahar Kvatinsky.
Some of the memristor devices have been manufactured by semiconductor companies (such as Tower Semiconductor, Winbond, and Weebit) and some of them were fabricated in academia by our collaborators from universities such as Stanford, Aachen, and Arizona State.
Our target is to experimentally measure and characterize memristors and to demonstrate their functionality for novel circuits in applications such as artificial intelligence, memory, and logic.