Design of RRAM-based fully analog compute-in-memory architecture for Neural ODEs

SUN Yuli, YAN Bonan, TAO Yaoyu, YANG Yuchao

Integrated Circuits and Embedded Systems ›› 2025, Vol. 25 ›› Issue (10): 1-9. DOI: 10.20193/j.ices2097-4191.2025.0066

Cover Article

Abstract

Neural ordinary differential equation (Neural ODE) inference on von Neumann architectures suffers from the well-known "power wall" and "memory wall". Conventional compute-in-memory architectures also incur excessive latency and power consumption from the many digital-to-analog and analog-to-digital conversions they require. To address these issues, we propose an RRAM-based, fully analog compute-in-memory architecture for Neural ODEs that keeps the entire inference data flow in the analog domain. The design, comprising the RRAM device model, the array, and the peripheral circuits, is simulated on the Cadence Virtuoso platform. Measurements on a 40 nm RRAM test platform with a differential input/output PCB verify the functionality of the complete system. Classification experiments with Neural ODEs are evaluated against testing error, demonstrating the functionality and reliability of the architecture and laying a solid foundation for subsequent hardware implementation and application deployment.
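For readers unfamiliar with Neural ODE inference, the sketch below shows the computation the architecture accelerates: fixed-step forward Euler integration of learned dynamics dz/dt = f(z, t), following the formulation of Chen et al. (2018). The dynamics function, weight matrix, step count, and activation are illustrative assumptions, not the paper's actual network; the matrix-vector product inside `f` is the operation an RRAM crossbar evaluates in the analog domain, with each weight playing the role of a programmed device conductance.

```python
import math

def matvec(W, z):
    # In the analog architecture this product is a single crossbar read:
    # output currents I_i = sum_j G_ij * V_j (Ohm's and Kirchhoff's laws).
    return [sum(w * x for w, x in zip(row, z)) for row in W]

def f(z, t, W):
    # Illustrative learned dynamics dz/dt = tanh(W z); tanh stands in for
    # an analog activation circuit. (Hypothetical choice for this sketch.)
    return [math.tanh(v) for v in matvec(W, z)]

def odeint_euler(z0, W, t0=0.0, t1=1.0, steps=10):
    # Fixed-step forward Euler: z_{k+1} = z_k + h * f(z_k, t_k).
    h = (t1 - t0) / steps
    z, t = list(z0), t0
    for _ in range(steps):
        dz = f(z, t, W)
        z = [zi + h * di for zi, di in zip(z, dz)]
        t += h
    return z

# Illustrative 2x2 "conductance" matrix and initial state.
W = [[0.0, 1.0], [-1.0, 0.0]]
z1 = odeint_euler([1.0, 0.0], W)
```

A fully analog data flow replaces the digital `matvec` and the Euler update with crossbar currents and continuous-time integrators, which is what removes the repeated DAC/ADC conversions the abstract identifies as the bottleneck.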

Key words

RRAM-based compute-in-memory / neural ordinary differential equation / fully analog data flow / architecture design

Cite this article

SUN Yuli, YAN Bonan, TAO Yaoyu, et al. Design of RRAM-based fully analog compute-in-memory architecture for Neural ODEs[J]. Integrated Circuits and Embedded Systems, 2025, 25(10): 1-9. https://doi.org/10.20193/j.ices2097-4191.2025.0066
