Redundancy-free error-tolerant memory design for voting-based AI classifiers

LIU Shanshan, JIN Hui, LIU Sijia, WANG Tianqi, ZHOU Bin, MA Yao, WANG Bi, CHANG Liang, ZHOU Jun

Integrated Circuits and Embedded Systems, 2024, Vol. 24, Issue 6: 1-8. DOI: 10.20193/j.ices2097-4191.2024.06.001

Cover Article


Abstract

Voting-based classifiers are widely used in many Artificial Intelligence (AI) applications. In their circuit implementations, the memories that store the known samples are susceptible to effects such as radiation and physical parameter variations, which induce soft errors and can ultimately cause classification failures. Error tolerance must therefore be built into these memories when AI classifiers are deployed in safety-critical applications. Existing error-tolerant memory techniques commonly rely on error correction codes; for AI systems, however, the redundancy these codes introduce further aggravates an already challenging storage burden. This paper proposes a redundancy-free error-tolerant technique that corrects the impact of errors on the classification result rather than the errors themselves: it exploits the data flips that errors cause to recover the error-free classification result. Taking the k-nearest-neighbors algorithm as a case study, simulation results show that the proposed technique achieves almost complete error tolerance without introducing any memory redundancy and, compared with existing techniques, significantly reduces the hardware overhead of the protection circuitry.
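The abstract gives only the design idea, so the following Python sketch illustrates the kind of algorithm-level error tolerance a voting-based classifier admits. It is a minimal illustration, assuming a simple error model in which at most one stored sample is corrupted; the function names, the error model, and the margin rule are hypothetical and are not the authors' exact scheme. The point it demonstrates is that when the vote margin between the two leading classes exceeds what the corrupted samples could swing, the majority class is already the error-free classification result, with no check bits stored.

import numpy as np

# Illustrative sketch only: algorithm-level error tolerance for a kNN
# voting classifier. The helper names and the margin rule below are
# assumptions for exposition, not the paper's circuit-level design.

def knn_votes(stored_x, stored_y, query, k, num_classes):
    # Distance of the query to every stored (possibly error-affected)
    # sample, then majority voting among the k nearest ones.
    dists = np.linalg.norm(stored_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.bincount(stored_y[nearest], minlength=num_classes)

def tolerant_classify(stored_x, stored_y, query, k, num_classes,
                      max_corrupted_samples=1):
    # A soft error in one stored sample perturbs one distance, so it can
    # move at most one sample into or out of the k nearest: one vote may
    # migrate from the winning class to another class. Each corrupted
    # sample can therefore shrink the winner's margin by at most 2.
    votes = knn_votes(stored_x, stored_y, query, k, num_classes)
    ranked = np.argsort(votes)[::-1]          # classes by vote count
    margin = votes[ranked[0]] - votes[ranked[1]]
    provably_correct = margin > 2 * max_corrupted_samples
    return ranked[0], provably_correct

# Tiny usage example with two classes in a 2-D feature space.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    label, safe = tolerant_classify(x, y, np.array([3.8, 4.1]),
                                    k=7, num_classes=2)
    print(label, safe)   # e.g. "1 True": the vote survives one soft error

Under this view, the protection logic reduces to a comparison on vote counts rather than an ECC encoder/decoder per memory word, which is consistent with the redundancy-free, low-overhead protection the abstract reports.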

Key words

memory / soft errors / artificial intelligence / classifiers / error correction codes / k nearest neighbors

Cite this article

LIU Shanshan, JIN Hui, LIU Sijia, et al. Redundancy-free error-tolerant memory design for voting-based AI classifiers[J]. Integrated Circuits and Embedded Systems, 2024, 24(6): 1-8. https://doi.org/10.20193/j.ices2097-4191.2024.06.001
CLC number: TN492 (application-specific integrated circuits)


Funding

National Natural Science Foundation of China (12075069)
National Natural Science Foundation of China (61771167)
National Natural Science Foundation of China (11775061)
National Natural Science Foundation of China (11805045)
Key Project of the Sichuan Provincial Fund (2019YFSY0028)
Project of the State Key Laboratory of Intense Pulsed Radiation Simulation and Effect (SKLIPR1912)
Project of the State Key Laboratory of Intense Pulsed Radiation Simulation and Effect (SKPLIPR2015)

Editor: XUE Shiran