Redundancy-free error-tolerant memory design for voting-based AI classifiers
柳姗姗, 金辉, 刘思佳, 王天琦, 周彬, 马瑶, 王碧, 常亮, 周军
集成电路与嵌入式系统, 2024, Vol. 24, Issue 6: 1-8.
Voting-based classifiers are widely used in many Artificial Intelligence (AI) applications. In their circuit implementations, the memories that store the known samples are susceptible to radiation, physical variations, and other effects that induce soft errors, which can in turn corrupt the classification results. Error tolerance must therefore be built into these memories for safety-critical applications. Existing error-tolerant techniques commonly rely on error correction codes, but the memory redundancy they introduce further aggravates the already heavy storage burden of AI systems. This paper proposes a redundancy-free error-tolerant technique that corrects the negative impact of errors on the classification result rather than the errors themselves: the data flips caused by errors are exploited to recover the error-free classification outcome. A k-nearest-neighbor (kNN) classifier is used as a case study, and simulation results show that the proposed scheme achieves nearly complete error tolerance without introducing any memory redundancy while significantly reducing the hardware overhead of the protection circuitry compared with existing techniques.
memory / soft errors / artificial intelligence / classifiers / error correction codes / k-nearest neighbors
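For scale, a conventional single-error-correcting Hamming code over 32-bit data words needs 6 check bits (since 2^6 ≥ 32 + 6 + 1), roughly 19% extra storage per word; this is the kind of per-word redundancy the proposed scheme avoids. The abstract does not describe the recovery mechanism itself, so the sketch below only illustrates the baseline being protected: plain kNN majority voting over stored samples, where a soft error in a stored label or feature matters only if it changes the vote. The names (knn_predict, samples, labels) are illustrative and not taken from the paper.

```python
# Minimal sketch of plain kNN majority voting, the baseline the paper protects.
# Illustrative only; the paper's redundancy-free recovery scheme is not shown here.
from collections import Counter

def knn_predict(query, samples, labels, k=5):
    """Classify `query` by majority vote among its k nearest stored samples."""
    # Squared Euclidean distance from the query to every stored sample.
    dists = [sum((q - s) ** 2 for q, s in zip(query, x)) for x in samples]
    # Indices of the k closest stored samples.
    nearest = sorted(range(len(samples)), key=lambda i: dists[i])[:k]
    # Majority vote over the neighbours' labels; a soft error in a stored
    # label or feature bit only affects the output if it changes this vote.
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy usage: two classes in a 2-D feature space.
samples = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (1.0, 0.9), (0.9, 1.1), (1.1, 1.0)]
labels = [0, 0, 0, 1, 1, 1]
print(knn_predict((0.15, 0.1), samples, labels, k=3))  # expected: 0
```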