During high-speed switching, Insulated Gate Bipolar Transistor (IGBT) power modules produce near-field magnetic radiation that can cause significant interference. To understand this phenomenon, this paper investigates the spatial distribution of the magnetic field inside the module through a combination of simulation and experiment. The investigation begins with a simulation based on Magnetic Vector Potential (MVP) theory: using a self-developed finite element solver, a 3D electromagnetic model of a GCV900 series IGBT is simulated to analyze its internal magnetic field characteristics at different frequencies. For experimental validation, a three-phase reactive power test platform is built, on which the magnetic field at the IGBT chip surfaces is measured with a high-precision near-field probe. The two approaches yield consistent results: the magnetic radiation intensity is non-uniformly distributed within the module, with the strongest radiation near the DC input, located at the core of the main commutation path, while the area near the AC output is only minimally affected. This research clarifies the magnetic radiation pattern inside IGBT modules, offering a solid theoretical foundation and valuable data for improving EMC design and mitigating near-field coupling interference.
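For context, the magnetic vector potential formulation that such finite element field solvers typically discretize can be stated in its textbook frequency-domain form (this is the generic magnetoquasistatic formulation, not necessarily the exact system of equations implemented in the authors' solver): the flux density is $\mathbf{B} = \nabla \times \mathbf{A}$, and the potential satisfies $\nabla \times \left(\mu^{-1} \nabla \times \mathbf{A}\right) + j\omega\sigma\mathbf{A} = \mathbf{J}_s$, where $\mathbf{J}_s$ is the impressed current density in the module's terminals and interconnects, $\sigma$ the conductivity, $\mu$ the permeability, and $\omega$ the angular frequency at which the field distribution is evaluated.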
Semiconductor equipment is a critical component in chip manufacturing, performing essential processes such as lithography, etching, and thin-film deposition. The efficiency of its scheduling directly impacts wafer production capacity and factory profitability, so designing an efficient and stable scheduling system is crucial for achieving optimal production output. On one hand, the high-precision, multi-step wafer processing procedures increase the complexity of designing equipment scheduling systems; on the other hand, the efficiency of wafer scheduling within the equipment directly affects production capacity, imposing stringent requirements on the system's computational efficiency. Traditional scheduling design methods, often based on genetic algorithms that search the solution space for optimal solutions, struggle to meet real-time demands. This study systematically analyzes five scheduling constraints in dual-cluster wafer-processing semiconductor equipment: wafer discharge constraints, module usage constraints, overload prohibition, valve mutual exclusion constraints, and just-in-time requirements. The task scheduling problem for the processing-chamber task pool and the robotic-arm task pool is then formulated as a Mixed Integer Programming (MIP) model. By leveraging the mathematical programming solver Gurobi for rapid solution, this approach achieves a computational speed improvement of an order of magnitude over traditional algorithms.
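For readers unfamiliar with this formulation style, the following is a minimal, hypothetical sketch of how a task-pool scheduling problem with resource mutual exclusion can be posed to Gurobi through its Python interface. The task set, time horizon, and constraints are invented for illustration and are far simpler than the five constraint classes analyzed in the paper.

```python
# Minimal illustrative MIP: assign tasks to discrete start slots on shared
# resources and minimize the makespan (hypothetical data, not the paper's model).
import gurobipy as gp
from gurobipy import GRB

tasks = {"load_A": "robot", "load_B": "robot", "etch_A": "chamber1", "etch_B": "chamber1"}
durations = {"load_A": 1, "load_B": 1, "etch_A": 3, "etch_B": 3}
horizon = 10  # number of discrete time slots

m = gp.Model("wafer_task_pool")
x = m.addVars(tasks, range(horizon), vtype=GRB.BINARY, name="start")  # x[t, s] = 1 if task t starts at slot s
cmax = m.addVar(vtype=GRB.INTEGER, name="makespan")

for t in tasks:
    m.addConstr(x.sum(t, "*") == 1)  # each task starts exactly once
    m.addConstr(gp.quicksum(s * x[t, s] for s in range(horizon)) + durations[t] <= cmax)

# Mutual exclusion: tasks sharing a resource (robot arm or chamber) never overlap.
for s in range(horizon):
    for res in {"robot", "chamber1"}:
        m.addConstr(
            gp.quicksum(x[t, u]
                        for t in tasks if tasks[t] == res
                        for u in range(max(0, s - durations[t] + 1), s + 1)) <= 1)

m.setObjective(cmax, GRB.MINIMIZE)
m.optimize()
if m.Status == GRB.OPTIMAL:
    print("minimum makespan (slots):", int(cmax.X))
```

A full model of the kind described in the paper would add discharge ordering, module usage, overload, valve mutual exclusion, and just-in-time constraints on top of this skeleton, but the variable-and-constraint structure passed to the solver is of the same form.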
In high-reliability applications such as aerospace, satellite communication, and nuclear control systems, multiple node upsets (MNUs) induced by radiation have become a major threat to the stability of static random access memory (SRAM). In recent years, to address the double node upset (DNU) issue, various radiation-hardened-by-design (RHBD) structures have been proposed and extensively studied, including S8P8N, QUCCE12T, SARP12T, HRLP16T, RH20T, S6P8N, and RH14T. This paper provides a comprehensive review of RHBD-based SRAM designs with a focus on their fault-tolerance mechanisms against DNU events. The key design principles, performance metrics, and trade-offs among reliability, power consumption, area, access time, and static stability are summarized and compared. Finally, the paper points out that existing RHBD structures still face challenges in achieving fine-grained fault tolerance and balanced overall performance. Future development may focus on charge propagation suppression and feedback mechanism optimization to further enhance DNU resilience.
To address the time and venue constraints of FPGA-related experimental teaching, as well as the difficulty of collecting process data on teaching and learning in traditional offline, on-site board-level experiments, this paper presents the design and implementation of a remote laboratory system for digital-circuit instruction based on the Unisoc FPGA platform. Adopting a hardware-software co-design philosophy, the system not only supports remote download and update via an emulated JTAG interface, bit-stream flashing, waveform capture, and signal generation, but also extends to a dual-channel arbitrary waveform generator and spectrum analyzer. By integrating remote cameras and digital-twin panels, the system streams real-time experimental phenomena over Ethernet, enables remote interaction and continuous monitoring of experiment status, and thus establishes an immersive and scalable remote laboratory environment.
To address the challenges of traditional MOSFET testing, such as cumbersome procedures, reliance on bulky instruments, and a low degree of intelligence, this paper presents an automated test system that integrates a Large Language Model (LLM) with the "Yuzhu S" portable hardware. Centered around the "Yuzhu S" instrument, the system performs characteristic-curve, threshold-voltage, and on-resistance tests using an integrated PCB carrier board. The software leverages the Gemini API to automatically parse PDF datasheets, intelligently recommend test parameters, and perform in-depth error analysis of the results. Test results for an IRF7401 device show that the key static and dynamic parameters obtained by the system agree closely with datasheet specifications and simulation values, validating the accuracy and feasibility of the proposed solution. This work provides an efficient, intelligent, and portable new method for end users to evaluate device performance.
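As an illustration of the kind of post-processing such a system performs on measured sweeps, the sketch below extracts a threshold voltage by linear extrapolation of $\sqrt{I_D}$ versus $V_{GS}$ and computes on-resistance from a single triode-region point. Both are generic textbook methods, and the data values are invented; this is not the paper's or the "Yuzhu S" software's actual algorithm.

```python
# Hypothetical post-processing of an ID-VGS sweep: linear-extrapolation Vth and
# on-resistance from one bias point (generic methods; illustrative data only).
import numpy as np

vgs = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])       # gate-source voltage sweep (V)
id_sat = np.array([0.0, 0.02, 0.35, 1.1, 2.2, 3.6])  # drain current in saturation (A)

# Saturation-region model: ID ~ k*(VGS - Vth)^2, so sqrt(ID) is linear in VGS.
sqrt_id = np.sqrt(id_sat)
mask = id_sat > 0.1                        # fit only the well-turned-on region
slope, intercept = np.polyfit(vgs[mask], sqrt_id[mask], 1)
vth = -intercept / slope                   # x-intercept of the extrapolated line

# On-resistance from one measured point in the linear (triode) region.
vds_lin, id_lin = 0.05, 1.2                # V, A (illustrative measurement)
rds_on = vds_lin / id_lin

print(f"Vth ~ {vth:.2f} V, RDS(on) ~ {rds_on * 1e3:.1f} mOhm")
```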
Automatic Test Equipment (ATE) for integrated circuits is the core equipment used to verify the functionality and performance of chips. Traditional testing methods suffer from limitations such as low efficiency and insufficient precision. To address these issues, this study proposes an automatic testing scheme based on the ST3020 ATE, which offers automation, high efficiency, high precision, a wide measurement range, strong flexibility, and good scalability. Taking the UC2625 chip as the test object, automatic test code is developed at the software level and an interface printed circuit board (PCB) is designed at the hardware level. By integrating techniques such as cyclic testing, array storage, and data comparison, a systematic study is conducted on the logical functions and key parameter indicators of the chip, ultimately realizing a complete ATE automatic testing scheme. The test results are consistent with the specifications in the chip datasheet and meet the requirements of practical testing. This scheme represents a valuable exploration of automatic testing methods and provides a reference for the independent development of ATE testing technology in China.
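The cyclic testing, array storage, and data comparison flow can be pictured with the following skeleton, in which the parameter names, limits, and simulated measurement routine are placeholders; the ST3020's actual test-programming interface is not reproduced here.

```python
# Illustrative skeleton of cyclic testing, array storage, and data comparison
# (parameter names, limits, and the simulated measurement are placeholders).
import random

limits = {                        # hypothetical datasheet limits: (min, max, unit)
    "vref":     (4.95, 5.15, "V"),
    "osc_freq": (38.0, 52.0, "kHz"),
}

def measure(name):
    """Stand-in for the tester's measurement routine (returns simulated values)."""
    nominal = {"vref": 5.05, "osc_freq": 45.0}[name]
    return nominal * random.uniform(0.99, 1.01)

results = {name: [] for name in limits}            # array storage per parameter
for cycle in range(10):                            # cyclic testing
    for name in limits:
        results[name].append(measure(name))

for name, (lo, hi, unit) in limits.items():        # data comparison against limits
    worst_lo, worst_hi = min(results[name]), max(results[name])
    ok = lo <= worst_lo and worst_hi <= hi
    print(f"{name}: {worst_lo:.3f}..{worst_hi:.3f} {unit} -> {'PASS' if ok else 'FAIL'}")
```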
This paper presents the design of a receiver analog front-end (AFE) for 100 Gb/s PAM-4 signals in TSMC 65 nm CMOS technology. The AFE adopts a CTLE+VGA+TIA architecture, in which the continuous-time linear equalizer (CTLE) compensates for channel loss while the variable gain amplifier (VGA) and transimpedance amplifier (TIA) provide gain control. The CTLE, incorporating a cascode structure, negative capacitance compensation, and tunable low-pass filtering, achieves a tunable gain range of 2.7 dB to 18 dB at the Nyquist frequency (25 GHz). The VGA, cascaded with an inverter-based TIA, enables precise gain adjustment in 1 dB steps from -3 dB to 12 dB through a 4-bit DAC. The CTLE and VGA stages employ reverse-coupled inductive peaking to extend bandwidth, improve gain, and optimize noise performance, while the TIA uses peaking-inductor bandwidth extension and low-impedance-path noise optimization, extending the system's 1 dB bandwidth to 42.8 GHz while further improving noise performance. In addition, a gm-boosting-based interstage magnetic feedback technique forms a triple-coupled inductor structure between the VGA and TIA stages, effectively enhancing the overall gain. The core layout area is 0.175 mm2, and post-layout simulation results show that, when compensating for channel losses of 5/10/15 dB at 25 GHz, the total power consumption remains below 18.7 mW with a root-mean-square noise of no more than 1.08 mVrms. The system opens previously closed eye diagrams, with all performance metrics meeting or exceeding the design specifications.
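For reference, a first-order CTLE of this kind is commonly modeled by the textbook transfer function $H(s) = A_0 \frac{1 + s/\omega_z}{(1 + s/\omega_{p1})(1 + s/\omega_{p2})}$, in which placing the zero $\omega_z$ below the first pole lifts content near the Nyquist frequency relative to DC; the achievable peaking (here tunable from 2.7 dB to 18 dB at 25 GHz) is set roughly by the ratio $\omega_{p1}/\omega_z$ and is typically adjusted through source-degeneration resistance and capacitance. This is a generic model rather than the exact transfer function of the reported design.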
To tackle the concurrent challenges of bandwidth, linearity, and integration in the analog front-end (AFE) of a 100 Gb/s PAM-4 wireline receiver for Chiplet interconnect applications, this paper presents a high-performance AFE architecture based on a transconductance-transimpedance amplifier (GM-TIA) continuous-time linear equalizer (CTLE). The proposed AFE efficiently compensates for channel loss while maintaining high linearity through an integrated broadband input matching network consisting of an asymmetric T-coil, a programmable attenuator, and an AC coupler. A two-stage cascaded GM-TIA-based CTLE enables wide-range gain tuning from low to high frequencies and also serves as a variable-gain amplifier (VGA). Designed in a 28-nm CMOS process, the AFE occupies a core area of 0.012 mm2 with a power dissipation of 9.94 mW. The equalization tuning range extends from 2.25 dB to 13.39 dB. After equalization, the 100 Gb/s PAM-4 output exhibits an eye height greater than 100 mV and an eye width exceeding 0.52 UI.
The global issue of “garbage encircling cities” is intensifying, making intelligent waste sorting a research hotspot for tackling this challenge. However, embedded platforms commonly face a three-way trade-off among limited computing power, high real-time requirements, and recognition accuracy. Traditional approaches struggle to meet practical demands: cloud-based architectures suffer from high latency due to data transmission, purely embedded architectures lack sufficient computing power, and cloud-edge collaborative architectures still exhibit interaction delays. This paper proposes a heterogeneous collaborative computing architecture based on an FPGA and an STM32 microcontroller. The FPGA handles image preprocessing and parallel convolution computations, while the STM32 manages the fully connected layers and classification decisions. A lightweight convolutional neural network is pruned into a “single convolution layer + three fully connected layers” structure, incorporating INT16 quantization and clipping mechanisms to balance accuracy and hardware adaptability. Experiments show that the system achieves an 83.33% accuracy rate in identifying ten categories of household waste. Compared to the MATLAB platform, it accelerates inference by 15.675 times, with a processing latency of only 40.004 ms. The low FPGA core resource utilization enables efficient deployment in embedded waste-sorting scenarios such as communities and households.
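A minimal sketch of symmetric INT16 quantization with clipping is shown below, assuming a simple per-layer max-abs scale; the paper's exact calibration and clipping scheme may differ.

```python
# Symmetric INT16 quantization with saturation (clipping) under a per-layer
# max-abs scale; illustrative only, not the paper's exact scheme.
import numpy as np

def quantize_int16(x, scale):
    """Map float weights/activations to INT16 with saturation."""
    q = np.round(x / scale)
    return np.clip(q, -32768, 32767).astype(np.int16)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(8, 8).astype(np.float32)   # illustrative layer weights
scale = np.max(np.abs(weights)) / 32767               # simple max-abs calibration
q_w = quantize_int16(weights, scale)

err = np.max(np.abs(dequantize(q_w, scale) - weights))
print(f"max quantization error: {err:.6f}")
```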
With the rapid development of artificial intelligence and deep learning applications, tensor computing urgently demands high-efficiency, multi-precision hardware accelerators. Traditional general-purpose processors face energy-efficiency bottlenecks when processing large-scale matrix multiplication, while existing dedicated accelerators often lack flexibility in supporting diverse data precisions and hybrid computing modes. This paper presents a multi-precision and mixed-precision tensor processing unit (TPU) based on a reconfigurable architecture, which supports five data formats (INT4, INT8, FP16, BF16, FP32) and two mixed-precision modes (FP16+FP32, BF16+FP32). It efficiently performs matrix multiply-accumulate operations in three tile configurations (m16n16k16, m32n8k16, m8n32k16). By incorporating a reconfigurable computing array, dynamic dataflow control, a multi-mode buffer design, and a unified floating-point processing unit, the design achieves high hardware reuse and significantly improved computational efficiency. Synthesized on the VCU118 FPGA platform at 251.13 MHz, it delivers a peak theoretical performance of 257.16 GOPS/GFLOPS (INT4/INT8/FP16/BF16) and 64.29 GFLOPS (FP32). The design is well suited to applications such as deep learning inference, autonomous driving, and medical imaging, where both computational efficiency and flexibility are critical.
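As a consistency check on the reported figures (an inference from the stated numbers, not an architectural detail given in the abstract), peak throughput follows the usual relation $P_{\text{peak}} = 2 N_{\text{MAC}} f_{\text{clk}}$ when each multiply-accumulate is counted as two operations: $257.16\ \text{GOPS} / 251.13\ \text{MHz} \approx 1024$ operations per cycle, i.e. 512 parallel multiply-accumulate units in the INT4/INT8/FP16/BF16 modes, while $64.29\ \text{GFLOPS}$ corresponds to 256 operations (128 units) per cycle in FP32.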
This paper presents a low-noise, high-voltage, high-output-current power operational amplifier designed in a SMIC 180 nm BCD process. The architecture features a low-noise PMOS input stage, a voltage gain stage, and a Class AB output stage biased by a translinear loop. Stability is ensured by cascode frequency compensation, while integrated hysteretic over-temperature and current-limiting circuits provide robust protection against thermal and electrical damage. Combining 60 V DMOS and 1.8 V CMOS devices, the amplifier operates over a wide supply range of ±4 V to ±30 V and a temperature range of -55 ℃ to +125 ℃. Simulation results demonstrate an equivalent input voltage noise of 8.85 nV/$\sqrt{Hz}$, a 400 mA output current, a 143.3 dB DC gain, a 6.804 MHz unity-gain bandwidth, and a 33.7 V/μs slew rate, with a chip area of 1.79 × 1.12 mm2. The proposed amplifier is well suited for automotive electronics applications including precision battery sensing, sensor interfaces, and power device driving.