| Title |
ML-Driven Optimization of Standard Cell Performance and Timing in Advanced Nodes
| Authors |
HyunJoon Jeong; Junha Suk; Jeong-Taek Kong; SoYoung Kim
| DOI |
https://doi.org/10.5573/JSTS.2026.26.2.130 |
| Keywords |
Standard cell; nanosheet field-effect transistor (NSFET); buried power rail (BPR); timing; performance; multi-objective Bayesian optimization (MOBO); artificial neural network (ANN) |
| Abstract |
Standard cell performance and timing optimization becomes increasingly challenging in advanced technology nodes such as sub-3 nm nanosheet FETs (NSFETs) with buried power rails (BPRs). In this paper, we propose a novel standard cell optimization methodology based on machine learning (ML) that simultaneously achieves performance improvement and timing balance while reducing simulation overhead. For INV/NAND2/NOR2 cell layouts designed with 3 nm NSFETs, we perform post-layout simulations with parasitic extraction (PEX) to compute delay and power and generate a dataset. Using this dataset, we train an artificial neural network (ANN) model as the objective function and perform multi-objective Bayesian optimization (MOBO) under explicit design-rule and cell-height constraints to achieve 1:1 rise-fall delay symmetry across the cells. Within this framework, high-performance (HP) applications target minimum propagation delay with 1:1 symmetry, while low-power (LP) applications target minimum total power with the same symmetry. For NSFET technology at 3 nm and beyond, delay is reduced by up to 23.2% for HP INV cells, and power is reduced by 10.3% for LP NAND2 cells. For NAND2/NOR2 cells, the rise-fall delay balance is improved by more than 15%. To evaluate the performance of the optimized standard cells, a 7-stage ring oscillator (RO) and a 4-bit ripple-carry adder (RCA) are used as test circuits; the results show significant improvements in both delay and power efficiency.
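The overall flow described in the abstract (PEX-based dataset, ANN surrogate as the objective model, MOBO under a rise-fall symmetry constraint) can be sketched roughly as below. This is a minimal illustrative sketch assuming a Python stack with scikit-learn and SciPy; the synthetic data generator, parameter names, penalty-based symmetry handling, and ParEGO-style random-scalarization acquisition are placeholders for exposition, not the authors' implementation.

```python
# Sketch: ANN surrogate of post-layout (PEX) delay/power, then a simplified
# multi-objective Bayesian optimization loop over that surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm

rng = np.random.default_rng(0)

# 1. Hypothetical PEX dataset: design parameters -> [rise delay, fall delay, power].
#    Placeholder synthetic model; in the paper this comes from post-layout SPICE runs.
def fake_pex(x):
    w_p, w_n, contact = x.T                       # normalized layout parameters in [0, 1]
    t_rise = 1.0 / (0.3 + w_p) + 0.1 * contact
    t_fall = 1.0 / (0.3 + w_n) + 0.1 * contact
    power = 0.5 * (w_p + w_n) + 0.2 * contact
    return np.stack([t_rise, t_fall, power], axis=1)

X_train = rng.uniform(0.0, 1.0, size=(200, 3))
Y_train = fake_pex(X_train)

# 2. ANN surrogate of the PEX simulation (objective-function model).
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
ann.fit(X_train, Y_train)

def objectives(x):
    """Two objectives from the surrogate: propagation delay and power,
    each penalized for deviating from 1:1 rise-fall symmetry."""
    t_rise, t_fall, power = ann.predict(x.reshape(1, -1))[0]
    delay = 0.5 * (t_rise + t_fall)
    symmetry_penalty = abs(t_rise - t_fall)
    return np.array([delay + symmetry_penalty, power + symmetry_penalty])

# 3. ParEGO-style MOBO: random scalarization + GP + expected improvement.
X_obs = rng.uniform(0.0, 1.0, size=(10, 3))
F_obs = np.array([objectives(x) for x in X_obs])

for it in range(30):
    w = rng.dirichlet(np.ones(2))                  # random weight vector per iteration
    f_range = F_obs.max(axis=0) - F_obs.min(axis=0) + 1e-9
    scalar = ((F_obs - F_obs.min(axis=0)) / f_range) @ w

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, scalar)

    cand = rng.uniform(0.0, 1.0, size=(2000, 3))   # candidates within normalized rule bounds
    mu, sigma = gp.predict(cand, return_std=True)
    best = scalar.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (minimization)

    x_next = cand[np.argmax(ei)]
    X_obs = np.vstack([X_obs, x_next])
    F_obs = np.vstack([F_obs, objectives(x_next)])

# 4. Report the non-dominated (Pareto) designs found by the loop.
dominated = (F_obs[:, None, :] >= F_obs[None, :, :]).all(-1) & \
            (F_obs[:, None, :] > F_obs[None, :, :]).any(-1)
pareto = ~dominated.any(axis=1)
print("Pareto designs (parameters | delay-like, power-like objectives):")
print(np.hstack([X_obs[pareto], F_obs[pareto]]))
```

In this sketch the ANN stands in for the expensive PEX simulation, so the MOBO loop queries it cheaply; the HP versus LP targets described in the abstract would correspond to selecting different points (minimum-delay versus minimum-power) from the resulting Pareto set.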