NVIDIA CEO Jensen Huang Discusses GAA Transistors and the Future of Performance Scaling with TSMC’s 2nm Process
In a recent interview with EE Times, NVIDIA CEO Jensen Huang offered a glimpse into how Gate-All-Around (GAA) transistor technology may influence the semiconductor industry—and more specifically, how NVIDIA plans to leverage it in the years ahead.
While Huang remains skeptical about the long-term viability of Moore’s Law, he was notably optimistic about the potential of GAA-based processes. He believes that while the technology won't "change the world," it could still yield a significant 20% performance gain per generation. However, he stopped short of specifying which workloads or components would benefit the most.
This cautious optimism reflects NVIDIA’s broader philosophy. Unlike other chipmakers that focus on strict transistor scaling, NVIDIA has long taken a more architectural approach to boosting performance—often dubbed “Huang’s Law” by industry watchers. This strategy is rooted in enhancing parallelism, data throughput, and system-level integration, particularly in the realm of AI and data center computing.
NVIDIA’s Likely Use of GAA in the Feynman AI Architecture
While not yet officially confirmed, it’s highly likely that NVIDIA's forthcoming Feynman architecture, scheduled for 2028, will debut with TSMC’s N2 (2nm) process. TSMC’s N2 node, which includes its own GAA implementation, represents a key milestone in transistor evolution—shifting from traditional FinFETs to nanosheet GAA designs.
Unlike competitors such as Samsung, which launched its 3nm GAA process early but reportedly struggled with yields as low as 20%, TSMC is said to be achieving yields of over 60% with its N2 node. This progress positions TSMC’s technology as a mature and reliable candidate for NVIDIA’s long-term roadmap.
Historically, NVIDIA has waited for nodes to mature before adopting them for flagship products. With N2 anticipated to reach volume production in 2025–2026, aligning the Feynman launch in 2028 gives NVIDIA ample room for yield stabilization, supply scaling, and architectural optimization.
GAA: A Piece of a Bigger Puzzle
Though a 20% performance uplift from GAA might seem modest in isolation, Huang emphasized that real-world AI acceleration doesn’t hinge on transistor gains alone. Instead, NVIDIA’s focus remains on scaling entire AI systems—from GPU architecture, memory hierarchy, and interconnect bandwidth, to software stacks and developer tools.
Indeed, NVIDIA has demonstrated an astonishing 1,000x improvement in AI workloads over the last decade, largely without relying on aggressive process shrinkage. This aligns with Huang’s ongoing message: innovation in AI computing is architectural first, process second.
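To put these two figures side by side, here is a back-of-the-envelope sketch (our illustration, not from the interview) of what a 20% per-generation uplift compounds to, and what annual growth rate a 1,000x gain over ten years implies:

```python
def compounded_gain(per_step_gain: float, steps: int) -> float:
    """Total speedup after `steps` generations of a fixed per-step gain."""
    return (1.0 + per_step_gain) ** steps

def implied_annual_factor(total_gain: float, years: int) -> float:
    """Annual growth factor implied by a total gain spread over `years` years."""
    return total_gain ** (1.0 / years)

# Five generations of the ~20% uplift Huang attributes to GAA:
print(f"{compounded_gain(0.20, 5):.2f}x")                  # ~2.49x overall

# NVIDIA's cited ~1,000x per decade implies roughly doubling every year:
print(f"{implied_annual_factor(1000, 10):.2f}x per year")  # ~2.00x
```

The contrast makes Huang's point concrete: a 20% transistor-level gain compounds slowly, while the decade-scale figure implies near-annual doubling—most of which has to come from architecture, memory, interconnect, and software rather than the process node alone.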
A Roadmap Shaped by Architectural Dominance
The forthcoming Feynman architecture, building on the momentum of Hopper and Blackwell, is expected to reflect this belief. While the transition to TSMC’s 2nm GAA node may help reduce power consumption and improve raw performance, the architectural innovations within Feynman are likely to define its success in AI and HPC workloads.
As the AI boom demands exponential performance growth, NVIDIA is strategically balancing cutting-edge process adoption with scalable architecture design. Whether through new interconnect fabrics, FP formats like FP4, or expanded software ecosystems like CUDA and NVIDIA ACE, the company aims to remain ahead of the curve—even if Moore's Law is no longer its guidepost.
How do you think GAA technology will shape the next wave of GPU and AI accelerator innovation? Will TSMC’s 2nm GAA be the defining factor, or will "Huang’s Law" continue to lead the way? Share your thoughts below!