
NVIDIA Bets $4 Billion on Photonics: Why Light Is the Future of AI Networking

March 4, 2026
8 min read
NVIDIA invests $2 billion each in photonics leaders Lumentum and Coherent, signaling that optical networking, not just faster chips, is the key to scaling AI infrastructure to the next level.

NVIDIA Makes Its Biggest Optical Networking Play Yet

On March 2, 2026, NVIDIA announced a pair of strategic partnerships that signal a fundamental shift in how the AI industry thinks about infrastructure. The company is investing $2 billion each in Lumentum Holdings and Coherent Corp., two of the world's leading photonics and optical networking companies, for a combined $4 billion commitment. Alongside the equity investments, NVIDIA has secured multibillion-dollar purchase agreements for advanced laser components and optical networking products from both firms.

These are not speculative bets on an emerging technology. They represent NVIDIA's acknowledgment that the next frontier of AI scaling is not just about building faster chips — it is about moving data between them fast enough to keep pace. As Jensen Huang, NVIDIA's CEO, put it: “Computing has fundamentally changed. NVIDIA is pioneering next-generation silicon photonics to enable AI infrastructure at unprecedented scale, speed and energy efficiency.”

The deals come just two weeks before NVIDIA's marquee GTC 2026 conference (March 16–19 in San Jose), where the company is expected to unveil further details on its photonics roadmap alongside next-generation platforms like the Rubin Ultra NVL576 with co-packaged optical interconnects.

The Interconnect Bottleneck: Why Chips Alone Are Not Enough

For years, the AI industry's scaling story has been dominated by GPU performance — more CUDA cores, more memory bandwidth, more FLOPS. But as AI training clusters have grown from hundreds to tens of thousands of GPUs, a different constraint has emerged as the binding limitation: the network interconnect.

Modern large language models with hundreds of billions to over one trillion parameters require thousands of GPUs working in concert. Every training step involves massive all-reduce operations where gradients must be synchronized across the entire cluster. The speed at which data moves between GPUs — not the speed at which individual GPUs compute — increasingly determines overall training throughput.
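To make that interconnect pressure concrete, here is a rough sketch of the per-GPU traffic generated by a single gradient all-reduce, assuming the standard ring algorithm and illustrative figures (a 1-trillion-parameter model, fp16 gradients, a 10,000-GPU cluster) that are not from the announcement:

```python
# Sketch: per-GPU traffic for one gradient all-reduce with the standard
# ring algorithm. Model size, gradient precision, and cluster size are
# illustrative assumptions, not figures from the article.
def ring_allreduce_bytes_per_gpu(n_params: int, bytes_per_grad: int, n_gpus: int) -> float:
    # Each GPU sends (and receives) 2 * (N - 1) / N times the gradient size.
    return 2 * (n_gpus - 1) / n_gpus * n_params * bytes_per_grad

traffic = ring_allreduce_bytes_per_gpu(
    n_params=1_000_000_000_000,  # assumed 1T-parameter model
    bytes_per_grad=2,            # fp16 gradients
    n_gpus=10_000,
)
print(f"{traffic / 1e12:.2f} TB per GPU per training step")  # ~4.00 TB
```

At volumes like that, the time to move gradients across the fabric, not the GPUs' raw FLOPS, sets the floor on step time.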

Traditional electrical copper interconnects face fundamental physical limitations at these scales. Signal degradation over copper traces introduces errors at higher data rates, requiring increasingly complex digital signal processing (DSP) that adds latency and power consumption. In current data center architectures, cables and transceivers can account for half the cost of the network and well more than half of its power draw.

As Sanchit Vir Gogia of Greyhound Research observed: “This is the moment where the industry quietly admits that AI scaling is no longer primarily a chip story. It is a communication story.”

The Numbers Tell the Story

The scale of the problem is quantifiable. In conventional pluggable optics architectures, data signals traverse long electrical paths from the switch ASIC through the PCB, across connectors, and into external transceivers. This path introduces signal integrity losses of approximately 22 dB. Each port in a traditional switch consumes around 30 watts just for the optical interface.

When you multiply these per-port figures across a data center with hundreds of thousands of optical connections, the aggregate power consumption and heat generation become staggering — easily consuming megawatts that could otherwise power additional compute. The industry has reached what analysts are calling the definitive end of the “Copper Era” in high-performance data centers.
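A back-of-envelope sketch of that aggregate draw, using the ~30-watt per-port figure cited above and an assumed, purely illustrative count of 400,000 optical connections:

```python
# Back-of-envelope aggregate optics power. The ~30 W per-port figure
# comes from the article; the 400,000-port cluster size is an assumed
# number for illustration, not a figure from the announcement.
watts_per_port = 30
optical_ports = 400_000  # assumption

total_megawatts = watts_per_port * optical_ports / 1e6
print(f"{total_megawatts:.1f} MW consumed by optical interfaces alone")  # 12.0 MW
```

Even at this rough scale, the optics alone draw power on the order of a dozen megawatts before a single GPU computes anything.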

What NVIDIA Is Buying: Silicon Photonics and Co-Packaged Optics

The partnerships with Lumentum and Coherent target two complementary technology pillars that NVIDIA views as foundational to next-generation AI infrastructure.

Co-Packaged Optics (CPO)

Co-packaged optics integrates the electro-optical conversion directly onto the switch package, eliminating the lengthy electrical paths that plague traditional designs. Rather than routing signals through multiple electrical interfaces and connectors to reach an external transceiver, CPO places the photonic engine immediately adjacent to the switching silicon.

The results are dramatic. NVIDIA's Spectrum-X Photonics platform, which uses CPO technology, achieves a 3.5x power saving compared to traditional pluggable transceivers. Per-port power consumption drops from approximately 30 watts to as low as 9 watts. Signal integrity loss drops from roughly 22 dB to just 4 dB, eliminating the need for power-hungry DSP compensation.
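Those dB figures translate directly into how much signal power survives the electrical path; a small sketch of the conversion (the 22 dB and 4 dB values are the platform claims quoted above):

```python
# Fraction of signal power remaining after a given dB loss:
# fraction = 10 ** (-dB / 10)
def db_loss_to_fraction(db: float) -> float:
    return 10 ** (-db / 10)

# Traditional pluggable path (~22 dB loss) vs. CPO path (~4 dB loss)
print(f"{db_loss_to_fraction(22):.4f}")  # ~0.0063: under 1% of the power survives
print(f"{db_loss_to_fraction(4):.4f}")   # ~0.3981: roughly 40% survives
```

A 22 dB path leaves less than 1% of the original signal power, which is why heavy DSP compensation becomes unavoidable; at 4 dB, roughly 40% survives and the DSP can be dropped.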

NVIDIA has announced two CPO-enabled switch families:

  • Quantum-X Photonics (InfiniBand): 115 Tb/s switching capacity with 144 ports at 800 Gb/s each, slated for early 2026 deployment
  • Spectrum-X Photonics (Ethernet): The flagship SN6800 delivers 409.6 Tb/s with 512 ports at 800 Gb/s, planned for the second half of 2026
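The headline throughput figures follow directly from port count times per-port line rate, as a quick sketch confirms:

```python
# Aggregate switch throughput = number of ports x per-port line rate.
def switch_capacity_tbps(ports: int, gbps_per_port: int) -> float:
    return ports * gbps_per_port / 1000  # Gb/s -> Tb/s

print(switch_capacity_tbps(144, 800))  # 115.2 Tb/s (Quantum-X, quoted as 115 Tb/s)
print(switch_capacity_tbps(512, 800))  # 409.6 Tb/s (Spectrum-X SN6800)
```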

The reliability improvements are equally significant. By reducing the number of discrete optical components in the signal path, CPO achieves a 10x improvement in resiliency compared to pluggable architectures.

Advanced Laser Components and Optical Interconnects

Both Lumentum and Coherent are leading suppliers of the laser sources and photonic components that make CPO possible. Lumentum, which has an existing relationship supplying lasers for NVIDIA's CPO modules in both the Quantum-X and Spectrum-X switch families, will use NVIDIA's investment to expand its U.S.-based fabrication capacity. As Lumentum CEO Michael Hurlston stated: “This multiyear strategic agreement reflects our shared commitment to advancing the optics technologies that will power the next generation of AI infrastructure.”

Coherent, which brings a 20-year relationship with NVIDIA to the partnership, offers complementary capabilities across multiple photonic product families. Coherent CEO Jim Anderson emphasized the company's role as “a key enabler of next-generation AI data center infrastructure.”

Critically, both deals are structured as nonexclusive agreements, meaning NVIDIA maintains supply chain diversification while ensuring it has guaranteed access to the components it needs as demand scales.

Optical Circuit Switching: The Next Layer of Innovation

Beyond CPO, both Lumentum and Coherent are developing optical circuit switches (OCS) that could further transform data center networking. Unlike traditional electrical switches that convert optical signals to electrical for processing and back to optical for transmission, OCS routes light signals directly — eliminating conversion overhead entirely.

Lumentum's R300 is a 300x300 port optical circuit switch using micro-electro-mechanical systems (MEMS) mirror technology, similar to the approach Google has deployed internally with its “Palomar” devices. Lumentum claims the R300 can reduce power consumption by 65% across systems with 100,000 accelerators and deliver 5x to 10x lower latency versus electrical Ethernet switching.

Coherent's DLX series takes a different approach, using liquid crystal technology in configurations of 64x64, 320x320, or 512x512 ports. The DLX is currently in trials with seven customers.

While OCS has a limitation — reconfiguration takes tens of milliseconds, making it unsuitable for rapidly changing traffic patterns — it is well-suited for the relatively stable spine layers of large-scale AI training networks where traffic patterns are predictable.

GTC 2026: Where the Photonics Roadmap Gets Real

The timing of these announcements is no accident. NVIDIA's GTC 2026 conference, running March 16–19 at the SAP Center in San Jose, is expected to draw over 30,000 attendees and will showcase what NVIDIA describes as its “five-layer cake” framework for AI infrastructure: energy, chips, infrastructure, models, and applications.

Among the expected announcements are details on the Rubin Ultra NVL576 platform, which will feature orthogonal backplanes and co-packaged optical interconnects. This represents the first time NVIDIA will ship a major GPU platform with photonics built into the interconnect architecture from the ground up, rather than as an add-on networking layer.

Additionally, NVIDIA is expected to reveal enhanced LPX inference racks featuring 256 LPUs per rack — a fourfold increase over the first generation — paired with new 52-layer M9 Q-glass PCBs designed specifically for the signal integrity requirements of high-speed optical interconnects.

Jensen Huang will deliver the keynote on Monday, March 16, at 11 a.m. PT, outlining NVIDIA's vision for the full AI stack from accelerated compute through AI factories, open models, agentic systems, and physical AI.

The Competitive Landscape: Securing the Supply Chain

NVIDIA's photonics investments must be understood in the context of an intensifying AI infrastructure arms race. Hyperscalers are planning to spend nearly $700 billion on data center projects in 2026 alone, with between $3 trillion and $4 trillion expected to flow into AI infrastructure by the end of the decade.

NVIDIA currently commands a 92% share of the global GPU market, and the company reported fiscal 2026 revenue of $215.9 billion — a 65% year-over-year increase. But maintaining that dominance requires controlling not just the compute layer but the entire infrastructure stack.

Amazon, Google, and other hyperscalers are developing purpose-built AI silicon that could reduce their dependence on NVIDIA GPUs. By locking in the photonics supply chain, NVIDIA ensures that even if competitors close the gap on raw compute, NVIDIA's networking infrastructure — from NVLink to InfiniBand to Spectrum-X — remains the glue that holds large-scale AI clusters together.

As Brian Jackson of Info-Tech Research Group noted, NVIDIA is “laying the groundwork for its future as a competitive provider of AI training infrastructure” in an environment where purpose-built silicon from cloud providers represents a growing competitive threat.

The company has also recently announced a $5 billion investment in Intel to develop custom x86 CPUs for NVIDIA's AI infrastructure platforms, further demonstrating a strategy of vertically integrating and securing every layer of the AI data center stack.

Market Impact: Photonics Becomes a Trillion-Dollar Conversation

The optical interconnect market in AI data centers was valued at $9.94 billion in 2025 and is projected to reach $31.04 billion by 2033, growing at a compound annual growth rate of 15.3%. Co-packaged optics already accounts for 37% of the market as AI workloads exceed the bandwidth and power limits of traditional optical and electrical links.
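Those projections are internally consistent, as a quick compound-growth check on the stated figures shows:

```python
# Compound-growth check: $9.94B in 2025 at a 15.3% CAGR through 2033.
value_2025 = 9.94    # $B, from the market figures cited
cagr = 0.153
years = 2033 - 2025  # 8 years

projected = value_2025 * (1 + cagr) ** years
print(f"${projected:.1f}B")  # ~$31.0B, consistent with the $31.04B projection
```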

NVIDIA's $4 billion commitment accelerates this trajectory significantly. Both Lumentum and Coherent saw their stock prices surge on the announcement, reflecting investor confidence that photonics has moved from a niche technology to a core component of AI infrastructure.

The emphasis on U.S.-based manufacturing in both deals also reflects the geopolitical dimensions of AI infrastructure, with semiconductor supply chain security becoming a strategic priority for both companies and governments. Both partnerships explicitly support expanding domestic fabrication capacity, aligning with broader U.S. policy goals around reshoring critical technology manufacturing.

What This Means for the Future of AI

NVIDIA's photonics investment marks an inflection point in how the industry thinks about AI scaling. The message is clear: the era when AI progress could be measured solely by GPU performance is over. The future belongs to systems-level thinking, where compute, memory, networking, and power must all scale in concert.

For AI researchers and engineers building next-generation models, the practical implications are significant. Photonics-enabled infrastructure promises to unlock larger training clusters with better utilization rates, lower power consumption per unit of compute, and more predictable performance at scale. The gap between a 10,000-GPU cluster and a 100,000-GPU cluster is not just a matter of adding more hardware — it requires fundamentally rethinking how those processors communicate.

As the industry moves toward gigawatt-scale AI factories — the term Jensen Huang used in describing the Lumentum partnership — optical interconnects are no longer optional. They are the enabling technology that determines how large AI systems can practically grow. NVIDIA's $4 billion bet on photonics is its clearest signal yet that the company sees light, not copper, as the medium through which the next generation of artificial intelligence will be built.

Key Takeaways

  • NVIDIA is investing $2 billion each in Lumentum and Coherent, totaling $4 billion, alongside multibillion-dollar purchase commitments for advanced laser and optical networking components.
  • Co-packaged optics (CPO) delivers 3.5x power savings over traditional pluggable transceivers, reducing per-port power from 30W to 9W and signal loss from 22 dB to 4 dB.
  • NVIDIA's Spectrum-X Photonics SN6800 switch achieves 409.6 Tb/s throughput with 512 ports at 800 Gb/s, while the Quantum-X Photonics InfiniBand switch delivers 115 Tb/s across 144 ports.
  • The optical interconnect market in AI data centers is projected to grow from $9.94 billion in 2025 to $31.04 billion by 2033, with CPO already holding 37% market share.
  • Optical circuit switches from both companies promise a 65% power reduction and 5x to 10x lower latency compared to electrical switching in large AI clusters.
  • Both deals are nonexclusive and emphasize expanding U.S.-based manufacturing, reflecting supply chain security priorities alongside technical objectives.
  • The announcements precede NVIDIA's GTC 2026 conference (March 16–19), where the Rubin Ultra NVL576 platform with built-in co-packaged optical interconnects is expected to be unveiled.

Skila AI Editorial Team

The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.
