We’re making sure computers keep getting better. With light.
Founded by MIT researchers in 2017. $850M raised. $4.4B valuation. Building the photonic technologies that will power the next era of AI infrastructure.
Mission
AI is advancing exponentially. The models driving breakthroughs in science, medicine, and human knowledge require unprecedented computational scale—100,000+ GPUs training in coherent clusters.
But these systems are bandwidth-starved. GPU utilization sits at 40-60% because chips spend more time waiting for data than computing.
The interconnect is the bottleneck.
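The utilization claim above follows from simple arithmetic: in a distributed training step, the GPU is only doing useful work for the fraction of wall-clock time not spent waiting on communication. A minimal sketch, with purely illustrative numbers (not Lightmatter measurements) and a hypothetical `utilization` helper:

```python
def utilization(compute_s: float, comm_s: float, overlap: float = 0.0) -> float:
    """Fraction of wall-clock time the GPU spends computing.

    compute_s: seconds of math per training step
    comm_s:    seconds of interconnect traffic (e.g. gradient exchange) per step
    overlap:   fraction of communication hidden behind computation (0..1)
    """
    exposed_comm = comm_s * (1.0 - overlap)
    return compute_s / (compute_s + exposed_comm)

# Hypothetical step: 60 ms of math, 60 ms of gradient exchange, no overlap.
print(utilization(0.060, 0.060))   # 0.5 -- the 40-60% range cited above

# Double the interconnect bandwidth and the exposed communication halves:
print(utilization(0.060, 0.030))   # ~0.67
```

The point of the sketch: once communication time dominates, adding compute barely moves throughput, while adding interconnect bandwidth does.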
Lightmatter builds photonic interconnects that eliminate this constraint. Our 3D integrated photonic circuits deliver 114 Tbps bandwidth—the industry’s highest—enabling frontier AI models to train 8x faster.
We’re not incrementally improving existing technology. We’re creating a fundamentally new architecture—edgeless I/O—that will scale AI infrastructure for the next decade.
Locations
Mountain View (HQ)
Boston
Hsinchu
Toronto
Leadership
Scaling with Light
Lightmatter’s team combines expertise in silicon photonics, high-speed SerDes design, advanced packaging, AI systems architecture, and hyperscale data center infrastructure. We’re hiring across all functions.
Board of Directors
Nicholas Harris
Darius Bunandar
Olivia Nottebohm
Santo Politi
Erik Nordlander
Jeff Smith
Kushagra Vaid
Advisors
Why Lightmatter
Architectural Leadership
Edgeless I/O is not incremental—it’s a fundamental shift. While others optimize perimeter-constrained designs, we’ve eliminated the constraint entirely.
First-mover advantage in category-defining technology. Decade-scale roadmap for continued bandwidth scaling.
114 Tbps today. 1+ Pbps tomorrow.
Production Ready
Not research. Not vaporware. Production-ready technology shipping in 2025.
Partnerships with TSMC, GlobalFoundries, and Tower ensure high-volume manufacturing capability. Reference platform in customer hands today.
$4.4B valuation validates market demand and technical achievement.
Solving Real Problems
GPU utilization stuck at 40-60% due to bandwidth bottlenecks. Hundreds of millions of dollars in stranded compute capacity. Models limited by communication overhead, not compute.
Passage eliminates the constraint. 8x faster training. 100,000+ GPU clusters. Frontier AI becomes economically viable.
Team & Expertise
Deep expertise spanning silicon photonics, high-speed analog design, advanced packaging, AI systems, and hyperscale infrastructure.
Decades of combined experience from leading semiconductor, optics, and data center companies.
We understand both the physics and the product.
Join Us
We’re hiring engineers, researchers, and leaders across all functions to build the future of AI infrastructure.