Photonics powering AI data centres: The latest innovations

Written by James Bourne
Data centres remain one of the key applications for photonic computing, and with generative AI only growing in importance for consumers and organisations, a way to mitigate the energy demands of AI compute is increasingly necessary.
Energy efficiency is one of the major benefits of photonics compared with electronics. Replacing electrons with photons can offer, according to estimates from Cambridge Consultants, a tenfold increase in energy efficiency, along with a 10-50x bandwidth improvement over traditional computing. The UK-based Photonics Leadership Group, in its UK Photonics 2035: The Vision report, argues that innovation in integrated photonics will reduce data centre energy consumption by more than 50% by 2035.
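To see how a component-level gain of that size could translate into a facility-level figure like the 50% above, the quick Python sketch below walks through the arithmetic. It uses purely illustrative assumptions about what share of a data centre's power photonics can address; none of the numbers in it come from Cambridge Consultants or the Photonics Leadership Group.

```python
# Back-of-the-envelope sketch: how a component-level efficiency gain maps to
# facility-level savings. All figures below are illustrative assumptions,
# not taken from the reports cited in the article.

def facility_savings(component_share: float, efficiency_gain: float) -> float:
    """Fraction of total facility power saved when a component drawing
    `component_share` of the total becomes `efficiency_gain` times more efficient."""
    return component_share * (1 - 1 / efficiency_gain)

# Example: if interconnect and switching drew 20% of facility power and photonics
# made that portion 10x more efficient, the facility-level saving would be ~18%.
print(f"{facility_savings(component_share=0.20, efficiency_gain=10):.0%}")  # -> 18%

# A >50% facility-level reduction would therefore imply photonics reaching well
# beyond the network fabric, e.g. into compute itself.
print(f"{facility_savings(component_share=0.60, efficiency_gain=10):.0%}")  # -> 54%
```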
Alongside this, the forecast for data centre energy consumption is gloomy. Goldman Sachs projects a 160% increase by 2030, while the chief executive of the National Grid expects power consumption to grow six-fold over the next decade. The seven largest US tech companies, known as the Magnificent Seven, now represent almost one third (32%) of the S&P 500, up from 12% a decade ago, and according to the European Central Bank their energy consumption has grown much faster than that of their S&P 500 peers.
AI data centres, which house IT infrastructure built specifically for training, deploying and delivering AI applications and services, are one answer. Yet they require massive data storage systems, high-powered accelerator-heavy servers rather than conventional CPU racks, and liquid cooling technology. The latter offers greater capacity with enhanced energy efficiency, but complexity and upfront investment costs remain a barrier.
Photonics is therefore being seen as an alternative for AI data centres, and there is a tranche of innovation in this space as companies ranging from major vendors to forward-thinking startups look to build products. Here are a few interesting examples.
At the end of March, iPronics announced the launch of ONE-32, claimed to be the first optical circuit switch (OCS) product based on silicon photonics. Optical circuit switching uses optical signals to establish direct communication paths between endpoints, eliminating the need for optical-to-electrical-to-optical conversions and reducing latency and power consumption.
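Conceptually, an OCS behaves less like a packet switch and more like a remotely reconfigurable patch panel: set up a light path once, and traffic then flows through it without touching the electrical domain. The toy Python model below is a purely illustrative sketch of that idea; the class, method names and port count are invented for this example and are not based on iPronics' product or API.

```python
# Toy model of an optical circuit switch (OCS): a reconfigurable one-to-one map
# between input and output ports. Once a circuit is set up, light passes straight
# through with no optical-electrical-optical (O-E-O) conversion per packet.
# Illustrative sketch only, not any vendor's API.

class OpticalCircuitSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.circuits: dict[int, int] = {}   # input port -> output port

    def connect(self, in_port: int, out_port: int) -> None:
        """Establish a direct optical path; a slow control-plane operation."""
        if out_port in self.circuits.values():
            raise ValueError(f"output port {out_port} already in use")
        self.circuits[in_port] = out_port

    def forward(self, in_port: int, payload: bytes) -> tuple[int, bytes]:
        """Data-plane 'forwarding' simply follows the configured light path;
        the payload is never converted back to the electrical domain here."""
        return self.circuits[in_port], payload

switch = OpticalCircuitSwitch(num_ports=32)      # e.g. a 32-port device
switch.connect(in_port=0, out_port=17)
print(switch.forward(0, b"training gradients"))  # -> (17, b'training gradients')
```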
iPronics says the ONE-32, which leverages a CMOS (complementary metal-oxide-semiconductor) silicon photonics platform, will cut switch power consumption by up to 50%. iPronics chief executive Christian Dupont said the ONE-32 ‘unlock[s] optical networking’s full potential for data centres.’
In the same week, Lumentum announced that its R300 OCS was being sampled by ‘multiple hyperscale customers’. The R300 is based on MEMS (micro-electro-mechanical systems) optical switching technology, with the company saying the product adds to its ‘broad portfolio of innovative photonic solutions that increase AI data centre scalability.’
NVIDIA, meanwhile, says it is ‘breaking new ground’ by integrating silicon photonics directly with its NVIDIA Quantum and Spectrum switch integrated circuits. The company’s co-packaged silicon photonics switch systems offer 3.5x lower power consumption, lower latency and ‘dramatically’ improved network resiliency compared with traditional pluggable optical transceivers. “This is the dawn of a new era where efficiency meets performance, accelerating AI breakthroughs and reshaping the data centre landscape for generations to come,” the company wrote.
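A rough calculation shows why a 3.5x reduction matters at cluster scale. In the Python sketch below, only the 3.5x factor is taken from NVIDIA’s claim; the per-port wattage and port count are assumed round numbers chosen purely for illustration.

```python
# Rough fleet-level comparison of network-optics power, using only the 3.5x
# figure quoted above. Per-port wattage and port count are assumed round
# numbers for illustration, not NVIDIA specifications.

pluggable_watts_per_port = 15.0        # assumed draw of one pluggable transceiver
ports = 100_000                        # assumed optical ports in a large AI cluster
cpo_watts_per_port = pluggable_watts_per_port / 3.5   # "3.5x lower power consumption"

pluggable_total_mw = pluggable_watts_per_port * ports / 1e6
cpo_total_mw = cpo_watts_per_port * ports / 1e6

print(f"pluggables: {pluggable_total_mw:.1f} MW, "
      f"co-packaged optics: {cpo_total_mw:.1f} MW, "
      f"saving: {pluggable_total_mw - cpo_total_mw:.1f} MW")
# -> pluggables: 1.5 MW, co-packaged optics: 0.4 MW, saving: 1.1 MW
```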
On the same day as iPronics’ release, Lightmatter made two announcements of its own, unveiling the Passage M1000 and the Passage L200. Designed for next-generation XPUs – a catch-all term for processing units of various architectures – and switches, the latter looks to deliver the essential elements of co-packaged optics, integrating optics and electronics to reduce power and increase bandwidth in data networks.
“AI data centre interconnects face growing bandwidth and power challenges,” said Andrew Schmitt, founder and directing analyst at Cignal AI. “Co-packaged optics – integrating optics directly onto XPUs and switches – is the inevitable solution.”
Collaboration is the watchword going forward for integrating optical technologies into the data centre. Writing for Electronic Design, Adam Carter notes that “the future of AI and data centres is bright, but it requires collaboration across the industry – from chip designers to system integrators and data centre operators to energy providers.”
This is echoed by PhotonDelta, a non-profit organisation supporting an end-to-end value chain for photonic chips, which notes in a recent editorial post that the ‘path to future AI data centres is paved with optical interconnect.’ Lightmatter is one of several companies the organisation cites as bringing chip-level optical I/O solutions to market.
“When viewed from the perspective of chip foundries that have had decades to refine their CMOS production processes, chip-level optical interconnect represents more of a revolution than an evolution,” PhotonDelta notes. “Fortunately, the revolution is well advanced and accelerating thanks to the combined efforts of fabs, chipmakers, designers, and industry accelerators.”
Want to find out more about topics such as this? Microelectronics UK takes place on 24-25 September 2025, where experts will explore these emerging possibilities in depth. Pre-register your place today.