Henkel Adhesive Technologies

Custom silicon and photonics: powering the next wave of hyperscale AI

Hyperscale AI is driving a shift from traditional GPUs to fully co-designed systems built on custom silicon, silicon photonics, advanced cooling, and specialized materials to achieve sustainable, scalable performance.

Dhara Patel
Business development manager

4 min. read

AI’s exponential growth is pushing conventional graphics processing unit (GPU) clusters to their physical and architectural limits. GPUs enabled the first wave of deep learning, but hyperscalers are now operating beyond what general-purpose accelerators and copper interconnects can sustain. To meet the demands of trillion-parameter models and global inference, operators are increasingly deploying custom silicon and silicon photonics, technologies designed for extreme bandwidth, efficiency, and scale.

This shift is not merely a performance upgrade; it is a system-level transformation. At hyperscale, compute, memory, interconnect, cooling, and materials must function as a unified, optimized system.

The system bottlenecks

GPUs remain essential, but hyperscale deployments expose structural constraints. Power density is rising sharply, with racks routinely exceeding 70–80 kW, far beyond the capacity of air cooling. Liquid and immersion cooling are essential to maintain thermal performance, reliability, and serviceability.
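
To put that density in perspective, here is a rough, back-of-the-envelope sketch (not from the article) of the airflow an 80 kW rack would need if it stayed on air cooling; the 15°C air temperature rise and standard air properties are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope estimate: airflow needed to air-cool one 80 kW rack.
# Assumed values: standard air density and specific heat, 15 K temperature rise.
RACK_POWER_W = 80_000      # rack heat load, per the 70-80 kW figure above
AIR_CP = 1006.0            # specific heat of air, J/(kg*K) (assumed)
AIR_DENSITY = 1.2          # air density, kg/m^3 (assumed)
DELTA_T = 15.0             # inlet-to-outlet temperature rise, K (assumed)

mass_flow = RACK_POWER_W / (AIR_CP * DELTA_T)   # kg/s of air
volume_flow = mass_flow / AIR_DENSITY           # m^3/s
cfm = volume_flow * 2118.88                     # m^3/s -> cubic feet per minute

print(f"Airflow required: {volume_flow:.1f} m^3/s (~{cfm:,.0f} CFM) per rack")
# Roughly 4.4 m^3/s (~9,400 CFM) through a single rack, which is why liquid
# and immersion cooling take over at these power densities.
```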

Electrical interconnects are also reaching physical limits. Copper signaling degrades with distance and can exceed 10 pJ per bit at 400G and above, while large GPU clusters experience synchronization latencies across thousands of nodes. Dependence on a single GPU vendor restricts architectural flexibility and exposes operators to supply-chain risk. These pressures are accelerating investment in custom accelerators built specifically for hyperscale AI.

Custom silicon: a purpose-built architecture

Custom accelerators allow hyperscalers to design processors tuned to training, inference, search, and video workloads. Training silicon emphasizes matrix throughput and high-bandwidth fabrics, while inference engines optimize for latency, efficiency, and cost.

Chiplet-based architectures are replacing monolithic dies, enabling heterogeneous integration and higher yields. Advanced packaging, including 2.5D interposers and 3D stacking, boosts compute-memory bandwidth but increases thermal density and complicates power delivery. Interconnect standards such as CXL (Compute Express Link) allow shared memory pools and low-latency accelerator-to-accelerator communication beyond PCIe (Peripheral Component Interconnect Express). These advances let designers rethink memory hierarchies and cluster architecture, but they also heighten expectations for cooling, reliability, and materials.

Materials: the hidden driver of performance

Materials science has become a strategic driver of system performance and reliability. Modern hyperscale systems rely on thermal interface materials, adhesives, underfills, encapsulants, and coatings engineered to handle extreme heat, mechanical stress, and repeated thermal cycling. Co-packaged optics operating at 100–105°C require materials that maintain micron-level alignment and resist vibration and moisture, while rack-level liquid cooling demands materials that endure fluid exposure and support modular field service.

Key aspects of materials and system design include:

  • High-performance interconnects: ensuring signal integrity from chips to optical modules while managing electrical and mechanical stress system-wide.
  • Rack-scale cooling: encapsulants and interface materials engineered for repeated thermal cycles and fluid exposure, vital for uptime and serviceability.
  • Co-packaged optics reliability: photonic modules operating at 105°C+ require materials that support precise optical alignment and robust long-term performance.
  • Modular integration: adhesives and coatings enable dense, flexible module placement, minimizing loss, simplifying repair, and supporting next-generation maintainability.

These innovations ensure operational stability, preserve signal quality, and allow dense, high-power architectures to function reliably at scale.

Silicon photonics: breaking the bandwidth bottleneck

Silicon photonics is transforming hyperscale connectivity, offering bandwidth and energy efficiency beyond copper. AI clusters generate enormous east-west traffic, which copper cannot carry efficiently at distance. Optical signaling reduces latency and power, often operating below 5 pJ per bit, less than half that of equivalent copper links.
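
To see what those per-bit figures mean at cluster scale, the short sketch below multiplies energy per bit by link bandwidth and link count; the 10 pJ/bit and 5 pJ/bit values come from this article, while the 800G link rate and 50,000-link cluster are assumptions for illustration only.

```python
# Illustrative interconnect power comparison for a hypothetical cluster.
# Energy-per-bit figures are from the article; link rate and count are assumed.
COPPER_PJ_PER_BIT = 10.0   # copper at 400G+ (cited above)
OPTICAL_PJ_PER_BIT = 5.0   # optical links (cited above)
LINK_RATE_GBPS = 800       # per-link data rate, Gb/s (assumed 800G modules)
NUM_LINKS = 50_000         # links in the hypothetical cluster (assumed)

def interconnect_power_kw(pj_per_bit: float) -> float:
    """Total link power in kW: energy per bit x bits per second x link count."""
    watts_per_link = pj_per_bit * 1e-12 * LINK_RATE_GBPS * 1e9
    return watts_per_link * NUM_LINKS / 1_000

print(f"Copper:  {interconnect_power_kw(COPPER_PJ_PER_BIT):,.0f} kW")   # ~400 kW
print(f"Optical: {interconnect_power_kw(OPTICAL_PJ_PER_BIT):,.0f} kW")  # ~200 kW
```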

The photonics roadmap is compelling. Data centers are deploying 800G and 1.6T modules today, with 3.2T solutions on the horizon. Co-packaged optics integrate optical engines directly on the application-specific integrated circuit, shortening electrical paths and reducing latency and losses, which is critical for model-parallel training. Modern platforms also improve density, serviceability, and upgradeability, and when paired with advanced materials, can deliver up to 3.5× improvement in power efficiency while maintaining thermal resilience.

Moving toward co-designed infrastructure

Hyperscale AI now requires integrating silicon, optics, cooling, power, and materials. Memory disaggregation using photonics will enable shared rack-scale memory pools with near-local performance. On-chip wavelength division multiplexing multiplies bandwidth per fiber, extending optical scalability. Higher-voltage rack power architectures reduce conversion losses, while liquid cooling supports stacked dies, high-power co-packaged optics modules, and dense chiplet assemblies. Materials designed for reliability and serviceability ensure systems function dependably across global deployments.
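
As a quick illustration of the wavelength-division point, the arithmetic below shows how per-fiber bandwidth scales with channel count; the eight channels and 200 Gb/s per wavelength are assumed values, not figures from this article.

```python
# Illustrative WDM scaling: per-fiber bandwidth grows with wavelength count,
# without laying additional fiber. Channel count and per-channel rate assumed.
CHANNELS = 8               # wavelengths multiplexed onto one fiber (assumed)
GBPS_PER_CHANNEL = 200     # data rate per wavelength, Gb/s (assumed)

fiber_tbps = CHANNELS * GBPS_PER_CHANNEL / 1_000
print(f"Per-fiber bandwidth: {fiber_tbps:.1f} Tb/s")  # 1.6 Tb/s with these values
```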

Every subsystem influences the next: packaging shapes cooling, cooling determines reliability, materials govern lifespan, and optical-silicon integration sets the ceiling for bandwidth and performance. Success requires engineers to design the system as an integrated whole rather than optimizing components in isolation.

A new era of hyperscale AI

The next wave of AI infrastructure will be built on custom accelerators, silicon photonics, advanced cooling, and robust materials designed for extreme thermal and mechanical environments. Leaders will not simply add compute; they will deploy balanced, energy-efficient, and reliable systems that scale sustainably.

AI has shifted hardware design from component-level optimization to full-stack system engineering. Those who master the integration of compute, optics, thermal management, power delivery, and materials will define the infrastructure that enables the next era of artificial intelligence.
