Explore how 800G and 1.6T optics are helping turn AI evolution into revolution by enhancing data centre speed and efficiency. Discover how Henkel is driving innovation to meet the growing demands of AI applications.
Market Strategy Manager, Growth
With artificial intelligence (AI) hitting the headlines, from crystal-ball-gazing about AI's potential impact on how we work, to the latest ins and outs of the AI corporate world, it seems a good time to reflect on how we got here, and what the future holds for AI.
In this blog, we leave the big questions to the experts and futurists, and focus instead on what AI is, how it developed, and the practical question of how data centre technologies are innovating to meet the explosive growth in demand for AI capacity.
Specifically, we consider the role of 800G/1.6T optics in providing the speed and bandwidth that data centre networks need to support AI because, no matter how sophisticated it gets, the AI revolution can go nowhere without the network to deliver it.
Anyone familiar with the film "2001: A Space Odyssey" will know that the concept of AI goes back at least 56 years, to the HAL 9000 computer which did so much to spoil the astronauts' day.
The writer, Arthur C. Clarke, claimed the name HAL didn't come from shifting the letters 'IBM' one back in the alphabet, as rumoured, but stood for 'Heuristically programmed ALgorithmic computer', which happens to be a pretty good starting point for defining an AI machine.
A traditional computer is programmed using fixed algorithms and instructions. AI sets itself apart by combining these with heuristics, or the ability to learn for itself and make decisions or create content based on what it has learnt, thus mimicking human intelligence.
The history of artificial intelligence goes back long before the fictional HAL 9000.
- Automata, human-like dolls and machines that seem to move of their own will, existed as long ago as the 17th century, as highlighted by the CNET article "Before they were robots."
- In 1955, John McCarthy, a mathematics professor and pioneer in artificial intelligence, coined the term in his proposal for a summer workshop held at Dartmouth College in 1956, and the field gathered momentum through the late 1950s.
- Fast forward to 1966, when Joseph Weizenbaum created the first chatbot, ELIZA, inspired by Eliza Doolittle in Pygmalion. ELIZA was a simulated psychotherapist that used natural language processing (NLP) to converse with humans.
- In the 1980s, Japan's Fifth Generation Computer Project (FGCP) set out to create a new generation of computers focused on inference-based processing and AI capabilities. While it is little discussed now, it represents an early recognition of the potential transformational role of AI.
- Chess grandmaster Garry Kasparov was famously and stunningly defeated by IBM's Deep Blue computer in 1997.
- A few years later, NASA's Mars rovers Spirit and Opportunity started navigating the red planet on their own. Today, two rovers, Curiosity and Perseverance, are active on Mars. Both are equipped with the AEGIS AI system, using computer vision and neural networks to select geological targets to investigate.
- Apple Siri launched in 2011, with Amazon Alexa coming to market in 2014, and they have been helping us pick the best take-away on Friday night ever since.
These examples trace what was largely the evolutionary development of AI, until 30 November 2022, when the launch of ChatGPT, the first widely accessible generative AI capability, heralded what can be seen as the start of the AI revolution.
Whatever this revolution means for society, it is certain that it will have a big impact on the computing and data centre industry, and innovations like 800G/1.6T optics are helping it rise to the challenge.
Generative AI and deep learning training workloads are extremely data-intensive. AI uses an iterative process to learn, repeatedly reviewing large datasets to build its understanding, which requires significant compute and storage resources. A 2023 datacentral.com article highlights that GPT-4 reportedly has 1.8 trillion parameters and a training dataset exceeding a petabyte for text-based generative AI, and these figures grow with each new model generation as it continues to learn.
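To get a feel for why models at this scale strain compute, storage and the network that moves data between them, a back-of-the-envelope calculation helps. The sketch below takes the 1.8-trillion-parameter figure cited above and multiplies it by standard numeric precisions; the bytes-per-parameter values are general industry conventions, not details of any specific model.

```python
# Back-of-the-envelope sizing of a large language model's weights.
# Illustrative only: the parameter count is the figure cited in the text;
# bytes-per-parameter values are standard numeric precisions.
PARAMS = 1.8e12  # 1.8 trillion parameters

BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "INT8": 1}

for precision, nbytes in BYTES_PER_PARAM.items():
    terabytes = PARAMS * nbytes / 1e12
    print(f"{precision}: ~{terabytes:.1f} TB just to hold the weights")
# FP32: ~7.2 TB, FP16/BF16: ~3.6 TB, INT8: ~1.8 TB
```

Even before any training data is read, the weights alone are far too large for a single server, which is why model state and datasets are spread across clusters and shuttled continuously over the data centre network.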
The graphics processing units (GPUs) in AI servers, especially in 800G clusters, are well equipped to handle computationally intensive AI training and inference tasks. Companies like Nvidia are doing so well precisely because their GPUs and related products are crucial to advancing AI.
Generative AI training workloads and datasets place significant large-scale data processing and low-latency demands on AI data centre networks. Articles from Juniper Networks and Network World detail how two major data centre network providers are addressing these challenges, while a 2023 Keysight blog discusses the vital role of 1.6T networking for emerging technologies like AI.
To meet the growing bandwidth and speed demands of AI data centre networking, both 800G Ethernet and 800G/1.6T optics are required, with 800G Ethernet standards still maturing. Various technology options exist, from pluggable transceivers, including Linear-drive Pluggable Optics (LPO), to Co-Packaged Optics (CPO). While each of these options has its own pros and cons, they all play a part in the overall movement to 800G/1.6T.
Priyank Shukla of Synopsys, a member of the Ethernet Alliance, predicts that AI data centre networks will be the first adopters, likely deploying 800G Ethernet by 2025 and 1.6T Ethernet by 2027. Many vendors have released 800G optical transceivers, and 1.6T transceivers have also been demonstrated. A January 2024 Ethernet Alliance blog discusses these and other developments in more depth.
To support AI data centre networks, equipment providers are introducing 800G-capable 51.2 Tbps switches for AI server connectivity. These switches can expose 64 OSFP800 or QSFP-DD800 ports (or 32 ports at 1.6T), enabling the use of 800G/1.6T optics. 800G/1.6T optical transceivers address the challenges of the AI data centre by providing the bandwidth and speed to process data efficiently and reduce latency for AI applications.
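The relationship between switch capacity and faceplate ports is simple division: total ASIC bandwidth divided by per-port speed. The sketch below works through the numbers for a 51.2 Tbps switch; it is illustrative arithmetic only, since real faceplate counts also depend on form factor and breakout configuration.

```python
# Port-count arithmetic for a 51.2 Tbps switch ASIC.
# Illustrative sketch: actual faceplate layouts depend on the chosen
# form factor (OSFP800, QSFP-DD800, etc.) and breakout options.
SWITCH_CAPACITY_GBPS = 51_200  # 51.2 Tbps

for port_speed_gbps in (800, 1_600):
    ports = SWITCH_CAPACITY_GBPS // port_speed_gbps
    print(f"{port_speed_gbps}G ports supported: {ports}")
# 800G ports supported: 64
# 1600G ports supported: 32
```

The same 51.2 Tbps of capacity can therefore serve a dense 800G fabric today and, with 1.6T optics, halve the port count (and cabling) for the same aggregate bandwidth.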
As a market leader in materials technology and specialist in materials solutions for data centre applications, Henkel has a key role to play in building 800G/1.6T capability for the AI data centre.
For example, data centres consume 1–2% of the world's energy, and AI is an energy-intensive application. Designers and manufacturers of 800G/1.6T optical transceivers must accelerate their efforts to reduce power consumption, and Henkel is working closely with leading optical transceiver manufacturers to design lower-power modules.
Read the 2024 Pulse Report for more insights into the state of data centre technology.
Our support centre and experts are ready to help you find solutions for your business needs.