How to design and build a data center for the new era of AI

Artificial intelligence (AI) continues to dominate the headlines in the data center sector. From optimizing workloads to improving customer satisfaction, the technology has quickly been touted as integral to the next era of data center operations.

However, AI has already led to data centers coming under pressure, especially in terms of energy consumption. As a result, companies in the industry are being forced to design and build facilities in new ways.

With insights from Black & White Engineering (B&W), Vertiv, atNorth and KPMG UK, we focus on how data center companies can design and build new data centers to accommodate future new and disruptive technologies.

We are facing a “new” era of AI

The vast majority of data centers currently in operation are not designed to support the high-performance requirements of AI-driven workloads. The infrastructure these workloads demand differs from that of traditional data centers because it generates far more heat than existing facilities can dissipate quickly enough.

“The industry is now facing unprecedented demand for new infrastructure solutions to efficiently power, cool and support this next generation of computing. As a result, AI is fundamentally changing the architecture of IT infrastructure,” explains Rajesh Sennik, Head of Data Center Advisory at KPMG UK.

AI workloads also require near-instantaneous processing of large amounts of data, which consumes a significant amount of energy. This means data-intensive companies will be looking for modern sites designed specifically for AI.

“A data center configured for typical enterprise applications may require 7-10 kilowatts (kW) of power per rack. But for AI, the power requirement increases to over 30 kW per rack,” says Anna Kristín Pálsdóttir, Chief Development Officer at atNorth.

“As a result, existing data center campuses will need to be modernized – not only to accommodate the digital infrastructure associated with AI workloads, but also to support substantially upgraded cooling systems, power distribution units (PDUs), generators and uninterruptible power supplies (UPS).”
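To put those rack-density figures in context, the back-of-envelope sketch below compares the total IT load of a hypothetical hall at the enterprise density Pálsdóttir quotes (7-10 kW per rack) versus an assumed AI density of 35 kW per rack. The rack count and the exact AI figure are illustrative assumptions, not numbers from the article's sources.

```python
# Illustrative comparison of hall-level IT load at the rack densities
# quoted above. Rack count and the 35 kW AI figure are assumptions.

RACKS = 200                     # hypothetical hall size
ENTERPRISE_KW_PER_RACK = 8.5    # midpoint of the 7-10 kW range quoted above
AI_KW_PER_RACK = 35.0           # assumed value for "over 30 kW per rack"

def hall_it_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT load of a hall, ignoring power distribution losses."""
    return racks * kw_per_rack

enterprise_load = hall_it_load_kw(RACKS, ENTERPRISE_KW_PER_RACK)
ai_load = hall_it_load_kw(RACKS, AI_KW_PER_RACK)

# Nearly all IT power ends up as heat, so the cooling plant, UPS and
# generators must be sized for roughly the same figure.
print(f"Enterprise hall: {enterprise_load:,.0f} kW IT load")
print(f"AI hall:         {ai_load:,.0f} kW IT load "
      f"({ai_load / enterprise_load:.1f}x the power and cooling capacity)")
```

For the same footprint, the AI hall draws roughly four times the power and rejects roughly four times the heat, which is why retrofitting existing campuses is rarely a matter of swapping servers alone.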

Some of the key differences between traditional and AI data centers lie in rack density, cooling technology and server technology. Air-cooled systems are no longer sufficient for modern workloads, leading companies to turn to direct-to-chip liquid cooling systems to improve heat transfer and make cooling more efficient.

“Data centers now have to accommodate increasingly dense IT loads, making optimized power and cooling management even more important,” explains Alex Brew, regional director for Northern Europe at Vertiv. “With rack density forecast to exceed 100kW per rack, the design and deployment of power and cooling infrastructure has become significantly more complex.”
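As a rough illustration of what liquid cooling at Brew's forecast densities involves, the sketch below estimates the coolant flow a direct-to-chip loop would need to remove 100 kW of heat from a single rack, using the standard relation Q = ṁ · c_p · ΔT for water. The 10 K coolant temperature rise is our assumption, not a figure from Vertiv.

```python
# Rough estimate of coolant flow for one 100 kW rack on a water-based
# direct-to-chip loop, using Q = m_dot * c_p * delta_T.
# The 10 K temperature rise across the rack is an assumption.

RACK_HEAT_KW = 100.0     # per-rack load cited above
CP_WATER = 4.186         # specific heat of water, kJ/(kg*K)
DELTA_T_K = 10.0         # assumed coolant temperature rise across the rack
WATER_DENSITY = 1000.0   # kg/m^3

mass_flow_kg_s = RACK_HEAT_KW / (CP_WATER * DELTA_T_K)            # kg/s
volume_flow_l_min = mass_flow_kg_s / WATER_DENSITY * 1000 * 60    # litres/min

print(f"Required coolant flow: {mass_flow_kg_s:.2f} kg/s "
      f"(~{volume_flow_l_min:.0f} L/min per 100 kW rack)")
```

Even under these simple assumptions, each rack needs on the order of 140 litres of coolant per minute, which is why pipework, pumps and manifolds now feature in data hall design alongside the electrical plant.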

Adam Asquith, technical director at Black & White Engineering, adds: “Given the projected growth rates and increases in chip TDP or rack density, new methods of cooling and power distribution will need to be introduced.” These may include immersion cooling and power distribution using higher current-carrying conductors.
