You’ve probably heard about the potential upsides of high power density: More power in a smaller footprint equals more efficiency. For applications that have specific, high-compute workloads, the benefits are apparent.
But high power density also brings with it some considerations, and not all environments can support it. For many, high-density colocation is a simple, straightforward way to reap the benefits of high power density, while also allowing you to lean on the expertise of a trusted colocation service provider.
First things first: What constitutes “high” power density? For background, a lot of data centers—including many owned and operated by colocation service providers—are simply not designed for high density.
Many average anywhere from 90-150 watts per square foot. “Normal” deployed power density ranges from 5-6kW per rack, and anything above that could be considered high density, up to around 30kW per rack. Some data centers exceed this, even achieving ultrahigh density of up to 50-60kW per rack and beyond.
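To see why those two ways of stating density matter, a bit of back-of-the-envelope math helps. The sketch below converts a floor's watts-per-square-foot rating into an average per-rack budget; the ~30 square feet per rack (cabinet plus its share of aisle space) is an assumption for illustration, not a measured figure.

```python
# Illustrative only: rough floor-loading math for rack power density.
# SQ_FT_PER_RACK (~30 sq ft: cabinet plus aisle share) is an assumed
# figure for this sketch, not a standard value.
SQ_FT_PER_RACK = 30

def max_rack_kw(watts_per_sq_ft, sq_ft_per_rack=SQ_FT_PER_RACK):
    """Average rack power (kW) a floor design can feed."""
    return watts_per_sq_ft * sq_ft_per_rack / 1000

# A 150 W/sq ft floor supports roughly 4.5 kW per rack on average --
# well short of a 30 kW high-density cabinet.
print(max_rack_kw(150))  # 4.5
```

Under these assumptions, even the top of the typical 90-150 W/sq ft range works out to under 5 kW per rack, which is why high density has to be designed in rather than retrofitted on paper.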
Two takeaways here: 1) Power density can greatly vary from one data center to another, and more importantly, 2) high density is something that a data center or colocation space must be built to support.
But even when data centers are provisioned for high density, there is often a discrepancy between what they can theoretically support per cabinet and what any given customer actually uses. Even savvy customers can find it hard to configure their infrastructure for optimal power draw, or even to properly assess their power needs in the first place.
This is why working with the right colocation service provider is so crucial. They can help you rightsize your setup, avoiding overprovisioned power you'll never use, and they will ensure your equipment is properly configured to protect it from overheating.
This brings us to the primary limiting factor for power density: heat rejection, since high power density equipment configurations generate a tremendous amount of heat. Air flow management utilizing well-designed hot and cold aisles is essential, forming a tightly controlled, closed-loop environment that prevents inefficient mixing of hot and cold air. Ultrahigh-density equipment configurations may require further steps, including close-coupled cooling or other technologies, like liquid cooling.
Deploying air flow management devices and containment technologies that maximize cooling efficiency is also needed for a trouble-free high-density configuration. So when it comes to physical infrastructure, an experienced data center and colocation service provider is invaluable; they will know how to set up and implement the right solution for your configuration.
At INAP data centers, our own specially engineered containment technology allows us to maintain tight control of our colocation environments, in addition to being highly scalable at lower costs. This empowers us to quickly deploy the right containment solutions for specific, high-density equipment configurations, while keeping us flexible enough to adapt our solutions as requirements shift and evolve over time.
Data centers that are enabled for high power density, like many of INAP’s, allow customers to achieve the same amount of compute in a smaller footprint, consolidating equipment needs. So any application with high-intensity, high-compute workloads will make great use of high power density data center space: e.g., cryptocurrency mining/blockchain, artificial intelligence or certain gaming applications.
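The consolidation benefit is easy to quantify. The numbers below are hypothetical (a 60 kW IT load, with the 6 kW and 20 kW per-rack figures standing in for "normal" and high density), but the arithmetic shows how the cabinet count shrinks as density rises.

```python
import math

# Hypothetical workload for illustration: 60 kW of total IT load.
TOTAL_KW = 60

def racks_needed(total_kw, kw_per_rack):
    """Cabinets required to house a given IT load at a given density."""
    return math.ceil(total_kw / kw_per_rack)

print(racks_needed(TOTAL_KW, 6))   # 10 cabinets at "normal" density
print(racks_needed(TOTAL_KW, 20))  # 3 cabinets at high density
```

The same workload that fills ten cabinets at typical density fits in three at high density, shrinking the footprint you pay for.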
But an expert data center and colocation service provider doesn't just provide the space: They can also verify that your hardware is properly configured, equipment airflow is correct, equipment is racked properly and airflow augmentation devices are installed where needed.
And perhaps counterintuitively, higher power density also increases the effectiveness of cooling systems: More hot air returned to the cooling system means greater efficiency. The primary source of inefficiency is a low temperature differential, which forces cooling infrastructure to repeatedly stage up and down. In other words, when the system runs constantly at a high temperature differential, it operates at its most efficient.
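The delta-T effect can be sketched with the standard sensible-heat relation for air (heat removed equals airflow times air density times specific heat times temperature differential). The 30 kW rack and the 6 C and 12 C differentials below are illustrative values, not figures from any particular facility.

```python
# Back-of-the-envelope sensible-heat math: why a high return-air
# temperature differential (delta-T) reduces the work a cooling
# system must do. Constants are standard approximations for air
# at room conditions.
AIR_DENSITY = 1.2          # kg/m^3
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)

def airflow_needed(heat_kw, delta_t_c):
    """Airflow (m^3/s) needed to carry away heat_kw at a given delta-T."""
    return heat_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

# For a fixed 30 kW heat load, doubling delta-T from 6 C to 12 C
# halves the airflow the cooling system has to move.
print(round(airflow_needed(30, 6), 2))   # 4.15
print(round(airflow_needed(30, 12), 2))  # 2.07
```

Since the heat load is fixed, required airflow is inversely proportional to delta-T; less air to move means less fan work, which is the efficiency gain the paragraph above describes.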