Prescriptive Tile Maintenance yields significant energy and cost savings at Aurora and Sterling data centers
CyrusOne’s recent Prescriptive Tile Maintenance efforts have proven to be just what the doctor ordered.
Our engineers work constantly to improve the efficiency of our data centers. While it’s one thing to design a data center that is efficient at full capacity, the reality of colocation data center management is that we can’t always predict the occupancy of our data halls. The challenge is to build data centers that run efficiently under many conditions. By installing flexible cooling systems with Variable Frequency Drives (VFDs), we can vary the amount of cooling delivered to match each data hall’s actual capacity. But to make the best use of this equipment, we need to direct the chilled air it produces to exactly where it is needed. How do we do this? Through two strategies, Cold Aisle Containment and Prescriptive Tile Maintenance, that get the most out of our computer room air handler (CRAH) or computer room air conditioner (CRAC) units.
And these strategies have already begun to pay off at two of our data centers:
- Our Sterling VI data center in Northern Virginia has 10 CRAH units running at 62%. That is expected to drop to 35%, cutting power consumption by 938,000 kWh annually for $52,500 in energy savings.
- Our Aurora 1 data center near Chicago has seven CRAH units running at 90%. That is expected to drop to 65%, reducing power consumption by 956,000 kWh annually for $50,800 in energy savings.
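The reason slowing VFD-driven fans pays off so dramatically is that fan power scales roughly with the cube of fan speed (the fan affinity laws). The sketch below illustrates the arithmetic; the 55 kW full-speed fan rating is a hypothetical figure chosen for illustration, not a CyrusOne specification.

```python
HOURS_PER_YEAR = 8760


def fan_power_kw(full_speed_kw: float, speed_fraction: float) -> float:
    """Fan affinity laws: fan power scales roughly with the cube of speed."""
    return full_speed_kw * speed_fraction ** 3


def annual_savings_kwh(units: int, full_speed_kw: float,
                       old_speed: float, new_speed: float) -> float:
    """Estimated annual kWh saved by slowing the fans on `units` CRAH units."""
    delta_kw = (fan_power_kw(full_speed_kw, old_speed)
                - fan_power_kw(full_speed_kw, new_speed))
    return units * delta_kw * HOURS_PER_YEAR


# Hypothetical 55 kW full-speed fans, 10 units slowed from 62% to 35%:
saved = annual_savings_kwh(units=10, full_speed_kw=55,
                           old_speed=0.62, new_speed=0.35)
print(f"{saved:,.0f} kWh/year")  # → 941,693 kWh/year
```

With these assumed ratings, the cube-law estimate lands in the same range as the ~938,000 kWh figure cited above, which is why even a modest speed reduction produces outsized savings.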
One of the largest power users in a data center is the air handling system that moves heat away from sensitive IT equipment. In a raised-floor design, these units push chilled air into the subfloor plenum. From there it is released into the data hall through strategically located perforated tiles. These tiles have different sizes of openings to allow varying amounts of chilled air to enter the hall. If everything is set up properly, you achieve optimal efficiency. But there are challenges inherent in translating idealized design standards to the realities of day-to-day data hall operation.
Keep It Cool
The first challenge is to direct the cold air where it is needed – in other words, to ensure that the chilled air passes through the IT equipment in need of cooling instead of escaping around, above or below it. This air control, termed “Cold Aisle Containment,” typically entails installing physical barriers in and around the servers to keep the chilled air where we want it. These barriers might include roof panels (toppers), blanks installed in empty racks and other devices. Without this containment, the cooled air might bypass the air intakes on the servers where it is needed, wasting energy. Cold aisle containment is not a new concept. But in a colocation data hall with many different customers, it requires coordination and partnership to achieve good containment across the hall.
Keeping It Real
The second challenge is to deliver just the right amount of chilled air into the containment. But what is “just right”? That depends on the specific servers installed and their actual workloads, which together determine how much heat is actually produced, rather than on the idealized assumption of a full data hall running at full capacity.
The necessary cooling amount can be determined through a process that we call “Prescriptive Tile Maintenance.” This means using advanced analytics to model actual conditions, allowing us to choose the proper size and location of perforated tiles to optimize the amount and location of cooling distribution.
We use Computational Fluid Dynamics (CFD) modeling to simulate the projected airflow based upon temperature, pressure, airflow velocity and space considerations. Our models analyze the flow of air from under the floor, through the tiles and cabinets, and back to the air handler.
While across the industry this modeling process is sometimes done during the initial design of a data hall or cabinet set, we do it periodically after installation as a “reality check.” By monitoring actual conditions, we can understand how the air moves through the data hall and deliver just the right amount of cool air to the right places, reducing the power consumed by the air handlers.
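To give a feel for the kind of sizing question tile placement answers, the sketch below uses a common industry rule of thumb (not stated in this post): the airflow a cabinet needs is roughly CFM ≈ 3.16 × watts ÷ ΔT(°F). The 20 °F temperature rise, 8 kW cabinet load, and 900 CFM per-tile delivery rate are all hypothetical values; in practice, the CFD model refines per-tile delivery from measured underfloor pressure.

```python
import math


def required_cfm(it_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Rule-of-thumb airflow for an IT load: CFM ≈ 3.16 * W / ΔT(°F)."""
    return 3.16 * it_load_watts / delta_t_f


def tiles_needed(it_load_watts: float, tile_cfm: float,
                 delta_t_f: float = 20.0) -> int:
    """Perforated tiles needed, given each tile's delivered airflow
    at the prevailing underfloor pressure."""
    return math.ceil(required_cfm(it_load_watts, delta_t_f) / tile_cfm)


# Hypothetical 8 kW cabinet with tiles delivering ~900 CFM each:
print(required_cfm(8000))          # → 1264.0 CFM
print(tiles_needed(8000, tile_cfm=900))  # → 2 tiles
```

The point of the periodic “reality check” is that both inputs drift over time: cabinet loads change as customers add or retire gear, and per-tile delivery changes as tiles are moved or underfloor pressure shifts.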
So, by designing data centers that can be operated with flexibility and then being smart about how we operate them, we can achieve competitive PUEs for our customers without consuming water for cooling.
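PUE (Power Usage Effectiveness) is simply total facility power divided by IT power, so every kilowatt trimmed from the air handlers moves the ratio toward the ideal of 1.0. The figures below (1 MW IT load, 180 kW of fan power reduced to 110 kW, 120 kW of other overhead) are hypothetical, chosen only to show the mechanics.

```python
def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw


# Hypothetical 1 MW data hall: trim CRAH fan power from 180 kW to 110 kW.
before = pue(1000, cooling_kw=180, other_overhead_kw=120)
after = pue(1000, cooling_kw=110, other_overhead_kw=120)
print(f"{before:.2f} -> {after:.2f}")  # → 1.30 -> 1.23
```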
[Editor’s note: This blog is an update of the “Writing a Prescription for Efficiency” blog from November 2021.]