The Heat Is On

A strange thing happened when St. Louis, Mo.-based Sisters of Mercy Health System began populating its data center with new servers. “We ran out of power capacity,” says Bill Hodges, director of the data center at Mercy, the ninth largest Catholic healthcare system in the country. Mercy found that the servers were smaller and denser and took up less space. But on the flip side, they required more power and cooling, as well as a bigger generator to handle the increased load.

Then the domino effect kicked in. “Now I have to increase the capacity of the infrastructure that supports the data center,” says Hodges. “You fix one thing, but then there are four other areas you have to upgrade to leverage what the technology changes are allowing you to do.”

With the trend toward smaller servers and blade servers, enterprises can now pack more hardware into a single rack than ever before. But that improvement is proving to be a double-edged sword, because cramming more servers into each rack drives up power consumption and heat output per rack. And if companies don’t upgrade their power and cooling to accommodate the new equipment, they’ll find themselves able to use only a fraction of their data center’s floor space.

“We’re seeing increased density in data centers because more CPUs are being packed into a unit of volume,” explains Dan Golding, a vice president and senior analyst with Tier1 Research in New York City. “Each chassis is taking up a tremendous amount of power to do its computing. One of the laws of engineering is if you take in a lot of power to do computation, something has to be done with that power, which is turned into heat eventually.”
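As a rough illustration of Golding’s point, essentially every watt a rack draws comes back out as heat that the cooling plant has to remove. The back-of-the-envelope Python sketch below uses the standard conversions of roughly 3,412 BTU/hr per kilowatt and 12,000 BTU/hr per ton of cooling; the rack wattages are hypothetical.

```python
# Back-of-the-envelope sketch: nearly every watt a rack draws is dissipated as
# heat, so the cooling load tracks the electrical load almost one for one.

def cooling_load(rack_kw: float) -> tuple[float, float]:
    """Convert a rack's electrical draw into the heat the cooling plant must remove."""
    btu_per_hr = rack_kw * 3412   # 1 kW of power ~= 3,412 BTU/hr of heat
    tons = btu_per_hr / 12000     # 1 ton of cooling = 12,000 BTU/hr
    return btu_per_hr, tons

# A lightly loaded legacy rack versus a hypothetical dense blade rack
for kw in (2.5, 20.0):
    btu, tons = cooling_load(kw)
    print(f"{kw:5.1f} kW rack -> {btu:9,.0f} BTU/hr -> {tons:4.1f} tons of cooling")
```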

Power, Power Everywhere

And it’s not just the processors that are getting more powerful — the same thing is happening to storage devices and Ethernet switches, which take more power to handle more bandwidth, Golding adds. There is more capacity in data centers “to the tune of fifty times the processing power, and a hundred times the storage and networking capacity in a single cabinet. And that means you’re using much more power and that’s generating much more heat,” Golding says. At the same time, because generators and air conditioning units are only so big, data centers are running out of power.

According to research firm International Data Corp., 40 percent of data center end users report that power demand is greater than the supply. What’s the answer? Ideally, says Golding, outsourcing or building a better data center designed with more power and greater cooling capacity per square foot. “Designers and electrical and mechanical engineers are designing large-enterprise data centers with two to three times the cooling and power capacity of what presently exists,” he says.

“The goal is to guarantee the inlet temperature to the IT equipment, so the fans are always pulling in air at the same temperature,” says Kevin Dunlap, director of business strategies for the Cooling Group at American Power Conversion Corp. (APC) in St. Louis, Mo. “The easiest way for us to guarantee that temperature is to remove heat from the back of the server and not give it a chance to mix with the air in the rest of the room.”

It’s more efficient to cool at the row level, he says, because when air is blown from a source that’s much further away, the air has to be cooled down to a much lower temperature, which takes more energy.
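The arithmetic behind that claim can be sketched with the common sensible-heat rule of thumb for air at sea level, Q (BTU/hr) ≈ 1.08 × airflow (CFM) × ΔT (°F). The rack load and temperature splits below are hypothetical, but they show why capturing hot exhaust right at the row, before it mixes into lukewarm room air, lets the cooling system move far less air for the same heat load.

```python
# Illustrative sketch using the sea-level sensible-heat rule of thumb:
#   Q [BTU/hr] ~= 1.08 * airflow [CFM] * delta_T [deg F]
# A close-coupled row cooler works against the full hot-aisle temperature rise;
# a distant room-level unit sees exhaust already diluted by room air, so it has
# to push much more (and colder) air to remove the same heat.

def required_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to carry away load_kw of heat across a given delta-T."""
    btu_per_hr = load_kw * 3412
    return btu_per_hr / (1.08 * delta_t_f)

load_kw = 20.0  # hypothetical dense rack
for label, delta_t in (("room-level (mixed air)", 10), ("row-level (captured exhaust)", 25)):
    print(f"{label:30s} delta_T {delta_t:>2} F -> {required_cfm(load_kw, delta_t):7,.0f} CFM")
```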

Dunlap says that for energy-saving reasons, the new servers “pull back” when they’re not being asked to do a lot of computing. But the cooling system has to be able to respond quickly, and the power has to be there to support the equipment when it springs to life again.

“As computing load moves around the data center, the power and cooling have to move around the data center to mirror that compute load,” he adds. “That’s the next challenge we’re facing.”

Plan to Scale Based on Demand

When planning the layout of a data center, Dunlap recommends determining how much capacity is needed today, populating the room for those current needs, and matching energy consumption rack by rack. Then, as racks are added, cooling units can be added alongside them, so capacity scales with demand as computing needs grow.
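A minimal planning sketch along those lines might look like the following. The rack loads, the 17 kW rating per in-row cooling unit, and the 20 percent headroom factor are all hypothetical assumptions, not figures from Dunlap or APC.

```python
import math

# Hypothetical build-out plan: size cooling rack by rack as racks are actually
# populated, instead of building the whole room's worth of capacity on day one.
RACKS = {                  # rack name -> planned IT load in kW (illustrative)
    "row1-rack1": 6.0,
    "row1-rack2": 12.0,
    "row1-rack3": 18.0,    # dense blade rack added later
}
COOLING_UNIT_KW = 17.0     # assumed capacity of one in-row cooling unit
HEADROOM = 1.2             # ~20% margin so cooling can follow load swings

def units_needed(deployed: dict[str, float]) -> int:
    """Cooling units required for the racks deployed so far."""
    return math.ceil(sum(deployed.values()) * HEADROOM / COOLING_UNIT_KW)

deployed: dict[str, float] = {}
for name, kw in RACKS.items():          # add racks one at a time
    deployed[name] = kw
    print(f"after {name}: {sum(deployed.values()):5.1f} kW IT load, "
          f"{units_needed(deployed)} cooling unit(s)")
```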

Hodges says APC’s hardware enabled Mercy to tap into the building’s power supply and redirect capacity that wasn’t being used, which extended the life of the existing data center. On top of that, APC’s components are modular and can be moved when Mercy ultimately builds its new data center five years down the road.

Since most enterprises have a three- to seven-year equipment replacement cycle, experts suggest doing a usage inventory and then ensuring that power is supplied only to the racks that are being used. “We’ve seen a shift from cooling the room in general, where you look at the room as one large heat source and try to cool it with a big air conditioning system, to targeted cooling solutions where each individual row or rack has its own cooling unit,” says Dunlap. “So it’s much more one to one.”
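A usage inventory of the kind the experts describe can start out very simply: compare each rack’s measured draw against what has been provisioned for it and flag the outliers. The rack names and numbers in this sketch are hypothetical.

```python
# Hypothetical usage inventory: find racks that are drawing power without doing
# useful work, and racks running close enough to their budget to need attention.
inventory = [
    # (rack, provisioned kW, measured kW)
    ("row1-rack1", 8.0, 6.5),
    ("row1-rack2", 8.0, 0.3),    # mostly decommissioned gear, still powered
    ("row2-rack1", 16.0, 14.2),
]

for rack, provisioned, measured in inventory:
    utilization = measured / provisioned
    if utilization < 0.10:
        note = "candidate to power down and reclaim capacity"
    elif utilization > 0.85:
        note = "near its power budget; check cooling headroom"
    else:
        note = "ok"
    print(f"{rack}: {measured:4.1f} of {provisioned:4.1f} kW ({utilization:.0%}) - {note}")
```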