Vendors and datacenter operators increasingly see water as the most efficient way to cool servers

May 9, 2008 15:30 GMT

The rapid growth of datacenters is pushing operators toward water cooling, which promises better efficiency than air conditioning, especially in large-scale server deployments. Using water to cool servers is a rather old idea, but it appeals more and more to datacenter operators and server vendors searching for new ways to increase efficiency at a time of rising energy costs and concerns about global warming.

When the University of Illinois' National Center for Supercomputing Applications set out to build a machine with more than 200,000 server cores, it also began looking for a way to dramatically cut the costs that come with ever-faster silicon. The solution proved to be close at hand: water, the same all-natural liquid piped into every home.

Rob Pennington, deputy director of the NCSA, believes water cooling offers a much bigger advantage than air because of power density. Through the use of water, he said, the Blue Waters petascale machine NCSA is planning would pack more than 200,000 cores into only about twice the floor space of a current NCSA machine that has just 9,600 cores. "Water cooling makes it possible," Pennington says. "If we had to do air cooling, we'd be limited by how much air can be blown up through the floor." Blue Waters is scheduled to become operational in 2011 and will most likely use servers based on IBM's future Power7 chips.
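Pennington's point about air volume can be made concrete with a back-of-the-envelope calculation. The sketch below is not from the article; the 30-kilowatt rack load and the temperature rises are illustrative assumptions. It uses the standard sensible-heat relation Q = ρ·V̇·c_p·ΔT to compare how much air versus water must flow past a rack to carry away the same heat.

```python
# Rough comparison of air vs. water flow needed to remove rack heat.
# All inputs are illustrative assumptions, not figures from the article.

RACK_LOAD_W = 30_000      # assumed heat load of one dense rack, in watts
DELTA_T_K = 12            # assumed coolant temperature rise, in kelvin

# Approximate physical properties at room temperature
AIR_DENSITY = 1.2         # kg/m^3
AIR_CP = 1005             # J/(kg*K)
WATER_DENSITY = 1000      # kg/m^3
WATER_CP = 4186           # J/(kg*K)

def flow_m3_per_s(load_w, density, cp, delta_t):
    """Volumetric flow required: Q = rho * V_dot * c_p * dT, solved for V_dot."""
    return load_w / (density * cp * delta_t)

air_flow = flow_m3_per_s(RACK_LOAD_W, AIR_DENSITY, AIR_CP, DELTA_T_K)
water_flow = flow_m3_per_s(RACK_LOAD_W, WATER_DENSITY, WATER_CP, DELTA_T_K)

print(f"Air:   {air_flow:.2f} m^3/s  (~{air_flow * 2119:.0f} CFM)")
print(f"Water: {water_flow * 1000:.2f} L/s (~{water_flow * 15850:.1f} gal/min)")
# Air:   2.07 m^3/s  (~4393 CFM)
# Water: 0.60 L/s (~9.5 gal/min)
```

Pushing a couple of cubic meters of air per second up through a raised floor for every dense rack is exactly the limit Pennington describes; a thin water pipe moves the same heat.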

"By the end of 2005, NEC had on sale a water-cooled server that featured an Intel processor. IBM hasn't been using the water cooling technique since 1995, after shipping its last bipolar mainframe with CMOS (complementary metal-oxide-semiconductor) technology, so Big Blue is just returning to the technique," said Ed Seminaro, chief system architect for IBM's Power Systems.

"We actually went from a product that used almost 200 kilowatts of power down to a product that could basically satisfy the same function with about 5,000 watts," Seminaro says. "That's why we didn't need water cooling anymore. There was far less power required and far less heat density."

IBM has now added a so-called hydro-cluster water cooling system to its System p5 575 supercomputer. The move comes as the growing number of transistors on a chip makes keeping power usage steady increasingly difficult. IBM had to turn to water cooling, but with a new design that brings water right up to the chip.

The heat produced by servers ends up in water anyway, even in datacenters with big air conditioning systems, according to Jud Cooley, senior director of engineering for Sun Microsystems' water-cooled product, the Modular Datacenter. With air conditioning, chillers sit close to the racks, and heat is transferred from hot air into liquid before being pumped outside.

"Every datacenter does move water. We get that water closer to the point where you're actually generating heat," Cooley says. "What comes along with it is the need to bring water into every server, and all the plumbing issues. Sun does not have a product in this space right now. But every vendor is looking into this."

Sun's Modular Datacenter doesn't bring water inside the servers. Instead, the servers sit in a large water-cooled box that serves as the computer room, reducing the air flow needed around them. The Modular Datacenter has been available since January.

IBM's system brings water right to the top of the chip for increased efficiency

The water-cooled System p5 575 uses the Power6 chip and is more efficient than its air-cooled predecessors. Cold water is pumped through pipes from a cabinet onto a small copper plate sitting right on top of the chip, Seminaro said. The closed circuit holds 7.2 gallons of purified water that circulates continuously. Through a connection to the building's plumbing system, the heat is transferred from this circuit to the customer's own water pipes.
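As a rough illustration of what such a closed loop can do (the flow rate and temperature rise below are assumptions for the sketch, not IBM specifications; only the 7.2-gallon volume comes from the article), the same sensible-heat relation gives the heat a given water flow carries and how often the loop turns over:

```python
# Heat carried by a closed water loop: Q = m_dot * c_p * dT.
# Flow rate and temperature rise are assumptions, not IBM specs;
# only the 7.2-gallon loop volume comes from the article.

GALLON_L = 3.785                 # liters per US gallon
LOOP_VOLUME_L = 7.2 * GALLON_L   # ~27.3 L of purified water in the circuit

FLOW_L_PER_S = 0.5        # assumed pump flow rate
DELTA_T_K = 10            # assumed water temperature rise across the cold plates
WATER_CP = 4186           # J/(kg*K); 1 L of water is ~1 kg

heat_removed_w = FLOW_L_PER_S * WATER_CP * DELTA_T_K
turnover_s = LOOP_VOLUME_L / FLOW_L_PER_S

print(f"Heat carried away: {heat_removed_w / 1000:.1f} kW")
print(f"Full loop turnover every {turnover_s:.0f} s")
# Heat carried away: 20.9 kW
# Full loop turnover every 55 s
```

Even a modest flow through a small loop, in other words, can move tens of kilowatts, which is why so little water needs to sit inside the machine.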

Because a leak could destroy expensive processors, IBM tried to minimize the risk with a corrosion-resistant distribution system, and the water temperature is kept high enough to avoid condensation. Even so, Seminaro admits leaks are possible. Big Blue is confident enough in the system to plan expanding it to more servers.

"We're evaluating it now," Seminaro says. "We will definitely put it into more of our platforms. We started here because in the world of technical computing there is a real desire for a tremendous amount of compute capacity in a given location."

The water-cooled System p5 575 has 448 processors and can perform trillions of operations per second. IBM notes that, compared with air, water is about 4,000 times more efficient at removing heat. In practical terms, the system cuts the number of air conditioning units needed by 80 percent and energy consumption by 40 percent. IBM also says research is under way on a design that would bring water to the hottest parts of a computer to increase efficiency further.
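IBM's "4,000 times" figure is roughly what textbook material properties predict. The sketch below is this article's own sanity check, not IBM's methodology: it divides the volumetric heat capacity of water by that of air, showing that per unit volume water soaks up a few thousand times more heat for the same temperature rise.

```python
# Volumetric heat capacity (J per m^3 per K) of water vs. air,
# using textbook room-temperature properties. A rough sanity
# check of the "4,000 times" claim, not IBM's own math.

WATER_DENSITY = 1000      # kg/m^3
WATER_CP = 4186           # J/(kg*K)
AIR_DENSITY = 1.2         # kg/m^3
AIR_CP = 1005             # J/(kg*K)

water_vol_cp = WATER_DENSITY * WATER_CP   # ~4.19e6 J/(m^3*K)
air_vol_cp = AIR_DENSITY * AIR_CP         # ~1.21e3 J/(m^3*K)

print(f"Water holds ~{water_vol_cp / air_vol_cp:,.0f}x more heat per unit volume")
# Water holds ~3,471x more heat per unit volume
```

Depending on the exact properties used, such estimates land in the low thousands, consistent with the order of magnitude IBM cites.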

HP's power and cooling architect, Wade Vinson, said HP had been taking steps toward water cooling since 1999 and had been using it on a system it began offering four years ago. HP's Modular Cooling System is a water-cooled rack that can be supplied with water in three ways: through a direct connection to the building's chilled water system, through a dedicated chilled water system, or through a water-to-water heat exchanger unit connected to a water system.

According to Vinson, air cooling inside servers is easier and less risky than water, and the HP water-cooled rack offers customers a 30 percent reduction in energy use.

Pennington, for his part, is confident in Blue Waters' prospects. The 95,000-square-foot petascale computing facility will be located on the University of Illinois campus. NCSA's current 9,600-core machine, built from Dell blade servers, provides computing resources to scientists, engineers, and industrial users. Pennington says that air-cooled machine requires the equivalent of three floors: the bottom for air handling units, the second for the servers, and the third to handle the return air flow.

The 200,000-core water-cooled system, by contrast, will have a mechanical room under the server floor, freeing the third floor for office space. The university will maintain a large chilled water plant to which the datacenter will connect through the building's plumbing infrastructure.

"We spent a significant amount of time working with people on campus and with companies, understanding how to make a water cooling room efficient," Pennington says. "I wouldn't say it's simpler. It's just a different set of engineering challenges."

IBM has already disclosed plans for its future Power7 chip, and Pennington hopes to use servers based on it. Water cooling will be used inside the servers, but Pennington notes that a lot of work remains to make it as efficient as possible. Blue Waters is expected to cost about $208 million, including the machine room, computers, and staff. "We're providing water to the racks. IBM is doing all the other plumbing within the racks," he says.

IBM's servers are not necessarily the only ones NCSA will use. Pennington has said the organization will turn to other water-cooled systems too, if they appear on the market. The one sure thing is that water cooling is the technology the datacenter will adopt, since it offers the most efficiency at the moment. After considering the options, Pennington says, "It was clear to us that water cooling was going to have to be a significant technology for us to think about."