Web giants like Google and Amazon are notoriously secretive about what goes on inside the worldwide network of data centers that serve up their sweeping collection of web services. They call it a security measure, but clearly, they also see these facilities as some sort of competitive advantage their online rivals mustn’t lay eyes on. When he joined Google, Ken Patchett — like so many other Googlers — signed an agreement that barred him from discussing the company’s data centers for at least a year after his departure, and maybe two.

But after leaving Google to run Facebook’s new data center in the tiny Northwestern town of Prineville, Oregon, Patchett says the security argument “doesn’t make sense at all” — and that data center design is in no way a competitive advantage in the web game. “How servers work has nothing to do with the way your software works,” he says, “and the competitive advantage comes from manipulating your software.”

It’s hard to argue with him. He just spent the better part of the afternoon giving us a walking tour of Facebook’s newest data center — from the rows upon rows of extra-efficient machines that serve up the company’s social networking site to the “penthouse” that lets the company cool its facility with outside air rather than burn electricity on the mammoth water chillers traditionally used by the world’s data centers. When Facebook turned on its Prineville data center this past spring, it also “open sourced” the designs for the facility and its custom-built servers. Patchett is merely extending this willingness to share.

Ken Patchett, general manager of Facebook’s Prineville data center

For Patchett, Facebook is trying to, well, make the world a better place — showing others how to build more efficient data centers and, in turn, put less of a burden on the environment. “The reason I came to Facebook is that they wanted to be open,” says Patchett.

“With some companies I’ve worked for, your dog had more access to you than your family did during the course of the day. Here [at Facebook], my children have seen this data center. My wife has seen this data center…. We’ve had some people say, ‘Can we build this data center?’ And we say, ‘Of course, you can. Do you want the blueprints?’”

‘The Tibet of North America’

The Evaporator Room inside the Prineville ‘penthouse’

Facebook built its data center in Prineville because it’s on the high desert. Patchett calls it “the Tibet of North America.” The town sits on a plateau about 2,800 feet above sea level, in the “rain shadow” of the Cascade Mountains, so the air is both cool and dry. Rather than use power-hungry water chillers to cool its servers, Patchett and company can pull the outside air into the facility and condition it as needed. If the air is too cold for the servers, they can heat it up — using hot air that has already come off the servers themselves — and if the outside air is too hot, they can cool it down with evaporated water.
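
The control logic behind that description is simple enough to sketch. What follows is a minimal illustration of the decide-how-to-treat-the-air step — not Facebook's actual building-management code — and the temperature thresholds and function names are made up for the example.

```python
# Minimal sketch of the air-handling decision described above.
# Thresholds and names are hypothetical, not Facebook's real control logic.

SUPPLY_MIN_F = 65.0  # coldest air we'd want to send to the server room (assumed)
SUPPLY_MAX_F = 80.0  # warmest air we'd want to send to the server room (assumed)

def condition_outside_air(outside_temp_f: float) -> str:
    """Decide how incoming outside air should be treated before it reaches the servers."""
    if outside_temp_f < SUPPLY_MIN_F:
        # Too cold: blend in hot exhaust air returning from the server room below.
        return "mix in server exhaust air"
    if outside_temp_f > SUPPLY_MAX_F:
        # Too hot: pass the air through misters / evaporative cooling media.
        return "cool with evaporated water"
    # In range: filter it and push it straight down to the servers.
    return "pass through as-is"

if __name__ == "__main__":
    for temp_f in (38.0, 72.0, 101.0):
        print(f"{temp_f:5.1f} F outside -> {condition_outside_air(temp_f)}")
```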

In the summer, Prineville temperatures may reach 100 degrees Fahrenheit, but then they drop back down to the 40s in the evenings. Eric Klann, Prineville’s city engineer, whose family goes back six generations in central Oregon, says Facebook treats its data center much like the locals treat their homes. “Us country hicks have been doing this a long time,” says Klann, with tongue in cheek. “You open up your windows at night and shut them during the day.”

The added twist is that Facebook can also cool the air during those hot summer days.

Filters inside the penthouse clean the outside air before it’s pushed into the server room.

All this is done in the data center’s penthouse — a space the size of an aircraft carrier, split into seven separate rooms. One room filters the air. Another mixes in hot air pumped up from the server room below. A third cools the air with atomized water. And so on. With the spinning fans and the never-ending rush of air, the penthouse is vaguely reminiscent of the room with the “fizzy lifting drinks” in Willy Wonka & the Chocolate Factory, where Charlie Bucket and Grandpa Joe float to the ceiling of Wonka’s funhouse. It’s an analogy Patchett is only too happy to encourage.

You might say that Facebook has applied the Willy Wonka ethos to data center design, rethinking even the smallest aspects of traditional facilities and building new gear from scratch where necessary. “It’s the small things that really matter,” Patchett says. The facility uses found water to run its toilets. An Ethernet-based lighting system automatically turns lights on and off as employees enter and leave areas of the data center. And the company has gone so far as to design its own servers.

‘Freedom’ Reigns

Facebook flies the flags of state, country, and social network.

Codenamed Freedom while still under development, Facebook’s custom-built servers are meant to release the company from the traditional server designs that don’t quite suit the massive scale of its worldwide social network. Rather than rely on pre-built machines from the likes of Dell and HP, Facebook created “vanity-free” machines that do away with the usual bells and whistles, while fitting neatly into its sweeping effort to improve the efficiency of its data center.

Freedom machines run roughly half the loads in Prineville. They do the web serving and the memcaching (where data is stored in machine memory, rather than on disk, for quick access), while traditional machines still handle the database duties. “We wanted to roll out [the new servers] in baby steps,” says Patchett. “We wanted to try it and prove it, and then expand.”
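
For readers who haven't run into memcaching, the idea is simply an in-memory key-value cache that answers reads before they ever touch a disk-backed database. Here is a minimal sketch of the general pattern — not Facebook's own code — using the open source memcached daemon and the python-memcached client; the host, key names, and the stand-in "database" are illustrative.

```python
# Minimal memcaching sketch: keep hot data in RAM instead of re-reading it from disk.
# Assumes a memcached daemon on localhost:11211 and the python-memcached client.
# The key scheme and the stand-in "database" below are purely illustrative.
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

FAKE_DATABASE = {42: {"name": "example user", "friends": 1337}}  # stands in for a disk-backed store

def load_profile_from_database(user_id):
    return FAKE_DATABASE.get(user_id)

def get_profile(user_id):
    key = "profile:%d" % user_id
    profile = mc.get(key)                # fast path: served straight from machine memory
    if profile is None:
        profile = load_profile_from_database(user_id)  # slow path: go back to the database
        mc.set(key, profile, time=300)   # keep the result cached for five minutes
    return profile

print(get_profile(42))
```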

Taller than the average rack server, the custom machines can accommodate both larger fans and larger heat sinks. The fans spin slower but still move the same volume of air, so Facebook can spend less energy pushing heat off the machines. And with the larger heat sinks, it needn’t force as much cool air onto the servers from the penthouse.
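
A rough way to see why the slower fans pay off: by the standard fan affinity laws, the power a fan draws scales roughly with the cube of its rotational speed, so if a larger fan can deliver the same airflow at a lower RPM, its energy use falls steeply. In idealized form (ignoring differences in fan efficiency):

\[
\frac{P_2}{P_1} \approx \left(\frac{N_2}{N_1}\right)^{3}
\]

Run a fan at 80 percent of its original speed, for instance, and the ideal power draw drops to roughly $0.8^3 \approx 0.5$ of what it was — illustrative numbers, not Facebook's figures.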

‘Freedom’ racks in the Facebook server room

The machines also use a power supply specifically designed to work with the facility’s electrical system, which is a significant departure from the typical data center setup. In order to reduce power loss, the Prineville data center eliminates traditional power distribution units (which transform power feeds for use by servers and other equipment) and a central uninterruptible power supply (which provides backup power when AC power is lost). And the power supplies are designed to accommodate these changes.

The custom power supplies accept 277-volt AC power — so Facebook needn’t transform power down to the traditional 208 volts — and when AC power is lost, the systems fall back to a 48-volt DC power supply sitting right next to the server rack, avoiding the power loss that comes when servers switch over to a massive UPS sitting on the other side of a data center.
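
The 277-volt figure isn't arbitrary. In a typical U.S. facility, power arrives as a 480-volt three-phase feed and is stepped down to 208 volts before it reaches the servers; 277 volts is the line-to-neutral voltage of that same 480-volt feed, so a power supply that accepts it can hang almost directly off the building's distribution and skip a lossy transformation stage:

\[
V_{\text{line-to-neutral}} = \frac{480\ \text{V}}{\sqrt{3}} \approx 277\ \text{V}
\]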

According to Facebook, the machines are 94.5 percent efficient, but they’re part of a larger whole. The result of all this electrical and air work is a data center that consumes far less power than traditional computing facilities. In addition to building and operating its own facility in Prineville, Facebook leases data center space in Northern California and Virginia, and it says the Prineville data center requires 38 percent less energy than these other facilities — while costing 24 percent less.

The average data center runs at 1.6 to 1.8 power usage effectiveness (PUE) — the ratio of the total power a facility consumes to the power actually delivered to its computing equipment — while Facebook’s facility runs between 1.05 and 1.10 PUE over the course of the year, close to the ideal 1-to-1 ratio.
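
To put those ratios in concrete terms (with round, illustrative numbers rather than Facebook's actual meter readings):

\[
\mathrm{PUE} = \frac{\text{total power drawn by the facility}}{\text{power delivered to the computing equipment}}
\]

At a PUE of 1.7, a facility pulling 10 megawatts from the grid gets only about $10 / 1.7 \approx 5.9$ megawatts to its servers; at a PUE of 1.07 — the middle of Facebook's reported range — the same 10 megawatts would put roughly $10 / 1.07 \approx 9.3$ megawatts into the machines.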

“We want to be the most effective stewards of the electrons we consume. If we pay for 100 megawatts of power, we want to use 100 megawatts of power,” Patchett says. “The average house is between 2 and 3 PUE. That’s twice the amount of energy you actually have to consume. What if every home was the same efficient steward that we are?”

What Google Is Not

Power generators in the yard outside the Prineville facility

Google is also running a data center without chillers. And it’s building its own servers. But it won’t talk about them. Although the company has released some information on the data centers and servers it was running as far back as 2004, its latest technology is off limits. When we asked Google to participate in this story, the company did not respond with any meaningful information.

According to Dhanji Prasanna, a former Google engineer who worked on a programming library at the heart of “nearly every Java server” at the company, the search giant’s latest data center technology goes well beyond anything anyone else is doing. But he wouldn’t say more. Like Ken Patchett — and presumably, all other Google employees — he signed a non-disclosure agreement that bars him from discussing the technology.

Jim Smith — the chief technology officer of Digital Realty Trust, a company that owns, operates, and helps build data centers across the globe — says that Google must have a good reason for keeping its designs under wraps. “I’m not an insider, but it must make sense [that Google is so secretive],” Smith tells Wired. “At every level [of Google employee] you meet, they only share certain bits of information, so I presume there’s good reason.” But Facebook believes the opposite is true — and it’s not alone.

When Facebook open sourced its Prineville designs under the aegis of the Open Compute Project, it was certainly thumbing its nose at Google. “It’s time to stop treating data center design like Fight Club and demystify the way these things are built,” said Jonathan Heiliger, then the vice president of technical operations at Facebook. But the company was also trying to enlist the help of the outside world and, in the long run, improve on these initial designs.

“We think the bigger value comes back to us over time,” Heiliger said, “just as it did with open source software. Many people will now be looking at our designs. This is a 1.0. We hope this will accelerate what everyone is doing.”

Microsoft sees Heiliger’s logic. Redmond hasn’t open sourced its data center designs, but it has shared a fair amount of information about its latest facilities, including the chillerless data center it opened in Dublin, Ireland two years ago. “By sharing our best practices and educating the industry and getting people to think about how to approach these problems, we think that they can start contributing to the solutions we need,” Microsoft distinguished engineer Dileep Bhandarkar tells Wired. “This will move the industry forward, and suppliers — people that build transformers, that build air handlers — will build technologies that we can benefit from.”

Thinking Outside the Module

Facebook’s ‘found water’ tank. And its gnomes.

With its designs, Facebook isn’t mimicking Google. The company is forging a new way. When Patchett left Google for Facebook, Facebook had him sign an agreement that he wouldn’t share his past experiences with the company. “I guess that’s because I worked for the G,” he says. This is likely a way for Facebook to legally protect itself, but it seems to show that the company has rethought the problems of data center design from the ground up.

Unlike at least some of Google’s facilities, Facebook’s data center does not use a modular design. Google constructs its data centers by piecing together shipping containers pre-packed with servers and cooling equipment. Patchett acknowledges that Google’s method provides some added efficiency. “You’ve got bolt-on compute power,” he says. “You can expand in a clustered kind of way. It’s really quite easy. It’s repetitious. You do the same thing each time, and you end up with known problems and known results.” But he believes the setup doesn’t quite suit the un-Googles of the world.

Google runs a unified software infrastructure across all its data centers, and all its applications must be coded to this infrastructure. This means Google can use essentially the same hardware in each data center module, but Patchett says that most companies will find it difficult to do the same. “I don’t buy it,” Patchett says of the modular idea. “You have to build your applications so that they’re spread across all those modules…. Google has done a pretty good job building a distributed computing system, but most people don’t build that way.”

Microsoft distinguished engineer Bhandarkar agrees — at least in part. Redmond uses modules in some data centers, where it runs software suited to that approach, but in others, it sidesteps the modular setup. “If you have a single [software platform], you can have one [hardware] stamp, one way of doing things,” Bhandarkar says. “But if you have a wide range of applications with different needs, you need to build different flavors.”

Codenames in the High Desert

Solar panels feed power to the data center’s office space.

Facebook designed the Prineville data center for its own needs, but it does believe these same ideas can work across the web industry — and beyond. This fall, the company built a not-for-profit foundation around the Open Compute Project, hoping to bring more companies into an effort that already has the backing of giants such as Intel, ASUS, Rackspace, NTT Data, Netflix, and even Dell.

In building its own servers, Facebook has essentially cut Dell out of its data center equation. But the Texas-based IT giant says Facebook’s designs can help it build servers for other outfits with similar data center needs.

In some ways, Dell is just protecting its reputation. And on a larger level, many see Facebook’s effort as a mere publicity stunt, a way to shame its greatest rival. But for all of Ken Patchett’s showmanship — and make no mistake, he is a showman — his message is a real one. According to Eric Klann, Prineville’s city engineer, two other large companies have approached the town about building their own data centers in the area. He won’t say who they are — their codenames are “Cloud” and “Maverick” — but both are looking to build data centers based on Facebook’s designs.

“Having Facebook here and having their open campus concept — where they’re talking about this new cooling technology and utilizing the atmosphere — has done so much to bring other players into Prineville. They would never have come here if it wasn’t for Facebook,” he tells Wired.com.

“By them opening up and showing everyone how efficiently they’re operating that data center, you can’t help but have some of the other big players be interested.”