An interview with Ron Yaggi of IO: The Evolution of Datacenter Design

Ron Yaggi, IO's SVP of Federal, stopped by our office last week and sat down with us for a podcast, which is posted here. We have outlined the discussion in this post.

Ron gave us some insight into his 27-year Air Force career: while he traveled widely on various assignments, he spent a great deal of time in the Washington, DC area, serving as both an intelligence officer and a national security adviser. After suffering a heart attack in 2002, Ron decided to make some changes in his life and career. He retired from the Air Force as a Brigadier General in 2005 and went to work for Computer Sciences Corporation as a program manager, where he learned a bit about how corporations (rather than governments) work. While searching for a CEO role, he ran into IO: some former colleagues at the Cullen group asked him to look into the company, and George Slessman, IO's CEO, headed out to meet him. After some successful talks and a remarkable visit to IO's Phoenix data center, Ron joined IO. He saw the immense savings IO offered its clients (limiting both power usage and manpower requirements) and believed IO to be a great fit for the tightening budgets in Washington.

IO began when George Slessman, his brother Bill, and their friend Tony Wanger purchased a large warehouse in 2000 and turned it into a colocation data center. After running that data center for a few years, they sold it and got back to work. In 2007 they started IO with a couple of 100,000 sq ft traditional "raised floor" data centers, and later migrated to a 530,000 sq ft data center that they broke into chunks. Because of the huge investments required to outfit data centers, George Slessman wanted a way to fit out capacity only as it was needed. He first looked at ISO containers (often used in deployed environments), but enterprise users found them lacking in space and form. George also wanted a smarter way to cool data centers than chilling an entire building to the lowest common denominator of its most sensitive IT; he often compared the traditional cooling model to turning your home A/C down to 40°F just to enjoy a chilled adult beverage. That thinking set IO to engineering a box that could carry enterprise-level IT in a modular fashion. The IO.Anywhere module has 4 ft cold aisles and 3.5 ft hot aisles, is engineered to meet US trucking laws and restrictions, and is built to endure harsh weather. The modules are roughly 43 ft long, 12 ft wide, and 13 ft high. IO has also created power and cooling modules that can be added as the data center is built out, creating "just-in-time technology." Each IO module ships with 462 sq ft of white space and 18 standard 24-inch racks, or whatever the client needs, for maximum flexibility.

The Data Center 2.0 capability is designed to be flexible, satisfying current requirements while meeting those of tomorrow. IO has taken over the former New York Times printing plant in New Jersey, with 830,000 sq ft of capacity, and was able to outfit a 3.6MW data center for a client there in just 95 days. IO's "hardened cages" allow biometric access at the module level, tightening security around the IT inside. The modules are rated NEMA 4, and some clients deploy the 50,000 lb units outdoors. IO can also deploy either "half-modules" or the "D-Squared": the half-module holds 7 racks, while the double module holds 50.
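To make those rack counts concrete, here is a small sizing sketch using the capacities quoted above (50 racks for a D-Squared, 18 for a standard IO.Anywhere module, 7 for a half-module). The greedy packing strategy is my own illustration, not IO's actual provisioning logic:

```python
# Hypothetical sizing helper built from the rack counts quoted above.
# The packing strategy is illustrative only, not IO's provisioning logic.

MODULE_RACKS = {"D-Squared": 50, "standard": 18, "half": 7}

def module_mix(racks_needed: int) -> dict:
    """Greedily cover a rack requirement with the largest modules first."""
    mix = {}
    remaining = racks_needed
    for name, capacity in MODULE_RACKS.items():
        count, remaining = divmod(remaining, capacity)
        mix[name] = count
    if remaining:  # round the leftover up with one more half-module
        mix["half"] += 1
    return mix

print(module_mix(130))  # -> {'D-Squared': 2, 'standard': 1, 'half': 2}
```

A 130-rack requirement, for instance, could be covered by two D-Squareds, one standard module, and two half-modules, which is exactly the kind of incremental build-out the modular model is meant to enable.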

Ron sees Data Center 2.0 as the next step CIOs, CTOs, and CFOs need to take, because the IT stack has kept evolving (faster, cheaper, more power hungry) while data centers have stagnated. The ability to roll out capability and power only as needed is perfect for shrinking budgets, as federal decision makers have to do more with less. Ron gave the example of a 15MW data center that must be fully outfitted from day one, creating a huge construction project. When IO modeled the Total Cost of Ownership (TCO) of maintaining a 15MW data center over ten years, they projected a $379M difference between DC 1.0 and DC 2.0. Cooling only the modules, instead of entire rooms, saves a huge amount of power, delivering a PUE of 1.17 versus the typical industry-wide PUE of 2.9 cited in the discussion.
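To see how much of that gap energy alone can explain, here is a back-of-the-envelope calculation using the two PUE figures above. The IT load, electricity rate, and flat year-round utilization are my assumptions, not numbers from the interview:

```python
# Back-of-the-envelope energy comparison for the PUE figures quoted above.
# Assumptions (not from the interview): 15 MW is the IT load, power costs
# $0.10/kWh, and the load runs flat all year.

HOURS_PER_YEAR = 8760
IT_LOAD_KW = 15_000          # 15 MW of IT load (assumed)
RATE_USD_PER_KWH = 0.10      # assumed electricity rate

def annual_energy_cost(pue: float) -> float:
    """Total facility power = IT power * PUE."""
    total_kw = IT_LOAD_KW * pue
    return total_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

dc1 = annual_energy_cost(2.9)   # traditional raised-floor figure quoted
dc2 = annual_energy_cost(1.17)  # IO modular figure quoted
print(f"DC 1.0: ${dc1/1e6:.1f}M/yr, DC 2.0: ${dc2/1e6:.1f}M/yr")
print(f"Ten-year energy delta: ${(dc1 - dc2) * 10 / 1e6:.0f}M")
```

Under these assumptions, energy accounts for roughly $227M of the ten-year difference, with the remainder of the $379M presumably coming from capital, construction, and staffing.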

IO OS, a data center infrastructure management (DCIM) software suite designed by IO, can be programmed to run the data center. IO OS differs from most DCIM products because it was built in-house by IO and George Slessman (holder of many data center patents). It lets administrators decide how they want their data centers to run: high-tolerance IT can run hotter (90 degrees in the hot aisle instead of the typical 72), and variable-flow fans can direct air within a module. IO's Phoenix data center offers a true apples-to-apples comparison, with DC 1.0 and DC 2.0 stacks sitting next to each other. IO is now building out Goldman Sachs' data centers in the modular fashion in the UK, Singapore, and New Jersey, and supports LexisNexis, Allianz, and the SEC's new colocation data center. IO is also building smaller "right-sized" modules for some DoD programs. IO modules can be deployed alongside DC 1.0 capabilities: if you are out of rack space but have plenty of power and cooling, a data module can go anywhere you have floor space; if you have racks left but are running out of power and cooling, IO.Anywhere power and cooling modules can be deployed to extend your capacity.
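As a toy illustration of that per-module policy idea, the sketch below declares different hot-aisle setpoints for tolerant and sensitive modules and nudges variable-flow fans to hold them. This is a hypothetical sketch of the concept, not IO OS's actual API:

```python
# Hypothetical per-module cooling policy, illustrating the idea described
# above (run tolerant IT hotter, cool sensitive IT harder). Not IO OS code.

from dataclasses import dataclass

@dataclass
class ModulePolicy:
    name: str
    hot_aisle_max_f: float   # e.g. 90 for high-tolerance IT, 72 for sensitive gear
    fan_flow_pct: float      # variable-flow fan target, 0-100

def adjust_fans(policy: ModulePolicy, hot_aisle_temp_f: float) -> float:
    """Nudge fan flow up or down to hold this module's own setpoint."""
    if hot_aisle_temp_f > policy.hot_aisle_max_f:
        policy.fan_flow_pct = min(100.0, policy.fan_flow_pct + 10)
    elif hot_aisle_temp_f < policy.hot_aisle_max_f - 5:
        policy.fan_flow_pct = max(20.0, policy.fan_flow_pct - 10)
    return policy.fan_flow_pct

tolerant = ModulePolicy("batch-compute", hot_aisle_max_f=90, fan_flow_pct=50)
sensitive = ModulePolicy("trading", hot_aisle_max_f=72, fan_flow_pct=70)
print(adjust_fans(tolerant, 93))   # -> 60.0, fans ramp up
print(adjust_fans(sensitive, 65))  # -> 60.0, fans ramp down
```

The point of the sketch is the contrast with a traditional facility, where a single building-wide setpoint forces every rack down to the tolerance of the most sensitive gear.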

Ron believes IO gives you maximum flexibility to run an improved data center operation while saving money on electricity, personnel, and capital, stretching scant resources further.
