Over the past few quarters, it has become clear that two data center camps are forming: 1) the Traditionalists and 2) the Pioneers. The Traditionalists are the folks who have grown up in IT for 20+ years, have a hands-on relationship with the same technologies they cut their teeth on early in their careers, and are happy to keep stretching the IT envelope to meet their company’s modern needs by scaling bigger: building more space, consuming more power, and installing more gear using traditional compute, storage and networking approaches.
The Pioneers are the folks who want to upset the apple cart. They have become fully aware of the emergence of radically different styles of computing, and of the economics driving today’s IT world. Mobile, Social and BYOD, not to mention e-commerce, entertainment and utility computing, are all here to stay, and the back ends that were designed years ago, when simple Windows desktops talked to mega-servers, are no longer suited to the task at hand.
So I dove deeply into the new low-power ARM and Intel Atom server initiatives and HP’s new Moonshot offerings, data storage and de-duplication, VMware vCloud/vSphere, the flattening SDN/SDDC developments and the state of the mobile/tablet/BYOD markets. I looked at Koomey’s updated forecast on power usage for data centers, went over the advancements in containment and cooling approaches, and even considered the alternatives to in-house (i.e. co-lo, modular and cloud). These are not just interesting futures; they are all technologies coming on-line NOW and in many cases already being deployed into production.
Not to be an alarmist, but I feel compelled to write this post about the fundamental genetic changes now available in the data center thanks to these innovations. By itself, each is interesting, but because they are all arriving at the same time, it is fair to say we are entering the next era of the “Processing Function,” one likely to be much smarter and more cost-conscious, not just bigger and more power-hungry. The Pioneers are standing up and being noticed.
Like the Traditionalists, I’ve worked in the data center for 30 years (in systems, storage and networks) and, almost without exception, the goal has always been ‘bigger’ and ‘faster’. The Traditionalists in the crowd will always gravitate toward bigger and faster to respond to new needs: faster CPUs, bigger Ethernet pipes, more memory, higher capacity. More work meant building bigger work-processing capability. Back ends got bigger with terabit fabrics and all of the downstream 10G/40G and 100G pipes. Disks became denser, and the chip vendors continue to react with blindingly fast server silicon from Intel and AMD that makes my brain swell. Traditional at its best!
It is true that a few years ago, POWER became *THE* rallying point underneath this growth, so in turn each of the back-end technologies made a simple attempt (read: Band-Aid) to manage this expansion at the power level, figuring out ways to build new versions of their products that consumed a bit less power than last year’s models. Those changes delayed the inevitable by a couple of years, but now here we are again. Million-plus square foot data centers popping up all over the place. Many tens of megawatts pumping into these buildings. Raised floor succumbing to concrete slab to bear the hugely dense and heavy chassis being wheeled in. Bigger is better, right?
No, not really. Not according to the Pioneers. These folks will remind you that CAPACITY is the goal, not heft. Processing transactions is more important than installing faster, bigger servers. Core options are now available that provide the needed capacity while reversing the sizing trends. We are at a crossroads: the Traditionalists will keep travelling down the traditional bigger-and-faster path, while the Pioneers travel down the smarter-and-smaller path.
Without going into each specific technology advance, consider what would have happened if we had come to a similar crossroads in the 1950s/’60s, when the beginnings of the digital age were being handled by New England-based scientists designing computers with vacuum tubes, while across the country the ‘new kids’ were playing with transistors. A silly example, I know, but it illustrates the point. I could easily fill up that million square foot data center with a vacuum-tube-based computer that draws 10 megawatts, and it might only be as fast as a common $25,000 4U Xeon-based server that draws 1,000 watts. See the point? We are at that same kind of crossroads today. And this is NOT Isaac Asimov future-stuff I am talking about, but real production technology that can be purchased today. Today I can shrink a 10,000 square foot data center running a common web-page server farm into 80% less space and 90% less power! (See HP’s Moonshot http://t.co/e4FwOuv14c as an example.)
Want to talk about futures? Yikes, they are amazing! Consider that today it takes about a MILLION atoms to store 1 BIT of data. The geniuses at IBM have figured out a way to do the same thing with TWELVE atoms. That’s roughly an 83,000-fold reduction in atoms per bit, with a similarly predicted reduction in power. But enough of this ’20 years in the future’ vision; TODAY we can build data centers that provide more capacity in smaller spaces, with smaller power budgets… DOING THE SAME WORK! We can do so more consistently, and the result is easier to support. It just takes the Pioneering mind-set to think differently. Individual in-house data centers will likely begin shrinking. In-house centers will use newer technology that is smaller, smarter and less power-hungry, augmented by a blend of MSP, co-lo, cloud and modular approaches. Capacity is the key, not size!
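The atoms-per-bit arithmetic is worth spelling out, since the ratio is easy to miscount. Using the two figures from the paragraph above:

```python
# Atoms needed to store one bit: today's media vs. IBM's 12-atom demonstration.
atoms_per_bit_today = 1_000_000
atoms_per_bit_ibm = 12

density_gain = atoms_per_bit_today / atoms_per_bit_ibm
print(f"Roughly {density_gain:,.0f}x fewer atoms per bit")  # ~83,333x
```

In other words, the same bit in about 1/83,000th the atomic real estate.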