I continue to be fascinated by the fundamental changes taking place in the data center. We see it with storage and networks nearly every day. Yet, as any spectator at the data center game can attest, for the last 20 years the workhorse of the data center has always been the standard x86 CPU. Every volume production server used in a data center was built around that chipset, and the bigger the data center, the more of them it had. The biggest Internet companies, media streamers, and information sources have accumulated hundreds of thousands of servers, essentially all based on this same x86 design. Tweaks and adjustments were made to the x86 line over time, but it essentially provided a robust general-purpose capability that could be leveraged at various performance levels for any application. The x86 was like 'Kleenex' for the data center. A server based upon the x86 design was a safe bet for any application, now or in the future. Basic purchasing decisions still had to be made regarding brand, price, speed, amount of memory, and desired support, but those were trivial in nature, closer to the job of a purchasing agent than of a hard-core data center technologist.
Although there have always been CPU alternatives, too many 'other' factors have prevented them from becoming mainstream: limited overall performance (real and perceived), limited operating system support, a lack of Tier-1 hardware offerings, fewer software development tools and environments, spotty availability of the common 'server' applications, and even shortfalls in critical I/O performance. With so many factors and concerns in play, the IT pioneers may have always weighed their choices, but the idea of using anything other than an x86 rarely gathered steam in the mainstream IT ranks across the data center community.
I am thrilled to say that this is changing. Perhaps sparked by the economy we all fell into five years ago, and then fueled by the continued low-power success of some of these alternative CPU chips in the ever more demanding tablet and handheld markets, a handful of very capable CPU choices that could easily be used in server applications now exist. Along with the original low-power ARM option, Intel continues to push its designs with the Atom and Quark families, and IBM's Power architecture now has designs well suited to core web-server-style applications. It's about balancing raw processing cycles against overall power consumption, with a keen eye on the I/O bandwidth required by the intended application for each server. Just as important, we are seeing mainstream adoption of one or more of these chip designs in the Tier-1 vendor portfolios as well as in the well-funded, high-profile server startups throughout the industry. The momentum is finally building, and the reward for thinking differently about solving the problem is front and center: major cost savings!
The server vendors' 'new chip' story goes like this: why waste precious resources (like power, space, and cooling) on your volume computing needs by deploying server designs created back when general-purpose computing in the data center was all the rage? Today, the bulk of your data center applications can be served BETTER, at a much LOWER cost, by choosing the right server alternative for each of those modern applications. This is no longer a one-size-fits-all world, and the waste associated with continuing that bad behavior is very real and fiscally concerning. Instead, start at the application, and then work backwards to determine the best hardware architecture to support those apps. Most likely, you will find a specific hardware architecture clearly suited to each of your handful of processing needs. Is this more work than just buying ONE general-purpose device that can do anything? Well, sure it is. But are there huge financial benefits to investing this extra time and making much more thoughtful and targeted server choices? Absolutely! That's the point. We can't afford to ignore this savings opportunity any longer. IT professionals have the opportunity to wear their business hats and demonstrate bottom-line-impacting decisions.
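The "start at the application, work backwards" approach can be sketched as a simple decision table. This is an illustrative sketch only: the workload profiles and architecture mappings below are hypothetical examples loosely drawn from the trade-offs discussed here, not a definitive selection guide.

```python
# Illustrative sketch (hypothetical mapping): pick a candidate server
# architecture from a workload profile, working backwards from the app.
WORKLOAD_TO_ARCH = {
    "web-serving": "ARM",      # high request volume, little floating point
    "io-intensive": "Power",   # optimized for raw I/O bandwidth
    "floating-point": "x86",   # mature FPU and general-purpose performance
}

def pick_architecture(workload: str) -> str:
    """Return a candidate architecture for a workload profile.

    Falls back to general-purpose x86 for unknown profiles, mirroring
    the 'safe bet' role x86 has historically played in the data center.
    """
    return WORKLOAD_TO_ARCH.get(workload, "x86")
```

The point of the sketch is the shape of the decision, not the specific entries: each row forces you to name the workload first and justify the hardware second.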
Do I believe it? Emphatically YES. I have seen some of these server offerings in operation, and the results are impressive. Each CPU has its sweet spot for applications. Is an ARM-based server going to run Excel like lightning? Nope; that's not what ARM does well. In fact, the one chassis-based ARM server I looked at didn't even sport floating point. But is your web farm going to be running floating-point apps like Excel? Nope. That's NOT what they do. Each low-power chip has its sweet spot and a list of applications that scream on it. The volume server applications you have running in the data center likely include things like Apache, and some of these new classes of CPU run Apache really, really well. The most recent Power chip has even been optimized for I/O bandwidth that would give ANY x86 CPU a run for its money, making it a great choice for serving I/O-intensive web-style applications.
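Before committing a workload to a box, it is worth verifying what the CPU actually advertises, such as whether hardware floating point is even present. A minimal sketch using only the Python standard library, assuming a Linux host where `/proc/cpuinfo` is available (it returns an empty set elsewhere):

```python
# Minimal sketch: inspect a candidate server's CPU before placing a workload.
# /proc/cpuinfo parsing is Linux-specific; other platforms yield an empty set.
import platform

def cpu_feature_flags() -> set:
    """Return the CPU feature flags reported by /proc/cpuinfo, if available."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                # x86 kernels expose a "flags" line; ARM kernels use "Features".
                if line.lower().startswith(("flags", "features")):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def describe_host() -> dict:
    """Summarize the host: architecture, plus whether an FPU is advertised."""
    flags = cpu_feature_flags()
    return {
        "machine": platform.machine(),  # e.g. "x86_64" or "aarch64"
        "has_fpu_flag": "fpu" in flags or "vfp" in flags,
        "flag_count": len(flags),
    }
```

A check like this is how you would catch the surprise described above, an ARM server shipping without floating point, before your benchmark run rather than after.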
Net-Net: it's about looking at the data center differently and optimizing for each of the tasks at hand. Servers should NOT be treated as a one-size-fits-all adventure. All of the major approaches should be considered for their value to the organization, looking for ways to balance risk and reward. It is fiscally irresponsible to take shortcuts in the data center. We need to start with each of the processing styles (applications), and then work backwards to the best and most cost-effective platform required to handle each. At the end of the day, if we look at the data center as a transaction factory, then the various types of transactions will have differing costs and value to the organization. Choosing the right server architecture for each style is one of the big components in setting transaction cost.
Lead, don’t Follow