Where did all the SysAdmins go?

Remember the days, maybe ten years ago, when we had a sysadmin for every 30 or 40 servers? Those were simpler times, when each server in the data center was unique, differing even if only slightly from its closest neighbor. A data center with 1,000 servers had 30 operators who spent their days patching, monitoring and resolving operational issues with ‘their’ servers. Each server was a critical element of the big picture, and the failure of any one of them was typically reason to cancel dinner plans with your significant other.

The role of the SysAdmin has changed – now more interesting and rewarding

I have to laugh thinking back on those days and wonder how we survived. We were so tightly wound up in all of the intertwined technology that most sysadmins didn’t have the luxury of thinking about the future; instead they spent all of their time trying to keep their heads above water and keep what they had running!

The first big milestone came when automated and scalable software solutions popped up that could provision devices according to templates, and then apply those templates to new and existing servers. These templates became ever more capable, and things like patch management became part of the template and the automated process. No longer did each device require an entire afternoon of loading software or applying patches. We simply defined any number of servers as a specific template type, and the provisioning toolsets kept those servers matching those templates. One server or 1,000 servers, it made no difference. A single sysadmin could now handle 25 times the number of servers in their daily schedule. Instead of managing 30-40 individual servers as in the old days, they could now manage, in some cases, 30-40 RACKS full of servers.
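
For the technically curious, here is a minimal sketch (in Python, purely illustrative and not any particular vendor’s toolset) of what template-driven provisioning boils down to: describe the desired state once, then let a reconciliation loop enforce it on every server. The Server class and template fields below are made up for the example.

    # Hypothetical sketch of template-driven provisioning: declare desired state
    # once, then reconcile every server against it. Real toolsets do far more,
    # but this is the core loop.
    WEB_TEMPLATE = {
        "packages": {"nginx": "1.4.7", "openssl": "1.0.1g"},
        "patch_level": "2014-03",
    }

    class Server:
        """Stand-in for a managed host; install/apply_patches would call real tooling."""
        def __init__(self, name):
            self.name = name
            self.installed = {}
            self.patch_level = None

        def install(self, pkg, version):
            self.installed[pkg] = version

        def apply_patches(self, level):
            self.patch_level = level

    def reconcile(server, template):
        """Bring one server in line with its template and report what changed."""
        changes = []
        for pkg, version in template["packages"].items():
            if server.installed.get(pkg) != version:
                server.install(pkg, version)        # idempotent: no-op if already correct
                changes.append(f"{pkg} -> {version}")
        if server.patch_level != template["patch_level"]:
            server.apply_patches(template["patch_level"])
            changes.append(f"patches -> {template['patch_level']}")
        return changes

    def reconcile_fleet(servers, template):
        # One server or 1,000 servers: the loop is exactly the same.
        return {s.name: reconcile(s, template) for s in servers}

    fleet = [Server(f"web{i:03d}") for i in range(1000)]
    drift_report = reconcile_fleet(fleet, WEB_TEMPLATE)

The shift is from imperative, per-box work to declaring intent once and letting software enforce it at any scale.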

The second game-changer was virtualization. Virtualization essentially broke the one-to-one alignment between physical boxes and the number of actually running servers. Provisioning and update automation combined with virtualization meant that a sysadmin could now manage literally thousands of ‘servers’ almost instantly, with a high degree of confidence that they were optimized, secured and supportable regardless of what failures might occur. And best of all, since many server instances were now virtualized, the number of walks down to the data center was drastically reduced; in fact sysadmins could exist ANYWHERE on the planet as long as a network connection existed. Follow-the-sun (or moon) strategies popped up that leveraged sysadmins in multiple locations.

Lastly, the decomposition of many volume applications (like web e-commerce, search and social services) using technologies like Hadoop allowed these applications to automatically expand or contract across any number of servers without operational intervention, which also meant the failure of any given server had no perceived effect on the business itself. With relative ease, 1,000 servers could be tasked for web traffic in the middle of the night and 10,000 servers could be active in the middle of the day, without any human intervention. The applications were resilient, essentially self-healing, and users DID NOT need to know or care where their transactions were actually being serviced. Today, most web-centric data centers will have literally dozens or hundreds of physical servers off-line at any point in time due to hardware failure, and replacing those failed devices in this type of configuration is considered a normal monthly maintenance task rather than an urgent, business-impacting one.
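
A toy example of that elasticity (my own simplification, not Hadoop’s or anyone’s actual scheduler; the pool object and its provision/decommission hooks are hypothetical):

    # Size the pool so average utilization lands near a target, within bounds.
    def desired_capacity(current_servers, avg_utilization,
                         target=0.60, floor=1000, ceiling=10000):
        needed = int(current_servers * avg_utilization / target)
        return max(floor, min(ceiling, needed))

    def autoscale(pool):
        # pool.size, pool.utilization(), pool.provision() and pool.decommission()
        # are illustrative stand-ins for real monitoring and orchestration hooks.
        want = desired_capacity(pool.size, pool.utilization())
        if want > pool.size:
            pool.provision(want - pool.size)        # midday: grow toward 10,000 servers
        elif want < pool.size:
            pool.decommission(pool.size - want)     # overnight: shrink toward 1,000 servers

Run that on a timer and the fleet breathes with the traffic; no pager required.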

So, where did those SysAdmins go? Are they working at Starbucks (one of my favorite places on earth) or did they become teachers? Nope! They are still practicing their chosen craft, but doing so at a much higher level, with much more impact and in a fashion that increases their ability to innovate. Their job satisfaction has increased dramatically as their role transitioned from tedium to technology. In most cases these professionals have been given the opportunity to contribute in a much more meaningful fashion, and they feel more tightly connected to the organization’s success. Systems Administrators can now look for innovative ways to support the business in a fashion that either decreases operations costs, or increases business value.


Modular Meets Open Compute, A Match Made in Heaven!

Last year I would not have been able to write this story, as I had a self-imposed rule to avoid mentioning any specific vendor by name when discussing innovation. In 2014 I have decided to write about things introduced by leading vendors that catch my attention for some business-impacting reason. I am sure you will agree that most of these innovations are catching the attention of other data center activists too. (This new practice saves me the time of answering a bunch of individual emails asking about specific vendor names.)

Modular and Open Compute: The Perfect Private Cloud

So that brings me to IO’s latest introduction, their private cloud building block offering. Think: modular meets computing to yield a ready-to-go cloud building block. Modular structures (IO’s original core competency) combined with industry-standard computing (as provided by Open Compute). Anyone who reads my published materials knows that I am a big fan of both.

Building data centers ‘from scratch’ and trying to re-invent the wheel each time just seems so old-school (and expensive, and long in duration). I saw my first modular entry in 1999: a million-dollar ISO shipping container touring the USA on the back of a tractor-trailer that originated in Rhode Island. Back then modular was a novel concept, but the industry had little reason to consider alternatives to brick and steel, and just a few years later the dot-com bust left us with an abundance of seemingly unlimited, cheap data center space. At that time, we all asked why anyone would use modular. Five years later it became apparent that those OLD data centers were NOT going to cut it for the onslaught of new high-power, high-density IT solutions, and around Christmas of 2006 I found myself parking my car in a converted data center near San Jose airport, which emphasized just how unsuitable those old centers were for modern IT deployments. (I was a bit surprised that they did NOT try to charge me for parking my car by the square foot!)

So back to modular. Modular makes sense (and cents), and the analysts of late are pretty pumped about it as well. In Uptime’s latest survey, nearly one-fifth of their network of 1,000 respondents said that some form of modular will be part of their plans over the next 18 months. Modular (when done right) allows you to stand up YOUR gear, in some cases HALF A MEGAWATT of it at a time, in just a handful of months. Start to finish. Months, not years! No surprise that a fifth of the enterprises surveyed are planning to add some modular capacity.

So modular creates an uber-efficient, ready-to-load ’box’. Then the question is what do you fill that box with? Do the IT guys have to re-invent their own wheel over and over again too? How cool would it be if the chosen modular vendor could also supply the ENTIRE building block, including the IT gear? Everything you need to compute, store and network within their fixed-form ‘box’. Gear that is open, standard and secure. That is exactly what IO has now done. Imagine taking a brand new data center, fully loaded with the perfect complement of IT processing gear, and then cutting it up into bite-sized (500 kW) chunks ready to connect wherever you want, at the drop of a hat. Want a megawatt of computing? Buy two. Want two megawatts’ worth? Buy four. Just decide where you want them to sit, connect power and your core network, and you are up and running.

Now there are a couple of details worth discussing. First, the modular ‘box’ approach is just plain efficient. The design is optimized and incorporates the best technologies for each subsystem, all deliberately chosen (not inherited) to be interwoven, monitored and actively managed. Why burden a control system with supporting the least common denominator of every chiller in the industry when you know exactly what YOUR chiller will be each and every time, because the modular enclosure vendor chose it by design? Doing so enables vendors like IO to take maximum advantage of each and every chosen component, rather than having to ‘accommodate’ some type of one-off or inherited facility. The IO ’box’ is highly defendable, economical and a great example of what a standardized factory assembly line can churn out. I really have no affiliation with IO, but I find George’s approach to be awe-inspiring.

Second, consider the IT gear that fills the ‘box’. IO has chosen to supply a fully functional complement of the open servers, switches and storage needed for processing. To do so, they chose gear based upon the Open Compute Project (OCP) specification, pioneered by companies like Facebook and other hyper-scale users. OCP is the open-source version of hardware: servers built to the Open Compute specification physically fit and work together. They are compatible. Building racks using Open Compute creates an open environment. Again, as with the enclosure structure, standardization is the key. Most of the Tier-1 server vendors have an Open Compute node offering, and in this case IO chose the Dell ES-2600s for their OCP racks.

So what caught my attention? IO got past theory and is delivering this today. They put this standardized IT complement inside their standardized modular structure, add deep, granular control and active management, and they have something pretty special. A chunk of computing ready to go. A solution that starts with a single chunk and then scales as big as you like, with industry-leading performance, interoperability and intelligence, completely secure and open.

How does this all relate to the cloud I mentioned at the top of this story? There are some very specific capabilities that make up a “Cloud,” and while everyone has their own spin on it, I think NIST has one of the most balanced and unbiased definitions I have seen (ref: NIST SP 800-145). In that document, they call out a handful of key capabilities that are required for a cloud (public or private, the requirements are the same):

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

Take standardized enclosures, add standardized IT gear, install it in a suitable location, and then throw in an advanced layer of provisioning and management software, and you have the perfect building block for a private cloud. Since everything inside the IO offering is standardized and automated, with control and chargeback, rapid provisioning and so on, they have built the very definition of a cloud, all in easy-to-eat 500 kW bites.
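
To make those five traits feel less abstract, here is a toy model in Python (my own sketch, definitely not IO’s software; the module sizes, method names and tenants are invented) showing how little orchestration logic the standardized building blocks actually require:

    import time

    class PrivateCloud:
        """Toy model tying the NIST traits to standardized 500 kW modules."""

        def __init__(self, modules_kw=(500, 500)):
            self.capacity_kw = sum(modules_kw)   # resource pooling: modules form one pool
            self.allocations = {}                # tenant -> kW reserved
            self.usage_log = []                  # measured service: raw data for chargeback

        def available_kw(self):
            return self.capacity_kw - sum(self.allocations.values())

        def request(self, tenant, kw):
            """On-demand self-service: no ticket, no human in the loop."""
            if kw > self.available_kw():
                # rapid elasticity at the facility level: buy another 500 kW block
                raise RuntimeError("add another module")
            self.allocations[tenant] = self.allocations.get(tenant, 0) + kw
            self.usage_log.append((time.time(), tenant, kw))
            return f"{kw} kW provisioned for {tenant}"

        def chargeback(self, tenant):
            return sum(kw for _, t, kw in self.usage_log if t == tenant)

    cloud = PrivateCloud(modules_kw=(500, 500))   # two modules, one pool
    print(cloud.request("analytics", 300))        # self-service and metered
    print(cloud.chargeback("analytics"))          # data for the monthly bill

(Broad network access is the one trait that lives outside this toy: it comes from where you park the modules and how you connect them to your core network.)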

Did I get your attention?


Software Defined … Console? How Cool is This…

The other day I had the opportunity to sit down with some old friends of mine who were instrumental in maturing the out-of-band console market during the mid-2000s. It may surprise many of you, but the market for ‘last resort’ remote management of active devices (servers, switches, routers, storage, etc.) is still alive and well, because there is simply no alternative management scheme for certain critical applications when those devices go off the network. Have a critical BGP router that MUST stay alive? Console it. Putting a bunch of servers in remote offices without a lot of technical resources on site? Console them!

What? How can this be? With all of the innovation seen in virtualized servers and dense blade systems, and all of the resiliency and redundancy schemes created in the past half a dozen years, surely these ‘last resort’ or out-of-band needs must be obscure? Nope. Drop down to your data center or network manager’s office and ask how often their people use remote access. You’ll be surprised. There are just some applications where a good old remote access path is the state of the art, and the only way they would ever imagine handling various tasks.

Software Defined … Console!

What has happened over the past few years is that the market has transformed to include a wide range of different mechanisms for performing that critical remote interaction with each of the various types of targets. But make no mistake, all servers, switches, storage elements and even virtualized platforms have this type of built-in access. In the case of virtualized platforms, this access is provided to both the hosts and the guests. Remote access continues to be cool (and highly under-rated).

That said, remote access appliances and mechanisms continue to be provided by a wide range of suppliers, each with a different set of features specific to their own devices. For a quick refresher, check out some of the appliance devices from OpenGear.com or the virtualized MKS services offered by VMware. With this diversity, it is likely that each remote access mechanism in use in your data center has a different set of capabilities, audit and security policies, setup and usage guidelines. Each remote access vendor treats their solution in their own way, like an island, with little or no consistency across vendors and mechanisms.

So that brings me to the world of Software Defined. As it turns out, separating the control plane from the forwarding plane continues to be key for many applications, and remote console is another one of them. Putting the intelligence in one place for a rich experience, regardless of the forwarding technology, just makes sense and leverages all of your investments.

For the console world, the folks at ZPE have done just this. They have abstracted all of the desirable features that any remote access technology should offer, and then created a controller where those features reside. The controller is smart enough to apply all of those capabilities to any forwarding mechanism, whether it be SSH or TELNET into a server or switch, VMware’s MKS software APIs, or even old-school RS-232 via console server switches. In this fashion, any user can be provided with a universal mechanism for critical remote access, regardless of the underlying transport layer. The user experience is consistent, and the underlying transport appliances effectively ’appear’ smarter, since the controller now presents higher-level functionality, security, logging, etc.
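
As a thought experiment (my own sketch, not ZPE’s implementation), the control-plane/forwarding-plane split for consoles looks something like this: one controller owns authorization, audit and session policy, while thin transport classes only know how to move bytes to their particular kind of target.

    import logging

    log = logging.getLogger("console-controller")

    class Transport:
        """Forwarding plane: each class only knows how to reach its target type."""
        def connect(self, target):
            raise NotImplementedError

    class SSHTransport(Transport):
        def connect(self, target):
            return f"ssh session to {target}"       # placeholder for a real SSH client

    class SerialTransport(Transport):
        def connect(self, target):
            return f"RS-232 session to {target} via console server"

    class MKSTransport(Transport):
        def connect(self, target):
            return f"virtual (MKS-style) console session to {target}"

    class ConsoleController:
        """Control plane: one place for authorization, audit and session policy."""
        def __init__(self, inventory, authz):
            self.inventory = inventory   # target name -> Transport instance
            self.authz = authz           # callable: (user, target) -> bool

        def open_session(self, user, target):
            if not self.authz(user, target):
                log.warning("denied %s -> %s", user, target)
                raise PermissionError(target)
            session = self.inventory[target].connect(target)
            log.info("audit: %s opened %s", user, session)   # same audit trail for every transport
            return session

    inventory = {"core-bgp-1": SerialTransport(), "esx-host-7": MKSTransport()}
    controller = ConsoleController(inventory, authz=lambda user, target: user == "oncall")
    controller.open_session("oncall", "core-bgp-1")

The transports stay dumb and cheap; the smarts, the security model and the audit trail live in one place.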

I have to say that the demo is impressive. Have a look at http://zpesystems.com/ and be ready to get back to the future…


Value Never Goes Out of Style

With all of the changes going on in the world of IT, it is curious to me how many discussions there are regarding the commoditization of its various components. Surprisingly, in many of these discussions the fundamental business metrics of value (and innovation) seem to be taking a back seat. Many of the discussions posted online and held at conferences focus on the interoperability aspects of IT components and imply that the world is now looking to buy ‘the cheapest box’ that does the basic technical job. Yikes, I am still catching my breath. Disk drives, networking, security, mobile devices, servers, storage, etc. are all going through dramatic changes, and it’s the innovation that must stay front and center. Basic functionality is not enough when the entire world is becoming Internet-enabled. (And yes, I really did see an Internet-connected toothbrush last month, and I am sure there is an Internet-connected toaster out there somewhere.)

Today, the name of the game is running IT as a streamlined, dependable, supportable and defendable strategic asset for the corporation, all with a tremendous amount of ‘adult supervision’ and financial oversight to quantify TCO. Those folks who were involved in the IT world in the ’90s will appreciate the interoperability trials and tribulations when new technologies first became available, but as an industry we have moved towards value as the real measure of success. Just making things “connect” is not enough…

Value and Innovation are STILL the Most Important Things to Consider!

It’s about value. It’s about finding vendors that are committed to innovation to support their delivered value. We cannot forget that it is still possible to BUILD A BETTER MOUSETRAP. (There’s a phrase you probably haven’t heard in a while.) With all of the change going on, and the wide range of IT infrastructure offerings on the market, it is critically important for end-users to think back to the first time they chose ’cheapest’ instead of ‘highest value’. IT is THE core underpinning of your entire company’s success, so you must always challenge yourself to source solutions that you can “bet your business” upon, with your name associated with them. And remember that the applications you want to run on your IT structure today are markedly different from those of just 5 years ago.

I was watching a Q&A discussion about SDN at a recent Gartner conference (Las Vegas, Dec 2013) where one well-intentioned attendee suggested: “Since SDN separates the control plane from the forwarding plane (through the use of industry-standard OpenFlow), allowing any compatible controller to talk to any compatible switch, I am now free to buy whatever brands of these I can find at the lowest cost, since they will all play together. Right?” He was postulating that the use of OpenFlow (et al.) eliminates the possibility of vendor differentiation, while implying that SDN will also force the cost of networking way down from current levels. Let me say for the record, uhhh… not really. The world isn’t looking for raw connectivity any longer, but instead for much more intelligent, application-aware connectivity. There are still a ton of opportunities to innovate in the handling of information as it flows around the planet, and all of this innovation has cost and value. That’s part of the premise of SDN: highly intelligent transport of packets, on a flow-by-flow basis, based upon application needs, without having to manually set up and/or pre-reserve and waste bandwidth like in the old days of VLANs.
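
Here is a toy illustration of where that value lives (plain Python, not real OpenFlow messages; the ports, paths and priorities are invented). The standard defines how a rule reaches the switch; deciding which rule to install for each application flow is where vendors still differentiate:

    # Application-aware flow policy: the controller's decision, not the protocol,
    # is the value-add. All values below are invented for illustration.
    APP_POLICY = {
        "voip":    {"priority": 100, "path": "low-latency"},
        "backup":  {"priority": 10,  "path": "bulk"},
        "default": {"priority": 50,  "path": "shortest"},
    }

    def classify(flow):
        if flow["dst_port"] == 5060:          # SIP signalling -> treat as VoIP
            return "voip"
        if flow["dst_port"] in (873, 10000):  # rsync (873) / NDMP (10000) backup traffic
            return "backup"
        return "default"

    def build_rule(flow):
        """Turn application-level intent into a per-flow forwarding rule."""
        policy = APP_POLICY[classify(flow)]
        return {
            "match": {"src": flow["src"], "dst": flow["dst"], "dst_port": flow["dst_port"]},
            "action": {"select_path": policy["path"]},
            "priority": policy["priority"],
        }

    # A VoIP flow gets a high-priority, low-latency rule; a backup flow does not.
    rule = build_rule({"src": "10.0.0.5", "dst": "10.0.1.9", "dst_port": 5060})

Two controllers can both speak the same southbound protocol and still make very different decisions here, which is exactly why “they will all play together, so buy the cheapest” misses the point.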

The storage, networking and systems market segments are proving that there is still plenty of value that a vendor can bring to the ‘commodity’ table. And best of all, it’s becoming quite clear that “value” comes not just from the ‘box’ itself, but also from the business practices and capabilities of the supplier. Consider vendor resources and geographic coverage, technology roadmap, warranty terms, maintenance contract commitments, distribution and availability, etc.

In the end, the Data Center Transformation era is proving that while a tremendous amount of ‘commoditization’ is occurring, there is still a wide range of choices which should not be taken lightly. Solutions that appear similar on the surface may in fact be very different under the skin, carry a diverse set of business attributes when dealing with the vendor, and leave you with very different levels of confidence that the chosen vendor cares about YOUR success. I am sure each vendor you are considering will be happy to have a detailed TCO discussion, which may help you identify the real value of any given solution.

Remember, VALUE never goes out of style, nor does the opportunity to innovate as part of providing that value…


The Fruits of Innovation: Top 10 IT Trends in 2014

Data Center Knowledge

The IT industry and its data centers are going through change today at a breakneck pace. Changes are underway to the very fundamentals of how we create IT, how we leverage IT, and how we innovate in IT. Information Technology has always been about making changes that stretch the limits of creativity, but when so many core components change at the same time, it becomes both exciting and challenging even for the most astute of us IT professionals.

The changes we’re due to see in 2014 start with the way people think. A good bit of the change going on in IT is about the maturity of its business leaders and the business planning skills needed to manage all of those changes. In the end, these leaders are now tasked to accurately manage, predict, execute and justify.

Read the whole story at Data Center Knowledge.

 


My WalkAbout in Las Vegas: Gartner Data Center Expo 2013

Gartner Data Center Conference Las Vegas 2013

Last week we saw the annual convergence of 2,500 of our closest friends tasked to deliver data center services to many of the largest companies in the USA. This is Must-See TV at its finest. The attendee roles ranged from systems administrators and data center managers to CIOs and everything in between. This is primarily an IT show, so there was not much in the way of new cooling devices or generators here, but we did see a wealth of cool new offerings on display from a number of high-visibility providers, established and startups alike.

The data center is transforming. Tactical responses are being replaced with strategies and planning, and most of the new generation of IT professionals in attendance at the show will look back in horror at our Data Center Wild-West days (circa 1985-2005). I can picture the water-cooler stories of truck rolls, 2 a.m. pager calls and screwdriver gymnastics that were once all the rage already becoming folklore. What we are seeing now are the roots of that change: the components and concepts of the next generation of data center, where cost becomes paramount and choices abound. The world is finally connecting. Imagination is the big opportunity, now that the technology to do (practically) ANYTHING exists!

With more than 125 vendors on the show floor, here is a list of the dozen or so (mostly) startups that caught my eye.

  • Cirba – One of the most established virtualized workload management plays in the market, Cirba was showing their Automated Capacity Control and new Reservation Console. These enable workloads to be planned, executed and shifted as needed across the physical infrastructure, regardless of where they reside. Cirba claims that in their Global 2000 deployments, end-users see a 40-70% improvement in efficiency through much higher VM density and more predictable operations. VMware shops owe it to themselves to take a deep look at how they are managing virtualized capacity, and then start plumbing in SOMETHING now.
  • FalconStor – De-dupe has been all the rage for a number of years now, and FalconStor was promoting what they call Global De-Duplication, which takes into account the constraints of the WAN and dialogs with remote data stores before any actual payload data traverses the WAN (see the sketch after this list). In doing so, they claim a ton of improvements over traditional file- or block-level-only de-dupe approaches. Global de-dupe is a clever approach that could deeply impact the perceived performance of geographically dispersed environments.
  • Nimble Storage – On display were the CS 200/400 appliances, flash-optimized storage based upon their proprietary cache-accelerated technology. They understand the difference between using flash for READS versus WRITES, so their technology optimizes each separately. They claim their customers access data on average TEN TIMES faster, with a READ latency of 0.67 ms and a WRITE latency of 0.5 ms. Flash storage has become a cornerstone for the enterprise, and Nimble’s approach seems to have some solid legs, attaining Gartner’s “Visionary” status on the latest disk Magic Quadrant!
  • StackIQ – A server management player that has been around for quite some time, StackIQ has traditionally focused on HPC and is now applying their technology to other areas of the data center. Essentially, StackIQ ensures that servers start and remain correctly configured over long periods of time. They derive their core from the open-source ROCKS project, commercializing and tuning it for enterprise-class computing. In practice, it allows you to describe the ‘perfect’ server you want, and then let StackIQ make that perfect server happen. All the software, the applications, the setup and configuration, everything that you described will be maintained… and you can do this for hundreds or thousands of servers, each the same or different.
  • Simplivity – One of the hottest modular plays on the IT side of the house. With about $100 million of funding, this high-profile Boston startup carries two flags: convergence and federation. They converge the key IT functions of processing, storage and networking into a single rock-solid appliance designed for great I/O performance. Then, any number of these appliances can be deployed or added at any time. Along with the incredible I/O performance of their purpose-built hardware, the rest of their secret sauce is their FEDERATION technology, which makes ALL of these appliances (regardless of where they are deployed) appear as a single pool of global resources. Very cool, like drops of mercury that can be squeezed together at will, resulting in an indistinguishable single larger drop. Simplivity enables well-defined chunks of computing to be added at any time, anywhere.
  • IO – Over-built brick-and-mortar data centers are dead. Today, nobody wants to build a massive 100,000 square feet of DC space that will hopefully be used over 10 years. The industry wants space on demand: just enough at just the right time. In 120 days, the folks at IO will bring any number of 500 kW chunks of computing to your site or theirs, ready for you to install your favorite brand of server. New at the show were the IO.Cloud offerings, which pair their modular offerings with Open Compute servers to provide a plug-in-and-go data center; just add power. Also check out their CEO’s discussion of information “Custody and the Cloud” for some great perspective on a little-known topic that will affect all of us.
  • Synapsense – Reinvention seems to look good on Synapsense. For the past half a dozen years they honed their core competencies around low-power wireless sensor technologies; now they have applied that highly capable sensing and visualization to tackle the need for energy-saving, roll-your-sleeves-up control. Synapsense’s new Active Control enables their sensors to influence the set-points of the co-resident HVAC gear, essentially driving down the cost side of the model. No longer just reporting and visualization, their new Active Control holds a great deal of promise… as soon as data center operators are ready for their next level of intelligent, data center-specific BMS, which is exactly what Synapsense is banking on.
  • Nlyte – A long-time player in the crowded DCIM space, Nlyte was showing both their recently released V7.3 Enterprise offering and their new On-Demand SaaS offerings. In V7.3, Nlyte offers enhanced workflow, cable management and hyper-scale support, with end-user deployments exceeding 25,000 racks, according to Nlyte. With the new Nlyte On-Demand SaaS offering, potential customers can begin using the same Nlyte capabilities within a matter of days, rather than weeks or months. And for existing Nlyte customers, they also announced SaaS options for testing new versions and supporting disaster-recovery plans.
  • Skyera – Skyera was showing both their skyHawk storage appliances and their essential SeOS control OS. The key to using flash in storage applications, according to Skyera, is to eliminate the use of expensive SLC flash memory chips and instead opt for much lower-cost MLC flash, which is only possible when the OS has been tuned to do so. One of the key challenges they have solved is the limited number of WRITE cycles a piece of flash memory can handle. They call their answer Life Amplification and claim a 100X increase in effective life. Bottom line: high-speed, secure, low-latency flash at the SAME price as spinning disk, with long lifespans.
  • Cloud Cruiser – I was quite impressed by this vendor’s focus on the economics of using cloud services. Their software offering allows highly detailed analysis, chargeback and showback, amongst a ton of other capabilities including service analysis, demand forecasting and consumer analysis. It was a pleasure to see a vendor solely focused on the business side of the cloud, rather than the technology side. Best of all, it comes ready to work with VMware, MS, AWS, HP, Cisco, OpenStack, Rackspace and BMC.
  • Plexxi – On display was their application-centric series of switches, which would traditionally be grouped into the top-of-rack category. Plexxi goes further and takes these devices to the next level with higher-level abstractions beyond what VLANs and ACLs can deliver. They make the point that all device configuration must be initiated upstream, in direct support of specific application needs, rather than tactically at the edge (which becomes a support nightmare). Something they call “Affinity SmartPath” rounds out their offering by providing intelligent path selection and real-time balancing to assure applications get the network they need.
  • OptiCool – Continuing to be seen at the data center shows, cooling doors always seem to make sense. Frankly, the monster IT vendors have dabbled in this for years, trying to extract heat from as close as possible to its source. OptiCool is a set of low-pressure, close-coupled heat transfer technologies that can be deployed into any data center rack configuration to effectively eliminate heat at the source. Think of this system as a refrigerant pump feeding a large number of heat-exchanging panels mounted inside your existing racks. When using this system, they claim 95% energy efficiency, 500% more gear capacity, and 90% less data center footprint. According to OptiCool, some of their existing users deploy it in racks of up to 20 kW.
  • Nebula – A relatively new player, Nebula was showing the 2U appliance that is central to their entire Cloud ONE system. Each controller manages up to 20 processing nodes, and up to 5 of these controllers can be bound together to provide a 100-node, tightly coupled private cloud system. Each cloud node is x86-based and is connected to its respective controller via multiple 10 GbE links. In full deployment, this turnkey private cloud contains 1,600 cores, 9,600 GB of memory and 2,400 TB of storage, all managed in real time as a single entity. It offers API interfaces to OpenStack and AWS as well.
  • Virtual Instruments – I continue to be impressed by the maturity of Virtual Instruments and their VirtualWisdom analytics offering. Essentially, VI looks at customers’ physical, virtual and private cloud computing environments, and then measures the performance, utilization and overall availability and health of the infrastructure, from individual components up to high-level business services. Using software probes they can sense virtual server status, and with hardware probes they can sense SAN health and performance. Purpose-built and clearly valuable for the large production SAN/VMware shop.
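
A couple of the ideas above reduce nicely to pseudo-code, and FalconStor’s WAN-aware angle is a good one (this is my own simplified sketch, not their implementation): exchange chunk fingerprints first, and only ship the payload the far side does not already have.

    import hashlib

    CHUNK = 64 * 1024   # illustrative fixed chunk size; real systems use smarter chunking

    class RemoteStore:
        """Stand-in for the far-end repository that would sit across the WAN."""
        def __init__(self):
            self.chunks = {}
        def which_are_missing(self, digests):
            return {d for d in digests if d not in self.chunks}
        def store(self, digest, chunk):
            self.chunks[digest] = chunk
        def assemble(self, digests):
            return b"".join(self.chunks[d] for d in digests)

    def fingerprints(data):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

    def replicate(data, remote):
        """Send only the chunks the remote store does not already hold."""
        fps = fingerprints(data)
        # Step 1: a tiny dialog over the WAN, hashes only, no payload.
        missing = remote.which_are_missing([h for h, _ in fps])
        # Step 2: ship payload only for the missing chunks.
        for digest, chunk in fps:
            if digest in missing:
                remote.store(digest, chunk)
        # Step 3: the remote reassembles the object from digests it now holds.
        return remote.assemble([h for h, _ in fps])

When most of a data set already exists at the far side, the WAN carries kilobytes of hashes instead of gigabytes of payload, which is where the perceived performance win comes from.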

Now it’s important to also mention the larger, mostly publicly traded IT vendors who had interesting wares: VMware (software-defined everything), HP (ARM-based Moonshot and SDN), BMC (service management), IBM, DELL, Symantec (storage), EMC, Microsoft, Cisco, Juniper and Extreme (the blended Extreme/Enterasys story). Many of these were clearly heading toward immediately supporting the modular, low-power and software-defined phenomena.

Add to this a few good dinners, the Cirque-du-Soleil-style “Le Reve” at the Wynn, a couple of $30 taxi rides, and one of the best bowls of matzah-ball soup I have ever had (at the Venetian’s deli), and my WalkAbout was a huge success…


Economies of Scale and the Data Center… They Don’t Always Apply!

Economies of Scale in the Data Center

Who knew that the familiar economies-of-scale rules we all grew up on don’t always apply to the data center? While we all learned that per-unit pricing decreases as volume increases for nearly everything consumable in life, what we didn’t take into account were the timeframes associated with consuming those quantities, which is what makes the rule valid. And in the case of a data center, those timeframes can be much longer than ever imagined.

For the data center, economies of scale just seemed to make sense 5 or so years ago. Simply build a data center as big as you could imagine needing for, say, 10 years, and then move into it gradually over time. Fill that huge data center with lots of high-capacity gear, and then rest easy that no matter what tomorrow brings, you are going to be all set. Lots of head-room means almost limitless capacity, at the ready, at the drop of a hat. Best of all, the world of IT rarely had a great deal of fiscal accountability as long as it delivered “five 9s” of availability and uptime, so overbuilding and overprovisioning the data center was greeted with a hero’s welcome. Frankly, nobody cared if the servers were running at less than 5% utilization. Nobody cared if the data center was so cold you needed a warm coat just to walk through it.

In the financial justification process, all of the overview numbers just made sense. Add up all the costs to build, fill and operate the data center, then divide by the TOTAL capacity to be delivered. Oops. There’s that pesky critter: TOTAL CAPACITY USAGE IS NOT GOING TO HAPPEN ON DAY 1. In some cases it is not even day 1,000! So in reality, the burdened cost of the first square foot of space or the first server is in the MILLIONS of dollars, and then over time, perhaps 5 years or more, the cost of the last unit is essentially free. It is the time element that trips us up. Time to get back to the drawing board…
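
A quick back-of-the-envelope makes the time element obvious (all numbers invented for illustration):

    # Hypothetical 10 MW build, fully paid for on day 1.
    build_cost = 100_000_000        # build and fit-out, USD
    total_capacity_kw = 10_000      # nameplate capacity

    paper_cost_per_kw = build_cost / total_capacity_kw   # $10,000/kW "on paper"

    # But utilization ramps over ~5 years; suppose year-end utilization looks like this:
    ramp = [0.10, 0.25, 0.45, 0.70, 0.90]

    for year, used in enumerate(ramp, start=1):
        cost_per_used_kw = build_cost / (total_capacity_kw * used)
        print(f"year {year}: ${cost_per_used_kw:,.0f} per kW actually used")
    # Year 1 comes out around $100,000 per used kW, ten times the paper number.
    # Buying 500 kW when you actually need 500 kW keeps the denominator honest.

That is the whole argument in a dozen lines: the economies of scale are only real if you consume the scale, and the clock starts the day the money is spent.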

Enter, stage left: modular computing. Everything from the cement up to the IT functions has now been modularized. Want modular structures? You can get modular designs that start small and grow in increments as small as 20 racks, in both brick-and-mortar and manufactured designs. You want modular servers? Lots of flavors of those too, from nearly all of the mainstream Tier-1 players. But let’s be smart here. You have to think about modularity from a commercial and supportability standpoint, not an academic or science-fair one. Do you really want a modular server platform, and then a modular storage platform, and then modular network and security platforms? Sounds like a lot of platforms to me. Modularity is great, but some real-world life-cycle considerations are important.

Think function convergence. Along with modularity, you’ll see it’s not just about scale; it’s about applying new ways of dealing with dynamic growth and expansion. It’s about converging multiple functions into fewer, more easily supportable physical platforms with higher utilization, and then deeply federating these converged offerings so that the aggregate capacity supplied by many building blocks is indistinguishable, viewed from the outside, from that supplied by a single BIG unit. Think of placing two smaller drops of mercury on a table top and pushing them together with your finger (OSHA warning: don’t try this at home). When the two drops of mercury touch, they instantly and completely bond to create ONE drop of mercury. Take 100 drops and push them together, and you’ll still end up with ONE really big (and shiny) drop of mercury. In the case of mercury, it’s their molecular bonds that allow this. In the case of IT, it requires some pretty innovative engineering to create the equivalent of molecular bonds across devices.

That is what modular computing is all about. Forget most of the rules associated with economies of scale, or at least consider these economies of scale within a finite period of time. Everything else becomes a modularity discussion in the context of time.

Whether you are thinking about the structure itself, or the servers, storage and network contained inside, you really CAN think modular today. There are companies whose wares act like those drops of mercury. With these offerings, you can add a few thousand square feet of space to a properly designed data center and have the result behave as if it were a single center. I can deploy a handful of converged IT building blocks, each combining the needed networking with processing and storage, and do so over and over again, each time federating these devices into what appears to be a single structure.

With patented technologies and intentional design features, modularity is not only possible, BUT VERY COST EFFECTIVE when viewed over the same period of time as any data center is intended to be in service.

(References: IO Data Centers, Compass Data Centers, Simplivity, VMware)
