The “Data Center App” – Who Knew?

For the past five years we have become used to saying “There’s an App for That”, referring to the hundreds of thousands of smart-device programs that can be downloaded for little or no cost and be running in seconds.

The word “App” was born with the smartphone and has conjured up images of tiny bits of code that run on phones or tablets and do a few interesting things, perhaps even a bit of business, but mainly these Apps are creative, social or entertainment tools. We have become quite comfortable with finding and installing Apps at the drop of a hat, and if we don’t like any particular one, we delete it and try one of the hundred other versions that do the same thing. The “App” is a pedestrian-friendly version of a program. An App is very hard to stub your toe on and requires little if any support or documentation.

But before the “App” appeared, the world of IT had “Applications”, which referred to something perceived as bigger and more expensive that ran on ‘real computers’. Many of these applications ran on servers in the data center, while others ran on desktop and laptop PCs. A business application could cost hundreds of dollars per user and required IT-class support to install and maintain the software and underlying hardware. Oracle was an application. Microsoft Office and Adobe Photoshop are applications too. Thousands of other titles sit in that category as well.

Enterprises run on “Applications”. Today this is still where the lion’s share of work gets done. For years Intel’s roadmap was consumed with building faster and faster CPU cores to run these ‘real’ applications (quickly progressing from 133 MHz to 3 GHz or so). Once the realistic price/performance limit per core was reached (at about 4 GHz), elaborate schemes were devised, such as fabricating many cores per chip and using load-balancing technologies, to create the perception of an application with unlimited scale built on commodity box-level CPU technology. The application really didn’t change; it still ran on a single server. We just deployed thousands of copies of the same server running the same application, pushing and pulling against a shared resource pool. In a few cases, the most aggressive applications used multi-threading to take advantage of the in-box multi-cores as well. It all worked quite well, and this is still the most common scenario for scaled business applications today.
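To make that scale-out pattern concrete, here is a minimal sketch (all names invented, not any real product): many identical copies of the same single-server application sitting behind a load balancer, sharing the work.

```python
# Illustrative sketch: scale-out by running many copies of the same
# unchanged single-server application behind a load balancer.
import itertools

class Server:
    def __init__(self, name):
        self.name = name
        self.handled = 0

    def handle(self, request):
        self.handled += 1
        return f"{self.name} served {request}"

class RoundRobinBalancer:
    """Spreads requests evenly across identical server copies."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def dispatch(self, request):
        return next(self._cycle).handle(request)

# Deploy N copies of the same application; the app itself never changes.
pool = [Server(f"web-{i}") for i in range(4)]
lb = RoundRobinBalancer(pool)
for r in range(8):
    lb.dispatch(f"req-{r}")

print([s.handled for s in pool])  # each copy handled an equal share: [2, 2, 2, 2]
```

The application stays a plain single-server program; all of the "scale" lives outside it, in how many copies we deploy.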

Now clear your mind and step back a bit. Processing in IT is all about managing resources to do computing tasks. Put enough of these tasks together in the right sequence and the desired results pop out. For nearly 50 years operating system designers have been working miracles in the efficient use of physical resources. Just look at the body of intellectual property behind IBM’s big-iron operating systems, DEC’s VMS, AT&T and BSD UNIX, Microsoft’s server line, and Linus’ Linux. Operating system design is a highly refined set of principles that is extremely well understood today. These systems are truly state-of-the-art inventions.

Now this may sound crazy, but why not employ the same extremely mature resource-management principles across the entire data center? A single server is basically a box with high-speed pipes between functions like CPU cores, I/O and storage. A data center, on the other hand, is a BIGGER box, with high-speed pipes between CPU, I/O and storage. Sound familiar?

So we know how to manage computing resources that are connected with fast pipes, and we know how to build really large structures that house computing resources connected with fast pipes. In theory, those big data center structures can have all the scale ever needed, since additional computing components can simply be pooled within the ‘box’. So why not just build APPLICATIONS that actually run on the Data Center?

The Apache Mesos project allows applications to run ON the data center, not just IN the data center!

Turns out you can! A group of people has been working on an open-source project called Apache Mesos which allows applications to be written for the data center itself. These applications run ON the data center, not just IN it! They do not have to understand anything about hardware or scale or redundancy. They simply use services that are provided by any number of physical devices in the data center. The best part is that thanks to this extreme abstraction, additional services can be wheeled in as needed. Need more I/O? Bring in more I/O and start those services, which essentially pool themselves with the rest of the I/O. Need more storage or CPU? The same thing applies.
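Mesos itself exposes this through a scheduler/framework API, but the pooling idea can be sketched in a few lines of illustrative Python (the class names below are hypothetical, NOT the actual Mesos API): the "data center OS" pools resources from every node, and an application just asks for resources without ever naming a machine.

```python
# Hypothetical, simplified sketch of the data-center-as-one-big-box idea.
class Node:
    """One physical server contributing resources to the pool."""
    def __init__(self, name, cpus, mem_gb):
        self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

class DataCenterOS:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # "Wheel in" more capacity; it simply joins the pool.
        self.nodes.append(node)

    def total(self):
        return (sum(n.cpus for n in self.nodes),
                sum(n.mem_gb for n in self.nodes))

    def allocate(self, cpus, mem_gb):
        """Grant a task resources from whichever node can supply them."""
        for n in self.nodes:
            if n.cpus >= cpus and n.mem_gb >= mem_gb:
                n.cpus -= cpus
                n.mem_gb -= mem_gb
                return n.name  # the application never asked for a machine
        raise RuntimeError("insufficient pooled capacity")

dc = DataCenterOS()
dc.add_node(Node("rack1-srv1", cpus=16, mem_gb=64))
dc.add_node(Node("rack1-srv2", cpus=16, mem_gb=64))
print(dc.total())              # one big pooled "box": (32, 128)
dc.allocate(cpus=8, mem_gb=32) # the app asks for resources, not servers
```

Adding capacity is just another `add_node` call, which is exactly the "need more I/O? wheel it in" property described above.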

Is this magic just theory or reality? Reality! In fact, you probably used a Data Center App before your second cup of coffee this morning. A growing list of applications is already using Apache Mesos, architected to run on a data center operating system at any scale! Most importantly, this concept becomes intuitively obvious once you are shown it. It just makes sense. Technology has finally caught up, so NOW is the time to think about delivering IT services with solid unit-of-work-level pricing that is essentially linear, and to stop letting the peculiarities of complex IT structures and overhead get in the way.

To quote a famous lyric from 1986, “The future’s so bright, I gotta wear shades!” (Extra credit: Can you name the band without looking it up?)


“220, 221, Whatever it Takes!”

DCIM Integrations

DCIM integration with ITSM systems is critically important and NOT an adventure to be executed with brute force

“220, 221… whatever it takes!” is a famous line of dialogue from one of my favorite classic movies, “Mr. Mom”, starring Michael Keaton and Teri Garr. In the movie, Michael’s character finds himself between jobs and his wife becomes the breadwinner. During his hiatus from work, he occupies his time with odd projects around the home. At one point, Teri Garr’s boss (played by Martin Mull) comes over to the house to pick her up for a business trip and, in a classic demonstration of male egos in contest, Michael’s character, holding a chainsaw in one hand, tells him that he is “re-wiring the house”, which apparently is his way of expressing his masculinity (even though he ostensibly is just watching the kids). Mull’s character inquires, “220?” (referring to a common voltage standard used for high-power appliances), to which Michael’s character replies, “220, 221, whatever it takes!” Clearly Michael’s character had no clue what the question even meant (there is no such thing as “221”), but in his desire to appear knowledgeable, he responded in a fashion that would have sounded good to the uninformed.

Why do I offer this excerpt from this classic movie here? Let me explain.


Everything as a Service! DCIM has reached its stride…

DCIMaaS Delivers!

The world of IT is changing rapidly, and nearly every conceivable IT offering is now being introduced by various vendors “AS A SERVICE”. Do a quick Google search or talk to your main IT vendors and you’ll see Desktop as a Service, Internet as a Service, Storage as a Service, Email as a Service, Platform as a Service, Network as a Service, Data Center as a Service and the granddaddy of them all, Software as a Service (which I would humbly offer is where “X as a Service” started). In fact, one of the first wildly popular SaaS offerings proved that users of enterprise-class solutions would be happy to have certain types of applications priced per month and per user.

DCIM is no different and the strongest of the DCIM players, like Nlyte, have now realized this and introduced their offerings for “DCIMaaS”. It just makes sense….

Read the WHOLE story here.


Technology and the Federal Government…

Federal Office Systems Exposition (FOSE) 2014

The Federal government is in the midst of one of the most massive technology transformations in recent history. Every facet of the government is being evaluated to determine if the technology involved and already in place is both suitable to the current job at hand and cost-effective in the bigger picture.

Now, I used the word ‘technology’ above, and most readers immediately picture data processing and IT infrastructures, but after spending the last few days at the Washington, D.C.-based Federal Office Systems Exposition (FOSE) conference, I have a new appreciation for that term. It turns out nearly every aspect of running the Federal government involves some type of people, process or product technology. Many of these don’t really jump out at most people until they are challenged to think about it. From the computers to the cameras, from the printers to the pistols. The tools that protect our nation are amazing to browse through, and the attendees each had a deep appreciation for the specific technologies they were directly involved in.

Where else can you start walking from one end of the trade show floor talking to dozens of vendors about security and data center efficiency software, pass by an on-floor theater where 400-500 people were gathered to hear a keynote presentation entitled “A Collaborative Approach to Catching a Terrorist”, and then continue on to aisles of vendors demonstrating their Tactical Command Vehicles (TCV) and weaponry? Clearly technology in Washington D.C. has a much broader reach than our traditional IT view.

Don’t get me wrong, the show had a number of the monster IT firms (like IBM and Dell) present, along with many smaller companies like Shavlik (showing their highly regarded server-patching solutions), Nlyte (showing off their integrated process/workflow management for data center assets), CohoData (showing a very cool scalable on-premise storage offering) and Feith (showing their leading-edge records management). Since this show had a government focus, it also had a ton of surveillance companies like Axis Communications, emergency communications companies like Redsky (makers of E911 systems) and a fair number of document-handling companies like Fujitsu (showing off their high-speed, secured document scanners and management suite).

This was a trade show like none other and provided a huge opportunity for those present to reset their understanding and appreciation of the term ‘technology’. Clearly folks were here with purpose. All of the government agencies had representation as attendees and exhibitors (GSA, Homeland, DOJ, etc). The sessions were well attended (including mine about data center consolidation) and lots of questions were being asked on the floor and in the sessions.

So did I sit in the monster sized Tactical Command Vehicle parked on the show floor? You bet. The folks at NomadGCS were kind enough to let me climb in to get an appreciation for all of the technology available in a protection offering that rises to something like 10-12 feet off the ground and spans perhaps 20 feet front to back. Did I get to experience any of the assault weapons being shown by Beretta, or H&K? Sadly no. I drew the line on my curiosity and decided to yield to those that understand that technology a bit better than I…


Where did all the SysAdmins go?

Remember the days, maybe ten years ago, when we had a sysadmin for every 30 or 40 servers? Those were simpler times, when each server in the data center differed, even if just slightly, from its closest neighbor. A data center with 1,000 servers had 30 operators who spent their days patching, monitoring and resolving operational issues with ‘their’ servers. Each server was a critical element of the big picture, and the failure of any one of them was typically reason to cancel dinner plans with your significant other.

The role of the SysAdmin has changed – now more interesting and rewarding

I have to laugh thinking back on those days and wonder how we survived. We were so tightly wound up in all of the intertwined technology that most sysadmins didn’t have the luxury of thinking about the future, and instead spent all of their time trying to keep their heads above water and keep what they had running!

The first big milestone came when automated and scalable software solutions popped up that could provision devices according to templates, and then apply those templates to new and existing servers. These templates became ever more capable, and things like patch management became part of the template and the automated process. No longer would each device require an entire afternoon of loading software or applying patches. We simply defined any number of servers as a specific template type, and the provisioning toolsets would keep those servers matching those templates. One server or 1,000 servers made no difference. A single sysadmin could now handle 25 times the number of servers in their daily schedule. Instead of managing 30-40 individual servers in the old days, they could now manage in some cases 30-40 RACKS full of servers.
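The template idea can be sketched in a few lines (the template contents and names below are invented, not any particular vendor's tool): define the role once, then converge every server against it, whether there is one server or a thousand.

```python
# Illustrative sketch of template-driven provisioning: the role is
# defined once; the tool converges any number of servers to match it.
TEMPLATES = {
    "web-frontend": {"packages": {"nginx", "openssl"}, "patch_level": 42},
}

class ManagedServer:
    def __init__(self, hostname, role):
        self.hostname, self.role = hostname, role
        self.packages = set()
        self.patch_level = 0

    def converge(self):
        """Bring this server in line with its role's template."""
        t = TEMPLATES[self.role]
        self.packages |= t["packages"]
        self.patch_level = max(self.patch_level, t["patch_level"])

# 1,000 servers take the same one-line effort as one server.
fleet = [ManagedServer(f"web{i:03}", "web-frontend") for i in range(1000)]
for s in fleet:
    s.converge()

print(all(s.patch_level == 42 for s in fleet))  # True: whole fleet matches
```

Bumping the template's `patch_level` and re-running `converge` is the whole patch cycle; the afternoon-per-box workflow disappears.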

The second game-changer was virtualization. Virtualization essentially broke the one-to-one alignment between physical boxes and the number of actually running servers. Provisioning and update automation combined with virtualization meant that a sysadmin was now able to manage literally thousands of ‘servers’ almost instantly, with a high degree of confidence that they were optimized, secured and supportable regardless of what failures might occur. And best of all, since many server instances were now virtualized, the number of walks down to the data center was drastically reduced; in fact, sysadmins could exist ANYWHERE on the planet as long as a network connection existed. Follow-the-sun (or moon) strategies popped up that leveraged sysadmins in multiple locations.

Lastly, the decomposition of many volume applications (like web e-commerce, search and social servers) using technologies like Hadoop allowed these applications to automatically expand or contract across any number of servers without operational intervention, which also allowed the failure of any given server to have no perceived effect on the business itself. With relative ease, 1,000 servers could now be tasked with web traffic in the middle of the night, and 10,000 servers could be active in the middle of the day, without any human intervention. The applications were resilient, essentially self-healing, and users did not need to know or care where their transactions were actually being serviced. Today, most web-centric data centers will have literally dozens or hundreds of physical servers off-line at any point in time due to hardware failure, and replacing failed hardware in this type of configuration is considered a normal monthly maintenance task rather than an urgent, business-impacting one.
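That night-to-day elasticity boils down to simple arithmetic that the platform re-runs continuously with no human in the loop. Here is a hedged sketch; the per-server capacity figure is an assumption for illustration, not a real benchmark.

```python
import math

REQUESTS_PER_SERVER = 500   # assumed capacity of one instance (illustrative)

def desired_servers(requests_per_sec, headroom=1.2):
    """Servers needed for the observed load, with 20% headroom."""
    return max(1, math.ceil(requests_per_sec * headroom / REQUESTS_PER_SERVER))

print(desired_servers(400_000))  # mid-day peak -> 960 servers
print(desired_servers(40_000))   # overnight trough -> 96 servers
```

The platform compares this target to the live count and starts or retires instances to close the gap; a server that dies simply drops out of the count and is replaced by the same loop.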

So, where did those SysAdmins go? Are they working at Starbucks (one of my favorite places on earth) or did they become teachers? Nope! They are still practicing their chosen craft, but doing so at a much higher level, with much more impact and in a fashion that increases their ability to innovate. Their job satisfaction has increased dramatically as their role transitioned from tedium to technology. In most cases these professionals have been given the opportunity to contribute in a much more meaningful fashion, and they feel more tightly connected to the organization’s success. Systems Administrators can now look for innovative ways to support the business in a fashion that either decreases operations costs, or increases business value.


Modular Meets Open Compute, A Match Made in Heaven!

Last year I would not have been able to write this story, as I had a self-inflicted rule to avoid mentioning any specific vendor by name when discussing innovation. In 2014 I have decided to write about things introduced by leading vendors that catch my attention for some business-impacting reason. I am sure you will agree that most of these innovations are also catching the attention of other data center activists. (This new practice saves me the time of answering a bunch of individual emails asking about specific vendor names.)

Modular and Open Compute: The Perfect Private Cloud

So that brings me to IO’s latest introduction: their private cloud building-block offering. Think modular meets computing, yielding a ready-to-go cloud building block. Modular structures (IO’s original core competency) combined with industry-standard computing (as provided by Open Compute). Anyone who reads my published materials knows that I am a big fan of both.

Building data centers ‘from scratch’ and trying to re-invent the wheel each time just seems so old-school (and expensive, long in duration, etc). In 1999, I saw my first modular entry. It was a million-dollar ISO shipping container touring the USA on the back of a tractor-trailer that originated in Rhode Island. In 1999, modular was a novel concept, but the industry had little reason to consider alternatives to brick and steel, and just a few years later the dot-com bust left us with an abundance of seemingly unlimited cheap data center space. At that time, we all asked why anyone would use modular. Five years later it became apparent that those OLD data centers were NOT going to cut it for the onslaught of new high-power and high-density IT solutions, and around Christmas of 2006, I found myself parking my car in a converted data center near San Jose airport, which emphasized just how unsuitable those old centers were for modern IT deployments. (I was a bit surprised that they did NOT try to charge me for parking my car by the square foot!)

So back to modular. Modular makes sense (and cents), and the analysts of late are pretty pumped about it as well. Uptime’s latest survey found nearly one-fifth of its network of 1,000 respondents expecting some form of modular to be part of their plans within the next 18 months. Modular (when done right) allows you to stand up YOUR gear, in some cases HALF A MEGAWATT of it at a time, in just a handful of months. Start to finish. Months, not years!

So modular creates an uber-efficient, ready-to-load ‘box’. The question then is what you fill that box with. Do the IT guys also have to re-invent their own wheel over and over again? How cool would it be if the chosen modular vendor could also supply the ENTIRE building block, including the IT gear? Everything you need to compute, store and network within their fixed-form ‘box’. Gear that is open, standard and secure. That is exactly what IO has now done. Imagine taking a brand new data center, fully loaded with the perfect complement of IT processing gear, and then just cutting it up into bite-sized (500 kW) chunks ready to connect wherever you want, at the drop of a hat. Want a MEG of computing? Buy two. Want two MEGs’ worth? Buy four. Just decide where you want them to sit, connect them to power and your core network, and you are up and running.

Now there are a couple of details worth discussing. First, the modular ‘box’ approach is just plain efficient. The design is optimized and incorporates the best technologies for each subsystem, all deliberately chosen (not inherited) to be interwoven, monitored and actively managed. Why burden a control system with supporting the least common denominator of every possible chiller in the industry when, in this specific case, you know exactly what YOUR chiller will be each and every time, because the modular enclosure vendor chose it by design? Doing so enables vendors like IO to take maximum advantage of each chosen component, rather than having to ‘accommodate’ some type of one-off or inherited facility. The IO ‘box’ is highly defendable, economical and a great example of what a standardized factory assembly line can churn out. I really have no affiliation with IO, but find George’s approach to be awe-inspiring.

Second, consider the IT gear that fills the ‘box’. IO has chosen to supply a fully functional complement of open servers, switches and storage needed for processing. To do so, they chose gear based upon the Open Compute Project (OCP) specification, pioneered by companies like Facebook and other hyper-scale users. OCP is the open-source version of hardware. All servers built to the Open Compute specification physically fit and work together. They are compatible. Building racks using Open Compute creates an open environment. Again, like the enclosure structure, standardization is the key. Most of the Tier-1 server vendors have an Open Compute node offering, and in this case IO chose the Dell ES-2600s for their OCP racks.

So what caught my attention? IO got past theory and is delivering this today. They put this standardized IT complement inside their standardized modular structure, add deep granular control and active management, and they have something pretty special. A chunk of computing, ready to go. A solution that starts with a single chunk and then scales as big as you like, with industry-leading performance, interoperability and intelligence, completely secure and open.

How does this all relate to the Cloud I mentioned at the top of this story? There are some very specific capabilities that make up a “Cloud”, and while everyone has their own spin on it, I think NIST has one of the most balanced and unbiased definitions I have seen (ref: NIST 800-145). In that document, they call out a handful of key capabilities required for a Cloud (public or private have the same requirements):

  • On Demand Self Service Provisioning
  • Broad Network Access
  • Resource Pooling
  • Rapid Elasticity
  • Measured Services

Take standardized enclosures, add standardized IT gear, install it in a suitable location, throw in an advanced layer of provisioning and management software, and you have the perfect building block for a Private Cloud. Since everything inside the IO offering is standardized and automated, with control, chargeback, rapid provisioning and more, they have built the very definition of Cloud, all in easy-to-eat 500 kW bites.
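As a back-of-the-envelope illustration of the “Measured Services” point and that essentially linear unit-of-work pricing, here is a tiny sketch of block-based capacity math. The dollar figure is invented for illustration; only the 500 kW block size comes from the story above.

```python
import math

BLOCK_KW = 500               # one modular building block
PRICE_PER_KW_MONTH = 150.0   # hypothetical rate, for illustration only

def blocks_needed(load_kw):
    """Identical blocks required to cover a given load."""
    return math.ceil(load_kw / BLOCK_KW)

def monthly_cost(load_kw):
    # Cost scales in straight lines because every block is the same.
    return blocks_needed(load_kw) * BLOCK_KW * PRICE_PER_KW_MONTH

print(blocks_needed(1000))   # "Want a MEG of computing? Buy two." -> 2
print(monthly_cost(1000))    # 2 blocks x 500 kW x rate -> 150000.0
```

Because every increment is an identical block, doubling capacity simply doubles cost; there is no bespoke-facility overhead curve to reason about.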

Did I get your attention?


Software Defined …. Console? How Cool is this…

The other day I had the opportunity to sit down with some old friends of mine who were instrumental in maturing the out-of-band console market during the mid-2000s. It may surprise many of you, but the market for ‘last resort’ remote management of active devices (servers, switches, routers, storage, etc.) is still alive and well, because there is simply no alternative management scheme for certain critical applications when those devices go off the network. Have a critical BGP router that MUST stay alive? Console it. Putting a bunch of servers in remote offices without a lot of technical resources on site? Console them!

What? How can this be? With all of the innovation seen in virtualized servers and dense blade systems, and all of the resiliency and redundancy schemes created in the past half-dozen years, surely these ‘last resort’ or out-of-band needs must be obscure? Nope. Drop down to your data center or network manager’s office and ask how often their people use remote access. You’ll be surprised. There are just some applications where a good old remote access path is the state of the art and the only way they would ever imagine handling various tasks.

Software Defined … Console!

What has happened over the past few years is that the market has transformed to include a wide range of different mechanisms for performing that critical remote interaction with each of the various types of targets. But make no mistake: all servers, switches, storage elements and even virtualized platforms have this type of built-in access. In the case of virtualized platforms, this access is provided to both the hosts and the guests. Remote access continues to be cool (and highly under-rated).

That said, remote access appliances and mechanisms continue to be provided by a wide range of suppliers, each with a different set of features specific to their own devices. For a quick refresher, check out some of the appliance devices on the market, or the virtualized MKS services offered by VMware. With this diversity, it is likely that each remote access mechanism in use in your data center has a different set of capabilities, audit and security policies, setup and usage guidelines. Each remote access vendor treats their solution in their own way, like an island, with little or no consistency across vendors and mechanisms.

So that brings me to the world of Software Defined. As it turns out separating the control plane from the forwarding plane continues to be key for many applications, and remote console is another one of them. Putting the intelligence in one place for a rich experience regardless of the forwarding technology just makes sense and leverages all of your investments.

For the console world, the folks at ZPE have done just this. They have abstracted all of the desirable features that any remote access technology should offer, and then created a controller where those features reside. The controller is smart enough to apply all of these capabilities to any forwarding mechanism, whether it is SSH or Telnet into a server or switch, VMware’s MKS software APIs, or even old-school RS-232 via console server switches. In this fashion, any user can be provided with a universal mechanism for critical remote access, regardless of the underlying transport layer. The user experience is consistent, and the underlying transport appliances effectively ‘appear’ smarter, since the controller now presents higher-level functionality, security, logging, etc.
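A hedged sketch of that control-plane/forwarding-plane split (the class names are invented for illustration, not ZPE's actual software): one controller owns the policy such as auditing, while interchangeable transports do the actual forwarding.

```python
# Illustrative sketch: a single control plane over pluggable transports.
class Transport:
    """Forwarding plane: knows only how to reach a target."""
    def connect(self, target):
        raise NotImplementedError

class SSHTransport(Transport):
    def connect(self, target):
        return f"ssh session to {target}"

class SerialTransport(Transport):
    def connect(self, target):
        return f"RS-232 session to {target}"

class ConsoleController:
    """Control plane: one consistent feature set over any transport."""
    def __init__(self):
        self.audit_log = []

    def open_session(self, user, target, transport):
        session = transport.connect(target)    # delegate the forwarding
        self.audit_log.append((user, target))  # uniform audit policy
        return session

ctl = ConsoleController()
ctl.open_session("ops", "bgp-rtr-1", SSHTransport())
ctl.open_session("ops", "legacy-pdu", SerialTransport())
print(len(ctl.audit_log))  # 2: same audit trail regardless of transport
```

Adding a new forwarding mechanism (say, a hypervisor console API) means adding one transport class; the audit, security and user-experience logic in the controller is untouched.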

I have to say that the demo is impressive. Have a look for yourself and be ready to get back to the future…
