Workload management in the era of Hybrid Computing styles…

I sit down with data center managers and other IT professionals all the time, and one of the things we inevitably end up talking about is ‘the future’ of IT. In fact, everyone in IT loves to talk about the future of IT, since there is simply SO MUCH transformation going on these days. Most of us have been in IT for 25 years or so, and I can genuinely say that I have never seen such a breakneck pace of change across all aspects of the data center and IT infrastructure in my generation.

While many of those ‘future’ discussions start with the state of the industry in general, they quickly zero in on the state of THEIR IT FUTURE. The state of the industry is interesting and exciting to most, but it is how that change will affect their own world that matters first. And all of that change is being viewed through a fresh new set of ‘business colored glasses’. So while it’s still fun to talk about Ethernet port speeds and new capabilities of Windows Server (oh boy), a higher-priority discussion revolves around the cost to deliver IT “products”, sometimes referred to as IT services (e.g. a user’s email capability). And with worldwide spending on IT in 2014 of more than $3.7 Trillion, you can bet there are a lot of people involved and a lot of financial analysts asking some pretty tough questions.

So that brings me to the topic of “computing styles”. Fifteen years ago, we didn’t really have ‘styles’ of computing. Back then, any organization that relied on IT built big data centers: on land that they bought, in buildings that they constructed, filled with gear that they purchased. That was pretty standard fare for most of the ’80s and ’90s.

Then we watched in horror throughout the troubling years of 2000-2010. We took a double-whammy in that decade: 1) the DOT-COM melt-down in 2000 and 2) the ECONOMY melt-down of 2008. What these did was provide extreme motivation to develop and promote alternative, much more cost-effective business models for computing. Putting aside the related changes in end users’ business needs themselves for a moment, these enterprises now had a handful of choices or ‘styles’ for computing:

  1. In-House – characterized by brick and mortar construction, large upfront costs, and complete responsibility for all aspects of its operation and usage. In many circles, this in-house capability is being re-tooled to behave as a ‘private cloud’.
  2. Co-Location – much like an in-house Data Center, but the cement and MEP gear is provided, essentially as a service. Enterprises ‘rent’ space that comes with power and cooling and network connectivity.
  3. Cloud – the hyperscale data centers with tens or hundreds of thousands of servers, running specialized software for self-service and quick provisioning which provide the ability to purchase computing by the transaction, eliminating all other operational concerns. Usually “Cloud” is the shorthand for “Public Cloud”.
  4. Modular – think of a small in-house data center that can be transported to site in 500kW increments, stood-up in just weeks rather than years, and can be tuned for specific cookie-cutter business needs without impacting other portions of the IT structure.
Computing Style Mix Will Change Over Time

Most importantly, IT professionals who get the big business picture realize that their own infrastructure WILL NOT change entirely overnight. In fact, all four of these styles will exist in various combinations across their own span of control for years to come. If asked, most IT professionals will say something like “I am going to the Cloud”, but what they really mean is that their strategy is to satisfy a growing percentage of their computing needs through transactional, cloud-oriented computing, focusing more on transactions and transactional costs than on floor tiles or servers. It’s not a black-and-white change; it’s about the changing mix of these styles over time, and the forward-looking decisions are centered on the value of using each type.

Now the beauty of computing styles is that there are a number of startups and public companies alike dealing with the TRANSITIONS and MIGRATIONS between these styles, and with ways to leverage each style for its core value. Say, for instance, a company already owns 100,000 square feet of data center space that is less than 3 years old, but needs to augment its transaction-handling capacity twice a year. Why not just add public cloud transactions to its own transaction-handling capability? It turns out you can! Startups and open source projects make the migration of workloads as simple as clicking a mouse, or in some cases completely automated based upon demand. Workloads can be shifted across in-house servers as easily as between private and public clouds… and back again!
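To make that concrete, here is a minimal Python sketch of the kind of ‘burst to public cloud’ decision such tooling automates: figure out how much demand exceeds the in-house footprint and provision public cloud capacity for just the overflow. The capacity figures and block sizes are assumptions for illustration, not anyone’s real numbers.

```python
# Minimal "cloud bursting" sketch. Figures are illustrative assumptions.

IN_HOUSE_CAPACITY_TPS = 50_000   # transactions/sec the owned data center can absorb (assumed)
BURST_BLOCK_TPS = 5_000          # capacity added per public-cloud block (assumed)

def plan_burst(forecast_tps: int) -> int:
    """Return how many public-cloud capacity blocks to provision for a demand peak."""
    overflow = max(0, forecast_tps - IN_HOUSE_CAPACITY_TPS)
    # Round up to whole blocks of cloud capacity.
    return -(-overflow // BURST_BLOCK_TPS)

if __name__ == "__main__":
    for forecast in (30_000, 55_000, 90_000):
        print(f"forecast={forecast} tps -> provision {plan_burst(forecast)} cloud block(s)")
```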

Or consider the concept of Disaster Recovery planning in the context of computing styles. Everybody should have a rock-solid DR plan, but justifying that plan and its costs usually revolves around the cost to stand up a ton of warm standby gear. What if you could use one of the other styles, say the cloud, as a ‘flick of a switch’ DR site by moving workloads to it when the main site is impacted? Once again, you can!

Today’s challenge for data center and IT professionals is really about properly setting a value on each of the IT workloads they commit to deliver, and doing it with the level of resiliency the business needs. It’s ALL about the business, and the number of options available to run the business has increased!

Workload management is the key to business efficiency, and one of the most critical facets of workload management is identifying the cost to deliver work. With $3.7 Trillion in spending on the line, do you know what it costs to process each unit of your work, and do you know whether that cost per unit would go up or down if you shifted from one style to another? Do you have transition plans and technologies to move workloads dynamically to leverage each style based upon demand, status or time? Can you handle demand peaks effectively, or have you over-provisioned resources that sit idle for most of their life? Do you have a DR plan that makes your IT contingency strategy both low-risk and low-cost? These are the tough questions on the table today.
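As a toy illustration of that first question, here is a back-of-the-envelope comparison of cost per unit of work under two styles. Every number in it is an assumption made up for the example, not a benchmark.

```python
# Back-of-the-envelope cost-per-unit-of-work comparison between two computing styles.
# All figures below are illustrative assumptions.

def cost_per_transaction(monthly_cost: float, monthly_transactions: int) -> float:
    return monthly_cost / monthly_transactions

in_house = cost_per_transaction(
    monthly_cost=400_000,              # amortized facility + gear + staff (assumed)
    monthly_transactions=120_000_000,
)
public_cloud = cost_per_transaction(
    monthly_cost=0.0035 * 90_000_000,  # assumed per-transaction cloud rate x volume
    monthly_transactions=90_000_000,
)

print(f"in-house:     ${in_house:.5f} per transaction")
print(f"public cloud: ${public_cloud:.5f} per transaction")
```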

Anyone want to go back to simpler times when we all talked about raw networking switch throughput or the capacity of the latest spinning-media disk drives?

 


The “Data Center App” – Who Knew?

For the past 5 years we have become used to saying  “There’s An App For That”, referring to the hundreds of thousands of available smart-device programs that could easily be downloaded for little or no cost and run in seconds.

The word “App” was born with the smart phone and conjured up images of “tiny” bits of code that ran on phones or tablets and did a few interesting things, perhaps even a bit of business, but mainly these Apps were creative, social or entertainment tools. We have become quite comfortable finding and installing Apps at the drop of a hat, and if we don’t like any particular one, we delete it and try one of the hundred other versions that do the same thing. The “App” is a pedestrian-friendly version of a program: very hard to stub your toe on, and requiring little if any support or documentation.

But before the “App” appeared, the world of IT had “Applications”, which referred to something perceived as bigger and more expensive that ran on ‘real computers’. Many of these applications ran on servers in the data center, while others ran on desktop and laptop PCs. A business application could cost hundreds of dollars per user and required IT-class support to install and maintain the software and underlying hardware. Oracle was an application. Microsoft Office and Adobe Photoshop are applications too. Thousands of other titles sit in that category as well.

Enterprises run on “Applications”, and today this is still where the lion’s share of work gets done. Intel’s roadmap for years was consumed with building faster and faster CPU cores to run these ‘real’ applications (quickly progressing from 133MHz to 3GHz or so), but the realistic price/performance limit per core has now been reached (roughly 4GHz). So elaborate schemes were devised, such as fabricating many cores per chip and using load-balancing technologies, to create the perception of an application with unlimited scale built on commercial, box-level CPU technology. The application really didn’t change; it still ran on a single server, but we deployed thousands of copies of the same server running the same application, all pulling and pushing against a shared resource pool. In a few cases, the most aggressive application developers realized that multi-threading could be used to take advantage of those in-box multi-cores as well. It all worked quite well, and this is still the most common scenario for scaled business applications today.
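If you want a feel for that scale-out pattern, here is a small Python sketch of its essence: several identical copies of the same ‘application’ pulling work from one shared pool, which is roughly what a load-balanced fleet looks like from 10,000 feet. The worker count and request IDs are invented for illustration.

```python
# Many identical copies of the same application logic, all pulling from a shared pool.
from multiprocessing import Process, Queue

def app_copy(worker_id, jobs):
    """One 'server' running the unchanged application logic."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel: no more work
            break
        print(f"copy {worker_id} handled request {job}")

if __name__ == "__main__":
    jobs = Queue()
    copies = [Process(target=app_copy, args=(i, jobs)) for i in range(4)]
    for p in copies:
        p.start()
    for request_id in range(20):  # simulated incoming requests
        jobs.put(request_id)
    for _ in copies:              # one sentinel per copy
        jobs.put(None)
    for p in copies:
        p.join()
```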

Now clear your mind and step back a bit. Processing in IT is all about managing resources to do computing tasks; put enough of these tasks together in the right sequence and the desired results pop out. For nearly 50 years, operating system designers have been working miracles in the efficient use of physical resources. Just look at the body of intellectual property behind IBM’s big-iron operating systems, DEC’s VMS, AT&T and BSD UNIX, Microsoft’s servers, and Linus’ Linux. Operating system design is a highly refined set of principles that is extremely well understood today. These are truly state-of-the-art inventions.

Now this may sound crazy, but why not employ the same extremely mature resource management principles across the entire data center? A single server is basically a box with high-speed pipes between functions like CPU cores, I/O and storage. A Data Center on the other hand is a BIGGER box, with high-speed pipes between CPU, I/O and Storage. Sound familiar?

So we know how to manage computing resources that are connected with fast pipes, and we know how to build really large structures that house computing resources connected with fast pipes. In theory, those big data center structures can have all the scale ever needed, since additional computing components can simply be pooled within the ‘box’. So why not just build APPLICATIONS that actually run on the Data Center?

Apache Mesos Project allows applications to run ON the data center, not just IN the data center!

Turns out you can! A bunch of people have been working on an open source project called Apache Mesos, which allows applications to be written for the data center itself. These applications run ON the data center, not just IN it! They do not have to understand anything about hardware, scale or redundancy; they simply use services provided by any number of physical devices in the data center. The best part is that, thanks to this extreme abstraction, additional services can be wheeled in as needed. Need more I/O? Bring in more I/O and start those services, which essentially pool themselves with the rest of the I/O. Need more storage or CPU? The same thing applies.
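As a hedged example of what ‘running ON the data center’ can look like in practice, here is a short Python sketch that submits an application to Marathon, one of the schedulers built on top of Apache Mesos. The scheduler URL and the resource numbers are assumptions for illustration; the point is that the app asks for CPU, memory and instance counts and never names a single server.

```python
# Hedged sketch: launching an application on a Mesos cluster via Marathon's REST API.
# The endpoint URL and app definition values are assumptions for illustration.
import requests

MARATHON_URL = "http://marathon.example.com:8080"   # hypothetical scheduler address

app_definition = {
    "id": "/demo/web",
    "cmd": "python3 -m http.server $PORT",
    "cpus": 0.5,          # fractional CPU share requested from the pool
    "mem": 256,           # MB of memory requested from the pool
    "instances": 3,       # Mesos places the copies; the app never sees hosts
}

resp = requests.post(f"{MARATHON_URL}/v2/apps", json=app_definition, timeout=10)
resp.raise_for_status()
print("submitted:", resp.json().get("id"))
```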

Is this magic just theory, or reality? Reality! In fact, you probably used a Data Center App before your second cup of coffee this morning. A growing list of applications already use Apache Mesos and are architected to run on a data center operating system at any scale! Most importantly, this concept becomes intuitively obvious once you are shown it. It just makes sense. Technology has finally caught up to allow this, so NOW is the time to think about delivering IT services armed with solid unit-of-work-level pricing that is essentially linear, and to stop letting the peculiarities of complex IT structures and overhead get in the way.

To quote a famous lyric from 1986, “The future’s so bright, I gotta wear shades!” (Extra credit: Can you name the band without looking it up?)


“220, 221, Whatever it Takes!”

DCIM integration with ITSM systems is critically important and NOT an adventure to be executed with brute force

“220, 221… Whatever it takes!” is a famous line of dialog from one of my favorite classic movies, “Mr. Mom”, starring Michael Keaton and Teri Garr. In the movie, Michael’s character finds himself between jobs and his wife becomes the bread-winner. During his hiatus from work, he occupies his time with odd projects around the home. At one point, Teri Garr’s character’s boss (played by Martin Mull) comes over to the house to pick her up for a business trip, and in a classic demonstration of male egos in contest, Michael’s character, holding a chainsaw in one hand, tells Martin that he is “re-wiring the house”, which apparently is his way of expressing his masculinity (even though he ostensibly is just watching the kids). Mull’s character inquires, “220?” (referring to a common voltage standard for high-power appliances), to which Michael’s character replies, “220, 221, whatever it takes!”. Clearly Michael’s character had no clue what the question even meant (there is no such thing as “221”), but in his desire to appear knowledgeable, he responded in a fashion that would sound good to the uninformed.

Why do I offer this excerpt from this classic movie here? Let me explain.


Everything as a Service! DCIM has reached its stride…

DCIMaaS Delivers!

The world of IT is changing rapidly, and nearly every conceivable IT offering is now being introduced by various vendors “AS A SERVICE”. Do a quick Google search, or talk to your main IT vendors, and you’ll see Desktop as a Service, Internet as a Service, Storage as a Service, Email as a Service, Platform as a Service, Network as a Service, Data Center as a Service and the grand-daddy of them all, Software as a Service (which I would humbly offer is where “X as a Service” started). In fact, one of the first wildly popular SaaS offerings was Salesforce.com, which proved that users of enterprise-class solutions would be happy to have certain types of applications offered per user, per month.

DCIM is no different and the strongest of the DCIM players, like Nlyte, have now realized this and introduced their offerings for “DCIMaaS”. It just makes sense….

Read the WHOLE story here.


Technology and the Federal Government…

Federal Office Systems Exposition (FOSE) 2014

The Federal government is in the midst of one of the most massive technology transformations in recent history. Every facet of the government is being evaluated to determine if the technology involved and already in place is both suitable to the current job at hand and cost-effective in the bigger picture.

Now, I used the word ‘technology’ above and most readers immediately picture data processing and IT infrastructure, but after spending the last few days at the Washington D.C.-based Federal Office Systems Exposition (FOSE) conference, I have a new appreciation for the term. It turns out nearly every aspect of running the Federal government involves some type of people, process or product technology, much of which doesn’t really jump out at most people until they are challenged to think about it. From the computers to the cameras, from the printers to the pistols. The tools used to protect our nation are amazing to browse through, and the attendees each had a deep appreciation for the specific technologies they were directly involved in.

Where else can you start walking from one end of the trade show floor talking to dozens of vendors about security and data center efficiency software, pass by an on-floor theater where 400-500 people were gathered to hear a keynote presentation entitled “A Collaborative Approach to Catching a Terrorist”, and then continue on to aisles of vendors demonstrating their Tactical Command Vehicles (TCV) and weaponry? Clearly technology in Washington D.C. has a much broader reach than our traditional IT view.

Don’t get me wrong, the show had a number of the monster IT firms (like IBM and DELL) present, along with many smaller companies like Shavlik (showing their highly regarded server patching solutions), Nlyte (showing off their integrated process/workflow management for data center assets), CohoData (showing a very cool scalable on-premise storage offering) and Feith (showing their leading edge records management). Now since this was a government focus, it also had a ton of surveillance companies like Axis Communications, it had emergency communications companies like Redsky (makers of E911 systems) and a fair number of document handling companies like Fujitsu (showing off their high-speed and secured document scanners and management suite).

This was a trade show like none other and provided a huge opportunity for those present to reset their understanding and appreciation of the term ‘technology’. Clearly folks were here with purpose. All of the government agencies had representation as attendees and exhibitors (GSA, Homeland, DOJ, etc). The sessions were well attended (including mine about data center consolidation) and lots of questions were being asked on the floor and in the sessions.

So did I sit in the monster sized Tactical Command Vehicle parked on the show floor? You bet. The folks at NomadGCS were kind enough to let me climb in to get an appreciation for all of the technology available in a protection offering that rises to something like 10-12 feet off the ground and spans perhaps 20 feet front to back. Did I get to experience any of the assault weapons being shown by Beretta, or H&K? Sadly no. I drew the line on my curiosity and decided to yield to those that understand that technology a bit better than I…


Where did all the SysAdmins go?

Remember the days, maybe ten years ago, when we had a sysadmin for every 30 or 40 servers? Those were simpler times, when each server in the data center differed, even if just slightly, from its closest neighbor. A data center with 1000 servers had 30 operators who spent their days patching, monitoring and resolving operational issues with ‘their’ servers. Each server was a critical element of the big picture, and the failure of any one of them was typically reason to cancel dinner plans with your significant other.

The role of the SysAdmin has changed – now more interesting and rewarding

I have to laugh thinking back on those days and wonder how we survived. We were so tightly wound-up in all of the intertwined technology that most sysadmins didn’t have the luxury to think into the future, and instead spent all of their time trying to keep their head above water and keep what they had running!

The first big milestone came when automated and scalable software solutions appeared that could provision devices according to templates, and then apply those templates to new and existing servers. These templates became ever more capable, and things like patch management became part of the template and the automated process. No longer did each device require an entire afternoon of loading software or applying patches. We simply defined any number of servers as a specific template type, and the provisioning toolsets would keep those servers matching those templates. One server or 1000 servers, it made no difference. A single sysadmin could now handle 25 times the number of servers in their daily schedule: instead of managing 30-40 individual servers in the old days, they could now manage in some cases 30-40 RACKS full of servers.
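Conceptually, the template approach boils down to ‘declare the desired state once, reconcile every host against it’. Here is a toy Python sketch of that idea, with made-up package names and hosts; real tooling (Puppet, Chef, Ansible and friends) does the same reconciliation at fleet scale.

```python
# Toy template-driven provisioning: declare a role once, reconcile hosts against it.
# Package names, patch levels and hosts are invented for illustration.

WEB_TEMPLATE = {
    "packages": {"nginx", "openssl", "monitoring-agent"},
    "patch_level": "2014-03",
}

def reconcile(host, installed, patch_level, template):
    """Print the actions needed to bring one host in line with the template."""
    missing = template["packages"] - installed
    if missing:
        print(f"{host}: install {sorted(missing)}")
    if patch_level < template["patch_level"]:
        print(f"{host}: apply patches up to {template['patch_level']}")

if __name__ == "__main__":
    fleet = {
        "web-001": ({"nginx"}, "2014-01"),
        "web-002": ({"nginx", "openssl", "monitoring-agent"}, "2014-03"),
    }
    for host, (installed, level) in fleet.items():
        reconcile(host, installed, level, WEB_TEMPLATE)
```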

The second game-changer was virtualization. Virtualization essentially broke the 1-to-1 alignment between physical boxes and the number of actually running servers. Provisioning and update automation combined with virtualization meant that a sysadmin was now able to manage literally thousands of ‘servers’ almost instantly, with a high degree of confidence that they were optimized, secured and supportable regardless of what failures might occur. And best of all, since many server instances were now virtualized, the number of walks down to the data center was drastically reduced; in fact, sysadmins could exist ANYWHERE on the planet as long as a network connection existed. Follow-the-sun (or moon) strategies popped up that leveraged sysadmins in multiple locations.
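For a taste of that ‘manage it from anywhere’ shift, here is a hedged sketch using the libvirt Python bindings to inventory the guests on a remote hypervisor. The connection URI and host name are assumptions for illustration.

```python
# Hedged sketch: inventory the virtual machines on one hypervisor over the network,
# the sort of task that used to mean a walk to the data center.
import libvirt

conn = libvirt.open("qemu+ssh://admin@hv01.example.com/system")  # hypothetical host
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```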

Lastly, the decomposition of many volume applications (like web e-commerce, search and social servers) using technologies like Hadoop allowed these applications to automatically expand or contract across any number of servers without operational intervention, which also allowed the failure of any given server to have no perceived effect on the business itself. With relative ease, 1000 servers could be tasked for web traffic in the middle of the night and 10,000 servers could be active in the middle of the day, without any human intervention. The applications were resilient and essentially self-healing, and users DID NOT need to know or care where their transactions were actually being serviced. Today, most web-centric data centers will have literally dozens or hundreds of physical servers off-line for maintenance due to hardware failure at any point in time, and replacing the failed hardware in this type of configuration is considered a normal monthly maintenance task rather than an urgent, business-impacting one.
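The elasticity itself can be surprisingly simple at its core: derive a fleet size from current demand and let the platform converge on it. Here is a minimal Python sketch with invented thresholds.

```python
# Minimal demand-driven elasticity sketch: pick a server count from the current
# request rate, with no human in the loop. Thresholds are invented.

MIN_SERVERS, MAX_SERVERS = 1_000, 10_000
REQUESTS_PER_SERVER = 500            # assumed sustainable load per instance

def desired_fleet_size(current_rps: int) -> int:
    needed = -(-current_rps // REQUESTS_PER_SERVER)   # ceiling division
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

if __name__ == "__main__":
    for rps in (120_000, 900_000, 6_000_000):
        print(f"{rps:>9} req/s -> {desired_fleet_size(rps)} servers")
```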

So, where did those SysAdmins go? Are they working at Starbucks (one of my favorite places on earth) or did they become teachers? Nope! They are still practicing their chosen craft, but doing so at a much higher level, with much more impact and in a fashion that increases their ability to innovate. Their job satisfaction has increased dramatically as their role transitioned from tedium to technology. In most cases these professionals have been given the opportunity to contribute in a much more meaningful fashion, and they feel more tightly connected to the organization’s success. Systems Administrators can now look for innovative ways to support the business in a fashion that either decreases operations costs, or increases business value.


Modular Meets Open Compute, A Match Made in Heaven!

Last year I would not have been able to write this story, as I had a self-imposed rule to avoid mentioning any specific vendor by name when discussing innovation. In 2014 I have decided to write about things introduced by leading vendors that catch my attention for some business-impacting reason. I am sure you will agree that most of these innovations are catching the attention of other data center activists too. (This new practice saves me the time of answering a bunch of individual emails asking about specific vendor names.)

Modular and Open Compute: The Perfect Private Cloud

So that brings me to IO’s latest introduction of their private cloud building block offering. Think: modular meets computing to yield ready-to-go cloud building blocks. Modular structures (IO’s original core competency) combined with industry-standard computing (as provided by Open Compute). Anyone who reads my published materials knows that I am a big fan of both.

Building data centers ‘from scratch’ and trying to re-invent the wheel each time just seems so old-school (and expensive, long in duration, etc.). In 1999, I saw my first modular entry: a million-dollar ISO shipping container touring the USA on the back of a tractor-trailer that originated in Rhode Island. In 1999, modular was a novel concept, but the industry had little reason to consider alternatives to brick and steel, and just a few years later we had the dot-com bust with its resulting abundance of seemingly unlimited cheap data center space. At that time, we all asked why anyone would use modular. Five years later it became apparent that those OLD data centers were NOT going to cut it for the onslaught of new high-power and high-density IT solutions, and around Christmas of 2006, I found myself parking my car in a converted data center near San Jose airport, which emphasized just how unsuitable those old centers were for modern IT deployments. (I was a bit surprised that they did NOT try to charge me for parking my car by the square foot!)

So back to modular. Modular makes sense (and cents), and the analysts of late are pretty pumped about it as well. Uptime’s latest survey of their network of roughly 1000 respondents has nearly one-fifth claiming that some form of modular will be part of their plans over the next 18 months. Modular (when done right) allows you to stand up YOUR gear, in some cases HALF A MEGAWATT of it at a time, in just a handful of months. Start to finish. Months, not years! No surprise that a fifth of the enterprises they surveyed are planning on adding some modular capacity.

So modular creates an uber-efficient, ready-to-load ‘box’. Then the question is, what do you fill that box with? Do the IT guys also have to re-invent their own wheel over and over again? How cool would it be if the chosen modular vendor could also supply the ENTIRE building block of everything, including the IT gear? Everything you need to compute, store and network within their fixed-form ‘box’. Gear that is open, standard and secure. That is exactly what IO has now done. Imagine taking a brand new data center, fully loaded with the perfect complement of IT processing gear, and then cutting it up into bite-sized (500kW) chunks ready to connect wherever you want, at the drop of a hat. Want a MEG of computing? Buy two chunks. Want two MEGs’ worth? Buy four. Just decide where you want them to sit, connect to power and your core network, and you are up and running.

Now there are a couple of details worth discussing. First, the modular ‘box’ approach is just plain efficient. The design is optimized and incorporates the best technologies for each subsystem, all deliberately chosen (not inherited) to be interwoven, monitored and actively managed. Why burden a control system with supporting the least common denominator of every chiller in the industry when, in this specific case, you know exactly what YOUR chiller will be each and every time, because the modular enclosure vendor chose it by design? Doing so enables vendors like IO to take maximum advantage of each and every chosen component, rather than having to ‘accommodate’ some type of one-off or inherited facility. The IO ‘box’ is highly defensible, economical and a great example of what a standardized factory assembly line can churn out. I really have no affiliation with IO, but I find George’s approach to be awe-inspiring.

Second, consider the IT gear that fills the ‘box’. IO has chosen to supply a fully functional complement of the open servers, switches and storage needed for processing. To do so, they chose gear based upon the Open Compute Project (OCP) specification, pioneered by companies like Facebook and other hyper-scale users. OCP is the open source version of hardware: all servers built to the Open Compute specification physically fit and work together. They are compatible. Building racks using Open Compute creates an open environment. Again, as with the enclosure structure, standardization is the key. Most of the Tier-1 server vendors have an Open Compute node offering, and in this case IO chose the DELL ES-2600s for their OCP racks.

So what caught my attention? IO got past theory and is delivering this today. Put this standardized IT complement inside their standardized modular structure, add deep, granular control and active management, and they have something pretty special: a chunk of computing ready to go. A solution that starts with a single chunk and then scales as big as you like, with industry-leading performance, interoperability and intelligence, completely secure and open.

How does this all relate to the Cloud I mentioned at the top of this story? There are some very specific capabilities that make up a “Cloud”, and while everyone has their own spin on it, I think NIST has one of the most balanced and unbiased definitions I have seen (ref: NIST 800-145). In that document, they call out a handful of key capabilities required for a Cloud (public or private have the same requirements):

  • On-Demand Self-Service
  • Broad Network Access
  • Resource Pooling
  • Rapid Elasticity
  • Measured Service

Take standardized enclosures, add standardized IT gear, install it in a suitable location, and then throw in an advanced layer of provisioning and management software, and you have the perfect building block for a Private Cloud. Since everything inside the IO offering is standardized and automated, with control and chargeback, rapid provisioning, and so on, they have built the very definition of a Cloud, all in easy-to-eat 500kW bites.
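To put one of those NIST capabilities in concrete terms, here is a toy ‘measured service’ sketch: meter each tenant’s consumption inside a 500kW block and turn it into chargeback. The rate and the meter readings are invented for illustration.

```python
# Toy "measured service" example: metered usage inside a 500 kW building block
# turned into chargeback. Rates and readings are invented for illustration.

BLOCK_CAPACITY_KW = 500
RATE_PER_KWH = 0.11        # assumed blended $/kWh for the block

def chargeback(tenant_kwh: dict) -> dict:
    """Map each tenant's metered kWh for the period to a dollar figure."""
    return {tenant: round(kwh * RATE_PER_KWH, 2) for tenant, kwh in tenant_kwh.items()}

if __name__ == "__main__":
    metered = {"finance": 42_000, "ecommerce": 117_500, "analytics": 63_250}
    for tenant, cost in chargeback(metered).items():
        print(f"{tenant:10s} ${cost:>10,.2f}")
```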

Did I get your attention?
