The Good Ol’ Days are gone – Security is the Basis of Your Future

Having just come back from Gartner’s Data Center Conference, held annually in Las Vegas, I had the opportunity to reflect on what I heard at a macro level over the past few days. For those who didn’t attend, Gartner brings together 2,500 or so of the industry’s leading IT professionals from the vast majority of the Fortune 500. Their titles range from IT and Data Center Manager to CIO and VP of IT or Infrastructure. Over the course of four days or so, this mass of IT folks gets together to mingle, discuss business strategies and try to get a sense of where the various technologies, and the industry itself, are going. Obviously areas like Internet of Things, Cloud, BYOD, APM and Co-Lo were very hot topics, as were the latest generation of server, storage and network offerings. But there was one topic that seemed to be an integral part of every other discussion: security.

Data Center Transformation is a function of Security

As I sat back and listened, I realized that many of the folks in attendance had a solid reference point of “their IT” as it has existed for the past 20 years. Many of the vendor presentations and hallway discussions had a tone that longed for the simpler “Good Ol’ Days,” when our biggest concerns were capacity, availability and interoperability. Sure, we had some good times dealing with those, and many of us have made our entire careers chasing those rabbits.

So here we are in 2014, and it is OURS to define the going-forward plans. Those plans will be dramatically different from the ones that got us here. As it turns out, the massive connected mesh we are all striving for brings with it the responsibility to deliver it all in a highly secured fashion, with all the tentacles (endpoints) secured as well. Security is now clearly in sharp focus and runs through every other discussion about building the IT structures of today and tomorrow.

Now follow me on this next journey. According to Rakesh Kumar at Gartner, 1) over $3.7 TRILLION has been spent on IT and IT services in 2014, and 2) over 37% of those dollars are being spent on IT solutions but OUTSIDE of the IT organization. Why are these numbers important? They mean that the safe, confined, comfortable corporate world of IT that we all grew up in and protected is now littered with connections and other services that exist OUTSIDE of your control! And it’s not just remote users, but remote applications as well. Connecting to SaaS and other clouds, remote access and BYOD users form a critical component of our going-forward plans, and yet many of us are still throwing 2005-vintage protection schemes at our corporate borders.

In the Good Ol’ Days, massive commercial security breaches were something we rarely heard of at the corporate level, because those companies simply didn’t allow much external access, and when they did, they used VPN-like technologies and felt safe and secure since the endpoints were considered as good as ‘inside’ the corporate structure. Today, nearly every day, we hear about another major retailer, bank, government agency or telecom disclosing ‘issues’ with unauthorized data access. What changed? Are these companies simply not spending enough on firewalls or IDS? If only it were that simple. In fact, IDC says they spent nearly $10 BILLION on protection systems this year. Money is not the problem. They want to protect, but modern threats have matured so much that old-school ‘signature’-based technologies (the ones deployed by most companies today) are dramatically ineffective. Today it doesn’t really matter how many ‘signatures’ those old-school devices have built in. We need to think different.

It’s about behavior. With $3.7 trillion of new spend on the line, the forward thinkers are realizing that detection signatures describe the past, whereas behavior defines the future. What do I mean by behavior? Well, the autopsy of a typical breach goes like this: 1) A simple system like a desktop, laptop or web server is hacked and some form of malware control app is placed upon it. 2) The malware becomes the ‘agent’ on that box and can be instructed to do anything the outside hacker wishes. 3) The hacker is typically looking for something specific, namely sensitive data, so this control agent is told to seek that data out and then, once it is found, package it up and use a familiar, friendly protocol (like HTTP) to send it back to hacker central. Lastly, 4) these agents are usually instructed to look for similarly breach-able peer systems so that the process can be repeated.

With this behavior in mind, it’s just a matter of designing new protection systems from the ground up that try to identify this flow. They are designed to focus on zero-day threats (those never seen before) as well as all kinds of Advanced Persistent Threats (APTs). These newer protection systems understand the zillion variants of this behavior, and the best of them actually get smarter over time, testing their initial analysis and even detonating certain payloads seen traversing the wire.
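To make the contrast with signature matching concrete, here is a minimal sketch of the behavioral idea, written in Python purely as an illustration. It is not any vendor’s product; the host name, thresholds and the “daily outbound bytes” metric are hypothetical stand-ins for the much richer telemetry real behavior-based systems analyze.

    # Toy behavior-based detection: instead of matching known signatures,
    # flag hosts whose outbound transfer volume drifts far from their own
    # historical baseline (the "package it up and send it out" step above).
    from collections import defaultdict
    from statistics import mean, stdev

    history = defaultdict(list)   # per-host daily outbound byte counts

    def record_daily_egress(host: str, bytes_sent: int) -> None:
        """Add today's outbound byte count to the host's baseline."""
        history[host].append(bytes_sent)

    def looks_suspicious(host: str, bytes_today: int, z_threshold: float = 3.0) -> bool:
        """Flag a host whose egress is far outside its own normal behavior."""
        samples = history[host]
        if len(samples) < 7:              # not enough baseline yet
            return False
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:                    # perfectly steady history
            return bytes_today > 2 * mu
        return (bytes_today - mu) / sigma > z_threshold

    # A desktop that normally sends ~50 MB/day suddenly pushes out 5 GB
    for _ in range(10):
        record_daily_egress("desktop-042", 50_000_000)
    print(looks_suspicious("desktop-042", 5_000_000_000))   # True

Real systems obviously correlate far more than byte counts (process behavior, lateral movement, protocol misuse), but the principle is the same: model what normal looks like, then flag departures from it.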

The point of this whole story? The Good Ol’ Days are fun to talk about, and even to share a few stories about over a cup of Starbucks or a cocktail, but OUR real business needs going forward all hinge on solving the security challenge for a highly connected world. Frank Gens at IDC estimates that by 2018, HALF of ALL of our IT spending on computing and storage will be Public Cloud based. That means all of that computing will sit at the end of a wire over which you have little or no control, outside of your comfortable brick-and-mortar walls, with its own set of security mechanisms.

We will find our footing, and we will collectively reach a consistent definition of what is expected from the new secured world of IT. This is Data Center Transformation in the making. But as we saw in Las Vegas, it will come with a primary desire to think about the new task at hand, and to solve for that, rather than simply building on the structure we have inherited. Time to put on our big-boy pants…

Dynamic Workload Management, Physical to Virtual to Cloud – and back!

Enabling Workload Migration between Physical, Virtual and Cloud providers

Now that we are all becoming ‘experts’ on the transformation under way in the world of computing, we should all be realizing that there is an exciting opportunity in front of us to leverage the foundational platform innovations that have occurred. “Hybrid computing” is a phrase many of you use to describe the various combinations of traditional servers, virtualized servers and the various flavors of the cloud used to conduct work. To add a second dimension to the phrase, each of these computing platforms has a number of major vendor choices, each with a particular set of capacities and specific dependencies. For servers, major choices include HP, DELL and Lenovo, plus a slew of Tier-2 vendors like Intel, Supermicro and Quanta. For virtualization we see VMware, Microsoft, Xen and KVM. And the Cloud is dominated by players like Amazon, Softlayer, NTT, Rackspace, VMware, Sungard and CenturyLink.

Now, wearing your newly acquired IT business hat (XL, courtesy of your CFO), keep in mind that your goal is to find the most suitable platform for each of your applications, and that the definition of “most suitable” changes from application to application and as a function of time. The work to be performed doesn’t actually change in this process; it is just the foundational means of delivering that computing that could change. The catalyst for moving your workloads to other platforms could be performance or capacity, could be economics, or could be disaster-recovery or compliance needs.

While many of us have a great understanding of each of the discrete platforms, the ability to move workloads between traditional servers, virtualized servers and the cloud (public and private) is less understood, and yet it has already become a critical success factor in managing the cost of doing work and the reliability of IT. As it turns out, moving workloads is conceptually very easy. Simply package up each of your applications as ‘workloads’ (carefully identifying the dependencies each unit of work has), and then characterize the various platforms on their ability to deliver those dependencies. To put a workload on a particular platform, you grab the unit of work, add the specific target-platform wrapper, load it on the target platform and press “GO”. In the reverse direction it works in exactly the same fashion. Want to move work between Cloud-A and Cloud-B? No problem. Grab the workload, strip off the Cloud-A wrapper, add the Cloud-B wrapper and you are all set.
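To keep that “strip one wrapper, add another” idea concrete, here is a conceptual sketch in Python. The class, platform names and wrapper fields are all hypothetical; real migration tools do far more (disk conversion, network remapping, dependency verification), but the separation of workload from wrapper is the point.

    # Conceptual sketch: a workload as a portable unit, with a thin,
    # swappable platform wrapper. Names and fields are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Workload:
        name: str
        dependencies: list                           # e.g. ["mysql-5.6", "4 vCPU", "16GB RAM"]
        wrapper: dict = field(default_factory=dict)  # platform-specific packaging

    PLATFORM_WRAPPERS = {
        "physical": {"format": "bare-metal image", "boot": "PXE"},
        "vmware":   {"format": "OVA",              "boot": "vSphere"},
        "cloud-a":  {"format": "provider image A", "boot": "provider API"},
        "cloud-b":  {"format": "provider image B", "boot": "provider API"},
    }

    def migrate(workload: Workload, target: str) -> Workload:
        """Strip the current platform wrapper and apply the target's wrapper."""
        workload.wrapper = dict(PLATFORM_WRAPPERS[target])
        return workload

    # Move the same unit of work from Cloud-A to Cloud-B, then back to a VM
    app = Workload("billing-app", ["mysql-5.6", "4 vCPU", "16GB RAM"])
    migrate(app, "cloud-a")
    migrate(app, "cloud-b")
    migrate(app, "vmware")
    print(app.wrapper["format"])   # OVA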

Conceptually, it’s about separating the application doing the work from the platform intended to run that work. There are a handful of startups, like Silicon Valley-based Rackware and Rightscale from Santa Barbara, that have set their sights on doing this, and a few of these companies have become experts at it. They make short work of migrating workloads between public clouds, private clouds, physical servers and virtual instances. Basically any source can be migrated to any destination.

These are really exciting times, and those IT pros who fully embrace hybrid computing and workload migration as a function of value and performance will be rewarded over and over again. So while you have spent the last 25 years talking about doing work in ‘the data center’, going forward we should probably start using a term like ‘the data function’, or some other less restrictive phrase that encompasses computing that may also occur outside of our original four walls.

Being Too Close to the Trees can Cloud Your View

Close-Up to Trees

Make no mistake, the transformation of computing is happening. While 10 years ago IT technologists would have been talking about CPU speeds and core density, gigabit pipes and gigabyte disks, today they are talking about putting all of these advances into a logical, elastic computing platform that can be scaled to any size and accessed on demand at just the level needed, with the matching minimum cost structure for that level of demand. We all know it today as “Cloud Computing”, and it allows the consumer of computing to tailor their purchased supply to their actual demand.

The Big Picture

While it’s still very exciting to talk about the technology of “The Cloud” itself, there is a groundswell of companies looking to leverage that technology by delivering business value on top of the Cloud in very specific vertical worlds. It’s the difference between talking about owning a forest of oak trees and talking about fine hand-crafted wooden kitchen tables and chairs. See the relation? The buyer of a chair has little interest in the details of the forest or the trees; they would rather focus on the color, finish or styling of a chair made from those trees and built for their specific kitchen. They understand the impracticality of transforming a raw oak log into a kitchen chair themselves, and can appreciate the number of craftsmen they would have to hire to convert that log into a chair. They know there is real value in having someone do that work for them.

Like the tree log, the Cloud is the raw material for delivering end-user-focused applications and value. If asked, end-users will always prefer applications that are purpose-built for their specific need (vertical) rather than buying one-size-fits-all and trying to make it into something it was never intended to be. Think of the diversity of needs between hospitality and retail, or between life sciences and legal. Each of these verticals presents a very different set of needs, so when companies with these varying business plans go looking for cloud-based solutions, they would prefer to find something pre-made for their world, not a general-purpose solution that may fit some of their needs but miss the mark on others.

Much of the ‘buzz’ in the tech industry is about the Cloud itself, but a growing swell is, thankfully, focusing on the innovation and agility the Cloud makes possible. Entire companies exist to innovate applications built for the Cloud that address their vertical’s values, simply treating the Cloud as the requisite raw material for access and delivery. There are a ton of capabilities the Cloud brings to the table that are inherited by the application providers who use it, but at the end of the day it is not the Cloud that matters to these users so much as the vertically focused value.

The key for all of us is to broaden our minds and re-consider the Cloud platform as an amazing new opportunity to re-think the myriad of general-purpose solutions already in place. Take EMAIL for example. We all grew up on it, and take it for granted. Today the Cloud allows new users to be provisioned with the click of a mouse. No hardware or software needed. Pretty amazing. But email is just email, right? Wrong. When was the last time you were asked to share your credit card via email and felt safe doing so? Never. How about receiving your medical test results via email? Nope, not going to happen with today’s email systems. Email wasn’t built for these vertical needs, and consequently can’t be used for those common purposes.

Now think of building applications with the specific use case in mind. If we wanted to have an email system designed for sharing financial records we could. But it must be designed to do so, from the start. Sure it would use the Cloud as the delivery mechanism, but the innovation happens above the Cloud.

I am always excited when I see companies that get past the sound bites and instead focus on the heavy lifting. Cloud is a popular sound bite and generates more column inches in the press than anything else we’ve seen in years, but that is just the raw tree logs. You’ll see a growing number of people talking about their finely crafted “chairs”, I mean “vertical applications”, going forward…

“The New Phonebook’s Here!” (a.k.a. Gartner releases the first ever DCIM Magic Quadrant!)

Validation is Important to Everyone. DCIM is being Validated again.

You probably remember this classic sound bite from the 1979 movie “The Jerk”, starring Steve Martin as a naïve guy from Mississippi who is transplanted to St. Louis and tries to make his way in his simple life. In one of the early scenes, Navin (his character) grabs a copy of a brand-new phonebook (the white pages, remember them?) and immediately looks for his name in the book as a way to validate that his existence in St. Louis has not gone unnoticed.

Sometimes it just takes a little written formalization of reality to make a big difference. That is the case with Gartner’s newest Magic Quadrant, the first ever for DCIM, released September 22nd, 2014 (Gartner subscribers can refer to document #G00259286). While over a hundred vendors have been slugging it out for years, trying to demonstrate their added value in managing the physical layer of the data center, the need was always instinctual to some innovators but did not rise to the level of urgency in most people’s game plans. In fact, fewer than 5% of the likely adopters of DCIM have already done so, with the remainder looking for some form of major validation from the industry. While a number of analysts have done loose DCIM roundups in the past, Gartner’s new Magic Quadrant brings together 20 or so vendors, each visible within the DCIM industry, each with some number of customers that have adopted their respective wares, and each having solved some portion of the needs defined in this category. Gartner spent nearly two years researching the market and drilling into business value.

It’s worth reminding everyone that the DCIM category is still broadly defined, so the vendors listed in the DCIM Magic Quadrant are not necessarily direct competitors; in some cases they are complementary. The key to Gartner’s Magic Quadrant process is ranking these vendors based upon their completeness of vision and their ability to execute that vision. As you read the Gartner DCIM Magic Quadrant, consider it a short-listing tool for your own DCIM efforts. Every Fortune 2500 company will be adopting DCIM; it’s just a matter of time. As you’ll see in this and other reports, the value realized through the adoption of DCIM is simply too great to ignore any longer. Yes, DCIM is a new way of doing things in IT management, but take a deep breath and look around and you’ll see the WHOLE concept of IT infrastructure is changing overnight. It only makes sense to add DCIM to your strategic plan to help raise efficiency and reduce your costs in the data center, in your co-los, and across your sites.

As always, I applaud anything that formalizes the reality of DCIM as a strategic investment with tangible cost savings. And when a major analyst group like Gartner spends this amount of time and energy to compare and contrast vendors, as it did here in the DCIM Magic Quadrant, the take-away for all Fortune 2500 IT executives should be an underscored CONFIRMATION that DCIM is real and can be deployed today, that DCIM has huge business merits and will produce measurable cost savings today, and that DCIM should be part of everyone’s dance card… TODAY!

Getting Past Rhetoric, Real observations of the value of DCIM

We have all lived through the past half-dozen years waiting for the nascent DCIM market to arrive. Vendors and customers alike have been hearing about ‘the promise’ of DCIM for many years, and have been waiting for just the right moment to jump into the fray, the point at which the promise could actually be realized in their own world, in their own data centers. The great news is that a number of brave pioneers have already taken on the opportunity provided by DCIM. Want to hear more about these pioneers? Read More Here.

Workload management in the era of Hybrid Computing styles…

I sit down with data center managers and other IT professionals all the time, and one of the things we undoubtedly start talking about is ‘the future’ of IT. In fact, everyone in IT loves to talk about the future of IT, since there is simply SO MUCH transformation going on these days. The vast majority of us have been in IT for 25 years or so, and I can genuinely say that I have never seen such a breakneck pace of change across all aspects of the data center and IT infrastructure in my generation.

While many of those ‘future’ discussions start with the state of the industry generically, they quickly zero in on the state of THEIR IT FUTURE. While the state of the industry is curious and exciting to most, it is how that change will affect their own world that is of primary importance. And all of that change is being viewed through a fresh new set of ‘business-colored glasses’. So while it’s still fun to talk about Ethernet port speeds and new capabilities of Windows Server (oh boy), a higher-priority discussion revolves around the cost to deliver IT “products”, or what are sometimes referred to as IT services (e.g., a user’s email capability). And with worldwide spending on IT in 2014 of more than $3.7 trillion, you can bet there are a lot of people involved and a lot of financial analysts asking some pretty tough questions.

So that brings me to the topic of “computing styles”. Fifteen years ago, we didn’t really have ‘styles’ of computing. Prior to that time, anyone who wanted to do computing had a data center. Any organization that relied on IT built big data centers, on land that they bought, in buildings that they constructed, and they filled those structures with gear that they purchased. This was pretty standard fare for most of the ’80s and ’90s.

Then we watched in horror throughout the troubling years of 2000-2010. We took a double whammy in that decade: 1) the DOT-COM meltdown of 2000 and 2) the ECONOMY meltdown of 2008. What these did was provide extreme motivation to develop and promote alternative and much more cost-effective business models for computing. Putting aside the related changes in end-users’ business needs for a moment, Enterprises now had a handful of choices, or ‘styles’, for computing:

  1. In-House – characterized by brick and mortar construction, large upfront costs, and complete responsibility for all aspects of its operation and usage. In many circles, this in-house capability is being re-tooled to behave as a ‘private cloud’.
  2. Co-Location – much like an in-house Data Center, but the cement and MEP gear is provided, essentially as a service. Enterprises ‘rent’ space that comes with power and cooling and network connectivity.
  3. Cloud – the hyperscale data centers with tens or hundreds of thousands of servers, running specialized software for self-service and quick provisioning which provide the ability to purchase computing by the transaction, eliminating all other operational concerns. Usually “Cloud” is the shorthand for “Public Cloud”.
  4. Modular – think of a small in-house data center that can be transported to site in 500kW increments, stood-up in just weeks rather than years, and can be tuned for specific cookie-cutter business needs without impacting other portions of the IT structure.
Computing Style Mix Will Change Over Time

Most importantly, IT professionals who get the big business picture realize that their own infrastructure WILL NOT change entirely overnight. In fact, all four of these styles will exist in various combinations across their span of control for years to come. If asked, most IT professionals will say something like “I am going to the Cloud”, but what they really mean is that their strategy is to satisfy a growing percentage of their computing needs via transactional, cloud-oriented computing, focusing more on transactions and transactional costs than on floor tiles or servers. It’s not a black-and-white change; it’s about the changing mix of these styles over time. And the forward-looking decisions are centered on the value of using each type.

Now, the beauty of computing styles is that there are a number of startups and public companies alike dealing with the TRANSITIONS and MIGRATIONS between these styles, and with ways to leverage each style for its core value. Say, for instance, a company already owns 100,000 square feet of data center space that is less than three years old, but needs to augment its capacity with more transactions twice a year. Why not just add public cloud transactions to its own transaction-handling capability? It turns out you can! Startups and open source projects make the migration of workloads as simple as clicking a mouse, or in some cases completely automated based upon demand. Workloads can be moved across in-house servers as easily as between private and public clouds… and back again!
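Here is a toy sketch of that burst-on-demand placement decision, again in Python. The capacity figure and function name are hypothetical; real orchestration tools make this call automatically against live telemetry rather than a hard-coded constant.

    # Toy "burst to public cloud" placement: steady-state work stays in-house,
    # and the twice-a-year peak overflows to the public cloud.
    IN_HOUSE_CAPACITY_TPS = 10_000   # transactions/sec the owned data center can handle

    def place_workload(expected_tps: int) -> dict:
        """Split expected demand between owned capacity and public cloud burst."""
        in_house = min(expected_tps, IN_HOUSE_CAPACITY_TPS)
        burst = max(0, expected_tps - IN_HOUSE_CAPACITY_TPS)
        return {"in_house_tps": in_house, "public_cloud_tps": burst}

    print(place_workload(8_000))    # normal day: everything stays in-house
    print(place_workload(25_000))   # seasonal peak: 10,000 in-house, 15,000 burst to cloud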

Or consider the concept of Disaster Recovery planning in the context of computing styles. Everybody should have a rock-solid DR plan, but justifying that plan and its costs usually revolves around the cost of standing up a ton of warm standby gear. What if you could use one of the other styles, say the cloud, as a ‘flick of a switch’ DR site by moving workloads to it when the main site is impacted? Once again, you can!

Today’s challenge for data center and IT professionals is really about properly setting a value on each of the IT workloads they commit to provide, and doing it with the level of resiliency the business needs. It’s ALL about the business, and the number of options available to run the business has increased!

Workload management is the key to business efficiency, and one of the most critical facets of workload management is identifying the cost to deliver work. With $3.7 trillion in spending on the line, do you know what it costs to process each unit of your work, and do you know whether it would cost more or less per unit of work to shift from one style to another? Do you have transition plans and technologies to move workloads dynamically and leverage each style based upon demand, status or time? Can you handle demand peaks effectively, or have you over-provisioned resources that sit idle for most of their lives? Do you have a DR plan that makes your IT contingency strategy both low-risk and low-cost? These are the tough questions on the table today.
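If you want to start answering the cost-per-unit-of-work question, the arithmetic itself is simple; the hard part is gathering honest numbers. The sketch below uses entirely made-up dollar figures and transaction counts just to show the shape of the comparison.

    # Back-of-the-envelope cost-per-transaction comparison across styles.
    # Every figure here is a placeholder; substitute your own fully loaded costs.
    def cost_per_transaction(monthly_cost_usd: float, monthly_transactions: int) -> float:
        return monthly_cost_usd / monthly_transactions

    styles = {
        # style: (fully loaded monthly cost, transactions actually processed)
        "in_house":     (450_000, 90_000_000),
        "co_location":  (300_000, 90_000_000),
        "public_cloud": (0.004 * 90_000_000, 90_000_000),   # priced per transaction
    }

    for style, (cost, txns) in styles.items():
        print(f"{style:13s} ${cost_per_transaction(cost, txns):.4f} per transaction")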

Anyone want to go back to simpler times, when we all talked about raw networking switch throughput or the capacity of the latest spinning-media disk drives?

The “Data Center App” – Who Knew?

For the past five years we have become used to saying “There’s An App For That”, referring to the hundreds of thousands of available smart-device programs that can be downloaded for little or no cost and run in seconds.

The word “App” was born with the smart phone and has conjured up images of “tiny” bits of code that run on these phones or tablets and do a few interesting things, perhaps even a bit of business, but mainly these Apps were creative, social or entertainment. We have become quite comfortable with finding and installing Apps at the drop of a hat, and if we don’t like any particular one, we delete it and try one of the hundred other versions that do the same thing. The “App” is a pedestrian-friendly version of a program. An app is very hard to stub your toe on and requires little if any support or documentation.

But before the “App” appeared, the world of IT had “Applications”, which referred to something perceived as bigger and more expensive that ran on ‘real computers’. Many of these applications ran on servers in the data center, while others ran on desktop and laptop PCs. A business application could cost hundreds of dollars per user and required IT-class support to install and maintain the software and underlying hardware. Oracle was an application. Microsoft Office and Adobe Photoshop are applications too. Thousands of other titles sit in that category as well.

Enterprises run on “Applications”. Today this is still where the lion’s share of work gets done. For years, Intel’s roadmap was consumed with building faster and faster CPU cores to run these ‘real’ applications (quickly progressing from 133 MHz to 3 GHz or so). But the realistic price/performance limit per core has now been reached (about 4 GHz), so elaborate schemes were devised, like fabricating many cores per chip and using load-balancing technologies, to create the perception of an application with unlimited scale built on commercial, box-level CPU technology. The application really didn’t change; it still ran on a single server. We just deployed thousands of copies of the same server running the same application, all pulling and pushing against a shared resource pool. In a few cases, the most aggressive applications realized that multi-threading could be used to take advantage of those in-box multi-cores as well. It all worked quite well, and this is still the most common scenario for scaled business applications today.

Now clear your mind and step back a bit. Processing in IT is all about managing resources to do computing tasks. Put enough of these tasks together and in the right sequence and desired results pop out. For nearly 50 years operating system designers have been working miracles in the efficient use of physical resources. Just look at the body of intellectual property that has created IBM’s big iron operating systems, DEC’s VMS, AT&T and BSD UNIX, Microsoft’s Servers, and Linus’ Linux. Operating system design is a highly refined set of principles that are extremely well understood today. They are truly state of the art inventions.

Now this may sound crazy, but why not employ the same extremely mature resource management principles across the entire data center? A single server is basically a box with high-speed pipes between functions like CPU cores, I/O and storage. A Data Center on the other hand is a BIGGER box, with high-speed pipes between CPU, I/O and Storage. Sound familiar?

So we know how to manage computing resources that are connected with fast pipes, and we know how to build really large structures that house computing resources connected with fast pipes. In theory, those big data center structures can have all the scale ever needed, since additional computing components can simply be pooled within the ‘box’. So why not just build APPLICATIONS that actually run on the Data Center?

Apache Mesos Project allows applications to run ON the data center, not just IN the data center!

Turns out you can! A bunch of people have been working on an open source project called Apache Mesos, which allows applications to be written for the data center itself. These applications run ON the data center, not just IN it! They do not have to understand anything about hardware, scale or redundancy. They simply use services provided by any number of physical devices in the data center. The best part is that, thanks to this extreme abstraction, additional services can be wheeled in as needed. Need more I/O? Bring in more I/O and start those services, which essentially pool themselves with the rest of the I/O. Need more storage or CPU? The same thing applies.
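The sketch below is a toy illustration of that resource-pooling idea, written in Python. It is emphatically not the real Mesos API (real frameworks receive resource offers from the Mesos master and launch tasks through executors); it only shows the mental model of the data center as one big pool that applications draw from without ever naming a machine. Node names and sizes are hypothetical.

    # Toy model of a pooled "data center as one big box". Not the Mesos API.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        cpus: float
        mem_gb: float

    class DataCenterPool:
        """Every node's resources pooled behind one scheduling interface."""
        def __init__(self, nodes):
            self.nodes = [Node(n, c, m) for n, c, m in nodes]

        def launch(self, task: str, cpus: float, mem_gb: float) -> str:
            """Place a task wherever pooled resources exist; never name a server."""
            for node in self.nodes:
                if node.cpus >= cpus and node.mem_gb >= mem_gb:
                    node.cpus -= cpus
                    node.mem_gb -= mem_gb
                    return f"{task} running on {node.name}"
            return f"{task} pending: wheel more capacity into the pool"

    # Applications ask the pool, not a particular server
    pool = DataCenterPool([("rack1-n01", 32, 128), ("rack1-n02", 32, 128)])
    print(pool.launch("web-frontend", 8, 16))      # lands wherever resources exist
    print(pool.launch("analytics-job", 48, 64))    # pending until more nodes are added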

Is this magic just theory, or reality? Reality! In fact, you probably used a Data Center App before your second cup of coffee this morning. A growing list of applications are already using Apache Mesos, architected to run on a data center operating system at any scale! Most importantly, this concept becomes intuitively obvious once you see it. It just makes sense. Technology has finally caught up to make this possible, so NOW is the time to think about delivering IT services armed with solid unit-of-work-level pricing that is essentially linear, and to stop letting the peculiarities of complex IT structures and overhead get in the way.

To quote a famous lyric from 1986, “The future’s so bright, I gotta wear shades!” (Extra credit: can you name the band without looking it up?)
