Exposing IT Value to the Business

The IT industry is currently experiencing an amazing transformation. Whereas most long-term IT professionals have spent their careers creating and supporting increasingly complex IT structures, with availability and uptime as the primary metrics, the new CIO mantra has become service delivery at the right cost. In effect, CIOs are formalizing many previous efforts and creating service products that can be delivered upon request, with a keen understanding of the costs incurred to deliver those services. CIOs today think about these Service Portfolios as the means to set expectations on how technology will be adopted, how it will be supported, and at what cost.


Taking the 2015 IT Infrastructure Scenic View

The ways in which we deliver IT are changing dramatically in 2015 and beyond

The IT infrastructure challenge in 2015 is daunting, and only the strongest will survive: the strongest vendors and the strongest IT professionals. Nearly everything we knew about building and maintaining IT infrastructure is being superseded by a wave of new technologies, new processes, new people and new approaches. I specifically used the word "New" rather than "Alternative" since these fundamental changes really are radically different from what came before, and they *ARE* happening with or without you. For instance, we aren't just making disk drives faster or bigger; we are eliminating them from your data center floor or making them so smart they handle growth easily. The same goes for servers and networks. Not just more of the same old thing, but NEW WAYS of doing it. All of IT is going through radical change.

Think of the kind of change we saw in the data center after 1991 (the year Linux came on the scene), or the desktop after 1981 (when IBM started to ship the PC), or sharing and collaboration in 1995 (after Al Gore invented the internet). These foundational changes allowed/required everyone involved to (in Steve Jobs' words) "Think Different", and they also wiped out the Luddites who chose to ignore the shifts. Those of us who were paying close attention and embraced those changes were handsomely rewarded. Forward-looking companies were wildly successful and a lot of careers were made. Data centers could now be filled with systems based on new hardware and software, virtualized loads and unlimited-scale applications, all with a level of performance, interoperability, efficiency and cost structure that wasn't even in the same ballpark as previous approaches.

So that brings us to IT circa 2015. It’s all happening again! Everything we 40- and 50-somethings know about the IT business is up for grabs again and all of it is being retooled around us as we speak.

Here is my list of the 10 most impactful changes underway, each worth getting your hands around if you are an IT professional planning to stay in the segment:

1. IT Service Catalog. The IT structure and all of its processes have grown into a monster. The complexity and delicacy of that structure stifle creativity and choke new initiatives at a time when stakeholders are asking for more creativity and agility. The most successful IT professionals are now looking for the means to create service offerings as if by catalog. Each service "product" must have a known cost per user, a known delivery timeframe, specific capabilities, deliverables and expectations, and a whole slew of escalation definitions for when things don't go right. IT "products" (like email) are being defined, and the costs to deliver those "products" quantified. It is this Service Catalog mentality that makes the most admired CIOs in this new era smile.
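To make the catalog idea concrete, here is a minimal sketch of what a single catalog "product" might capture (the field names and figures are my own illustrative choices, not any vendor's schema); the point is that cost, delivery time and escalation are first-class attributes of the product rather than afterthoughts.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogService:
    """One 'product' in an IT service catalog (illustrative fields only)."""
    name: str                       # e.g. "Corporate Email"
    cost_per_user_month: float      # known, published cost
    delivery_time_days: int         # committed provisioning window
    deliverables: list = field(default_factory=list)     # what the requestor actually gets
    escalation_path: list = field(default_factory=list)  # who to call when things go wrong

# A requestor "shops" the catalog instead of opening a bespoke project:
email = CatalogService(
    name="Corporate Email",
    cost_per_user_month=6.50,
    delivery_time_days=1,
    deliverables=["50 GB mailbox", "mobile sync", "archiving"],
    escalation_path=["Service Desk", "Messaging Team", "IT Duty Manager"],
)
print(f"{email.name}: ${email.cost_per_user_month}/user/mo, ready in {email.delivery_time_days} day(s)")
```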

2. Automating process and control. Whereas we used to have a sysadmin for every 50 or 100 servers, keeping them patched and operating, we now find sysadmins handling thousands of servers through the use of automation tools. Automated patching, provisioning and migration; application installs and password resets. All of this is becoming automated through tools that capture the human intelligence once and then dispatch that same knowledge automatically each time the task needs to be performed. Think "copy and paste" at the macro level. And it doesn't stop there. Virtualized loads can automatically shift from physical server to physical server, and HVAC gear can sense conditions and self-adjust as needed. This is happening today.
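As a back-of-the-napkin illustration of "capture the knowledge once, replay it everywhere", here is a minimal Python sketch. The host names and the SSH/yum command are placeholders; real configuration-management tools add idempotence, scheduling and reporting on top of this same loop.

```python
import subprocess

INVENTORY = ["web01", "web02", "db01"]           # in practice, thousands of hosts from a CMDB
PATCH_CMD = "sudo yum -y update --security"      # the captured "human" step, written down once

def patch_host(host: str) -> bool:
    """Replay the same captured task on one host over SSH; return True on success."""
    result = subprocess.run(["ssh", host, PATCH_CMD], capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    failed = [h for h in INVENTORY if not patch_host(h)]
    print(f"Patched {len(INVENTORY) - len(failed)} hosts; escalate these by hand: {failed}")
```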

3. Software Defined Networks are a new concept, with less than 5% penetration in production, but they are on a hockey-stick-shaped adoption curve today. Adoption is beginning to take off and the various approaches are finding their sweet spots. What started out as an economic "OpenFlow" free-for-all has quickly become a business discussion about capabilities, flexibility and value. The protocol itself has taken a back seat to new capabilities. Why did this happen? Well, we all grew up on networks built in a north-south fashion. The vast majority of traffic started or ended at the edge devices, and server-to-server communication was limited. That's why all of the switch vendors have two lines of products today: core and edge. With the highly connected web full of various 'services', we now see server-to-server communication skyrocketing. Think of this as east-west traffic, and it is demanding a 'fabric' approach to networking. And the icing on the cake: once you have built an SDN interconnect fabric, you have the perfect place to host virtualized services that essentially reside everywhere the fabric is. Think of it as a 'thick fabric'. What's more, applications themselves can tune the fabric for their own needs! And a bonus for the pure technologists: full layer-1 monitoring can also be done within a robust SDN fabric, with no more expensive, duplicate networks just for performance analysis.
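The practical upshot of splitting the control plane from the forwarding plane is that the network becomes something you program. A minimal sketch, assuming a hypothetical controller with a simple northbound REST endpoint (the URL and JSON schema are invented for illustration; every real controller exposes its own API):

```python
import json
import urllib.request

# Hypothetical controller endpoint and payload -- invented for illustration.
CONTROLLER = "http://sdn-controller.example.com:8181/flows"

flow_rule = {
    "switch": "leaf-03",                                        # forwarding-plane device to program
    "priority": 500,
    "match":  {"eth_type": "ipv4", "ipv4_dst": "10.20.0.0/16"},  # steer this east-west traffic...
    "action": {"output_port": 7},                                # ...out a specific fabric port
}

request = urllib.request.Request(
    CONTROLLER,
    data=json.dumps(flow_rule).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would push the rule to a live controller; the point is
# that forwarding behavior is set by software, not by box-by-box CLI sessions.
print(request.full_url, request.data.decode())
```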

4. Self-Service mentality. I want IT now. Rakesh Kumar at Gartner presented a paper in December 2014 stating that 37% of the nearly $4 trillion of IT spending in 2014 occurred OUTSIDE of the IT organization. 37% of all IT projects didn't involve the IT organization at all! Shocking. This is because the end-user now has the option to look at any number of service catalogs and buy with a credit card. Want a new desktop? Consider a tablet. Need storage? Think Box. Need email in a hurry? Google to the rescue. No longer is the IT organization the end users' only source of services. The strongest players moving forward will be the IT professionals who embrace speed and agility in the delivery of their capabilities. Projects with multi-month delivery schedules are no longer realistic.

5. Who would have thought that a 10U chassis would house HUNDREDS of CPU cores and be able to move over 7Tb/s of data internally? Look at the HP c7000 for instance and you'll see one of the densest boxes you can buy (a cool $250K nicely loaded), and you can put 4 of these monsters in one rack. I dare say we'll see 40kW per rack more often than we ever imagined just a couple of years ago. This mindset enables a dramatically different approach to new data center build-outs and retrofits. It has been hypothesized that your existing data center could last for 30 years or more if you simply took advantage of all of the 'Moore's Law' advances taking place (assuming that your utility company can get you 'a few more megawatts' every few years).
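A quick back-of-the-envelope calculation (the per-chassis wattage is an assumed round number, not a vendor spec) shows how four fully loaded chassis get a rack to that 40kW figure:

```python
# Back-of-the-envelope rack density; per-chassis wattage is an assumption for illustration.
watts_per_chassis = 10_000      # ~10 kW for a fully loaded 10U blade chassis (assumed)
chassis_per_rack  = 4           # four 10U chassis fit a standard 42U rack with room to spare
rack_kw = watts_per_chassis * chassis_per_rack / 1000
print(f"Rack draw: {rack_kw:.0f} kW")   # -> 40 kW, far beyond the 3-5 kW racks of a decade ago
```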

6. I like to think about the simpler days when we all built raised floors from 24-inch square tiles and ran our cooling and cabling under them. We talked about loading capacities and laughed at the racks getting heavier, but until recently it was just a curious discussion. No longer! Those same monster-dense devices also weigh a ton (literally), and planning around a raised floor is no longer a best practice. Building your data center directly on concrete slab is all the rage. Data and power cabling run overhead, and cooling strategies are beginning to take advantage of the fact that COLD AIR likes to FALL. It's hard to understand why we decided 25 years ago to create cold air at the perimeter and then PUSH it upward into the racks, fighting physics. Cold air sinks, and the new generation of data center designers get that.

7. Unbounded Infrastructure, as a means to blur the lines between the mechanics and the functions delivered on the data center floor. As it turns out, if we stop thinking about individual boxes as individual management islands, each doing a bit of work with the results somehow aggregated externally, we can take a whole new approach to IT. Hardware and software mechanisms that ignore physical device boundaries and aggregate capacity are now commonplace. Want to know how Twitter or Airbnb handle all of their transactions in real time? There is a project called Mesos that creates services out of boxes. Need more I/O? Simply add more I/O services, which instantly become part of the 'system'. How do you write applications for a world like this? You write applications for the platform, and then let the platform take care of scaling.
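This is not the Mesos API, just a toy sketch of the underlying idea: boxes register whatever capacity they bring, the capacity is pooled, and the application submits work to the pool rather than to any individual machine.

```python
import random

class ResourcePool:
    """Toy aggregation layer: capacity is pooled, not tied to any one box."""
    def __init__(self):
        self.nodes = {}

    def register(self, name: str, cores: int):
        # A new box (or I/O service) joins and instantly becomes part of the 'system'.
        self.nodes[name] = cores

    def run(self, task: str) -> str:
        # The application never picks a box; the platform does.
        node = random.choice(list(self.nodes))
        return f"{task} scheduled on {node}"

pool = ResourcePool()
pool.register("node-a", cores=32)
pool.register("node-b", cores=32)
pool.register("node-c", cores=64)      # adding capacity requires no application change
print(pool.run("resize-images"))
```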

8. We used to build data centers based upon some perceived watermark of capacity needed. In a typical scenario, data centers were built as big as anyone could imagine, and then organizations and their projects moved in over time. The downside to this old-school approach is the cost: what it cost to occupy the first square foot of space on day one was enormous compared with the cost of the last square foot three years down the road. Justifying the big build up front was largely an academic exercise to keep the CFO happy; in reality, it never made sense to over-build and hope that the space would be needed later. Today you can find modular designs to replace the old way of building a data center, both manufactured (like Baselayer) and brick-and-mortar (like Compass), that can get you new space in 20-200 rack increments or less. (Companies like Elliptical even do a micro-modular design down to ONE rack at a time.) I guess we'll need a new term to replace 'breaking ground' when going modular.

9. We all laughed when data centers were so cold that you needed a jacket to walk through them. Over the last 10 years it has become quite another matter, and everyone is talking about it. What started out as an energy-efficiency effort, with The Green Grid publishing its "PUE" metric, has become a battle cry for manufacturers and end-users alike: pack more power into a smaller amount of space, and get the power bill down per unit of work. With the cost of power now a top-3 concern for every Enterprise, whole new approaches are being used to make data centers more efficient. It starts with the location of cheaper power, which is driving where data centers are built, and continues with the ability to use free-air cooling in that location. Add to that advances in CPU chip and power-supply design, and we have everyone working on energy costs.
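The PUE arithmetic itself is simple, which is part of why it caught on: total facility power divided by the power that actually reaches the IT gear. The example numbers below are illustrative.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is perfect; legacy sites often run 2.0 or worse."""
    return total_facility_kw / it_equipment_kw

# Example: 1,500 kW at the utility meter, 1,000 kW delivered to servers/storage/network
print(round(pue(1500, 1000), 2))   # -> 1.5 (the other 500 kW goes to cooling, UPS losses, lights)
```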

10. I purposely placed The Cloud here at the end since it is one of the most dramatic changes some of us may see in our entire careers. Gartner has predicted that by next year, 20% of all Enterprises will have NO back-end IT functions in-house. Even the old-line corporations are already diving into the Cloud for various applications. But don't panic: this is a drawn-out process that will be going on for a dozen years. Certain applications are perfect for the Cloud today, while others are too time-sensitive or confidential to lend themselves to today's Cloud offerings. Remember, the Cloud has already been around for 10 years. Salesforce.com was one of the earliest examples of an application/software-level cloud. The Cloud probably raised your eyebrows, though, when companies like Amazon and Rackspace began offering platform-level clouds. Most exciting, the tools now exist to allow in-house workloads to be shifted to the Cloud and vice versa as demand changes. (Companies like VMware do a good job of this.)

So what does all of this mean? Think forward. Articulate the business problems first, and then solicit every smart person you know to figure out how to solve them. Building more of the same old structure is likely the wrong answer; simply adding more disk spindles or another edge switch is likely throwing good money away. Embrace it all and be ready to defend your choices. "It's always worked like this" is no longer an answer with much street credibility. And "if it's not broken, don't fix it" still applies, but I would argue that nearly all of the IT stuff we've collectively inherited in our data centers could be considered "broken" by today's standards.

Let’s commit to fix it…


SDN: Are We There Yet?

Remember in 2001 when you first heard about VMware GSX? It sounded like pure magic and seemed to do the impossible: it allowed you to run multiple instances of a real server operating system on a single hardware server. Each operating system thought it was running on a hardware box, and yet it was just a slice of that box. Over the next few years the buzz turned to a roar and quickly encouraged most commercial organizations to try a 'pilot'. During those pilots they realized that certain applications (like web servers) were a great fit for virtualized servers, and these organizations set their buying sights on the new generation of big, beefy servers needed to take full advantage of virtualization. Virtualization delivered just what it promised, and then some!

It turns out there are several technical ways to virtualize, so VMware soon found competitors; Citrix and Microsoft jumped in, and over the next few years the single-host/multiple-guest computing model came of age. Intel and AMD even changed their CPU architectures to support this type of virtualization directly at the core. Software providers changed their licensing models to account for virtualized servers, too. From a timeline standpoint, virtualization was imagined, tested, tweaked and then adopted en masse over a period of a dozen years. According to Gartner, server virtualization accounted for 16% of all server workloads in 2009, accounts for more than 50% today, and will rise above 80% within the next 3 years. This is the adoption curve we expect with game-changing technology.

The SDN market is coming together, but is highly competitive, with different technical underpinnings.

That brings me to the area of Software Defined Networking. Although programmable networking can be traced all the way back to 1995 with efforts at AT&T and Sun, the modern-day use of the term SDN is tied to the period around 2011, when the Open Networking Foundation (ONF) was formed to further the creation and use of the OpenFlow networking standard. The idea of today's SDN is simple: rather than each vendor continuing to ship proprietary networking gear with every device carrying all of its transport intelligence on board, why not separate the control plane from the forwarding plane? Decompose the problem into two distinct areas that can each be individually focused upon. By making this separation distinct, each device can be aggressively optimized, and if that separation is done using industry-standard protocols, then in theory gear from multiple vendors can be combined into a single network based upon value.

Today we are just a handful of years into this SDN journey, roughly where virtualization was circa 2006. SDN is all the buzz today, but is still just scratching the surface in adoption. Many corporations are running SDN pilots, and investments are being made by VCs, vendors and end-users alike. A few huge deployments have happened (like Google's), which go a long way toward demonstrating the scale and commercial value of SDN. Startups have formed around nearly every aspect of SDN. Some create high-density hardware (the "forwarding plane", or switches), some create high-intelligence controllers (the "control plane", or operating system), and some create value-added applications (like traffic management and analytics). The biggest old-line networking vendors have released overlays to their existing products to allow "participation" in SDN networks. (This participation is at best a defensive, transitional approach, since the old devices still carry all their heavy baggage, but it may allow some level of migration for large installed bases until they get to REAL SDN.) Given the huge potential and the original premise of SDN, that transitional approach will be short-lived, and I expect to see a significant number of new-generation hardware and software suppliers built from the ground up to be SDN components, adopting SDN protocols as their main communication scheme.

We are also seeing the SDN revolution underscore the need to think about application-level business value and set expectations accordingly. The staff required to manage SDN networks is vastly different from that of the older CLI-based, "box by box" approaches network administrators have practiced for years. With SDN, if you can "think" it, the network can be programmed to support it. Most importantly, in an SDN world you "think" networking at the level of the business and the delivered service, not at the box or protocol level. In the same vein, the performance of applications can be measured against those business needs, and applications could (in theory) adjust the network to meet their precise contracted needs. While SDN protocols have certain performance values built in and collected all the time, this next generation of tuning capability will come from software developers who orchestrate the performance data collected at the application level and communicate the needed changes directly to the control plane itself.
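To picture that feedback loop, here is a minimal sketch; the thresholds and the 25% step are arbitrary assumptions, and the resulting figure would still have to be sent to a controller's northbound API, as in the earlier flow-rule sketch.

```python
def requested_bandwidth_mbps(measured_latency_ms: float,
                             contracted_latency_ms: float,
                             current_mbps: float) -> float:
    """Toy application-side tuning rule: if the app misses its contracted latency,
    ask the SDN controller for more capacity; otherwise leave the reservation alone."""
    if measured_latency_ms > contracted_latency_ms:
        return current_mbps * 1.25     # arbitrary 25% bump; a real policy would be contract-driven
    return current_mbps

# Example: a replication job contracted for 20 ms is currently seeing 35 ms
print(requested_bandwidth_mbps(35.0, 20.0, 2000))   # -> 2500.0 Mb/s requested from the fabric
```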

Time will tell where and when adoption will occur. OpenFlow has been an early leader among the technical approaches used by many vendors in the SDN community, and yet there are a handful of other competing approaches to SDN on the table today. The stakes are high, so startups that focus on delivering the highest value will likely be poised to enjoy an unfair share of the burgeoning SDN market. And just like virtualization, we will likely find ourselves with several competing approaches to SDN. That said, by 2020 we'll all be connected by SDN…


The Good Ol’ Days are gone – Security is the Basis of Your Future

Having just come back from Gartner's Data Center Conference, held annually in Las Vegas, I had the opportunity to reflect on what I heard at a macro level over the past few days. For those that didn't attend, Gartner brings together 2,500 or so of the industry's leading IT professionals from the vast majority of the Fortune 500. Their titles range from IT and Data Center Manager to CIO and VP of IT or Infrastructure. Over the course of four days or so, this mass of IT folks gets together to mingle, discuss business strategies and try to get a sense of where the various technologies and the industry itself are going. Obviously areas like Internet of Things, Cloud, BYOD, APM and co-lo were very hot topics, as were the latest generations of server, storage and network offerings. But there was one topic that seemed to be an integral part of all other discussions: security.

Data Center Transformation is a function of Security

As I sat back and listened, I realized that many of the folks in attendance had a solid reference point of "their IT" as it has existed for the past 20 years. Many of the vendor presentations and hallway discussions had a tone that longed for the simpler "Good Ol' Days", when the biggest concerns were capacity, availability and interoperability. Sure, we had some good times dealing with those, and many of us have built entire careers chasing those rabbits.

So here we are in 2014, and it is OURS to define the going-forward plans. Those plans will be dramatically different from the ones that got us here. As it turns out, the massive connected mesh we are all striving for brings with it the responsibility to deliver it all in a highly secured fashion, with all the tentacles (endpoints) secured as well. Security is now clearly in sharp focus and runs through all of the other discussions about building IT structures today and tomorrow.

Now follow me on this next journey. According to Rakesh Kumar at Gartner, 1) over $3.7 TRILLION was spent on IT and IT services in 2014, and 2) over 37% of those dollars were spent on IT solutions OUTSIDE of the IT organization. Why are these numbers important? They mean that the safe, confined, comfortable corporate world of IT that we all grew up in and protected is now littered with connections and services that exist OUTSIDE of your control! And it's not just remote users, but remote applications as well. Connections to SaaS and other clouds, remote access and BYOD users form a critical component of our going-forward plans, and yet many of us are still throwing 2005-vintage protection schemes at our corporate borders.

In the Good Ol' Days, massive commercial security breaches were something we rarely heard of, because companies simply didn't allow much external access, and when they did, they used VPN-like technologies and felt safe, believing the endpoints were as good as 'inside' the corporate structure. Today we hear nearly every day about another major retailer, bank, government agency or telecom disclosing 'issues' with unauthorized data access. What changed? Are these companies simply not spending enough on firewalls or IDS? If only it were that simple. In fact, IDC says they spent nearly $10 BILLION on protection systems this year. Money is not the problem. They want to protect, but modern threats have matured so much that old-school, signature-based technologies (the ones deployed by most companies today) are dramatically ineffective. Today it doesn't really matter how many 'signatures' those old-school devices have built in. We need to think different.

It's about behavior. With $3.7 trillion of new spend on the line, the forward thinkers are realizing that detection signatures describe the past, whereas behavior defines the future. What do I mean by behavior? Well, the autopsy of a typical breach goes like this: 1) a simple system like a desktop, laptop or web server is hacked and some form of malware control app is placed upon it; 2) the malware becomes the 'agent' on that box and can be instructed to do anything the outside hacker wishes; 3) the hacker is typically looking for something specific, usually sensitive data, so this control agent is told to seek that data out and, once it is found, package it up and use a familiar, friendly protocol (like HTTP) to send it back to hacker central; and lastly, 4) these agents are usually instructed to look for similarly breach-able peer systems so that the process can be repeated.

With this behavior in mind, it's just a matter of designing new protection systems from the ground up that try to identify this flow. They are designed to focus on zero-day threats (those never seen before) as well as all kinds of Advanced Persistent Threats (APTs). These newer protection systems understand the zillion variants of behavior, and the best of them actually get smarter over time, testing their initial analysis and even running suspect payloads seen traversing the wire to confirm their intent.
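As a minimal sketch of the behavioral idea (the flow records, baseline and threshold below are invented for illustration), the rule is not "match a known signature" but "flag a host whose egress to an unfamiliar destination departs sharply from its own normal":

```python
from collections import defaultdict

# Flow records: (internal_host, external_destination, bytes_out) -- e.g. summarized from NetFlow.
flows = [
    ("desktop-114", "cdn.example.net", 2_000_000),
    ("desktop-114", "203.0.113.9",   750_000_000),   # large upload to a never-seen host
    ("web-02",      "api.example.com", 5_000_000),
]

baseline_bytes = defaultdict(lambda: 50_000_000)     # per-host "normal" egress, learned over time
known_destinations = {"cdn.example.net", "api.example.com"}

def suspicious(host: str, dest: str, sent: int) -> bool:
    """Behavioral rule, not a signature: new destination AND egress far above this host's norm."""
    return dest not in known_destinations and sent > 5 * baseline_bytes[host]

for host, dest, sent in flows:
    if suspicious(host, dest, sent):
        print(f"ALERT: {host} sent {sent / 1e6:.0f} MB to unfamiliar host {dest}")
```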

The point of this whole story? The Good Ol' Days are fun to talk about and even share a few stories over a cup of Starbucks or a cocktail, but OUR real business needs going forward all hinge on solving the security challenge for a highly connected world. Frank Gens at IDC estimates that by 2018, HALF of ALL IT spending on computing and storage will be Public Cloud-based. That means all of that computing will sit at the end of a wire over which you have little or no control, outside your comfortable brick-and-mortar walls, with its own set of security mechanisms.

We will find our footing, and we will collectively reach a consistent definition of what is expected from the new, secured world of IT. This is Data Center Transformation in the making. But as we saw in Las Vegas, it will start with a willingness to think about the new task at hand, and to solve for that, rather than building on the structure we have inherited. Time to put on our big-boy pants…


Dynamic Workload Management, Physical to Virtual to Cloud – and back!

Enabling Workload Migration between Physical, Virtual and Cloud providers

Now that we are all becoming 'experts' on the transformation under way in the world of computing, we should be realizing that there is an exciting opportunity in front of us to leverage the foundational platform innovations that have occurred. "Hybrid computing" is the phrase many of you use to describe the various combinations of traditional servers, virtualized servers and the various flavors of cloud used to conduct work. To add a second dimension to the phrase, each of these computing platforms has a number of major vendor choices, each with a particular set of capacities and specific dependencies. For the server, major choices include HP, Dell, Lenovo and a slew of Tier-2 vendors like Intel, Supermicro and Quanta. For virtualization we see VMware, Microsoft, Xen and KVM. And the Cloud is dominated by players like Amazon, SoftLayer, NTT, Rackspace, VMware, Sungard and CenturyLink.

Now, wearing your newly acquired IT business hat (XL, courtesy of your CFO), keep in mind that your goal is to find the most suitable platform for each of your applications, and that the definition of "most suitable" changes from application to application and as a function of time. The actual work to be performed doesn't change in this process; it is just the foundational means of delivering that computing that could change. The catalyst for moving your workloads to another platform could be performance or capacity, could be economics, or could even be disaster-recovery or compliance needs.

While many of us have a great understanding of each of the discrete platforms, the ability to move workloads between traditional servers, virtualized servers and the cloud (public and private) is less understood, and yet it has already become a critical success factor in managing the cost of doing work and the reliability of IT. As it turns out, moving workloads is conceptually very easy. Simply package up each of your applications as a 'workload' (carefully identifying the dependencies that each unit of work has), and then characterize the various platforms on their ability to deliver those dependencies. To put a workload on a particular platform, you grab the unit of workload, add the specific target-platform wrapper, load it on the target platform and press "GO". In the reverse direction it works in exactly the same fashion. Want to move work between Cloud-A and Cloud-B? No problem. Grab the workload, strip off the Cloud-A wrapper, add the Cloud-B wrapper, and you are all set.
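Here is a toy sketch of that wrap/strip/re-wrap idea (the wrapper contents and workload fields are invented for illustration; real migration tools also handle drivers, networking and data synchronization):

```python
workload = {
    "name": "order-entry",
    "app_payload": "order_entry_v3.2.tar.gz",
    "dependencies": ["postgres-9.4", "redis", "4 vCPU", "16 GB RAM"],
}

# Platform "wrappers": everything the target needs that is not part of the work itself.
WRAPPERS = {
    "physical": {"boot": "pxe",             "storage": "local-raid"},
    "vmware":   {"boot": "vmdk",            "storage": "datastore"},
    "cloud-a":  {"boot": "machine-image-a", "storage": "block-volume"},
    "cloud-b":  {"boot": "machine-image-b", "storage": "blob-store"},
}

def rewrap(workload: dict, target: str) -> dict:
    """Strip whatever platform wrapper the workload carries, apply the target's wrapper,
    and leave the unit of work itself untouched."""
    stripped = {k: v for k, v in workload.items() if k != "platform"}
    return {**stripped, "platform": {"name": target, **WRAPPERS[target]}}

# Physical -> Cloud-A today, Cloud-A -> Cloud-B next quarter; same application payload throughout.
on_cloud_a = rewrap(workload, "cloud-a")
on_cloud_b = rewrap(on_cloud_a, "cloud-b")
print(on_cloud_b["platform"])
```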

Conceptually it's about separating the application doing the work from the platform intended to run that work. There are a handful of startups, like Silicon Valley-based RackWare and RightScale from Santa Barbara, that have set their sights on doing this, and a few of them have become experts at it. They make short work of migrating workloads between public clouds, private clouds, physical servers and virtual instances; basically any source can be migrated to any destination.

These are really exciting times, and those IT pros who fully embrace hybrid computing and workload migration as a function of value and performance will be rewarded over and over again. So while you have spent the last 25 years talking about doing work in 'the data center', going forward we should probably start using a term like 'the data function', or some other less restrictive phrase that encompasses computing that may also occur outside our original four walls.


Being Too Close to the Trees can Cloud Your View

Close-Up to Trees

Make no mistake, the transformation of computing is happening. While 10 years ago IT technologists would have been talking about CPU speeds and core density, gigabit pipes and gigabyte disks, today they are talking about putting all of these advances into a logical, elastic computing platform that can be scaled to any size and accessed on demand at just the level needed, with a cost structure to match the desired level of demand. We all know it today as "Cloud Computing", and it allows the consumer of computing to tailor their purchased supply to their actual demand.

The Big Picture

While it's still very exciting to talk about the technology of "The Cloud" itself, there is a groundswell of companies looking to leverage that technology by delivering business value on top of the Cloud in very specific vertical worlds. It's the difference between talking about owning a forest of oak trees and talking about fine hand-crafted wooden kitchen tables and chairs. See the relation? The buyer of a chair won't have nearly the interest in the details of the forest or the trees, but would instead like to focus on the color, finish or styling of a chair made from those trees and built for their specific kitchen. They understand the impracticality of transforming a raw oak log into a kitchen chair themselves, and can appreciate the number of craftsmen they would have to hire to convert that log into a chair. They know that there is real value in having someone do that work for them.

Like the tree log, the Cloud is the raw material for delivering end-user-focused applications and value. If asked, end-users would always prefer to find applications that are purpose-built for their specific need (vertical), rather than buying one-size-fits-all and trying to make it into something it was never intended to be. Think of the diversity of needs between hospitality and retail, or between life sciences and legal. Each of these verticals presents a very different set of needs, so when companies with these varying business plans go looking for cloud-based solutions, they would prefer to find something pre-made for their world, not a general-purpose solution that may fit some of their needs but misses the mark on others.

Much of the 'buzz' in the tech industry is about the Cloud itself, but a growing swell is, luckily, focusing on the innovation and agility the Cloud makes possible. Entire companies exist to build applications for the Cloud that address their vertical's values, treating the Cloud simply as the requisite raw material for access and delivery. There are a ton of capabilities the Cloud brings to the table that are inherited by the application providers that use it, but at the end of the day it is not the Cloud that matters to these users as much as the vertically focused value.

The key for all of us is to broaden our minds and re-consider the Cloud platform as an amazing new opportunity to re-think the myriad of general-purpose solutions already in place. Take EMAIL for example. We all grew up on it, and take it for granted. Today the Cloud allows new users to be provisioned with the click of a mouse. No hardware or software needed. Pretty amazing. But email is just email, right? Wrong. When was the last time you were asked to share your credit card via email and felt safe doing so? Never. How about receiving your medical test results via email? Nope, not going to happen with today’s email systems. Email wasn’t built for these vertical needs, and consequently can’t be used for those common purposes.

Now think of building applications with the specific use case in mind. If we wanted to have an email system designed for sharing financial records we could. But it must be designed to do so, from the start. Sure it would use the Cloud as the delivery mechanism, but the innovation happens above the Cloud.

I am always excited when I see companies that get past the sound bites and instead focus on the heavy lifting. Cloud is a popular sound bite and generates more column inches in the press than anything else seen in years, but that is just the raw tree logs. You'll see a growing number of people talking about their finely crafted "chairs", I mean "vertical applications", going forward…


“The New Phonebook’s Here!” (a.k.a. Gartner releases the first ever DCIM Magic Quadrant!)

Validation is Important to Everyone. DCIM is being Validated again.

You probably remember this classic sound bite from the 1979 movie "The Jerk", in which Steve Martin played a naïve guy from Mississippi who is transplanted to St. Louis and tries to make his way in his simple life. In one of the early scenes, Navin (his character) grabs a copy of a brand-new phonebook (the white pages, remember them?) and immediately looks for his name in the book as a way to validate that his existence in St. Louis has not gone unnoticed.

Sometimes it just takes a little written formalization of reality to make a big difference. This is the case with Gartner's newest Magic Quadrant, the first ever for DCIM, released September 22nd, 2014 (Gartner subscribers, refer to document #G00259286). While over a hundred vendors have been slugging it out for years, trying to demonstrate their added value in managing the physical layer of the data center, the need was always instinctual to some innovators but did not rise to the level of urgency in most people's game plans. In fact, fewer than 5% of the likely adopters of DCIM have done so already, with the remainder looking for some form of major validation from the industry. While a number of analysts have done loose DCIM roundups in the past, Gartner's new Magic Quadrant brings together 20 or so vendors, each visible within the DCIM industry, each with some number of customers that have adopted their respective wares, and each having solved some portion of the needs defined in this category. Gartner spent nearly two years researching the market and drilling into business value.

It's worth reminding everyone that the DCIM category is still broadly defined, so the vendors listed in the DCIM Magic Quadrant are not necessarily direct competitors; in some cases they are complementary. The key to Gartner's Magic Quadrant process is ranking these vendors based upon their completeness of vision and their ability to execute that vision. As you read the Gartner DCIM Magic Quadrant, consider it a short-listing tool for your own DCIM efforts. Every Fortune 2500 company will be adopting DCIM; it's just a matter of time. As you'll see in this and other reports, the value realized through the adoption of DCIM is simply too great to ignore any longer. Yes, DCIM is a new way of doing things in IT management, but take a deep breath and look around and you'll see the WHOLE concept of IT infrastructure is changing overnight. It only makes sense to add DCIM to your strategic plan to help raise efficiency and reduce costs in the data center, in your co-los, and across your sites.

As always, I applaud anything that formalizes the reality of DCIM as a strategic investment with tangible cost savings. And when a major analyst group like Gartner spends this amount of time and energy comparing and contrasting vendors, as they did here in the DCIM Magic Quadrant, the take-away for all Fortune 2500 IT executives should be an underscored CONFIRMATION that DCIM is real and can be deployed today, that DCIM has huge business merit and will produce measurable cost savings today, and that DCIM should be part of everyone's dance card… TODAY!
