A Funny Thing Happened on the Way to DCIM

DCIM and Automation

The IT world around us changed! The very way we approach delivering IT services was re-imagined and re-invented. While the underlying technologies themselves got faster, smaller, and lower in cost, each of those pieces became more commoditized and virtualized, adding a layer of abstraction that made the physical componentry found in the data center even less important. At the same time, the Public Cloud went from curiosity to contender, and enterprise use of co-location space became the norm. When was the last time you heard about an enterprise building a new brick-and-mortar data center?

10 Years and Counting!

When DCIM got its start ten years ago, times were simpler. Much simpler. We lived in a world where IT organizations delivered grand projects in bespoke data centers. New business initiatives resulted in long planning cycles and purpose-built projects which spanned months or years to deploy. Each project resulted in custom topologies that had to fit delicately into the structure that was already in place. The world looked to DCIM as the management solution for change, and two camps of first-generation DCIM players formed: those that addressed all of the constant change seen with IT assets, and those that addressed the optimization of the building itself and its energy usage. A few companies tried to do both, but failed to deliver on the utopia they promised.

So in the days when a stream of custom IT solutions was being created, first-generation DCIM as an asset manager made a lot of sense. “Where is the best place to put my 6 new servers to run a new ERP?” was a great question in 2006, and DCIM’s asset management capabilities could answer it handily. Every data center was different, and every project needed a unique combination of hardware devices to implement the required functions at the required scale. Deciding where to put servers, and how to connect everything to available resources, was a project-centric process, and pioneers in the DCIM adoption camp realized just how powerful DCIM could be at shortening the time it took to react to changing business needs. First-generation DCIM shone at allowing individual devices to be located and serviced.

Capacity is now a Business Function

In 2016, all that has changed. IT has to run much faster and provide instant gratification. To do this, IT has become a planning function which assures that the data center always has enough resources for the next 6 months of growth. Those resources must be sitting in a capacity pool in ANTICIPATION of the business, ready to be carved out with the touch of a button. In 2016, applications are virtualized and run on resources which are also virtualized, so these pooled resources can be used for any application, and the specific device where an application, or a part of an application, runs doesn’t really matter. All that matters is that suitable levels of resources are always available, which is a business planning function, not a technology exercise. This is similar to the Public Cloud story, and in fact those companies that are embracing the Public Cloud in a big way cite the elimination of the need to care about physical infrastructure elasticity as a main driver for their choice. That said, very few companies today are wholly Public Cloud based, so in-house data centers and co-location facilities remain the mainstay of all of our livelihoods.

New Approaches and New Challenges

So what does this mean to you as an IT professional? You may be considering a way to leverage the Public Cloud, and you are likely trying to balance Public Cloud and in-house resources to maximize value. For your in-house structure, if you are deploying new gear in response to specific business applications that have already become a requirement, then you are ‘kicking it old school’. You are delivering IT services in a manner that was state of the art 10 years ago. Over the subsequent years, the Public Cloud providers proved that IT could be built as a pool of resources and THEN utilized for ANY application to realize instant gratification. Pushbutton IT was delivered by Public Cloud providers and proved that IT agility was possible, highly valued and very cost-effective. According to Gartner, more than 37% of all IT services are delivered without the involvement of the IT organization due to this desire for instant gratification. (They call it “Shadow IT”.) Today, in-house resources simply need to be transformed into Private Clouds.

The Business of Growth

The biggest role of IT has become managing these pools of resources to make sure there is just enough at every point in time. Putting the Public Cloud portions aside for a minute, the detail of which server or which switch is far less important because everything looking forward is cookie-cutter, virtualized and modular. As resources are consumed, the business planning function results in more capacity being brought online in manageable increments. An analogy would be a typical municipal water district, which is chartered to deliver water to residents and which must negotiate long-term wholesale contracts to do so. When it contracts for water, it looks at the statistical growth in its service areas and plans accordingly, not at the specific houses that are under construction at any point in time. These contracts span 25 years or more, similar to the lifespan of a data center itself.

Door #1 or Door #2?

So what does this mean for DCIM? Remember I said there were two camps? The first camp deals with managing in-house asset life cycles. The need for individual asset management in this new macro and virtualized world is at best tactical, and at worst irrelevant. We no longer focus heavily on the life cycle of any single device. We don’t really care about one server or one port or one Rack-Unit of space. We do need an absolute ‘as built’ model of what is in the data center, and first-generation DCIM is very good at this, but the change management granularity is now at a much larger scale (i.e. entire rooms are changed for tech refresh purposes, not individual servers). We all need to think bigger.

Gratuitous Mention of IoT?

No, not really. Data center automation is turning out to be the secret sauce. Delivering IT services cost-effectively in a virtualized (or software-defined) data center requires comprehensive instrumentation and action-oriented automation. The science behind capturing instrumentation and then massaging it into policies that can be automatically executed comes from the new generation of data science practitioners. A whole new crop of people are learning how to deal with millions of data points that arrive in real time, and turn them into rules and policies that can be fed into an orchestration engine. For example, to accurately change the set point of a CRAC which services a specific pod, there may be more than 1,000 data points that need to be understood. The new data scientists who practice IoT today can apply this same science to automating a data center.
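To make that idea concrete, here is a minimal sketch of the kind of rule a data science team might distill from those sensor streams: aggregate a pod’s inlet temperatures and nudge the CRAC set point accordingly. The function name, thresholds, and readings are all hypothetical illustrations, not a description of any particular DCIM product.

```python
# Hypothetical sketch: derive a CRAC set-point recommendation from pooled pod telemetry.
# All names, thresholds, and readings are illustrative assumptions only.
from statistics import mean

def recommend_crac_setpoint(inlet_temps_c, current_setpoint_c,
                            target_inlet_c=24.0, deadband_c=0.5,
                            step_c=0.5, min_setpoint_c=18.0, max_setpoint_c=27.0):
    """Aggregate pod inlet temperatures into a single set-point recommendation."""
    hottest = max(inlet_temps_c)       # protect the worst-case rack
    average = mean(inlet_temps_c)      # but follow the overall trend too
    error = hottest - target_inlet_c

    if error > deadband_c:             # pod running warm: cool harder
        new_setpoint = current_setpoint_c - step_c
    elif error < -deadband_c and average < target_inlet_c:
        new_setpoint = current_setpoint_c + step_c   # pod overcooled: save energy
    else:
        new_setpoint = current_setpoint_c            # within the deadband: hold

    # Keep the recommendation inside the equipment's safe operating range.
    return max(min_setpoint_c, min(max_setpoint_c, new_setpoint))

# In practice, 1,000+ live readings would feed this; three are shown for brevity.
print(recommend_crac_setpoint([23.1, 24.9, 25.4], current_setpoint_c=21.0))
```

A production policy would of course weigh humidity, fan speeds, power draw, and workload forecasts as well, which is exactly where that IoT-style data science earns its keep.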

Today, Gartner estimates that more than 86% of all servers are virtualized, and moving workloads around the data center is fully automated by the virtualization vendors. And where all of that application movement goes, so goes their storage and networking. Essentially the IT function is already automated, and the work being processed above the “raised floor” is already more dynamic than you can imagine. So what about the data center facilities structure itself?

“DCIM-nextgen” is Automation

Automation is the key to everything in a modern data center, modular or otherwise. Custom automation approaches have already been used for years by the co-location providers. And now that embedded instrumentation is widely available on most IT devices, and the science to interpret and act on it exists, the most forward-looking DCIM players will adapt their offerings to not only SHOW how the data center is built, but to make precise decisions about what it takes to operate that data center and automatically do it! As IT workloads increase and decrease, or as virtualization technologies move those workloads from one aisle to another, an automated data center will itself optimize power and cooling most effectively. Over time, the “IoT”-style data science will make these feedback loops tighter and increasingly more efficient.
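As a rough illustration of that feedback loop, the sketch below re-targets cooling per pod as the measured IT load shifts. The telemetry source and the actuator call are stand-ins for real DCIM and BMS integrations (simulated here), and the tuning constants are assumptions for illustration only, not any vendor’s implementation.

```python
# Hypothetical feedback loop: as virtualized workloads shift between pods,
# cooling is re-targeted in proportion to each pod's measured IT load.
import random
import time

def read_pod_it_load_kw(pod_id):
    # Stand-in for a live DCIM/PDU query; returns a simulated load figure.
    return random.uniform(40.0, 80.0)

def apply_crac_setpoint(pod_id, setpoint_c):
    # Stand-in for a BMS or orchestration-engine command.
    print(f"{pod_id}: set CRAC to {setpoint_c:.1f} C")

def run_cooling_cycle(pods, baseline_kw=60.0, base_setpoint_c=22.0,
                      degrees_per_kw=0.05):
    """One polling pass: heavier-than-baseline load lowers the set point, lighter raises it."""
    for pod in pods:
        load = read_pod_it_load_kw(pod)
        adjustment = (load - baseline_kw) * degrees_per_kw
        apply_crac_setpoint(pod, base_setpoint_c - adjustment)

# Three polling cycles shown instead of an endless control loop.
for _ in range(3):
    run_cooling_cycle(["pod-A", "pod-B"])
    time.sleep(1)
```

The point of the sketch is the shape of the loop, not the constants: measure, decide, actuate, repeat, with the data science continually refining the decide step.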

Where DCIM is Going…

That is where DCIM is going. DCIM will become the plan of record for what is on the data center floor, in-house or co-lo. And DCIM-nextgen will take a much bigger role in the coordination and application of facility resources, aligning them with the automated movement of IT workloads in those centers. The days when actual humans will be involved in the ‘management’ of a data center are coming to a close. I look forward to the day when you can almost hear the buzzing of business when walking through a data center, and feel the varying breeze as you move from pod to pod, knowing full well that the business is optimized and running cost-effectively.
