Since the very beginnings of the DCIM industry, Real-Time monitoring has been central to the segment. Regardless of whose definition is used, DCIM is built upon the need to gather extensive amounts of operational and asset data, store that data in some form of easy-to-access warehouse, and then create meaningful and increasingly valuable ways of presenting and leveraging that data. These three functional blocks form DCIM.
What has gotten a bit confusing is the varying interpretation of each part of this equation: where each piece fits in the big picture, and the order in which each piece should be deployed. Customers and vendors alike have trouble articulating where to start and what the actual ‘end-game’ of DCIM really is. Without that clarity, many customers realize just a fraction of the value DCIM can deliver.
Real-Time monitoring is ONE PIECE of the equation. It is an essential part of deploying a highly strategic DCIM solution, and becomes the eyes and ears of any otherwise static or disconnected system. Real-Time connectivity is the (not so) secret sauce that makes DCIM feel alive!
Two points to keep in mind for the sake of DCIM discussions:
- ‘Real-Time’ is a concept that in most cases can be interpreted as polling intervals of minutes, and perhaps seconds in a few others. That said, Real-Time is never expected to be SUB-SECOND in the context of DCIM. Many default polling intervals run 10 minutes or more for physical asset monitoring (like power consumption in a rack), while some events genuinely do need asynchronous, event-triggered near-real-time monitoring (like a door opening).
- There is no single standard for data center monitoring. There is no single protocol, and even where the same protocol is used, there is rarely an accepted standard for value mapping. SNMP, for example, can be used as a conduit, but the actual placement of metrics (the OID layout defined in each vendor’s MIB) is left to each manufacturer, and in many cases changes from device to device even within a manufacturer’s own product lines!
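To make the value-mapping problem concrete, here is a minimal sketch of what every monitoring layer ends up building: a per-device translation table from vendor-specific SNMP OIDs (and units) to one canonical metric. The OIDs, device names, and scale factors below are entirely hypothetical, invented for illustration.

```python
# Sketch: normalizing vendor-specific SNMP OIDs into one logical metric.
# All OIDs, model names, and scale factors here are hypothetical.

# Two rack PDUs from different vendors expose "power draw" at completely
# different OIDs -- and one of them reports deciwatts instead of watts.
OID_MAP = {
    "vendorA-pdu": {"rack_power_watts": ("1.3.6.1.4.1.99990.1.2.1", 1.0)},
    "vendorB-pdu": {"rack_power_watts": ("1.3.6.1.4.1.88880.5.7.3", 0.1)},
}

def normalize(device_model: str, metric: str, raw_value: int) -> float:
    """Translate a raw SNMP integer into the canonical metric value."""
    oid, scale = OID_MAP[device_model][metric]
    return raw_value * scale

print(normalize("vendorA-pdu", "rack_power_watts", 4200))   # already watts
print(normalize("vendorB-pdu", "rack_power_watts", 42000))  # deciwatts -> watts
```

The table has to be maintained per model (sometimes per firmware revision), which is exactly why vendors spend their entire existence wrestling this alligator.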
Monitoring is critically important to realizing maximum value from a DCIM solution today, yet there is no single standard or agreed approach for monitoring across data center assets! How fun is that? As a result, many startups have been formed over the past dozen years to tackle JUST the monitoring task. They are a DCIM building block and spend their entire existence wrestling this alligator. THEIR end-game is to be a source of raw data that feeds higher-level tools. A few even try presentation, but that is just a hint of what is needed in the DCIM world. Some of these startups do a great job, some not so great. In all cases, however, they are just a piece of the bigger DCIM picture.
DCIM suite vendors, on the other hand, take one of two paths to reach their end-game:
- They realize they have an internal commitment and roadmap to a comprehensive, end-to-end DCIM value proposition, and decide to build their own universal monitoring layer. The monitoring layer tends to be tied to hardware devices, which may be in their blood, so creating a layer that can monitor and control* various devices seems to be in their wheelhouse.
- They choose to partner with one or more of the startup monitoring solutions now being deployed in actual customer sites, attempting to predict which ones will take hold and gain momentum. The more open DCIM vendors make it easy to integrate with multiple monitoring systems.
How do customers move forward? Where do they start on their DCIM journey? Do they start at the top with DCIM suites, or from the bottom with startup monitoring solutions that form a monitoring utility? The simple answer is that a fully functional DCIM solution needs BOTH the top-level business management capability AND the bottom monitoring and control* utility layer. They can start in EITHER place, with the suites or the utilities, but they need to realize that they’ll be doing both over time, and that ONLY when both are in place will they have a truly strategic approach to data center efficiency.
Remember, Real-Time monitoring (as described above) is really in its infancy. There are many competing choices for monitoring components and no real standards. A DCIM vendor’s most compatible or ‘open’ approach will quite likely be to treat the monitoring layer as a utility, just as customers are doing. Each DCIM vendor’s ability to tool its DCIM solution to take advantage of MULTIPLE emerging monitoring solutions will become essential to its long-term success and to delivering customer value. In most cases, I see 4 or 5 installed monitoring systems contributing to the whole Real-Time monitoring picture for any given data center. Don’t forget we have BMS, IT asset management, network and virtualization monitoring, power and environmental monitoring, etc. DCIM vendors must support the monitoring systems gaining traction and treat each as a portion of the Real-Time data source utility.
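The "monitoring as a utility" idea above can be sketched as a simple adapter pattern: each installed monitoring system (BMS, power, virtualization, and so on) sits behind one common interface that the DCIM suite polls. The adapter names and readings below are hypothetical placeholders; real adapters would call the underlying system's API.

```python
# Sketch: treating multiple installed monitoring systems as one utility layer.
# Adapter names and hard-coded readings are hypothetical, for illustration.
from abc import ABC, abstractmethod

class MonitoringSource(ABC):
    """Common interface a DCIM suite could poll, whatever sits underneath."""
    @abstractmethod
    def read_metrics(self) -> dict:
        ...

class BmsAdapter(MonitoringSource):
    def read_metrics(self) -> dict:
        return {"crac_supply_temp_c": 18.5}   # would query the BMS here

class PowerAdapter(MonitoringSource):
    def read_metrics(self) -> dict:
        return {"rack_power_watts": 4200.0}   # would poll PDUs via SNMP here

def collect(sources):
    """Merge readings from every installed monitoring system."""
    merged = {}
    for src in sources:
        merged.update(src.read_metrics())
    return merged

print(collect([BmsAdapter(), PowerAdapter()]))
```

Adding a fifth monitoring system then means writing one more adapter, not re-plumbing the suite, which is the practical payoff of the utility approach.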
Notice I briefly mentioned “Control” above? It is my firm belief that as DCIM continues to mature, and as data center operations become more business-centric, the ‘End-Game’ for all will expand to include self-managing data centers: self-healing, dynamically provisioning servers and CRACs and everything in between. It will become commonplace for monitoring AND control to live as a single utility. Think of this as a superset of the features found in virtualization solutions today, where new capacity can be dynamically spun up or taken down based upon demand. Couple those logical dynamics with the physical infrastructure and you have DCIM 2015. Distributed MONITORING AND CONTROL utility fabrics will become an essential part of any efficient data center by then.
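The monitoring-plus-control idea is, at its core, a closed feedback loop: read a live metric, decide, act. A minimal sketch, with entirely hypothetical thresholds and a simple hysteresis band so the system does not flap between states:

```python
# Sketch of a monitoring-and-control step: purely illustrative.
# Thresholds and the "cooling" action are hypothetical placeholders for
# whatever a self-managing data center would actually provision.
def control_step(rack_temp_c: float, cooling_on: bool) -> bool:
    """Decide whether supplemental cooling should run, given a live reading."""
    if rack_temp_c > 27.0:
        return True       # too hot: spin up cooling (or shed/migrate load)
    if rack_temp_c < 22.0:
        return False      # comfortably cool: safe to power cooling down
    return cooling_on     # hysteresis band: keep the current state

print(control_step(29.0, cooling_on=False))  # True
```

In a real fabric the decision logic, the actuators, and the safety interlocks are far richer, but the loop structure is the same.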