How many environmental sensors do I need?

Seems like a nit, but I have been inundated with questions about environmental sensing in the data center over the past few months. First, let me say that this level of renewed interest in what could be mistaken for a mundane topic is a VERY GOOD thing. Sensors are becoming the lifeblood of understanding energy usage in the data center. Sensors of many types are now commonplace inside many devices, and they can easily be augmented throughout the data center with wireless technologies that simply work as advertised. Environmental sensing is typically discussed in the context of power and temperature, but there are also humidity, air pressure, asset location, leak, and security sensing. So 'nough said? Not by a long shot. This is low-hanging fruit that should be on everyone's DCIM roadmap. Simply put, sensors (of any type) make DCIM suites better.

A few years ago I wrote a white paper on the "Taxonomy of Data Center Instrumentation," which was published by Mission Critical magazine, and much of the material still rings true as written. What has changed is the significantly higher visibility of energy management. Sensing is a critical success factor in the execution of a DCIM strategy. Without sensing and real-world connectivity, DCIM suites lose an entire facet of their core value.

Here's my 2013 Top-10 list of things to consider regarding the use of sensors in the data center:

1. More is always better, but there is a cost factor to sober us up and keep things realistic. You can never go back and recreate data that was missed, so think it through strategically, not just tactically based on cost. I remember when Microsoft's Chicago DC opened, they boasted that 1 MILLION points of data were being gathered across 500 racks. Was that too much? I can't say for sure, since I don't know all of the analytics that were also deployed, but I would take a fairly educated guess that the 'right' number was closer to 1 million than it was to, say, 500. (That large number of points would be derived from a combination of wired, wireless, device-included, etc.)
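To put the scale question in perspective, here is a quick back-of-the-envelope sketch using the Chicago numbers above. The polling interval and per-sample size are my own assumptions, purely for illustration:

```python
# Back-of-the-envelope sensing-scale math. The rack and point counts come
# from the Microsoft Chicago example; the polling interval and sample size
# are assumed values for illustration only.
RACKS = 500
POINTS = 1_000_000
POLL_SECONDS = 60          # assumption: one sample per point per minute
BYTES_PER_SAMPLE = 16      # assumption: point id + timestamp + value

points_per_rack = POINTS / RACKS
samples_per_day = POINTS * (86_400 / POLL_SECONDS)
gb_per_day = samples_per_day * BYTES_PER_SAMPLE / 1e9

print(f"{points_per_rack:.0f} points per rack")        # 2000
print(f"{samples_per_day:,.0f} samples per day")       # 1,440,000,000
print(f"~{gb_per_day:.0f} GB of raw samples per day")  # ~23
```

Even at a lazy one-minute poll, a million points means over a billion samples a day, which is why the analytics behind the sensors matter as much as the sensors themselves.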

2. Sensors are now commonly available within many of the tier-1 IT boxes, and the accuracy of these devices has increased over the past two years. While these internal sensors are readily accessible programmatically, collecting the resulting data from all those sensors across the entire IT structure is still an adventure that should not be taken lightly. Remember that collection of data points requires access (with its inherent security issues), scale, distance, and normalization, not to mention an entire presentation/integration layer to use the data effectively.
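As a taste of what "readily accessible programmatically" looks like, here is a minimal sketch that pulls temperatures from a server's baseboard management controller using the standard ipmitool CLI. It assumes ipmitool is installed and the host exposes IPMI over the network; the hostname, credentials, and exact output format vary by vendor:

```python
# Minimal sketch: read internal temperature sensors from a server BMC via
# ipmitool. Assumes ipmitool is installed and IPMI-over-LAN is enabled;
# sensor names and output formatting vary by vendor.
import subprocess

def read_bmc_temperatures(host: str, user: str, password: str) -> dict[str, float]:
    """Return {sensor_name: celsius} parsed from `ipmitool sdr type Temperature`."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    readings = {}
    for line in out.splitlines():
        # Typical line: "Inlet Temp | 04h | ok | 7.1 | 24 degrees C"
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 5 and "degrees C" in parts[4]:
            readings[parts[0]] = float(parts[4].split()[0])
    return readings
```

One server is easy; the adventure begins when you multiply this by thousands of hosts, several credential domains, and a half-dozen vendor-specific output formats.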

3. Sensor strategies should include all available means. Use what exists, and augment as required. At the end of the day, think about the litany of sensor fabrics as an abstraction task, where two or more dissimilar mechanisms will be combined to yield normalized data that can drive business decisions. Sensor strategies should account for the major systems already in place.
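Here is a sketch of that abstraction idea: every fabric gets an adapter that emits one normalized reading type, so everything above it sees uniform data. The field names, canonical units, and the Fahrenheit-reporting wireless gateway are all assumptions for illustration:

```python
# Sketch of a normalization layer over dissimilar sensor fabrics. The
# Reading shape, canonical units, and the hypothetical wireless gateway
# payload are illustrative assumptions.
from dataclasses import dataclass
import time

@dataclass
class Reading:
    source: str     # which fabric produced it: "ipmi", "wireless", "bms", ...
    location: str   # e.g. "rack42/front/top"
    kind: str       # "temperature", "humidity", "power", ...
    value: float    # always in canonical units (Celsius, %RH, watts)
    ts: float       # epoch seconds

def normalize_wireless(raw: dict) -> Reading:
    """Adapter for a hypothetical wireless gateway that reports Fahrenheit."""
    return Reading(
        source="wireless",
        location=raw["node_id"],
        kind="temperature",
        value=(raw["temp_f"] - 32) * 5 / 9,  # convert to canonical Celsius
        ts=time.time(),
    )

print(normalize_wireless({"node_id": "rack42/front/top", "temp_f": 75.2}))
```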

4. Wireless is a no-brainer and can be considered anywhere in the data center. The technology has gotten so good that battery life is a non-issue for most, and the economics of deployment are impressive. Wireless is a perfect complement to accessible wired sensors and can eliminate a ton of the installation and deployment costs associated with its wired counterparts. In many places, a $50 wireless sensor can easily replace $500 worth of intelligent wired gear.
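A rough sketch of the economics, using the $50/$500 figures from above plus assumed installation labor (your quotes will differ):

```python
# Rough deployment-cost comparison. Hardware prices come from the text;
# the install-labor figures and rack counts are assumptions.
WIRED_UNIT, WIRED_INSTALL = 500, 150       # assumed cabling/termination labor
WIRELESS_UNIT, WIRELESS_INSTALL = 50, 10   # assumed mounting labor

def deployment_cost(points: int, unit: float, install: float) -> float:
    return points * (unit + install)

points = 5 * 100  # e.g. five sensing points on each of 100 racks
print(f"wired:    ${deployment_cost(points, WIRED_UNIT, WIRED_INSTALL):,.0f}")
print(f"wireless: ${deployment_cost(points, WIRELESS_UNIT, WIRELESS_INSTALL):,.0f}")
```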

5. Sensing plans should be coordinated with cooling and power strategies. Power and cooling can be sensed at many locations. Sensor strategies should consider every area where change can occur, or where a change would require action to be taken. For instance, if you care about rack doors being opened for security reasons, consider door-closure sensors. As a general guideline for temperature, every rack should be sensed at the top, middle, and bottom of the front door, with another two sensors in the back at one-third increments.
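That placement guideline turns into a simple checklist per rack; here is a sketch that generates it (the location-naming scheme is my own convention):

```python
# Sketch: expand the per-rack temperature placement guideline into named
# sensing points. The naming convention is an assumption.
def rack_sensor_plan(rack_id: str) -> list[str]:
    front = [f"{rack_id}/front/{pos}" for pos in ("top", "middle", "bottom")]
    back = [f"{rack_id}/back/{pos}" for pos in ("two-thirds", "one-third")]
    return front + back

for point in rack_sensor_plan("rack42"):
    print(point)  # five temperature points per rack
```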

ASHRAE Data Center Guideline

6. ASHRAE TC9.9 publishes its guidelines on data center environments, and the document is worth a read. It is imperfect, but it covers some of the blocking and tackling that should be considered when designing cooling strategies. Figure 2 on page 9 of the 2011 version is of particular interest, as it identifies a handful of IT gear classes by operation, temperature, and humidity. The "recommended envelope" is the safest old-school plan, but likely not as energy-efficient as modern IT gear allows.
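As a sketch of putting that envelope to work, here is a check of inlet readings against the 2011 recommended range (18 to 27 degrees C dry bulb); the status wording is mine, and the published document remains the authority:

```python
# Sketch: classify inlet temperatures against the ASHRAE 2011 recommended
# envelope (18-27 C dry bulb). Status messages are illustrative.
RECOMMENDED = (18.0, 27.0)

def envelope_status(inlet_c: float) -> str:
    lo, hi = RECOMMENDED
    if inlet_c < lo:
        return "below envelope (likely overcooling, wasted energy)"
    if inlet_c > hi:
        return "above envelope (check the allowable class limits)"
    return "within recommended envelope"

for t in (16.5, 24.0, 29.3):
    print(f"{t:4.1f} C -> {envelope_status(t)}")
```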

7. The tier-1 IT vendors are each taking a position on higher-temperature data centers (A2 through A4 in the ASHRAE chart mentioned above). Each has its own 'warranted' specification, BUT when many brands of gear share the same data center, it is the LEAST COMMON DENOMINATOR that must be followed to assure all gear remains in warranty with its manufacturer. A multitude of sensors is a clear way to enable the management of micro-climates throughout the data center and to assure that gear remains supportable.
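The least-common-denominator rule is just an intersection of ranges. A sketch, with made-up vendor names and warranted ranges standing in for real specification sheets:

```python
# Sketch: intersect each vendor's warranted inlet range to find the band
# in which every piece of gear stays in warranty. Vendors and ranges here
# are hypothetical placeholders.
warranted = {
    "vendor-a": (10.0, 35.0),  # hypothetical A2-class servers
    "vendor-b": (5.0, 40.0),   # hypothetical A3-class servers
    "vendor-c": (15.0, 32.0),  # hypothetical A1-class storage
}

floor = max(lo for lo, _ in warranted.values())
ceiling = min(hi for _, hi in warranted.values())
print(f"room must stay within {floor}-{ceiling} C")  # 15.0-32.0 C
```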

8. It is often approximated that 'for every degree that a data center temperature is raised, a couple of percent is saved on the energy bill,' but this sound-bite is one of the most commonly misrepresented data center energy 'facts'. The original was based upon a published observation by Mark Monroe (now CTO @ DLB) while working in Sun's labs a handful of years ago. Today the specific numbers are unclear, given all the variations in modern data center cooling schemes, but the spirit is surely still true, hence the ASHRAE classes mentioned above.
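Because the rate itself is the shaky part, any estimate should carry it as an explicit assumption rather than a fact, as in this sketch:

```python
# Sketch: estimate savings from raising the setpoint, with the 'percent
# per degree' rate treated as an assumption you supply, not a constant.
def cooling_savings(annual_cooling_kwh: float, degrees_raised: float,
                    pct_per_degree: float = 0.02) -> float:
    """Estimated kWh saved; pct_per_degree is the assumed rule of thumb."""
    return annual_cooling_kwh * degrees_raised * pct_per_degree

# e.g. raising the setpoint 3 C in a room spending 2,000,000 kWh/yr on cooling
print(f"{cooling_savings(2_000_000, 3):,.0f} kWh/yr at the assumed 2%/degree")
```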

9. While it is true that IT gear failure rates rise as temperatures rise, the actual rise in those failure rates for modern gear across these ASHRAE classes is negligible. It does, however, call for more sensors as you approach the bounds of these classes. If you plan for normal operations closer to the bounds, then it is critical to monitor those edges to assure they stay in range.
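Monitoring the edges might look like the following sketch: alert inside an assumed safety margin of the class bound, not at the bound itself (class A2's allowable range is used here as the example):

```python
# Sketch: edge monitoring against the ASHRAE class A2 allowable inlet
# range (10-35 C), with an assumed 2-degree alerting margin.
A2_ALLOWABLE = (10.0, 35.0)
MARGIN = 2.0  # assumption: how far inside the bound to start alerting

def check_inlet(inlet_c: float) -> str:
    lo, hi = A2_ALLOWABLE
    if inlet_c <= lo or inlet_c >= hi:
        return "OUT OF CLASS: act now"
    if inlet_c <= lo + MARGIN or inlet_c >= hi - MARGIN:
        return "near bound: add scrutiny (and sensors) here"
    return "ok"

for t in (11.5, 26.0, 33.4, 35.5):
    print(f"{t:4.1f} C -> {check_inlet(t)}")
```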

10. Sensors are not just about temperature, pressure, humidity, and leaks; they can also support security, regulatory, and audit requirements. Because sensors can confirm specific conditions, locations, and events in real time, many organizations have begun to rely on them to replace manual efforts that are often considered distractions from core business challenges. Knowing with the push of a button that a device is installed in a certain rack and consuming a specific amount of power eliminates a ton of otherwise labor-intensive activity.
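That "push of a button" answer is, in practice, just a query against data the sensors keep current. A sketch, with a hypothetical inventory standing in for a real DCIM data store:

```python
# Sketch: answer "where is this device and what is it drawing?" from
# sensor-fed records. The inventory dict and its fields are hypothetical
# stand-ins for a real DCIM data store.
inventory = {
    "web-07": {"rack": "rack42", "slot": 18, "watts": 342.0},
    "db-03":  {"rack": "rack07", "slot": 4,  "watts": 810.5},
}

def where_and_how_much(device: str) -> str:
    a = inventory[device]
    return f"{device}: rack {a['rack']}, slot {a['slot']}, drawing {a['watts']} W"

print(where_and_how_much("web-07"))
```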

So, there you have it: my Top-10 thoughts on sensing for the data center. Now I know some of you are saying, "But can't I just put a few sensors on every third rack?" Well, sure you can; it's all about the balance between cost and the knowledge required for YOUR world. If you believe that whatever happens in the third rack will be indicative of the other two, then by all means that is a valid cost-aware strategy. If you don't care about the temperatures in the back of the rack, then there's no need for sensors there. However, if each of your racks has a different complexion, air-flow pattern, power envelope, and mix of equipment, then that strategy might be error-prone and, as they say, 'penny wise and pound foolish'.

Lastly, don't forget that while sensing can live by itself, in the BIG PICTURE it is an important enhancement to DCIM suites. When you are deploying your DCIM suite, make sure it supports your sensing strategy. Whatever that compilation of sensor fabrics entails, make sure that your DCIM suite vendor has addressed each one in their model. A robust sensor strategy leveraged by a comprehensive DCIM suite simply enables better decision-making over the long haul.
