Best Bet DCIM Providers for 2016

Now that 2016 is upon us, I find that the DCIM industry is really starting to settle out. Most of you know that I joined the DCIM industry before it was actually called “DCIM” (in fact, for a brief period we all called it “PRIM,” for Physical Resource Infrastructure Management, but through a combination of coincidences at both Gartner and Forrester the term migrated to DCIM). I worked for many of the players in this space, became deeply involved with all of the key vendors attempting to participate in this solution area, and in the process even became friends with the vast majority of the pundits in the industry.

The DCIM Dust is Settling in 2016

While I have left the DCIM category professionally in my ‘day job’, I still stay tightly connected to what is going on. In fact, I still get quite a few calls from researchers and other vested parties regarding what is happening in the industry or with one company or another. And as you know, I have my opinions and am happy to share.

First, let me say that DCIM is real. It can be the answer to real questions that are on the table now. Data Centers themselves are changing quickly, but the need to understand more about what is happening inside those centers has never been more urgent. Whether you own the cement or not, the “data center” is still yours.

Did I say ‘urgent’? Yup, I did. Let me explain.

Assertion: It is urgent to get your data center house in order. With easy access to rented transactions and services through providers like AWS and Google, your challenge is to figure out how those cloud providers do the magic they do, and then either subscribe to them in a big way or learn from them ASAP! You can choose either path, but time is ticking by, and if you don’t make that strategic choice, YOUR REPLACEMENT WILL! Getting your house in order is not just about new ways to assure availability and capacity; it focuses heavily on understanding the fiscal impact on the business of every choice you make, and specifically on reducing your cost per unit of work. If you don’t know what YOUR cost is per unit of work, then your house is NOT in order. DCIM can help you understand unit-of-work costing, but you need to understand what you are missing to get there.

Unit of Work? What the heck is that?

It’s the reason you exist. It’s the reason IT exists. It’s the way you provide capabilities and services to your business and it’s the plan you signed up for when you became an IT professional. If you are the IRS, perhaps the unit of work you care about is processing an individual 1040 tax form.  If you are eBay, then your unit of work might be displaying a page in its marketplace. At the end of the day, you want to know the cost for that unit of work (which for example at eBay is currently running about three-quarters of a penny). Tip: Thinking about cost per unit of work shows that you have traded your technologist hat for a business hat. And it is just common sense that over time your goal MUST BE to reduce the cost per unit of work.
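
To make that concrete, here is a minimal sketch of the unit-of-work arithmetic. Every number below is invented for illustration; swap in your own fully loaded monthly costs and your own unit-of-work volume.

```python
# Minimal sketch of cost-per-unit-of-work arithmetic (all figures are made up).
monthly_costs = {
    "power_and_cooling": 180_000,   # utility plus mechanical, USD
    "space_or_colo":      90_000,   # lease or depreciation
    "hardware_amortized": 250_000,  # servers, storage, network
    "software_licenses":  120_000,
    "staff_and_support":  210_000,
}

units_of_work = 120_000_000  # e.g., 1040 forms processed or marketplace pages served

total_cost = sum(monthly_costs.values())
cost_per_unit = total_cost / units_of_work

print(f"Total monthly cost: ${total_cost:,.0f}")
print(f"Cost per unit of work: ${cost_per_unit:.4f}")  # about $0.0071 with these numbers
```

Track that last number month over month; the business hat says it must trend down.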

So, which DCIM is right for you to help you get there?

Great question, and one I have been wrestling with for 10 years! Here’s the industry’s dirty little secret: there is no single answer! “DCIM” does not define a single set of capabilities; it refers to the whole roster of solutions that offer insight and/or control over the physical components found in a data center (or co-location facility, same story). Those physical components can be facilities oriented (CRAC, PDU, genset, etc.) or IT oriented (servers, switches, storage, etc.). And to date, no single vendor has proven that they can address all (or even most) of the needs of the entire data center with any amount of credibility. (There has been a ton of posturing and wild vendor claims, but the reality is that each vendor does one or two things very well, and the rest of YOUR actual and tangible needs will need to be met by additional vendor solutions.)

So here’s my 2016 “DCIM GoTo” list of vendors that can deliver EXACTLY what they say and who spend a lot LESS time posturing about what they really do…

  1. Want great facilities monitoring of well-behaved (network-enabled) mechanical and electrical gear? Then FieldView is a great choice. They collect data, normalize it, and then present it in a clean set of views and reports. Their technology is installed in hundreds of the world’s biggest data centers, and their deployment process is an afternoon project. (Well, it gets a bit more complex when you have older equipment that communicates using ancient protocols, RS232 interfaces, or 20mA signaling, but they can get you there too.)
  2. Feel like high-quality and intelligent power at the rack level is part of the insight you seek? You probably need to get Legrand or ServerTech PDUs. They can offer instrumentation at the outlet level (if you care) or at the branch level (which you need), and each supplies highly capable management software to leverage their hardware PDUs’ intelligence. Above all, they are solid citizens that won’t fail to deliver on Job #1: providing clean power over long, stressful periods of time. Keep in mind that dense racks exceed 20kW these days, so delivering solid power under these high loads is not an easy task. Legrand and STI understand this and can help you too. (And if you happen to have any other brand of intelligent PDU already, head on over to Sunbird for their PowerIQ vendor-neutral software solution to manage ANY brand of intelligent network-attached PDU.)
  3. Want to understand the thousands of mini-workflows associated with all of the devices in a data center and all of the related asset provisioning and decommissioning? Want to see where every IT device is installed and connected at any point in time? Or maybe where the next new server is best placed? Nlyte would be a good choice; their solution is focused on the contents and context of each rack. At any point in time, they can allow you to visualize all of the racks you maintain in near-perfect fidelity, as they exist today or as they will exist once future projects are completed.
  4. How about 3D fly-through views of your data center, with the ability to turn layers on and off to allow focused visibility into the physical subsystems in production? Commscope’s iTracs makes sense. For a facilities- and cabling-oriented DCIM goal, this could be an essential element, and it is visually stunning to watch. Note: “real work” in a data center gets done in 2D, so be careful not to get enamored with the art of iTracs, and try to stay focused on the value you seek. iTracs is an amazing solution for the maintainers of cable (data and power) and the associated flooring subsystems.
  5. Feel like working deep through the fiscal model of your data center and studying the impacts of your various proposed changes on Opex and Capex? Want to quantify the TCO in a defendable way? CFO breathing down your neck to translate your tech-speak into dollars and timeframes? Think Romonet, as they understand every aspect of the data center business and allow each component to be included in their fiscal modeling.
  6. And what about those environmental sensors? So many data centers are still running blind when it comes to heat and humidity sensing, and yet the technology is cheap and effective. Temperature and humidity may no longer be a major factor in wide-scale equipment failure (since the manufacturers have widened the acceptable ranges quite a bit), but those issues play havoc with COSTS. RFcode does a great job of delivering sensors that work hard and feed into very visual, at-a-glance dashboards. They are installed in seconds and begin reporting immediately. They are low-cost, and the included batteries last for years!
  7. And finally, discovery. This is an age-old problem. Now, it would be great if some magical software could figure out WHERE a server or switch was installed, but thanks to the 80-year-old 19-inch rack standard, we are not going to see that in any non-proprietary fashion anytime soon. There simply is no agreed mechanism to do so at the rack level, and even the new OCP rack spec ignores this need once again. That said, No Limits Software is currently the deepest and least intrusive way to figure out WHAT is installed in the rack. It digs deep inside the device operating software and paints a clear picture, right down to firmware version numbers. Discovery can play a huge part in profiling what you have in the data center: firmware versions, installed software, etc. (A tiny sketch of the kind of per-device data discovery gathers follows after this list.)
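
As promised in item 7, here is a tiny, local-only illustration of the shape of the data a discovery tool builds up per device, using nothing but the Python standard library. This is not how No Limits Software (or any DCIM discovery product) actually works; real tools reach across the network and go much deeper, but the record looks something like this.

```python
# Local-only illustration of a per-device discovery record (not any real product's method).
import json
import platform
import socket

device_record = {
    "hostname": socket.gethostname(),
    "os": platform.system(),
    "os_release": platform.release(),
    "architecture": platform.machine(),
    "python_runtime": platform.python_version(),  # stand-in for "installed software"
}

print(json.dumps(device_record, indent=2))
```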

So what about CA, Emerson and Schneider? It is still not clear what their end-game is. They DO provide value to their customers, but tend to deliver their maximum value within their own installed base. As a result, many end users consider Emerson and Schneider to be element managers for their own equipment. Most exciting, as data centers become more software-defined, both Emerson and Schneider have indicated that they have a solid vision for delivering automated, self-adjusting data centers over time. (Emerson has a video on YouTube which puts that timeframe between now and the year 2025.) And CA? Hmm… It is my expectation that they will simply extend their existing ITSM and ITAM tools to include some of the asset information being sought, and that they will continue to offer energy-related stand-alone solutions.

Make 2016 the year that you challenge yourself to be an active listener when talking to DCIM providers. If you listen carefully you’ll hear what they do REALLY well, and you’ll hear what they consider a minor area of interest. Try not to put words in their mouths. They want to say “yes” to most everything, but be smarter this year. Understand that for the foreseeable future, you’ll need multiple tools to understand the physical layer and the resulting business metrics that go with it.


Moore’s Law – It’s about Embracing the Business Opportunity

Gordon Moore’s 1965 Graphic about technology doubling

I love pondering the last 50 years of computing innovation. Although I knew nothing about technology in the mid-sixties when Gordon Moore observed that the number of components on an integrated circuit was doubling every 12 months, his observation has been a guideline influencing literally millions of subsequent business choices made by vendors and end-users alike for much of that period.

Now the curious thing is that Gordon Moore revised his projected timeframe to 24 months in the mid-seventies, at the very beginning of the general-purpose CPU revolution (think of the Intel 8080), once he realized that building multi-purpose CPUs was a much bigger undertaking than the function-level integrated circuits (single-function chips like the 7400 series) that had been the state of the art until that point.

Wait: in 1965 Gordon Moore said component counts double every “12 months,” and then, when big bad chips (like the 8080) were in their infancy ten years later, he said the doubling rate had slowed to “24 months.” Yet everything you and I read today quotes “Moore’s Law” (which really isn’t a LAW at all) as a doubling every “18 months.” What gives? Well, marketing does. Some clever marketing soul realized that the only way to make the facts and the fiction ‘kind of align’ was to take the average: 18 months in this case. It was believable, defendable, and has stood the test of time (with just a bit of hand-waving required).

Transistor counts for CPUs have loosely followed Gordon Moore’s observation

So, does it really matter which number is more accurate? No, not really. The point is that every year or two, most technology things double in capacity AND halve in cost at the component level. Servers become twice as capable every couple of years. Network transport doubles too. And when you compound this effect over any reasonable period of time, it becomes staggering. In fact, we store more information in one day today than we did in all of the 1980s. Most importantly, we don’t build technology for technology’s sake; we do so to access the VALUE of all of this information, which doubles too!
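
Just how staggering is the compounding? A few lines of arithmetic make the point; the numbers below are pure math, not measurements.

```python
# How much capacity grows if it doubles every N months (simple compounding).
def growth_factor(years: float, doubling_months: float) -> float:
    """Capacity multiplier after `years` if capacity doubles every `doubling_months`."""
    doublings = (years * 12) / doubling_months
    return 2 ** doublings

for months in (12, 18, 24):
    print(f"Doubling every {months} months -> {growth_factor(10, months):,.0f}x in 10 years")

# Doubling every 12 months -> 1,024x in 10 years
# Doubling every 18 months -> 102x in 10 years
# Doubling every 24 months -> 32x in 10 years
```

Run the compounding out to 15 or 20 years and the multipliers get absurd, which is exactly the point.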

And with technologies like the Internet of Things and software-defined networking and storage, the rate of this doubling is accelerating. We as an industry are like a voracious animal, feeding on information with nothing but opportunity and creativity to guide us. The social experience is getting your 3-year-old daughter and your 93-year-old grandmother into the game too. And all of it is made possible with the new generation of Information Technology, which is doubling per the curve. Not the IT that existed when Gordon made his observations, but the IT that sits in your hand right now and is connected to the world. Keep in mind that the Facebook main screen that you probably looked at this morning during your first cup of coffee actually consists of a hundred or more applications working together, each driving some portion of your experience. Each app communicates with the others to bring you a rich, fun and VALUABLE experience. That is why we all do what we do in the tech industry, that’s where it all shines, and that is why this doubling concept is so essential.

At the end of the day, there are massive transformations of nearly every sector of business happening to take advantage of this new IT. Finally, the business is driving technology. Finance and Education, Government and Aerospace, Entertainment and Internet… the most successful businesses are re-tooling themselves to embrace and leverage these new technologies, knowing that everything they do today will be HALF of what their opportunity is next year.

Thanks Gordon…


DCIM Facts versus Myth – Time for a Reality Check!

Facts versus Myths for DCIM

Last week we conducted an online webinar devoted to discussing the common misunderstandings and myths associated with DCIM. Over 400 people registered for the webinar, and we had a ton of questions and comments afterwards. It’s very clear that DCIM is a brand-new category of solution for many of the attendees, and that assumptions and incorrect data points are preventing many end-users from realizing the benefits of DCIM.

I have selected a handful of the more popular myths we explored during the webinar and present them here, along with a more detailed narrative about the reason for each “myth” and the informed facts that should be considered instead. My goal is to provide the necessary DCIM facts for your consideration and to seed your thought processes as you begin your DCIM journey. Read the whole article on the Nlyte Blog.


Exposing IT Value to the Business

The IT industry is currently experiencing an amazing transformation. Whereas most long-term IT professionals have spent their careers creating and supporting increasingly complex IT structures with primary metrics of availability and uptime, the new CIO mantra has become service delivery at the right cost. In effect, CIOs are formalizing many previous efforts and creating service products that can be delivered upon request, with a keen understanding of the costs incurred to deliver those services. CIOs today think about these Service Portfolios as the means to set expectations on how technology will be adopted, how it will be supported, and at what cost those technologies will come. See more here.


Taking the 2015 IT Infrastructure Scenic View

The ways in which we deliver IT are changing dramatically in 2015 and beyond

The IT infrastructure challenge in 2015 is daunting, and only the strongest will survive: the strongest vendors and the strongest IT professionals. Nearly everything we knew about building and maintaining IT infrastructures is being superseded by a wave of new technologies, new processes, new people and new approaches. I specifically used the word “New” rather than “Alternative” since these fundamental changes really are radically different from those in the past and they *ARE* happening with or without you. For instance, we aren’t just making disk drives faster or bigger; we are eliminating them from your data center floor or making them so smart they handle growth easily. Same thing for servers and networks. Not just more of the same old thing, but NEW WAYS of doing it. All of IT is going through radical change.

Think of the kind of change we saw in the data center after 1991 (the year Linux came on the scene), or the desktop after 1981 (when IBM started to ship the PC), or sharing and collaboration after 1995 (when the internet boomed). These foundational changes allowed, and required, everyone involved to (in Steve Jobs’ words) “Think Different,” and they also destroyed the Luddites who chose to ignore these shifts. Those of us who were paying close attention and embraced those changes were handsomely rewarded. The forward-looking companies that adopted these changes were wildly successful, and a lot of people’s careers were made. Data centers could now be filled with systems based on new hardware and software, virtualized loads, and unlimited-scale applications, all with a level of performance, interoperability, efficiency and cost structure that wasn’t even in the same ballpark as previous approaches.

So that brings us to IT circa 2015. It’s all happening again! Everything that we 40- and 50-somethings know about the IT business is up for grabs again and all of it is being retooled around us as we speak.

Here is my list of the 10 most impactful changes occurring that are worth getting your arms around if you are an IT professional planning to stay in the IT segment:

1. IT Service Catalog. Most IT structures and the associated processes have grown into monsters. The complexity and delicate structures stifle creativity and choke new initiatives at a time when stakeholders are asking for more creativity and agility. The most successful IT professionals are now looking for the means to create service offerings as if by catalog. Each service “product” must have a known cost per user, a known delivery timeframe, specific capabilities/deliverables and expectations, and a whole slew of escalation definitions for when things don’t go right. IT “products” (like email) are being defined, and the costs to deliver those “products” quantified. It is this Service Catalog mentality that makes the most admired CIOs in this new era smile. (See the catalog sketch after this list.)

2. Automating process and control. Whereas we used to have a sysadmin for every 50 or 100 servers, keeping them patched and operating, we now find sysadmins handling thousands of servers through the use of automation tools. Automated patching, provisioning, migration. Application installs and password resets. All of this is becoming automated through the use of tools that capture the human intelligence and then dispatch that same knowledge automatically each time the same task needs to be performed. Think “copy and paste” at the macro level. And it doesn’t stop there. Virtualized loads can automatically shift from physical server to physical server, and HVAC gear can automatically sense conditions and self-adjust as needed. This is happening today.

3. Software Defined Networks are a new concept, with less than 5% penetration (in production), but they are on a hockey-stick shaped curve today. Adoption is beginning to take off, and the various approaches are finding their sweet spots. What started out as an economic “OpenFlow” free-for-all has quickly become a business discussion about capabilities, flexibility and value. The protocol itself has taken the backseat to new capabilities. Why did this happen? Well, we all grew up on networks which were built in a north-south fashion. The vast majority of all traffic started or ended at the edge devices. Server-to-server communication was limited. That’s why all of the traditional switch vendors have two lines of products today: core and edge. With the highly connected web full of various ‘services’, we now see server-to-server communication skyrocketing. This can be thought of as east-west communication, and it demands a ‘fabric’ approach to networking. And the icing on the cake: once you have built an SDN interconnect fabric, you have the perfect place to host virtualized functions or services that essentially reside everywhere the fabric is. Load-balancers and firewalls come to mind. Let’s think of this SDN structure as a ‘thick connectivity fabric’. What’s more, applications themselves can tune the fabric for their own needs and take advantage of lots of orchestration possibilities! And a bonus for the pure technologists: some of the industry’s most advanced SDN players can offer full layer-1 monitoring within the fabric itself! No more expensive/duplicate networks just for performance analysis!

4. Self-Service mentality. I want IT now. Rakesh Kumar at Gartner presented a paper in December 2014 stating that 37% of the nearly $4 trillion of IT spending in 2014 occurred OUTSIDE of the IT organization. 37% of all IT projects didn’t involve the IT organization at all! Shocking. This is because the end-user now has the option to look at any number of 3rd-party service catalogs and buy with a credit card. Want a new desktop? Consider a tablet via CDW. Need storage? Think Box. Need email in a hurry? Google to the rescue. No longer is the IT organization the end users’ only source of services. The strongest players moving forward will be the IT professionals that embrace speed and agility in the delivery of their capabilities. Projects with many-month delivery schedules are no longer realistic when a 3rd party can deliver next day or next week.

5. Who would have thought that a 10U-12U chassis would be housing HUNDREDS of CPU cores and be able to move over 7Tb/s of data internally? The big server providers today offer really dense boxes you can buy for $100K or more ($250K fully loaded in some cases), and at 10-12U each you can put 3 or 4 of these monsters in one rack. I dare say we’ll see 40kW per rack very soon, whereas just a few years ago we saw 40kW per ROW. This is a mindset that enables a dramatic difference in the way we approach new data center build-outs and retrofits. It has been hypothesized that your existing data center could last for 30 years or more if you simply took advantage of all of the ‘Moore’s Law’ advances taking place at the device density level (assuming that your utility company can get you ‘a few more megawatts’ every few years).

6. I like to think about the simpler days when we all built raised floors out of 24-inch square tiles and ran our cooling and cabling under those tiles. We talked about loading capacities and laughed at the racks getting heavier, but until recently it was just a curious discussion. No longer! Those same monster dense devices also weigh a ton (literally), and it is no longer a best practice to plan on using raised floor. Building your data center directly on concrete is all the rage. Data and power cabling run overhead, and cooling strategies are beginning to take advantage of the fact that COLD AIR likes to FALL (and HEAT rises). It’s hard to understand why we decided 25 years ago to create cold air on the perimeter and then PUSH it upward into the racks, fighting pressure physics. Cold air sinks, and the new generation of data center designers get that.

7. Unbounded Infrastructure as a means to blur the lines between mechanics and functions delivered on the data center floor. As it turns out, if we stop thinking about individual boxes as individual management islands, each doing a bit of work with the results somehow aggregated externally, we can take a whole new approach to IT. Hardware and software mechanisms are now commonplace that ignore physical device boundaries and allow capacity to be aggregated. Want to know how Twitter or Airbnb handle all of their transactions in real time? There is a project called Mesos that creates services out of boxes. Need more I/O? Simply add more I/O services, which instantly become part of the ‘system’. How do you write applications for a world like this? You write applications for the platform, not the server, and then let the platform independently take care of scaling whatever resource it needs.

8. We used to SIZE and BUILD data centers based upon some perceived top level or watermark of capacity needed. In a typical scenario, data centers were built as big as anyone could imagine the load would be, and then organizations and their projects moved into that center over an extended period of time. The downside to this old-school approach is the cost: the difference between the cost to occupy the first square foot of space on day one and the cost of the last square foot 3-5 years down the road was enormous. Building a big data center was only an academic exercise to keep the CIO happy. In reality, it never made sense (to the CFO) to over-build and hope that the space would be needed down the road. Today, you can find modular designs to replace the old way of building a data center, both manufactured (like Baselayer) and brick-and-mortar (like Compass Data Centers), that can get you new space in increments of 20-200 racks or less. (Companies like Elliptical even do a micro-modular design down to ONE rack at a time.) I guess we’ll need a new term to replace ‘breaking ground’ when using these modular approaches.

9. We all laughed when data centers were so cold that you needed a jacket to walk through them. Over the last 10 years it has become quite another matter, and everyone is talking about it. What started out as energy efficiency, with the Green Grid publishing their “PUE” metric, has become a battle cry for every manufacturer and end-user alike. Pack more computing into a smaller amount of space, and get the power bill down per unit of work. With the cost of power now a top-3 concern for everyone in an enterprise, whole new approaches are being used to make data centers more efficient. It starts with the location of cheaper power, which is driving where data centers are being built, and continues with the ability to use free-air cooling at those locations. Add in the advances in CPU design and power supply design, and we have everyone working on energy costs. (See the worked PUE example after this list.)

10. I purposely placed “The Cloud” here at the end since it is one of the most dramatic changes some of us may see in our entire career. Gartner has predicted that by next year, 20% of all Enterprises will have NO backend IT functions in-house. Even the old-line corporations are already diving into the Cloud for various applications. But don’t panic. This is a drawn-out process and will be going on for a dozen years. Certain applications are perfect for the Cloud today, and others are too time-sensitive or confidential to lend themselves to today’s Cloud offerings. Remember, the Cloud has already been around for 10 years. Salesforce.com was one of the earliest examples of an application/software-level cloud. However, the Cloud probably raised your eyebrows when companies like Amazon and Rackspace began offering platform-level clouds. Most exciting, the tools now exist to allow in-house workloads to be shifted to the Cloud and vice versa as demand changes. (Companies like VMware do a good job of this.) Best guess on the mix of in-house IT services versus Public Cloud provided services: less than 15-20% today, approaching 30-40% in 5 years and perhaps 50-60% in 10 years.
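
Referring back to item 1, here is a hypothetical sketch of what a single catalog “product” definition might capture. The fields and figures are purely illustrative; they are not any particular ITSM product’s schema.

```python
# Hypothetical service catalog entry (illustrative fields and numbers only).
from dataclasses import dataclass, field

@dataclass
class ServiceProduct:
    name: str
    cost_per_user_per_month: float          # fully loaded, USD
    delivery_time_days: int                 # request to working service
    deliverables: list = field(default_factory=list)
    escalation_path: list = field(default_factory=list)

email = ServiceProduct(
    name="Corporate Email (50 GB mailbox)",
    cost_per_user_per_month=6.50,
    delivery_time_days=1,
    deliverables=["mailbox", "mobile sync", "archiving"],
    escalation_path=["service desk", "messaging team", "IT director"],
)

print(f"{email.name}: ${email.cost_per_user_per_month:.2f}/user/month, "
      f"delivered in {email.delivery_time_days} day(s)")
```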
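
And referring back to item 9, the Green Grid’s PUE metric is just a ratio: total facility power divided by IT equipment power. Here is a quick worked example with invented meter readings.

```python
# PUE = total facility power / IT equipment power (The Green Grid's metric).
# The meter readings below are invented for illustration.
it_load_kw = 1_000        # servers, storage, network
cooling_kw = 450          # chillers, CRACs, pumps, fans
power_losses_kw = 120     # UPS, transformers, distribution
lighting_misc_kw = 30

total_facility_kw = it_load_kw + cooling_kw + power_losses_kw + lighting_misc_kw
pue = total_facility_kw / it_load_kw

print(f"PUE = {total_facility_kw} / {it_load_kw} = {pue:.2f}")  # 1.60 in this example
# At this IT load, every 0.1 shaved off PUE removes roughly 100 kW of overhead,
# which goes straight to the power bill per unit of work.
```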

So what does all of this mean? Think Forward. Think about articulating the business problems first, and then solicit every smart guy you know to figure out how to solve that problem. Building more of the same old structure is likely the wrong answer. Simply adding more disk spindles or a new edge switch is likely throwing good money away. Embrace it all and be ready to defend your choices. “It’s always worked like this” is no longer an answer with much street credibility. And “If it’s not broken don’t fix it” still applies, but I would argue that nearly all of the IT stuff we’ve collectively inherited in our data center could be considered “broken” by today’s standards.

Let’s commit to fix it…


SDN: Are We There Yet?

Remember in 2001 when you heard about VMware GSX? It sounded like pure magic and seemed to do the impossible: it allowed you to run multiple instances of any real server operating system on a single hardware server. The operating system thought it was running on a hardware box, and yet it was just a slice of that box. Over the next few years, the buzz turned to a roar and quickly encouraged most commercial organizations to try a ‘pilot’. During those pilots, they realized that certain applications (like web servers) were a great fit for virtualized servers, and these organizations set their buying sights on a new generation of big, beefy servers which were needed to take full advantage of virtualization. Virtualization delivered just what it promised, and then some!

It turns out there are several technical ways to virtualize, so VMware soon found competitors: Citrix, Microsoft and Sun also jumped in, and over the next few years the single-host/multiple-guest computing model came of age. Intel and AMD even changed their CPU architectures to directly support this type of virtualization innovation at the hardware core. Software providers changed their licensing models to account for virtualized servers directly too. From a timeline standpoint, virtualization was imagined, tested, tweaked and then adopted “en masse” over a period of a dozen years. According to Gartner, server virtualization accounted for 16% of all workloads in 2009, accounts for more than 50% of all server workloads today, and will rise above 80% within the next 3 years (by 2017). This is the adoption curve we expect with game-changing technology.

The SDN Market is coming together, but is highly competitive, with different technical underpinnings.

That brings me to the area of Software Defined Networking. Although programmable networking can be traced all the way back to 1995, with efforts at AT&T and Sun, the modern-day use of the term SDN is connected to the period around 2011 when the Open Networking Foundation (ONF) was formed to further the creation and use of “open” networking standard protocols. The idea of today’s SDN is simple: rather than each vendor continuing to ship proprietary networking gear with every device carrying all of its transport intelligence, why not separate the control plane from the forwarding plane? Decompose the problem into two distinct areas that can each be individually optimized. Most important, scale and visibility become just a matter of technical creativity, since a distributed controller architecture that drives any number of physical switching ports can easily be created to offer ‘one view’ of the whole thing! And the icing on the cake is when you realize that this new decomposed architecture (when implemented well) allows APPLICATIONS to determine their specific performance needs, NOT a slew of network engineers who are buried trying to set traffic-shaping rules for every new capability being added to the network across any number of individual boxes. (A toy sketch of that separation follows below.)
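
To make the control-plane/forwarding-plane split concrete, here is a deliberately toy sketch in plain Python. It is not OpenFlow and not any vendor’s controller API: a central “controller” that alone knows the topology computes a path, and the “switches” simply receive match-to-port forwarding entries.

```python
# Toy sketch of SDN's core idea: a central control plane computes paths,
# while the forwarding plane (the switches) just stores match -> egress-port rules.
from collections import deque

# Topology known only to the controller: switch -> {neighbor: egress_port}
topology = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s4": 2},
    "s3": {"s1": 1, "s4": 2},
    "s4": {"s2": 1, "s3": 2},
}

def shortest_path(src: str, dst: str) -> list:
    """Controller-side path computation (simple breadth-first search)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in topology[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return []

def install_flow(flow: str, src: str, dst: str) -> dict:
    """Push a match -> egress-port rule to every switch along the computed path."""
    path = shortest_path(src, dst)
    rules = {sw: topology[sw][nxt] for sw, nxt in zip(path, path[1:])}
    for switch, port in rules.items():
        print(f"{switch}: forward '{flow}' out port {port}")
    return rules

# An "application" asks for connectivity; the controller programs the fabric.
install_flow("web-tier -> db-tier", "s1", "s4")
```

Real controllers do vastly more (topology discovery, failure handling, policy), but the division of labor is the same: intelligence in one place, simple forwarding everywhere else.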

Today we are just a handful of years into this SDN journey, roughly where virtualization was circa 2006. SDN is all the buzz today, and we are clearly at the tipping point of the hockey-stick curve. Many corporations are trying SDN pilots, and investments are being made by VCs, vendors and end-users alike. A growing number of production deployments are being seen, with a few huge deployments (like Facebook and Google) proving it out, which goes a long way toward demonstrating the scalability, security and commercial value of SDN. Startups have formed for nearly every aspect of SDN. Some create high-density hardware (“Forwarding Plane,” or switches), some create high-intelligence controllers (“Control Plane,” or operating system), and some even create value-added applications (like traffic management, visualization and analytics). The biggest old-line networking vendors have even released overlays to their existing products to allow some level of “participation” in SDN networks. (This participation is at best a defensive/transitional approach, since the old devices will still carry all their heavy baggage, but it may allow some level of migration for large installed bases until/if they get to REAL SDN.) Given the huge potential and the original premise of SDN, that transitional approach will be short-lived, and I would expect to see a significant number of new-generation hardware and software suppliers that are built from the ground up to be SDN components.

We are also seeing the SDN revolution underscore the need to think about application-level business values and set expectations accordingly. The skills required to manage SDN networks are vastly different from those of the older CLI-based, box-by-box and application-by-application approaches network administrators have practiced for years. With SDN, if you can “think” it, the network can be programmed to support it. Most importantly, you “think” networking in an SDN world at the business application level, not at the box or protocol level. And in the same vein, the performance of applications can be measured against those business needs and could (in theory) self-adjust the network to meet their precise contracted needs. While SDN protocols have certain built-in performance values being collected all the time, this next generation of tuning capabilities will come from software developers who orchestrate the performance data being collected at the application level and communicate the changes needed directly to the control plane itself.

Time will tell where and when adoption will occur. OpenFlow has been an early leader among the technical approaches used by many vendors in the SDN community, and yet the real SDN story is NOT about the protocols in use; it’s about the ease with which business services can be delivered better, faster and at a lower cost. It’s about enabling the new generation of computing, what Frank Gens at IDC calls the “Third Platform”. This new era is based upon always-connected handhelds, the IoT, and so on. And just like virtualization, heroes will be made who will look back on their early adoption and championing of SDN as the crowning moment of their careers.

Be an SDN Hero!


The Good Ol’ Days are gone – Security is the Basis of Your Future

Having just come back from Gartner’s Data Center Conference, held annually in Las Vegas, I had the opportunity to reflect on what I heard at a macro level over the past few days. For those that didn’t attend, Gartner brings together 2500 or so of the industry’s leading IT professionals from the vast majority of the Fortune 500. Their titles range from IT and Data Center Manager to CIO and VP of IT or Infrastructure. Over the course of four days or so, this mass of IT folks get together and mingle, discuss business strategies and try to get a sense of where the various technologies, and the industry itself, are going. Obviously areas like the Internet of Things, Cloud, BYOD, APM and Co-Lo were very hot topics, as were the latest generation of server, storage and network offerings. But there was one topic that seemed to be an integral part of all other discussions: security.

Data Center Transformation is a function of Security

As I sat back and listened, I realized that many of the folks in attendance had a solid reference point of “their IT” as it has existed for the past 20 years. Many of the vendor presentations and hallway discussions had a tone that longed for the simpler “Good Ol’ Days,” when their biggest concerns were capacity, availability and interoperability. Sure, we had some good times dealing with those, and many of us have made our entire careers chasing those rabbits.

So here we are in 2014, and it is OURS to define the going-forward plans. And those plans will be dramatically different from the ones that got us here. As it turns out, the massive connected mesh we are all striving for brings with it the responsibility to deliver it all in a highly secured fashion, with all the tentacles (endpoints) secured as well. The topic of security is now clearly in sharp focus and runs through all of the other discussions about building IT structures today and tomorrow.

Now follow me on this next journey. According to Rakesh Kumar at Gartner, 1) over $3.7 TRILLION has been spent on IT and IT services in 2014, and 2) over 37% of those dollars are being spent on IT solutions OUTSIDE of the IT organization. Why is this important? It means that the safe, confined, comfortable corporate world of IT that we all grew up on and protected is now littered with connections and other services that exist OUTSIDE of your control! And it’s not just remote users, but remote applications as well. Connecting to SaaS and other clouds, remote access and BYOD users form a critical component of our going-forward plan, and yet many of us are still throwing 2005-vintage protection schemes at our corporate borders.

In the Good Ol’ Days, massive commercial security breaches were something we rarely heard of at the corporate level because those companies simply didn’t allow much external access, and when they did, they used VPN-like technologies and felt safe and secure, since they thought the endpoints were as good as ‘inside’ the corporate structure. Today, nearly every day, we hear about another major retailer or bank, government agency or telecom disclosing ‘issues’ with unauthorized data access. What changed? Are these companies simply not spending enough on firewalls or IDS? If only it were that simple. In fact, IDC says they spent nearly $10 BILLION on protection systems this year. Money is not the problem. They want to protect, but the modern threats have simply matured so much that old-school ‘signature’-based technologies (the ones deployed by most companies today) are dramatically ineffective. Today it doesn’t really matter how many ‘signatures’ those old-school devices have built in. We need to think different.

It’s about behavior. With $3.7 trillion of new spend on the line, the forward thinkers are realizing that detection signatures are something that describes the past, whereas behavior is something that defines the future. What do I mean by behavior? Well, the autopsy of a typical breach goes like this: 1) A simple system like a desktop, laptop or web server is hacked, and some form of malware control app is placed upon it. 2) The malware becomes the ‘agent’ on that box and can be instructed to do anything the outside hacker wishes. 3) The hacker is typically looking for something specific, usually sensitive data, so this control agent is told to seek that data out and, once found, package it up and use a familiar, friendly protocol (like HTTP) to send it back to hacker central. Lastly, 4) these agents are usually instructed to try to find similarly breachable peer systems so that the process can be repeated.

With this behavior in mind, it’s just a matter of designing new protection systems from the ground up that try to identify this flow. They are designed to focus on zero-day threats (those never seen before) as well as all kinds of Advanced Persistent Threats (APTs). These newer protection systems understand the zillion variants of behavior and the best of these systems actually get smarter over time. These systems test their initial analysis and even try running certain payloads that are seen traversing the wire.
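
As a purely illustrative sketch (not any vendor’s detection engine), here is the flavor of a behavioral check: instead of matching payload signatures, it flags a host that suddenly ships an unusually large outbound transfer to a destination it has never talked to before. All hosts, destinations, byte counts and thresholds below are invented.

```python
# Illustrative behavioral heuristic (invented data, not a real product's logic):
# flag hosts sending unusually large outbound transfers to never-before-seen destinations.

baseline_destinations = {            # destinations each host normally talks to
    "laptop-042": {"mail.corp.example", "files.corp.example"},
    "web-01":     {"db.corp.example", "cdn.example.net"},
}

outbound_events = [                  # simplified flow records
    {"host": "laptop-042", "dest": "mail.corp.example", "bytes": 2_000_000},
    {"host": "laptop-042", "dest": "203.0.113.77",      "bytes": 750_000_000},
    {"host": "web-01",     "dest": "cdn.example.net",   "bytes": 40_000_000},
]

LARGE_TRANSFER_BYTES = 100_000_000   # 100 MB outbound counts as "unusual" in this sketch

def suspicious(event: dict) -> bool:
    new_destination = event["dest"] not in baseline_destinations.get(event["host"], set())
    large_transfer = event["bytes"] > LARGE_TRANSFER_BYTES
    return new_destination and large_transfer

for event in outbound_events:
    if suspicious(event):
        print(f"ALERT: {event['host']} sent {event['bytes']:,} bytes "
              f"to unfamiliar destination {event['dest']}")
# -> ALERT: laptop-042 sent 750,000,000 bytes to unfamiliar destination 203.0.113.77
```

Real systems model many more behaviors and learn their baselines over time, but the contrast with static signature matching is the point.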

The point of this whole story? The Good Ol’ Days are fun to talk about, and it is even fun to share a few stories over a cup of Starbucks or a cocktail, but OUR real business needs going forward all hinge on solving the security challenge for a highly connected world. Frank Gens at IDC estimates that by 2018, HALF of ALL of our IT spending on computing and storage will be Public Cloud based. This means all that computing will be at the end of a wire over which you have little or no control, outside of your comfortable brick-and-mortar walls, and with its own set of security mechanisms.

We will find our footing, and we will collectively reach a consistent definition of what is expected from the new, secured world of IT. This is Data Center Transformation in the making. But as we saw in Las Vegas, it will come from a primary desire to think about the new task at hand, and to solve for that, rather than building on the structure we have inherited. Time to put on our big-boy pants…
