Are My 25 Years of IT Experience Valuable Today?

Are Experienced IT Workers Relevant?

Who would have thought that in 2016 many of us would be wrestling with the question of whether our entire 25+ year careers in IT are worth anything to modern businesses today? We always assumed that our deep experience was valued, and that the more experience we had, the more valued we would be. Right?

According to the US Labor Department, the average IT worker is 42 years old, which means that roughly half of the IT folks we work with started their careers in the mid-1990s. They started out building file and printer sharing networks (remember NetWare?) and have basically been building bigger and faster versions of their computing environments ever since. Sure, there were challenges along the way: over those years we worked through all the compatibility and translation issues in both hardware and software, we found ways to build clusters and share storage, we added security to the mix, we conquered the scale and resiliency topics like gladiators, and in the past 5 years we even figured out ways to embrace the 'Co-Lo' and 'Cloud' things without upsetting our emotional apple carts too much.

But now many IT workers are at a point where they are asking the question: in today's always-connected world, where everything talks to everything, everything is standardized, and the location of those pieces really just doesn't matter… am I personally relevant?

The short answer is YES. In fact, a HUGE YES!  But you have to step back and think about WHAT you actually learned in those 25 years. Sure, you probably have a hundred stories about writing programs when you were just starting out, the time you spent all night building out a new network during a company move, or bringing an e-commerce site online, but those are really just stories, since the details don't matter any longer. They make you smile and make you proud, but WHAT you did is no longer the point.

HOW you did it matters. Today, it's all about business, and business is all about PROBLEM SOLVING. Yup, for the same reason that many of us went to undergrad school, we find that our biggest value in today's market is our ability to think through complex problems and, using a healthy dose of discipline, create action plans and take action. We have learned how to solve complex problems using a litany of data points drawn from those years of technology battles to guide our efforts. We can translate business to technology better, we can estimate project timeframes better, we can look at costing models, and we can even gauge the impacts of technology on staffing.

So are YOU relevant? Absolutely!  Do you need to show people how to program a bubble sort in COBOL, or explain how to connect FDDI networks to Ethernet? Or even how to patch an SSL library on Linux?  Nope. Those types of minutiae are now handled in a much more elegant and automated fashion, and IT users need not worry about discrete build-outs… their intelligent infrastructure options can do that for them.

Your value is your experience in solving business problems using technology as the enabler. When you focus on the business aspects and metrics of IT, your 25 years of experience shines. Those around you will appreciate your metrics, methods and approaches. They will listen carefully to your business guidance and your articulation of the fiscal impacts of technology. Remember that Cloud and Web-scale (and their little brothers, Private Clouds and Converged Infrastructures) have essentially solved the technology challenges (the very detailed stuff that we all wrestled with for most of our careers) for us, so it is your experience in aligning the business needs with IT's ability to deliver the right amount of work processing that matters. Even players like Google and Facebook have huge amounts of this "adult supervision" in their flip-flop filled hallways.

Make no mistake, YOUR 25 years of experience are desperately needed to run every modern business. Think business, think value and offer your guidance.

A Funny Thing Happened on the Way to DCIM

DCIM and Automation

The IT world around us changed! The very way we approach delivering IT services was re-imagined and re-invented. While the underlying technologies themselves got faster, smaller and lower in cost, each of those pieces became more commoditized and virtualized which added a layer of abstraction that served to make the physical componentry found in the data center even less important. And at the same time, the Public Cloud went from curiosity to contender, and the enterprise use of co-location space became the norm. When was the last time you heard about an Enterprise building a new brick and mortar data center?

10 Years and Counting!

When DCIM got its start ten years ago, times were simpler. Much simpler. We lived in a world where IT organizations delivered grand projects in bespoke data centers. New business initiatives resulted in long planning cycles and purpose-built projects which spanned months or years to deploy. Each project resulted in custom topologies that had to fit delicately into the structure that was already in place. The world looked to DCIM as the management solution for change, and two camps of first-generation DCIM players formed: those that addressed all of the constant change seen with IT assets, and those that addressed the optimization of the building itself and its energy usage. A few companies tried to do both, but failed to deliver on the utopia they promised.

So in the days when a stream of custom IT solutions was being created, first-generation DCIM as an asset manager made a lot of sense. "Where is the best place to put my 6 new servers to run a new ERP?" was a great question in 2006, and DCIM's asset management capabilities could answer it handily. Every data center was different, and every project needed a unique combination of hardware devices to implement the required functions at the required scale. Deciding where the best space was to put servers, and how to connect everything to available resources, was a project-centric process, and pioneers in the DCIM adoption camp realized just how powerful DCIM could be at shortening the time it took to react to changing business needs. And first-generation DCIM shined at allowing individual devices to be located and serviced.

Capacity is now a Business Function

In 2016, all that has changed. IT has to run much faster and provide instant gratification. To do this, IT has become a planning function which assures that the data center always has enough resources for the next 6 months of growth. Those resources must be sitting in a capacity pool in ANTICIPATION of the business, ready to be carved out with the touch of a button. In 2016, applications are virtualized and run on resources which are also virtualized, so these pooled resources can be used for any application, and the specific device where an application, or part of an application, runs doesn't really matter. All that matters is that suitable levels of resources are always available, which is a business planning function, not a technology exercise. This is similar to the Public Cloud story, and in fact those companies that are embracing the Public Cloud in a big way cite the elimination of the need to care about physical structure and elasticity as a main driver for their choice. That said, very few companies today are wholly Public Cloud based, so in-house data centers and co-location facilities remain the mainstay of all of our livelihoods.
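
To make that planning function concrete, here is a minimal sketch of the six-month capacity arithmetic. All of the numbers (current draw, growth rate, pool size) are made up for illustration; a real plan would be driven by your own consumption telemetry and the business forecast.

```python
# Minimal sketch of a six-month capacity-pool check (illustrative numbers only).

def capacity_gap_kw(current_kw, monthly_growth, pool_kw, horizon_months=6):
    """Extra capacity (kW) needed so the pool covers the planning horizon."""
    projected_kw = current_kw * ((1 + monthly_growth) ** horizon_months)
    return max(0.0, projected_kw - pool_kw)

# Hypothetical example: 600 kW drawn today, 4% monthly growth, 700 kW pool deployed.
gap = capacity_gap_kw(current_kw=600, monthly_growth=0.04, pool_kw=700)
print(f"Bring roughly {gap:.0f} kW of additional capacity online")  # ~59 kW
```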

New Approaches and New Challenges

So what does this mean to you as an IT professional? You may be considering a way to leverage the Public Cloud, and you likely are trying to balance Public Cloud and in-house resources to maximize value. For your in-house structure, if you are deploying new gear in response to specific business applications that have already become a requirement, then you are 'kicking it old school'. You are delivering IT services in a manner that was state of the art 10 years ago. Over the subsequent years, the Public Cloud providers proved that IT could be built as a pool of resources and THEN utilized for ANY application to realize instant gratification. Pushbutton IT was delivered by public cloud providers and proved that IT agility was possible, highly valued and very cost-effective. According to Gartner, more than 37% of all IT services are delivered without the involvement of the IT organization due to this desire for instant gratification (they call it "Shadow IT"). Today, in-house resources simply need to be transformed into Private Clouds.

The Business of Growth

The biggest role of IT has become managing these pools of resources to make sure there is just enough at every point in time. Putting the Public Cloud portions aside for a minute, the detail of which server or which switch is far less important, because everything looking forward is cookie-cutter, virtualized and modular. As resources are consumed, the business planning function results in more capacity being brought online in manageable increments. An analogy would be a typical municipal water district, which is chartered to deliver water to residents and must negotiate long-term wholesale contracts to do so. When they contract for water, they look at the statistical growth in their service areas and plan accordingly, not at the specific houses that are under construction at any point in time. These contracts span 25 years or more, similar to the lifespan of a data center itself.

Door #1 or Door #2?

So what does this mean for DCIM? Remember I said there were two camps? The first camp deals with managing in-house asset life cycles. The need for individual asset management in this new macro and virtualized world is at best tactical, and in the worst case irrelevant. We no longer focus heavily on the life cycle of any single device. We don't really care about one server or one port or one Rack-Unit of space. We do need an absolute 'as built' model of what is in the data center, and first-generation DCIM is very good at this, but the change management granularity is now at a much larger scale (i.e. entire rooms are changed for tech refresh purposes, not individual servers). We all need to think bigger.

Gratuitous Mention of IoT?

No, not really. Data center automation is turning out to be the secret sauce. Delivering IT services cost-effectively in a virtualized (or software-defined) data center requires comprehensive instrumentation and action-oriented automation. The science behind capturing instrumentation data and then massaging it into policies that can be automatically executed comes from the new generation of data science practitioners. A whole new crop of people are learning how to deal with millions of data points that arrive in real time, and turn them into rules and policies that can be fed into an orchestration engine. For example, to accurately change the set point of a CRAC which services a specific pod, there may be more than 1000 data points that need to be understood. The new data scientists who practice IoT today can apply this same science to automating a data center.
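
As a flavor of what those rules look like once the data science is done, here is a deliberately tiny sketch of a set-point policy for a single pod. The readings, the target temperature and the proportional factor are all hypothetical; the point is only that telemetry goes in and an actionable adjustment comes out for the orchestrator to execute.

```python
# Toy policy for one pod: nudge the CRAC set point so the hottest rack-inlet
# temperature lands near a target. A real rule would weigh far more telemetry
# (airflow, IT load, weather, utility rates) than this.

def recommend_crac_setpoint(inlet_temps_c, current_setpoint_c, target_inlet_c=24.0):
    hottest = max(inlet_temps_c)
    error = hottest - target_inlet_c                    # positive -> pod is too warm
    return round(current_setpoint_c - 0.5 * error, 1)   # lower set point to cool more

# Hypothetical inlet readings (deg C) from one pod's rack sensors.
readings = [22.8, 23.5, 25.1, 24.2]
print(recommend_crac_setpoint(readings, current_setpoint_c=20.0))  # suggests ~19.4
```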

Today, Gartner estimates that more than 86% of all servers are virtualized, and moving workloads around the data center is fully automated by the virtualization vendors. And as those workloads move, so go their storage and networking. Essentially the IT function is already automated, and the work being processed above the "raised floor" is already more dynamic than you can imagine. So what about the data center facilities structure itself?

“DCIM-nextgen” is Automation

Automation is the key to everything in a modern data center, modular or otherwise. Custom automation approaches have already been used for years by the co-location providers. And now that embedded instrumentation is widely available on most IT devices, and the science to interpret and act on it exists, the most forward-looking DCIM players will adapt their offerings to not only SHOW how the data center is built, but to make precise decisions about what it takes to operate that data center and automatically do it! As IT workloads increase and decrease, or as virtualization technologies move these workloads from one aisle to another, an automated data center will optimize its own power and cooling most effectively. Over time, the "IoT" style data science will make these feedback loops tighter and increasingly more efficient.

Where DCIM is Going…

That is where DCIM is going. DCIM will become the plan of record for what is on the data center floor, in-house or co-lo. And DCIM-nextgen will take a much bigger role in the coordination and application of facility resources which align with the automated movement of IT workloads in those centers. The days where actual humans will be involved in the ‘management’ of a data center are coming to a close. I look forward to the days when you can almost hear the buzzing of business when walking through a data center, and feel the varying breeze as you move from pod to pod, knowing full well that the business is optimized and running cost-effectively.

Best Bet DCIM Providers for 2016

Now that 2016 is upon us, I find that the DCIM industry is really starting to settle out. Most of you know that I joined the DCIM industry before it was actually called "DCIM" (in fact, for a brief period of time we all called it "PRIM", for Physical Resource Infrastructure Management, but through a combination of coincidences at both Gartner and Forrester, the term migrated to DCIM). I worked for many of the players in this space, became deeply involved with all of the key vendors attempting to participate in this solution area, and in the process even became friends with the vast majority of the pundits in the industry.

The DCIM Dust is Settling in 2016

While I have ‘mostly’ left the DCIM category professionally in my ‘day job’, I still stay tightly connected to what is going on. In fact, I still get quite a few calls from researchers and other vested parties regarding what is happening in the industry or with one company or another. And as you know, I have my opinions and am happy to share.

First, let me say that DCIM is real. It can be the answer to real questions that are on the table now. Data Centers themselves are changing quickly, but the need to understand more about what is happening inside those centers has never been more urgent. Whether you own the cement or not, the “data center” is still effectively yours.

Did I say 'urgent'?  Yup, I did. Let me explain.

Assertion: it is urgent to get your data center house in order. With easy access to rented transactions and services through providers like AWS and Google, your challenge is to figure out how those cloud providers do the magic they do, and then either subscribe to them in a big way, or learn from them ASAP!  You can choose either path, but time is ticking by, and if you don't make that strategic choice, YOUR REPLACEMENT WILL! Getting your house in order is not just about new ways to assure availability and capacity; it focuses heavily on understanding the fiscal impact on the business of every choice you make, and in particular on reducing your cost per unit of work. If you don't know what YOUR cost is per unit of work, then your house is NOT in order. DCIM can help you understand unit-of-work costing, but you need to understand what you are missing to get there.

Unit of Work? What the heck is that?

It's the reason you exist. It's the reason IT exists. It's the way you provide capabilities and services to your business, and it's the plan you signed up for when you became an IT professional. If you are the IRS, perhaps the unit of work you care about is processing an individual 1040 tax form. If you are eBay, then your unit of work might be displaying a page in its marketplace. At the end of the day, you want to know the cost for that unit of work (which, for example, at eBay is currently running about three-quarters of a penny). Tip: thinking about cost per unit of work shows that you have traded your technologist hat for a business hat. And it is just common sense that over time your goal MUST BE to reduce the cost per unit of work.
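
If you have never run this number, the arithmetic is trivial; the hard part is gathering honest inputs. Here is a back-of-the-envelope sketch with entirely made-up figures, just to show the shape of the calculation.

```python
# Back-of-the-envelope cost per unit of work; every figure here is hypothetical.
# Roll up a month of fully loaded costs, then divide by the units delivered.

monthly_costs = {
    "power_and_cooling": 42_000,      # energy bill allocated to this service
    "hardware_amortization": 65_000,  # servers/storage/network spread over their life
    "space_or_colo": 18_000,          # share of the floor space or colo contract
    "staff_and_support": 55_000,      # people, licenses, maintenance contracts
}
units_of_work = 30_000_000            # e.g., pages served or 1040 forms processed

cost_per_unit = sum(monthly_costs.values()) / units_of_work
print(f"${cost_per_unit:.4f} per unit of work")   # -> $0.0060 per unit
```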

So, which DCIM is right for you to help you get there?

Great question, and one I have been wrestling with for 10 years! Here's the industry's dirty little secret: there is no single answer! "DCIM" does not define a single set of capabilities; it refers to the whole roster of solutions that offer insight and/or control over the physical components found in a data center (or co-location facility, same story). Those physical components can be facilities oriented (CRAC, PDU, genset, etc.) or IT oriented (servers, switches, storage, etc.). And to date, no single vendor has proven that they can address all (or even most) of the needs of the entire data center with any amount of credibility. (There has been a ton of posturing and wild vendor claims, but the reality is that each vendor does one or two things very well, and the rest of YOUR actual and tangible needs will have to be met by additional vendor solutions.)

So here’s my 2016 “DCIM GoTo”  list of vendors that can deliver EXACTLY what they say and who spend a lot LESS time posturing about what they really do…

  1. Want great facilities monitoring of well-behaved (network-enabled) mechanical and electrical gear? Then FieldView is a great choice. They collect data, normalize it, and then present it in a clean set of views and reports. Their technology is installed in hundreds of the world's biggest data centers and their deployment process is an afternoon project. (Well, it gets a bit more complex when you have older equipment which communicates using ancient protocols, RS-232 interfaces, or 20mA signaling, but they can get you there too.) Update: FieldView is now part of Nlyte, so I would expect their sales messages and pricing to become more attractive when combined. I would also expect the actual technical integrations to follow over the course of the coming year.
  2. Feel like high-quality and intelligent power at the rack level is part of the insight you seek? You probably need to get Legrand or ServerTech PDUs. They can offer a level of instrumentation at the outlet level (if you care) or at the branch level (which you need), and each supplies highly capable management software to leverage their hardware PDUs' intelligence. Above all, they are solid citizens that won't fail to deliver on Job #1: providing clean power over long, stressful periods of time. Keep in mind that dense racks exceed 20kW these days, so delivering solid power under these high loads is not an easy task. Legrand and STI understand this and can help you too. (And if you happen to have any other brand of intelligent PDU already, head on over to Sunbird for their PowerIQ vendor-neutral software solution to manage ANY brand of intelligent network-attached PDU.)
  3. Want to understand the thousands of mini-workflows associated with all of the devices in a data center and all of the related asset provisioning and decommissioning? Want to see where every IT device is installed and connected at any point in time? Or maybe where the next new server is best placed? Nlyte would be a good choice, and their solution is focused on the contents and context of each rack. At any point in time, they can allow you to visualize all of the racks you maintain in near-perfect fidelity as they exist today, or as they will exist once future projects are completed. Update: see the FieldView notes above, as Nlyte purchased FieldView in February 2016.
  4. How about 3D fly-through views of your data center with the ability to turn layers on and off to allow focused visibility into the physical subsystems in production? Commscope's iTracs makes sense. For a facilities and cabling oriented DCIM goal, this could be an essential element, and it is visually stunning to watch. Note: "real work" in a data center gets done in 2D, so be careful not to get enamored by the art of iTracs, and try to stay focused on the value you seek. iTracs is an amazing solution for the maintainers of cable (data and power) and the associated flooring subsystems.
  5. Feel like working deep through the fiscal model of your data center and studying the impacts of your various proposed changes on Opex and Capex? Want to quantify the TCO in a defendable way? CFO breathing down your neck to translate your tech-speak into dollars and timeframes? Think Romonet as they understand every aspect of the data center business and allow each component to be included in their fiscal modelling.
  6. And what about those environmental sensors?  So many data centers are still running blind when it comes to heat and humidity sensing, and yet the technology is cheap and effective. Temperature and humidity may no longer be a major factor in wide-scale equipment failure (since the manufacturers have widened the acceptable ranges quite a bit), but those issues play havoc with energy COSTS. RFcode does a great job of delivering sensors that work hard and feed into very visual, at-a-glance dashboards. They are installed in seconds and begin reporting immediately. They are low-cost and the included batteries last for years! And even better, their sensor data feeds are natively supported by most monitoring solutions on the market today.
  7. And finally, Discovery. This is an age-old problem. It would be great if some magical software could figure out WHERE a server or switch is installed, but thanks to the 80-year-old 19-inch rack standard, we are not going to see that in any non-proprietary fashion anytime soon. There simply is no agreed mechanism to do so at the rack level, and even the new OCP rack spec ignores this physical placement need once again. That said, No Limits Software is currently the deepest and least intrusive way to figure out WHAT is installed in the rack. It digs deep inside the device operating software and paints a clear picture, right down to firmware version numbers. Discovery can play a huge part in profiling what you have in the data center (firmware versions, installed software, etc.) and, when fully executed, can provide an accurate inventory of everything you have.

So what about Emerson and Schneider? It is still not clear what their end-game is. They DO provide value to their customers, but tend to have their maximum value within their own installed base. The result: many end users consider Emerson and Schneider to be element managers for users of their own equipment. Most exciting, as data centers become more software-defined, both Emerson and Schneider have indicated that they have a solid set of offerings and a longer-term vision for delivering automated and self-adjusting data centers over time. (Emerson has a video on YouTube which puts that timeframe around the year 2025.) And what happened to CA? Hmm… after they backed away from the whole DCIM wave recently, my expectation is that they will simply extend their existing ITSM and ITAM tools to include some of the extended asset information being sought.

Make 2016 the year that you challenge yourself to be an active listener when talking to DCIM providers. If you listen carefully you’ll hear them say what they do REALLY well, and you’ll hear what they consider a minor area of interest. Try not to put words in their mouths. They want to say “yes” to most everything, so be smarter this year. Understand that for the foreseeable future, you’ll need multiple tools to understand the physical layer and the resulting business metrics that go with it.

Moore’s Law – It’s about Embracing the Business Opportunity

Gordon Moore’s 1965 Graphic about technology doubling

I love pondering the last 50 years of computing innovation. Although I knew nothing about technology in the mid-sixties when Gordon Moore observed that the number of components in an integrated circuit doubles every 12 months, his observation has been a guideline influencing literally millions of subsequent business choices made by vendors and end-users alike for much of that period of time.

Now the curious thing is that Gordon Moore changed his projected timeframe to 24 months in the mid-seventies, at the very beginning of the multi-purpose CPU revolution (think of the general-purpose Intel 8080), since he realized that building multi-purpose CPUs was a much bigger undertaking than the function-level integrated circuits (single-function chips like the 7400 series) that were the state of the art until that point.

Wait, in 1965 Gordon Moore said component counts double every "12 months", and then when big bad chips (like the 8080) were in their infancy ten years later he said the doubling rate had slowed to "24 months", and yet everything you and I read today quotes "Moore's Law" (which really isn't a LAW at all) as a doubling every "18 months". What gives? Well, marketing does. Some clever marketing soul realized that the only way to make the facts and the fiction 'kind of align' was to take the average: 18 months in this case. It was believable, defendable, and has stood the test of time (with just a bit of hand-waving required).

Transistor counts for CPUs have loosely followed Gordon Moore's observation

So, does it really matter which number is more accurate? No, not really. The point is that every year or two, most technology things double in capacity AND halve in cost at the component level. Servers become twice as capable every couple of years. Network transport doubles too. And when you compound this effect over any reasonable period of time, it becomes staggering. In fact, we store more information in one day today than we did in all of the 1980s. Most importantly, we don't build technology for technology's sake; we do so to access the VALUE of all of this information, which doubles too!
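
If you want to see just how quickly that compounding runs away, here is a two-minute calculation using the three doubling periods mentioned above over an arbitrary ten-year horizon.

```python
# Compounding the doubling: multiple = 2 ** (elapsed months / doubling period).

def growth_multiple(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

for months in (12, 18, 24):
    print(f"Doubling every {months} months -> "
          f"{growth_multiple(10, months):,.0f}x in 10 years")
# Prints roughly 1,024x, 102x and 32x respectively.
```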

And with technologies like the Internet of Things and software-defined networking and storage, the rate of this doubling is accelerating. We as an industry are like a voracious animal, feeding on information with nothing but opportunity and creativity to guide us. The social experience is getting your 3-year-old daughter and your 93-year-old grandmother into the game too. And all of it is made possible by the new generation of Information Technology, which is doubling per the curve. Not the IT that existed when Gordon made his observations, but the IT that sits in your hand right now and is connected to the world. Keep in mind that the Facebook main screen you probably looked at this morning during your first cup of coffee actually consists of a hundred or more applications working together, each driving some portion of your experience. Each app communicates with the others to bring you a rich, fun and VALUABLE experience. That is why we all do what we do in the tech industry, that's where it all shines, and that is why this doubling concept is so essential.

At the end of the day, massive transformations of nearly every sector of business are happening to take advantage of this new IT. Finally, the business is driving technology. Finance and Education, Government and Aerospace, Entertainment and Internet… the most successful businesses are re-tooling themselves to embrace and leverage these new technologies, knowing that everything they do today will be HALF of what their opportunity is next year.

Thanks Gordon…

DCIM Facts versus Myth – Time for a Reality Check!

Facts versus Myths for DCIM

Last week we conducted an online webinar devoted to discussing the common misunderstandings and myths associated with DCIM. Over 400 people registered for the webinar, and we had a ton of questions and comments afterwards. It's very clear that DCIM is a brand new category of solution for many of the attendees, and there are plenty of assumptions and incorrect data points preventing many end-users from realizing the benefits of DCIM.

I have selected a handful of the more popular myths we explored during the webinar, and present them here along with a more detailed narrative about the reason for the "myth" and the informed facts that should be considered instead. My goal is to provide the necessary DCIM facts for your consideration and to seed your thought processes as you begin your DCIM journey. Read the whole article at the Nlyte Blog.

Exposing IT Value to the Business

The IT industry is currently experiencing an amazing transformation. Whereas most long-term IT professionals have spent their careers creating and supporting increasingly complex IT structures with primary metrics of availability and uptime, the new CIO mantra has become service delivery at the right cost. In effect, CIOs are formalizing many previous efforts and creating service products that can be delivered upon request, with a keen understanding of the costs incurred to deliver those services. CIOs today think about these Service Portfolios as the means to set expectations about how technology will be adopted, how it will be supported, and at what cost those technologies will come. See more here.

Taking the 2015 IT Infrastructure Scenic View

The Ways in Which We Deliver IT Are Changing Dramatically in 2015 and Beyond

The IT Infrastructure challenge in 2015 is daunting, and only the strongest will survive: the strongest vendors and the strongest IT professionals. Nearly everything we knew about building and maintaining IT infrastructures is being superseded by a wave of fresh technologies, new processes, new people and new approaches. I specifically used the word "New" rather than "Alternative" since these fundamental changes really are radically different from those in the past, and they *ARE* happening with or without you. For instance, we aren't just making disk drives faster or bigger; we are eliminating them from your data center floor or making them so smart they handle growth easily. Same thing for servers and networks. Not just more of the same old thing, but NEW WAYS of doing it. All of IT is going through radical change.

Think of the kind of change we saw in the data center after 1991 (the year Linux came on the scene), or the desktop after 1981 (when IBM started to ship the PC), or sharing and collaboration in 1995 (after the internet boomed). These foundational changes allowed (and required) everyone involved to, in Steve Jobs' words, "Think Different", and they also destroyed the luddites who chose to ignore these shifts. Those of us who were paying close attention and embraced those changes were handsomely rewarded. The forward-looking companies that adopted these changes were wildly successful and a lot of people's careers were made. Data centers could now be filled with systems based on new hardware and software, virtualized loads, and unlimited-scale applications, all with a level of performance, interoperability, efficiency and cost structure that wasn't even in the same ballpark as previous approaches.

So that brings us to IT circa 2015. It’s all happening again! Everything that we 40- and 50-somethings know about the IT business is up for grabs again and all of it is being retooled around us as we speak.

Here is my list of 10 of the most impactful changes that are occurring which are worth getting your hands around if you are an IT professional planning to stay in the IT segment:

1. IT Service Catalog. Most IT structures and the associated processes have grown into monsters. The complexity and delicate structures stifle creativity and choke new initiatives at a time when stakeholders are asking for more creativity and agility. The most successful IT professionals are now looking for the means to create service offerings as if by catalog. Each service "product" must have a known cost per user, a known delivery timeframe, specific capabilities/deliverables and expectations, and a whole slew of escalation definitions for when things don't go right. IT "products" (like email) are being defined, and the costs to deliver those "products" quantified. It is this Service Catalog mentality that makes the most admired CIOs in this new era smile.

2. Automating process and control. Whereas we used to have a sysadmin for every 50 or 100 servers, keeping them patched and operating, we now find sysadmins handling thousands of servers through the use of automation tools. Automated patching, provisioning, migration. Application installs and password resets. All of this is becoming automated through tools that capture the human intelligence once and then dispatch that same knowledge automatically each time the same task needs to be performed. Think "copy and paste" at the macro level. And it doesn't stop there. Virtualized loads can automatically shift from physical server to physical server, and HVAC gear can automatically sense conditions and self-adjust as needed. This is happening today.

3. Software Defined Networks are a new concept, with less than 5% penetration (in production), but they are on a hockey-stick shaped curve today. Adoption is beginning to take off and the various approaches are finding their sweet spots. What started out as an economic "OpenFlow" free-for-all has quickly become a business discussion about capabilities, flexibility and value. The protocol itself has taken a backseat to new capabilities. Why did this happen? Well, we all grew up on networks which were built in a north-south fashion. The vast majority of all traffic started or ended at the edge devices, and server-to-server communication was limited. That's why all of the traditional switch vendors have two lines of products today: core and edge. With the highly connected web full of various 'services', we now see server-to-server communications skyrocketing. This can be thought of as east-west communication, and it demands a 'fabric' approach to networking. And the icing on the cake: once you have built an SDN interconnect fabric, you have the perfect place to host virtualized functions or services that essentially reside everywhere the fabric is. Load balancers and firewalls come to mind. Let's think of this SDN structure as a 'thick connectivity fabric'. What's more, applications themselves can tune the fabric for their own needs and take advantage of lots of orchestration possibilities! And a bonus for the pure technologists: some of the industry's most advanced SDN players can offer full layer-1 monitoring within the fabric itself! No more expensive/duplicate networks just for performance analysis!

4. Self-Service mentality. I want IT now. Rakesh Kumar at Gartner presented a paper in December 2014 stating that 37% of the nearly $4 trillion of IT spending in 2014 occurred OUTSIDE of the IT organization. 37% of all IT projects didn't involve the IT organization at all! Shocking. This is because the end-user now has the option to look at any number of 3rd-party service catalogs and buy with a credit card. Want a new desktop? Consider a tablet via CDW. Need storage? Think Box. Need email in a hurry? Google to the rescue. No longer is the IT organization the end users' only source of services. The strongest players moving forward will be the IT professionals that embrace speed and agility in the delivery of their capabilities. Projects with many-month delivery schedules are no longer realistic when a 3rd party can deliver next day or next week.

5. Who would have thought that a 10U-12U chassis would house HUNDREDS of CPU cores and be able to move over 7Tb/s of data internally? The big server providers today offer really dense boxes you can buy for $100K or more ($250K fully loaded in some cases), and at 10-12U each you can put 3 or 4 of these monsters in one rack. I dare say we'll see 40kW per rack very soon, whereas just a few years ago we saw 40kW per ROW. This is a mindset that enables a dramatic difference in the way we approach new data center build-outs and retrofits. It has been hypothesized that your existing data center could last for 30 years or more if you simply took advantage of all of the 'Moore's Law' advances taking place at the device density level (assuming that your utility company can get you 'a few more megawatts' every few years).

6. I like to think back to the simpler days when we all built raised floors out of 24-inch square tiles and ran our cooling and cabling under them. We talked about loading capacities and laughed at the racks getting heavier, but until recently it was just a curious discussion. No longer! Those same monster-dense devices also weigh a ton (literally), and it is no longer a best practice to plan on raised floor at all. Building your data center directly on concrete is all the rage. Data and power cabling run overhead, and cooling strategies are beginning to take advantage of the fact that COLD AIR likes to FALL (and HEAT rises). It's hard to understand why we decided 25 years ago to create cold air at the perimeter and then PUSH it upward into the racks, fighting pressure physics. Cold air sinks, and the new generation of data center designers get that.

7. Unbounded Infrastructure as a means to blur the lines between the mechanics and the functions delivered on the data center floor. As it turns out, if we stop thinking about individual boxes as individual management islands, with each box doing a bit of work and the results somehow being aggregated externally, we can take a whole new approach to IT. Hardware and software mechanisms are now commonplace that ignore physical device boundaries and allow capacity to be aggregated. Want to know how Twitter or Airbnb handle all of their transactions in real time? There is a project called Mesos that creates services out of boxes. Need more I/O? Simply add more I/O services, which instantly become part of the 'system'. How do you write applications for a world like this? You write applications for the platform, not the server, and then let the platform independently scale whatever resources it needs.

8. We used to SIZE and BUILD data centers based upon some perceived top-level or watermark of capacity needed. In a typical scenario, data centers were built as big as anyone could imagine the load would ever be, and then organizations and their projects moved into that center over an extended period of time. The downside to this old-school approach was the cost: the difference between the cost to occupy the first square foot of space on day one and the cost of that last square foot 3-5 years down the road was enormous. Building a big data center was only an academic exercise to keep the CIO happy. In reality, it never made sense (to the CFO) to over-build and hope that the space would be needed down the road. Today, you can find modular designs to replace your old way of building a data center, both manufactured (like Baselayer) and brick-and-mortar (like Compass Data Centers), that can get you new space in 20-200 rack increments or less. (Companies like Elliptical even do a micro-modular design down to ONE rack at a time.) I guess we'll need a new term to replace 'breaking ground' when using these modular approaches.

9. We all laughed when data centers were so cold that you needed a jacket to walk through them. Over the last 10 years it has become quite another matter, and everyone is talking about it. What started out as energy efficiency, with the Green Grid publishing their "PUE" metric, has become a battle cry for every manufacturer and end-user alike: do more work in less space, and get the power bill down per unit of work. With the cost of power now being a top-3 concern for everyone in an Enterprise, whole new approaches are being used to make data centers more efficient. It starts with the location of cheaper power, which is driving where data centers are being built, plus the ability to use free-air cooling at that location. Add the advances in CPU and power supply design, and we have everyone working on energy costs. (A quick PUE example follows this list.)

10. I purposely placed "The Cloud" here at the end since it is one of the most dramatic changes some of us may see in our entire career. Gartner has predicted that by next year, 20% of all Enterprises will have NO backend IT functions in-house. Even the old-line corporations are already diving into the Cloud for various applications. But don't panic. This is a drawn-out process and will be going on for a dozen years. Certain applications are perfect for the Cloud today, and others are too time-sensitive or confidential to lend themselves to today's Cloud offerings. Remember, the Cloud has already been around for 10 years. Salesforce.com was one of the earliest examples of an application/software-level cloud. However, the Cloud probably raised your eyebrows when companies like Amazon and Rackspace began offering platform-level cloud. Most exciting, the tools now exist to allow in-house workloads to be shifted to the Cloud and vice versa as demand changes. (Companies like VMware do a good job of this.) Best guess on the mix of in-house IT services versus Public Cloud provided services: less than 15-20% today, approaching 30-40% in 5 years, and perhaps 50-60% in 10 years.
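
And as promised in item 9 above, here is the PUE arithmetic in miniature. PUE, as defined by The Green Grid, is total facility energy divided by the energy delivered to the IT equipment; the kWh figures below are invented for illustration.

```python
# Quick PUE check: total facility energy / IT equipment energy (closer to 1.0 is better).

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,500,000 kWh at the utility meter, 1,000,000 kWh delivered to IT gear.
print(round(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000), 2))  # -> 1.5
```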

So what does all of this mean? Think Forward. Think about articulating the business problems first, and then solicit every smart guy you know to figure out how to solve that problem. Building more of the same old structure is likely the wrong answer. Simply adding more disk spindles or a new edge switch is likely throwing good money away. Embrace it all and be ready to defend your choices. “It’s always worked like this” is no longer an answer with much street credibility. And “If it’s not broken don’t fix it” still applies, but I would argue that nearly all of the IT stuff we’ve collectively inherited in our data center could be considered “broken” by today’s standards.

Let’s commit to fix it…
