The Way We Deliver IT Is Changing Dramatically in 2015 and Beyond
The IT infrastructure challenge in 2015 is daunting, and only the strongest will survive: the strongest vendors and the strongest IT professionals. Nearly everything we knew about building and maintaining IT infrastructures is being superseded by a wave of new technologies, new processes, new people and new approaches. I specifically used the word “New” rather than “Alternative” because these fundamental changes really are radically different from those of the past, and they *ARE* happening with or without you. For instance, we aren’t just making disk drives faster or bigger; we are eliminating them from your data center floor or making them smart enough to handle growth easily. The same goes for servers and networks. Not just more of the same old thing, but NEW WAYS of doing it. All of IT is going through radical change.
Think of the kind of change we saw in the data center after 1991 (the year Linux came on the scene), on the desktop after 1981 (when IBM started to ship the PC), or in sharing and collaboration after 1995 (when Al Gore invented the internet). These foundational changes allowed (and required) everyone involved to, in Steve Jobs’ words, “Think Different”, and they also destroyed the Luddites who chose to ignore these shifts. Those of us who paid close attention and embraced those changes were handsomely rewarded. Forward-looking companies were wildly successful, and a lot of careers were made. Data centers could now be filled with systems based on new hardware and software, virtualized loads, and applications of unlimited scale, all with a level of performance, interoperability, efficiency and cost structure that wasn’t even in the same ballpark as previous approaches.
So that brings us to IT circa 2015. It’s all happening again! Everything we 40- and 50-somethings know about the IT business is up for grabs, and all of it is being retooled around us as we speak.
Here is my list of the 10 most impactful changes underway, worth getting your arms around if you are an IT professional planning to stay in the IT segment:
1. IT Service Catalog. The IT structure and all of its processes have grown into a monster. Their complexity and delicate structure stifle creativity and choke new initiatives at a time when stakeholders are asking for more creativity and agility. The most successful IT professionals are now looking for the means to create service offerings as if by catalog. Each service “product” must have a known cost per user, a known delivery timeframe, specific capabilities/deliverables and expectations, and a whole slew of escalation definitions for when things don’t go right. IT “products” (like email) are being defined, and the costs to deliver those “products” quantified. It is this Service Catalog mentality that makes the most admired CIOs in this new era smile.
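To make the idea concrete, here is a minimal sketch of what one catalog entry might look like as a data structure. The class, field names, prices and contacts are all invented for illustration; the point is that cost, timeframe, deliverables and escalation path are explicit and quotable.

```python
# Illustrative sketch (invented names and prices): a service catalog
# entry with a known cost, delivery timeframe, and escalation path.
from dataclasses import dataclass, field

@dataclass
class CatalogService:
    name: str
    cost_per_user_month: float      # published, known cost per user
    delivery_days: int              # committed delivery timeframe
    deliverables: list = field(default_factory=list)
    escalation_contacts: list = field(default_factory=list)

    def quote(self, users: int, months: int = 12) -> float:
        """Quote the total cost for a given number of users."""
        return self.cost_per_user_month * users * months

email = CatalogService(
    name="Corporate Email",
    cost_per_user_month=4.50,
    delivery_days=2,
    deliverables=["50 GB mailbox", "mobile sync", "99.9% uptime"],
    escalation_contacts=["servicedesk", "email-ops", "it-director"],
)
print(email.quote(users=1000))  # annual cost for 1,000 users: 54000.0
```

With every “product” modeled this way, the business can compare an internal offering against an outside one line by line, which is exactly the conversation the catalog mentality invites.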
2. Automating process and control. Whereas we used to have a sysadmin for every 50 or 100 servers, keeping them patched and operating, we now find sysadmins handling thousands of servers through the use of automation tools. Automated patching, provisioning, migration, application installs and password resets: all of it is becoming automated through tools that capture the human intelligence once and then dispatch that same knowledge automatically each time the task needs to be performed. Think “copy and paste” at the macro level. And it doesn’t stop there. Virtualized loads can automatically shift from physical server to physical server, and HVAC gear can sense conditions and self-adjust as needed. This is happening today.
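The “capture once, dispatch everywhere” pattern can be sketched in a few lines. In production this would be a tool such as Ansible, Puppet or Chef driving real hosts over SSH; here the patch step is simulated so the example is self-contained, and all host names are invented.

```python
# Hedged sketch: capture a routine task once, then dispatch it across
# thousands of hosts ("copy and paste" at the macro level). The patch
# action is simulated; a real tool would SSH in and apply updates.

def patch_host(host: str) -> str:
    # Stand-in for the captured human knowledge: what to do on one box.
    return f"{host}: patched"

def run_everywhere(task, hosts):
    """Apply one captured task to every host, collecting results."""
    return [task(h) for h in hosts]

# One sysadmin, two thousand servers.
hosts = [f"web{n:04d}.example.com" for n in range(1, 2001)]
results = run_everywhere(patch_host, hosts)
print(len(results))  # 2000
```

The leverage comes from the loop, not the task: once the task is encoded, adding the next thousand servers is a change to the inventory list, not to anyone’s workload.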
3. Software Defined Networks are a new concept, with less than 5% penetration (in production), but they are on a hockey-stick-shaped adoption curve today. Adoption is beginning to take off, and the various approaches are finding their sweet spots. What started out as an economic “OpenFlow” free-for-all has quickly become a business discussion about capabilities, flexibility and value; the protocol itself has taken a backseat to new capabilities. Why did this happen? Well, we all grew up on networks built in a north-south fashion. The vast majority of traffic started or ended at the edge devices, and server-to-server communication was limited. That’s why all of the switch vendors have two product lines today: core and edge. With the highly connected web full of various ‘services’, we now see server-to-server communication skyrocketing. This can be thought of as east-west communication, and it demands a ‘fabric’ approach to networking. And the icing on the cake: once you have built an SDN interconnect fabric, you have the perfect place to host virtualized services that essentially reside everywhere the fabric is. Think of this as a ‘thick fabric’. What’s more, applications themselves can tune the fabric for their own needs! And a bonus for the pure technologists: full layer-1 monitoring can also be done within a robust SDN fabric. No more expensive, duplicate networks just for performance analysis!
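The core SDN idea can be sketched as a match/action flow table: a controller installs rules, and the data plane just looks packets up against them. This is a toy model, not the OpenFlow wire format; the field and action names are invented for illustration.

```python
# Illustrative sketch of the SDN split: the control plane installs
# match/action rules, the data plane does simple lookups against them.
# Field names ("dst_subnet") and actions are invented, not OpenFlow.

flow_table = []  # ordered list of (match_dict, action); first match wins

def install_rule(match: dict, action: str):
    """Controller-side call: push a rule down into the fabric."""
    flow_table.append((match, action))

def forward(packet: dict) -> str:
    """Switch-side lookup: return the action of the first matching rule."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"  # unknown traffic punts to the controller

# East-west rule: keep server-to-server traffic on the fabric.
install_rule({"dst_subnet": "10.1.0.0/16"}, "out_port_fabric")
# Wildcard default: everything else heads north-south via the core.
install_rule({}, "out_port_core")

print(forward({"dst_subnet": "10.1.0.0/16"}))  # out_port_fabric
```

The “applications tune the fabric” point falls out of this model naturally: an application with permission to call something like `install_rule` can steer its own east-west traffic without anyone touching a switch CLI.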
4. Self-service mentality. I want IT now. Rakesh Kumar at Gartner presented a paper in December 2014 stating that 37% of the nearly $4 trillion of IT spending in 2014 occurred OUTSIDE of the IT organization. 37% of all IT projects didn’t involve the IT organization at all! Shocking. This is because the end user has the option to look at any number of service catalogs and buy with a credit card. Want a new desktop? Consider a tablet. Need storage? Think Box. Need email in a hurry? Google to the rescue. No longer is the IT organization the end users’ only source of services. The strongest players moving forward will be the IT professionals who embrace speed and agility in delivering their capabilities. Projects with multi-month delivery schedules are no longer realistic.
5. Who would have thought that a 10U chassis would house HUNDREDS of CPU cores and be able to move over 7 Tb/s of data internally? Look at the new HP C7000, for instance, and you’ll see one of the densest boxes you can buy (a cool $250K nicely loaded), and you can put four of these monsters in one rack. I dare say we’ll see 40 kW per rack more often than we ever imagined just a couple of years ago. This mindset enables a dramatic difference in the way we approach new data center build-outs and retrofits. It has been hypothesized that your existing data center could last for 30 years or more if you simply took advantage of all of the ‘Moore’s Law’ advances taking place (assuming your utility company can get you ‘a few more megawatts’ every few years).
6. I like to think back to the simpler days when we all built 24-inch-square raised floors and ran our cooling and cabling under the tiles. We talked about loading capacities and laughed at the racks getting heavier, but until recently it was just a curious discussion. No longer! Those same monster-dense devices also weigh a ton (literally), and it is no longer a best practice to plan on raised floor. Building your data center directly on concrete is all the rage. Data and power cabling run overhead, and cooling strategies are beginning to take advantage of the fact that COLD AIR likes to FALL. It’s hard to understand why we decided 25 years ago to create cold air at the perimeter and then PUSH it upward into the racks, fighting pressure physics. Cold air sinks, and the new generation of data center designers get that.
7. Unbounded infrastructure, a means of blurring the lines between the mechanics and the functions delivered on the data center floor. As it turns out, if we stop thinking about individual boxes as individual management islands, each doing a bit of work with the results somehow aggregated externally, we can take a whole new approach to IT. Hardware and software mechanisms are now commonplace that ignore physical device boundaries and allow capacity to be aggregated. Want to know how Twitter or Airbnb handle all of their transactions in real time? There is a project called Mesos that creates services out of boxes. Need more I/O? Simply add more I/O services, which instantly become part of the ‘system’. How do you write applications for a world like this? You write applications for the platform, and then let the platform independently take care of being scaled.
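A toy model makes the mental shift clear. Systems like Apache Mesos pool the capacity of many machines into one logical offer that applications consume; the class, method names and node specs below are invented for illustration, not the real Mesos API.

```python
# Hedged sketch of "unbounded infrastructure": boxes stop being
# management islands and instead contribute capacity to one pool.
# Names and numbers are illustrative, not the Apache Mesos API.

class ResourcePool:
    def __init__(self):
        self.nodes = {}  # node name -> {"cpus": n, "mem_gb": n}

    def add_node(self, name: str, cpus: int, mem_gb: int):
        """A new box joins; its capacity instantly becomes part of the 'system'."""
        self.nodes[name] = {"cpus": cpus, "mem_gb": mem_gb}

    def total(self, resource: str) -> int:
        """Applications see one big platform, not individual servers."""
        return sum(n[resource] for n in self.nodes.values())

pool = ResourcePool()
pool.add_node("rack1-node01", cpus=32, mem_gb=256)
pool.add_node("rack1-node02", cpus=32, mem_gb=256)
pool.add_node("rack2-node01", cpus=64, mem_gb=512)
print(pool.total("cpus"))  # 128 aggregated cores
```

Note what the application never sees: which rack, which box, which failure domain. That is the “write for the platform” contract the paragraph describes.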
8. We used to build data centers based upon some perceived watermark of capacity needed. In a typical scenario, data centers were built as big as anyone could imagine, and then organizations and their projects moved in over time. The downside of this old-school approach is the cost: the cost to occupy the first square foot of space on day one versus the cost of the last square foot three years down the road was enormous. Building a big data center was only an academic exercise to keep the CFO happy. In reality, it never made sense to over-build and hope that the space would be needed down the road. Today you can find modular designs to replace the old way of building a data center, both manufactured (like Baselayer) and brick-and-mortar (like Compass), that can get you new space in 20-200 rack increments or less. (Companies like Elliptical even do a micro-modular design down to ONE rack at a time.) I guess we’ll need a new term to replace ‘breaking ground’ when going modular.
9. We all laughed when data centers were so cold that you needed a jacket to walk through them. Over the last 10 years it has become quite another matter, and everyone is talking about it. What started out as an energy-efficiency effort, with the Green Grid publishing its “PUE” metric, has become a battle cry for manufacturers and end users alike: pack more compute into less space, and get the power bill down per unit of work. With the cost of power now a top-3 concern for every enterprise, whole new approaches are being used to make data centers more efficient. Start with the location of cheaper power, which is driving where data centers are built, then consider the ability to use free-air cooling at that location. Add the advances in CPU and power-supply design, and we have everyone working on energy costs.
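The PUE metric itself is refreshingly simple: total facility power divided by the power delivered to IT equipment, so a perfect facility (all power reaching the gear) would score 1.0. A one-line sketch:

```python
# The Green Grid's PUE metric: total facility power divided by the
# power actually delivered to IT equipment. Lower is better; 1.0 is
# the (unreachable) ideal where cooling and conversion cost nothing.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,500 kW to deliver 1,000 kW to IT gear.
print(pue(1500.0, 1000.0))  # 1.5
```

Every watt of free-air cooling and every point of power-supply efficiency shows up directly in that numerator, which is why the metric became such an effective battle cry.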
10. I purposely placed the Cloud here at the end since it is one of the most dramatic changes some of us may see in our entire careers. Gartner has predicted that by next year, 20% of all enterprises will have NO backend IT functions in-house. Even the old-line corporations are already diving into the Cloud for various applications. But don’t panic. This is a drawn-out process and will be going on for a dozen years. Certain applications are perfect for the Cloud today, while others are too time-sensitive or confidential to lend themselves to today’s Cloud offerings. Remember, the Cloud has already been around for 10 years; Salesforce.com was one of the earliest examples of an application/software-level cloud. However, the Cloud probably raised your eyebrows when companies like Amazon and Rackspace began offering platform-level clouds. Most exciting, the tools now exist to allow in-house workloads to be shifted to the Cloud and vice versa as demand changes. (Companies like VMware do a good job of this.)
So what does all of this mean? Think forward. Articulate the business problems first, and then solicit every smart person you know to figure out how to solve them. Building more of the same old structure is likely the wrong answer. Simply adding more disk spindles or a new edge switch is likely throwing good money away. Embrace it all and be ready to defend your choices. “It’s always worked like this” is no longer an answer with much street credibility. And “If it’s not broken, don’t fix it” still applies, but I would argue that nearly all of the IT stuff we’ve collectively inherited in our data centers could be considered “broken” by today’s standards.
Let’s commit to fix it…