Remember in 2001 when you first heard about VMware GSX? It sounded like pure magic and seemed to do the impossible: it let you run multiple instances of any real server operating system on a single hardware server. Each operating system thought it was running on a hardware box, and yet it had only a slice of that box. Over the next few years the buzz turned into a roar and encouraged most commercial organizations to try a ‘pilot’. During those pilots they realized that certain applications (like web servers) were a great fit for virtualized servers, and these organizations set their buying sights on a new generation of big, beefy servers built to take full advantage of virtualization. Virtualization delivered just what it promised, and then some!
It turns out there are several technical ways to virtualize, so VMware soon found competitors: Citrix, Microsoft and Sun all jumped in, and over the next few years the single-host/multiple-guest computing model came of age. Intel and AMD even changed their CPU architectures to support this type of virtualization directly in hardware, and software providers changed their licensing models to account for virtualized servers. From a timeline standpoint, virtualization was imagined, tested, tweaked and then adopted “en masse” over a period of about a dozen years. According to Gartner, server virtualization accounted for 16% of all server workloads in 2009, accounts for more than 50% today, and will rise above 80% within the next three years (by 2017). This is the adoption curve we expect from game-changing technology.
That brings me to Software Defined Networking (SDN). Although programmable networking can be traced all the way back to 1995 with efforts at AT&T and Sun, the modern use of the term SDN dates to around 2011, when the Open Networking Foundation (ONF) was formed to further the creation and use of “open” networking protocols. The idea of today’s SDN is simple: rather than each vendor continuing to ship proprietary gear that carries all of its transport intelligence in every device, why not separate the control plane from the forwarding plane? Decompose the problem into two distinct areas, each of which can be optimized individually. Most important, scale and visibility become a matter of technical creativity, since a distributed controller architecture driving any number of physical switching ports can offer ‘one view’ of the whole network. And the icing on the cake comes when you realize that this decomposed architecture (when implemented well) lets APPLICATIONS declare their specific performance needs, rather than a slew of network engineers buried in traffic-shaping rules for every new capability being added across any number of individual boxes.
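To make the decomposition concrete, here is a minimal, purely illustrative Python sketch (the `Controller`, `Switch` and rule names are hypothetical, not any real SDN API such as OpenFlow): the forwarding plane is nothing but a flow table lookup, while a single controller holds the network-wide view and programs every switch at once.

```python
# Illustrative sketch of control-plane / forwarding-plane separation.
# All class and method names here are invented for the example.

class Switch:
    """Forwarding plane: holds only a flow table, no routing logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet):
        # Traffic with no matching rule would normally be punted
        # up to the controller for a decision.
        return self.flow_table.get(packet["dst"], "send-to-controller")


class Controller:
    """Control plane: one global view driving any number of switches."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, match, action):
        # One decision, programmed onto every device at once --
        # the 'one view' of the whole network.
        for sw in self.switches:
            sw.install_rule(match, action)


ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.push_policy("10.0.0.5", "port-2")

print(s1.forward({"dst": "10.0.0.5"}))  # port-2
print(s2.forward({"dst": "10.0.0.9"}))  # send-to-controller
```

The point of the sketch is the asymmetry: switches never decide anything, and the controller never touches a packet.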
Today we are just a handful of years into the SDN journey, roughly where virtualization was circa 2006. SDN is all the buzz, and we are clearly at the tipping point of the hockey-stick curve. Many corporations are running SDN pilots, and investments are being made by VCs, vendors and end users alike. Production deployments are growing, and a few huge ones (Facebook and Google, for example) are proving out the scalability, security and commercial value of SDN. Startups have formed around nearly every aspect of SDN: some build high-density hardware (the “forwarding plane”, or switches), some build high-intelligence controllers (the “control plane”, or operating system), and some build value-added applications such as traffic management, visualization and analytics. Even the biggest old-line networking vendors have released overlays to their existing products to allow some level of “participation” in SDN networks. (That participation is at best a defensive, transitional approach, since the old devices still carry all their heavy baggage, but it may offer a migration path for large installed bases until, or if, they get to REAL SDN.) Given the huge potential and the original premise of SDN, that transitional approach will be short-lived, and I expect to see a significant number of new hardware and software suppliers built from the ground up as SDN components.
We are also seeing the SDN revolution underscore the need to think in terms of application-level business value and to set expectations accordingly. The staff required to manage SDN networks is vastly different from that of the older CLI-based, box-by-box, application-by-application approach network administrators have practiced for years. With SDN, if you can “think” it, the network can be programmed to support it. Most importantly, you “think” networking in an SDN world at the business-application level, not at the box or protocol level. In the same vein, application performance can be measured against those business needs, and applications could (in theory) adjust the network to meet their precise contracted needs. While SDN protocols collect certain performance values all the time, this next generation of tuning will come from software developers who orchestrate the performance data gathered at the application level and communicate the needed changes directly to the control plane itself.
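That feedback loop can be sketched in a few lines, again as a hypothetical illustration rather than any real controller's northbound API: an application compares its measured latency against a contracted target and, when the target is missed, asks the control plane to raise its traffic priority.

```python
# Hypothetical application-driven tuning loop. The SLA value, the
# priority scheme, and the "controller" list are all invented for
# the example; a real deployment would call a controller's API.

SLA_LATENCY_MS = 20.0  # the application's contracted latency target

def tune(controller, app_name, measured_latency_ms, current_priority):
    """Return the (possibly raised) priority for this application.

    While the measured latency exceeds the SLA, request a higher
    priority from the control plane; otherwise leave things alone.
    """
    if measured_latency_ms > SLA_LATENCY_MS:
        new_priority = current_priority + 1
        controller.append((app_name, new_priority))  # push the change
        return new_priority
    return current_priority


pushed_rules = []  # stand-in for a controller's northbound interface
p = tune(pushed_rules, "video-app", 35.0, current_priority=1)
print(p)             # 2
print(pushed_rules)  # [('video-app', 2)]
```

The shape is what matters: the application reasons about its business-level contract (latency), and only the control plane translates that into per-device behavior.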
Time will tell where and when adoption occurs. OpenFlow has been an early leader among the technical approaches used by vendors in the SDN community, and yet the real SDN story is NOT about the protocols in use; it’s about the ease with which business services can be delivered better, faster and at lower cost. It’s about enabling the new generation of computing, what Frank Gens at IDC calls the “Third Platform”, an era built on always-connected handheld devices, the IoT and more. And just as with virtualization, Heroes will be made who will look back on their early adoption and championing of SDN as the crowning moment of their careers.
Be an SDN Hero!