Moore’s Law – It’s about Embracing the Business Opportunity

Gordon Moore’s 1965 graphic about technology doubling

I love pondering the last 50 years of computing innovation. I knew nothing about technology in the mid-sixties when Gordon Moore observed that the number of components on an integrated circuit doubles every 12 months, yet that observation has been a guideline influencing literally millions of business choices made by vendors and end-users alike ever since.

Now the curious thing is that Gordon Moore revised his projected timeframe to 24 months in the mid-seventies, at the very beginning of the general-purpose CPU revolution (think of the Intel 8080), because he realized that building multi-purpose CPUs was a much bigger undertaking than the single-function integrated circuits (like the 7400 series) that had been the state of the art until that point.

Wait: in 1965 Gordon Moore said component counts double every “12 months”, then when big, complex chips (like the 8080) were in their infancy ten years later he said the doubling rate had slowed to “24 months”, and yet everything you and I read today quotes “Moore’s Law” (which really isn’t a LAW at all) as a doubling every “18 months”. What gives? Well, marketing does. Some clever marketing soul realized that the only way to make the facts and the fiction kind of align was to take the average: 18 months. It was believable, defendable, and has stood the test of time (with just a bit of hand-waving required).

Transistor counts for CPUs have loosely followed Gordon Moore’s observation

So, does it really matter which number is more accurate? No, not really. The point is that every year or two, most technology doubles in capacity AND halves in cost at the component level. Servers become twice as capable every couple of years. Network transport doubles too. And when you compound this effect over any reasonable period of time, it becomes staggering. In fact, we store more information in one day today than we did in all of the 1980s. Most importantly, we don’t build technology for technology’s sake; we do so to access the VALUE of all of this information, which doubles too!
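If you want to see just how staggering, here is a quick back-of-the-envelope sketch in Python (assuming a perfectly clean doubling cadence, which real components only loosely follow):

```python
# Back-of-the-envelope: how many times does capacity double over N years?
def growth_factor(years, doubling_period_months):
    """Capacity multiplier after `years`, assuming one clean doubling
    every `doubling_period_months` (an idealization of Moore's observation)."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

for months in (12, 18, 24):
    print(f"Doubling every {months} months -> {growth_factor(10, months):,.0f}x in 10 years")

# Doubling every 12 months -> 1,024x in 10 years
# Doubling every 18 months ->   102x in 10 years
# Doubling every 24 months ->    32x in 10 years
```

Whichever cadence you pick, a decade of compounding dwarfs any single year’s improvement.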

And with technologies like the Internet of Things and software-defined networking and storage, the rate of this doubling is accelerating. We as an industry are like a voracious animal, feeding on information with nothing but opportunity and creativity to guide us. The social experience is getting your 3-year-old daughter and your 93-year-old grandmother into the game too. And all of it is made possible by the new generation of Information Technology, which is doubling right along the curve. Not the IT that existed when Gordon made his observation, but the IT that sits in your hand right now and is connected to the world. Keep in mind that the Facebook main screen you probably looked at this morning during your first cup of coffee actually consists of a hundred or more applications working together, each driving some portion of your experience and each communicating with the others to bring you a rich, fun and VALUABLE experience. That is why we all do what we do in the tech industry, that’s where it all shines, and that is why this doubling concept is so essential.

At the end of the day, massive transformations of nearly every sector of business are happening to take advantage of this new IT. Finally, the business is driving technology. Finance and Education, Government and Aerospace, Entertainment and Internet… the most successful businesses are re-tooling themselves to embrace and leverage these new technologies, knowing that everything they can do today will be HALF of what their opportunity is next year.

Thanks Gordon…


DCIM Facts versus Myth – Time for a Reality Check!

Facts versus Myths for DCIM

Last week we conducted an online webinar devoted to discussing the common misunderstandings and myths associated with DCIM. Over 400 people registered, and we received a ton of questions and comments afterwards. It’s very clear that DCIM is a brand-new category of solution for many of the attendees, and that a number of assumptions and incorrect data points are preventing end-users from realizing the benefits of DCIM.

I have selected a handful of the more popular myths we explored during the webinar and present them here, along with a more detailed narrative about the origin of each “myth” and the informed facts that should be considered instead. My goal is to provide the necessary DCIM facts for your consideration and to seed your thought process as you begin your DCIM journey. Read the whole article at the Nlyte Blog.


Exposing IT Value to the Business

The IT industry is currently experiencing an amazing transformation. Whereas most long-term IT professionals have spent their careers creating and supporting increasingly complex IT structures with availability and uptime as the primary metrics, the new CIO mantra has become service delivery at the right cost. In effect, CIOs are formalizing many previous efforts and creating service products that can be delivered upon request, with a keen understanding of the costs incurred to deliver those services. CIOs today think about these Service Portfolios as the means to set expectations about how technology will be adopted, how it will be supported, and at what cost those technologies will come. – See more here.


Taking the 2015 IT Infrastructure Scenic View

The ways in which we deliver IT are changing dramatically in 2015 and beyond

The IT infrastructure challenge in 2015 is daunting, and only the strongest will survive: the strongest vendors and the strongest IT professionals. Nearly everything we knew about building and maintaining IT infrastructure is being superseded by a wave of new technologies, new processes, new people and new approaches. I specifically used the word “new” rather than “alternative” because these fundamental changes really are radically different from those of the past, and they *ARE* happening with or without you. For instance, we aren’t just making disk drives faster or bigger; we are eliminating them from your data center floor or making them so smart they handle growth easily. The same goes for servers and networks. Not just more of the same old thing, but NEW WAYS of doing it. All of IT is going through radical change.

Think of the kind of change we saw in the data center after 1991 (the year Linux came on the scene), on the desktop after 1981 (when IBM started to ship the PC), or in sharing and collaboration after 1995 (when the Internet boomed). These foundational changes allowed/required everyone involved to (in Steve Jobs’ words) “Think Different”, and they also destroyed the Luddites who chose to ignore these shifts. Those of us who were paying close attention and embraced those changes were handsomely rewarded. The forward-looking companies that adopted them were wildly successful, and a lot of people’s careers were made. Data centers could now be filled with systems based on new hardware and software, virtualized loads, and applications of unlimited scale, all with a level of performance, interoperability, efficiency and cost structure that wasn’t even in the same ballpark as previous approaches.

So that brings us to IT circa 2015. It’s all happening again! Everything that we 40- and 50-somethings know about the IT business is up for grabs again and all of it is being retooled around us as we speak.

Here is my list of the 10 most impactful changes occurring today, worth getting your arms around if you are an IT professional planning to stay in the segment:

1. IT Service Catalog. Most IT structures and their associated processes have grown into monsters. The complexity and delicate structures stifle creativity and choke new initiatives at a time when stakeholders are asking for more creativity and agility. The most successful IT professionals are now looking for the means to create service offerings as if by catalog. Each service “product” must have a known cost per user, a known delivery timeframe, specific capabilities, deliverables and expectations, and a whole slew of escalation definitions for when things don’t go right. IT “products” (like email) are being defined, and the costs to deliver those “products” quantified. It is this Service Catalog mentality that makes the most admired CIOs in this new era smile.
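To make the “catalog” idea concrete, here is a minimal sketch of what one of those service “products” might look like as a record; the field names and numbers are illustrative, not any particular ITSM product’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceCatalogEntry:
    """One IT 'product' with a known cost, delivery time and expectations."""
    name: str                     # e.g. "Corporate Email"
    cost_per_user_month: float    # fully loaded cost, in dollars
    delivery_days: int            # committed provisioning time
    deliverables: list = field(default_factory=list)
    escalation_contacts: list = field(default_factory=list)

email = ServiceCatalogEntry(
    name="Corporate Email",
    cost_per_user_month=6.50,     # illustrative number only
    delivery_days=1,
    deliverables=["50 GB mailbox", "mobile sync", "archiving"],
    escalation_contacts=["servicedesk", "messaging-ops"],
)
```

Once every offering is described this way, “what does email cost us per user?” stops being a research project and becomes a lookup.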

2. Automating process and control. Whereas we used to have a sysadmin for every 50 or 100 servers, keeping them patched and operating, we now find sysadmins handling thousands of servers through the use of automation tools. Automated patching, provisioning and migration; application installs and password resets: all of this is becoming automated through tools that capture the human intelligence once and then dispatch that same knowledge automatically each time the task needs to be performed. Think “copy and paste” at the macro level. And it doesn’t stop there. Virtualized loads can automatically shift from physical server to physical server, and HVAC gear can sense conditions and adjust itself as needed. This is happening today.
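Here is a toy sketch of that “copy and paste at the macro level” idea: capture the task once, replay it across the fleet. The patch check and patch action are placeholders, not a real configuration-management API:

```python
REQUIRED_PATCH_LEVEL = 42   # illustrative target, not a real release number

def needs_patch(server):
    """Placeholder check -- a real tool would query the package inventory."""
    return server["patch_level"] < REQUIRED_PATCH_LEVEL

def apply_patch(server):
    """Placeholder action -- a real tool would push and verify the update."""
    print(f"patching {server['name']} ...")
    server["patch_level"] = REQUIRED_PATCH_LEVEL

fleet = [
    {"name": "web-01", "patch_level": 40},
    {"name": "web-02", "patch_level": 42},
    {"name": "db-01",  "patch_level": 37},
]

# Idempotent: safe to re-run, only touches the servers that need work.
for server in fleet:
    if needs_patch(server):
        apply_patch(server)
```

The human intelligence lives in the two functions; the loop scales it to a thousand boxes as easily as to three.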

3. Software-Defined Networks are a new concept, with less than 5% penetration in production, but they are on a hockey-stick-shaped curve today. Adoption is beginning to take off and the various approaches are finding their sweet spots. What started out as an economic “OpenFlow” free-for-all has quickly become a business discussion about capabilities, flexibility and value; the protocol itself has taken a back seat to new capabilities. Why did this happen? Well, we all grew up on networks built in a north-south fashion: the vast majority of traffic started or ended at the edge devices, and server-to-server communication was limited. That’s why all of the traditional switch vendors have two product lines today, core and edge. With the highly connected web full of various ‘services’, we now see server-to-server (east-west) communication skyrocketing, and it demands a ‘fabric’ approach to networking. And the icing on the cake: once you have built an SDN interconnect fabric, you have the perfect place to host virtualized functions or services that essentially reside everywhere the fabric is; load balancers and firewalls come to mind. Think of this SDN structure as a ‘thick connectivity fabric’. What’s more, applications themselves can tune the fabric for their own needs and take advantage of lots of orchestration possibilities! And a bonus for the pure technologists: some of the industry’s most advanced SDN players can offer full Layer-1 monitoring within the fabric itself. No more expensive, duplicate networks just for performance analysis!
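To picture the split between the control plane and the forwarding plane, here is a deliberately simplified sketch: one logical controller holds the policy and programs every switch in the fabric. The classes are purely illustrative; real controllers expose far richer interfaces:

```python
class Switch:
    """Forwarding plane: holds only the rules pushed down to it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

class FabricController:
    """Control plane: one view of the whole fabric, programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def allow(self, src, dst, priority=100):
        rule = {"match": (src, dst), "action": "forward", "priority": priority}
        for sw in self.switches:          # the east-west path spans the fabric
            sw.flow_table.append(rule)

fabric = FabricController([Switch("leaf-1"), Switch("leaf-2"), Switch("spine-1")])
fabric.allow("web-tier", "cache-tier")    # app-driven intent, not box-by-box CLI
```

The policy lives in one place; the boxes just forward.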

4. Self-Service mentality. I want IT now. Rakesh Kumar at Gartner presented a paper in December 2014 stating that 37% of the nearly $4 trillion of IT spending in 2014 occurred OUTSIDE of the IT organization. 37% of IT spending didn’t involve the IT organization at all! Shocking. This is because end-users now have the option to look at any number of 3rd-party service catalogs and buy with a credit card. Want a new desktop? Consider a tablet via CDW. Need storage? Think Box. Need email in a hurry? Google to the rescue. No longer is the IT organization the end user’s only source of services. The strongest players moving forward will be the IT professionals who embrace speed and agility in the delivery of their capabilities. Projects with many-month delivery schedules are no longer realistic when a 3rd party can deliver next day or next week.

5. Who would have thought that a 10U-12U chassis would house HUNDREDS of CPU cores and be able to move over 7 Tb/s of data internally? The big server providers today offer really dense boxes you can buy for $100K or more ($250K fully loaded in some cases), and at 10-12U you can put 3 or 4 of these monsters in one rack. I dare say we’ll see 40kW per rack very soon, whereas just a few years ago we saw 40kW per ROW. This mindset enables a dramatic difference in the way we approach new data center build-outs and retrofits. It has been hypothesized that your existing data center could last 30 years or more if you simply took advantage of all of the ‘Moore’s Law’ advances taking place at the device-density level (assuming your utility company can get you ‘a few more megawatts’ every few years).
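The rack-level arithmetic behind that claim is simple, under the assumed numbers of roughly 10 kW per fully loaded chassis and four chassis per rack:

```python
# Rough power-density arithmetic for a dense-chassis build-out.
chassis_kw = 10          # assumed draw of one fully loaded 10U-12U chassis
chassis_per_rack = 4     # 3-4 fit in a standard 42U rack

rack_kw = chassis_kw * chassis_per_rack
print(f"~{rack_kw} kW per rack")   # ~40 kW -- what a whole ROW drew a few years ago
```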

6. I like to think back to the simpler days when we all built 24-inch-square raised floors and ran our cooling and cabling under the tiles. We talked about loading capacities and laughed at the racks getting heavier, but until recently it was just a curious discussion. No longer! Those same monster-dense devices also weigh a ton (literally), and it is no longer best practice to plan on raised floor. Building your data center directly on concrete is all the rage. Data and power cabling run overhead, and cooling strategies are beginning to take advantage of the fact that COLD AIR likes to FALL (and HEAT rises). It’s hard to understand why we decided 25 years ago to create cold air at the perimeter and then PUSH it upward into the racks, fighting pressure physics. Cold air sinks, and the new generation of data center designers gets that.

7. Unbounded infrastructure, as a means to blur the lines between the mechanics and the functions delivered on the data center floor. As it turns out, if we stop thinking about individual boxes as individual management islands, each doing a bit of work with the results somehow aggregated externally, we can take a whole new approach to IT. Hardware and software mechanisms are now commonplace that ignore physical device boundaries and allow capacity to be aggregated. Want to know how Twitter or Airbnb handle all of their transactions in real time? There is a project called Mesos that creates services out of boxes. Need more I/O? Simply add more I/O services, which instantly become part of the ‘system’. How do you write applications for a world like this? You write applications for the platform, not the server, and then let the platform independently scale whatever resource it needs.

8. We used to SIZE and BUILD data centers based upon some perceived top level, or watermark, of capacity needed. In a typical scenario, data centers were built as big as anyone could imagine the load would be, and then organizations and their projects moved into that center over an extended period of time. The downside to this old-school approach is the cost: the cost to occupy the first square foot of space on day 1, compared to the cost of the last square foot 3-5 years down the road, was enormous. Building a big data center was only an academic exercise to keep the CIO happy; in reality it never made sense (to the CFO) to over-build and hope that the space would be needed down the road. Today you can find modular designs to replace the old way of building a data center, both manufactured (like Baselayer) and brick-and-mortar (like Compass Data Centers), that can get you new space in 20-200 rack increments or less. (Companies like Elliptical even do a micro-modular design down to ONE rack at a time.) I guess we’ll need a new term to replace ‘breaking ground’ for these modular approaches.

9. We all laughed when data centers were so cold that you needed a jacket to walk through them. Over the last 10 years it has become quite another matter, and everyone is talking about it. What started out as energy efficiency, with the Green Grid publishing its “PUE” metric, has become a battle cry for manufacturers and end-users alike: deliver more work in a smaller amount of space, and get the power bill down per unit of work. With the cost of power now a top-3 concern for every enterprise, whole new approaches are being used to make data centers more efficient. Start with cheaper power, which is driving where data centers are built, then consider the ability to use free-air cooling at that location, add the advances in CPU and power-supply design, and we have everyone working on energy costs.
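For reference, the Green Grid’s PUE is simply total facility power divided by the power that actually reaches the IT equipment, so 1.0 is the unreachable ideal. A quick illustration with made-up numbers:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: 1.0 means every watt goes to IT gear."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 1,500 kW at the meter, 1,000 kW reaching the racks.
print(round(pue(1500, 1000), 2))   # 1.5 -- a typical legacy facility
print(round(pue(1100, 1000), 2))   # 1.1 -- what free-air cooled sites aim for
```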

10. I purposely placed “The Cloud” here at the end, since it is one of the most dramatic changes some of us may see in our entire careers. Gartner has predicted that by next year, 20% of all enterprises will have NO back-end IT functions in-house. Even the old-line corporations are already diving into the Cloud for various applications. But don’t panic. This is a drawn-out process that will go on for a dozen years. Certain applications are perfect for the Cloud today, while others are too time-sensitive or confidential to lend themselves to today’s Cloud offerings. Remember, the Cloud has already been around for 10 years: Salesforce.com was one of the earliest examples of an application/software-level cloud. However, the Cloud probably raised your eyebrows when companies like Amazon and Rackspace began offering platform-level cloud. Most exciting, the tools now exist to allow in-house workloads to be shifted to the Cloud and vice versa as demand changes (companies like VMware do a good job of this). Best guess on the mix of in-house IT services versus public-cloud-provided services: less than 15-20% today, approaching 30-40% in 5 years and perhaps 50-60% in 10 years.

So what does all of this mean? Think forward. Articulate the business problems first, and then solicit every smart person you know to figure out how to solve them. Building more of the same old structure is likely the wrong answer. Simply adding more disk spindles or a new edge switch is likely throwing good money away. Embrace it all and be ready to defend your choices. “It’s always worked like this” is no longer an answer with much street credibility. And “if it’s not broken, don’t fix it” still applies, but I would argue that nearly all of the IT stuff we’ve collectively inherited in our data centers could be considered “broken” by today’s standards.

Let’s commit to fix it…


SDN: Are We There Yet?

Remember in 2001 when you heard about VMware GSX? It sounded like pure magic and seemed to do the impossible: it allowed you to run multiple instances of a real server operating system on a single hardware server. Each operating system thought it was running on a hardware box, and yet it was just a slice of that box. Over the next few years the buzz turned into a roar and encouraged most commercial organizations to try a ‘pilot’. During those pilots they realized that certain applications (like web servers) were a great fit for virtualized servers, and these organizations set their buying sights on the new generation of big, beefy servers needed to take full advantage of virtualization. Virtualization delivered just what it promised, and then some!

It turns out there are several technical ways to virtualize, so VMware soon found that competitors like Citrix, Microsoft and Sun had jumped in, and over the next few years the single-host/multiple-guest computing model came of age. Intel and AMD even changed their CPU architectures to support this type of virtualization directly in the hardware core. Software providers changed their licensing models to account for virtualized servers too. From a timeline standpoint, virtualization was imagined, tested, tweaked and then adopted “en masse” over a period of a dozen years. According to Gartner, server virtualization accounted for 16% of all workloads in 2009, accounts for more than 50% of all server workloads today, and will rise above 80% within the next 3 years (by 2017). This is the adoption curve we expect with game-changing technology.

The SDN market is coming together, but it is highly competitive, with different technical underpinnings.

That brings me to Software-Defined Networking. Although programmable networking can be traced all the way back to 1995, with efforts at AT&T and Sun, the modern use of the term SDN dates to around 2011, when the Open Networking Foundation (ONF) was formed to further the creation and use of “open” networking standard protocols. The idea of today’s SDN is simple: rather than each vendor continuing to ship proprietary networking gear with all of the transport intelligence carried in every device, why not separate the control plane from the forwarding plane? Decompose the problem into two distinct areas that can each be optimized individually. Most important, scale and visibility become just a matter of technical creativity, since a distributed controller architecture that drives any number of physical switching ports can offer ‘one view’ of the whole thing. And the icing on the cake is when you realize that this decomposed architecture (when implemented well) allows APPLICATIONS to determine their specific performance needs, NOT a slew of network engineers buried in setting traffic-shaping rules for every new capability added to the network across any number of individual boxes.

Today we are just a handful of years into this SDN journey, roughly where virtualization was circa 2006. SDN is all the buzz, and we are clearly at the tipping point of the hockey-stick curve. Many corporations are running SDN pilots, and investments are being made by VCs, vendors and end-users alike. A growing number of production deployments are appearing, with a few huge ones (like Facebook and Google) proving it out, which goes a long way toward demonstrating the scalability, security and commercial value of SDN. Startups have formed around nearly every aspect of SDN: some create high-density hardware (the “forwarding plane”, or switches), some create high-intelligence controllers (the “control plane”, or operating system), and some create value-added applications (like traffic management, visualization and analytics). The biggest old-line networking vendors have even released overlays for their existing products to allow some level of “participation” in SDN networks. (This participation is at best a defensive, transitional approach, since the old devices still carry all their heavy baggage, but it may allow some level of migration for large installed bases until, or if, they get to REAL SDN.) Given the huge potential and the original premise of SDN, that transitional approach will be short-lived, and I expect to see a significant number of new-generation hardware and software suppliers built from the ground up to be SDN components.

We are also seeing the SDN revolution underscore the need to think about application-level business value and to set expectations accordingly. The staff required to manage SDN networks is vastly different from that of the older CLI-based, box-by-box and application-by-application approach network administrators have practiced for years. With SDN, if you can “think” it, the network can be programmed to support it. Most importantly, you “think” networking in an SDN world at the business-application level, not at the box or protocol level. In the same vein, the performance of applications can be measured against those business needs, and applications could (in theory) self-adjust the network to meet their precise contracted needs. While SDN protocols collect certain built-in performance values all the time, this next generation of tuning capability will come from software developers who orchestrate the performance data collected at the application level and communicate the needed changes directly to the control plane itself.
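Here is a hedged sketch of what that application-level “contract” could look like; the API shown is hypothetical, not any specific controller’s northbound interface:

```python
class NetworkIntent:
    """What the application asks for, expressed in business terms."""
    def __init__(self, app, min_mbps, max_latency_ms):
        self.app = app
        self.min_mbps = min_mbps
        self.max_latency_ms = max_latency_ms

class NorthboundAPI:
    """Hypothetical controller interface that turns intents into flow rules."""
    def __init__(self):
        self.contracts = []

    def request(self, intent):
        self.contracts.append(intent)   # the controller compiles this into rules
        return f"{intent.app}: {intent.min_mbps} Mb/s, <{intent.max_latency_ms} ms"

api = NorthboundAPI()
print(api.request(NetworkIntent("order-checkout", min_mbps=200, max_latency_ms=5)))
```

The application states the outcome it needs; translating that into per-switch behavior is the controller’s job, not a human’s.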

Time will tell where and when adoption will occur. OpenFlow has been an early leader among the technical approaches used by many vendors in the SDN community, and yet the real SDN story is NOT about the protocols in use; it’s about the ease with which business services can be delivered better, faster and at lower cost. It’s about enabling the new generation of computing, what Frank Gens at IDC calls the “Third Platform”, an era based on always-connected handhelds, IoT and more. And just like virtualization, heroes will be made who will look back on their early adoption and championing of SDN as the crowning moment of their careers.

Be an SDN Hero!


The Good Ol’ Days are gone – Security is the Basis of Your Future

Having just come back from Gartner’s Data Center Conference, held annually in Las Vegas, I have had the opportunity to reflect on what I heard at a macro level over the past few days. For those who didn’t attend, Gartner brings together 2,500 or so of the industry’s leading IT professionals from the vast majority of the Fortune 500. Their titles range from IT and Data Center Manager to CIO and VP of IT or Infrastructure. Over the course of four days or so, this mass of IT folks mingles, discusses business strategies and tries to get a sense of where the various technologies and the industry itself are going. Obviously areas like the Internet of Things, Cloud, BYOD, APM and Co-Lo were very hot topics, as were the latest generations of servers, storage and networks. But there was one topic that seemed to be an integral part of all other discussions: security.

Data Center Transformation is a function of Security

As I sat back and listened, I realized that many of the folks in attendance had a solid reference point of “their IT” as it has existed for the past 20 years. Many of the vendor presentations and hallway discussions had a tone that longed for the simpler “Good Ol’ Days”, when the biggest concerns were capacity, availability and interoperability. Sure, we had some good times dealing with those, and many of us have made our entire careers chasing those rabbits.

So here we are in 2014, and it is OURS to define the going-forward plans. Those plans will be dramatically different from the ones that got us here. As it turns out, the massive connected mesh we are all striving for brings with it the responsibility to deliver it all in a highly secured fashion, with all the tentacles (endpoints) secured as well. Security is now clearly in sharp focus and runs through every other discussion about building the IT structures of today and tomorrow.

Now follow me on this next journey. According to Rakesh Kumar at Gartner: 1) over $3.7 TRILLION was spent on IT and IT services in 2014, and 2) over 37% of those dollars were spent on IT solutions OUTSIDE of the IT organization. Why is this important? It means that the safe, confined, comfortable corporate world of IT that we all grew up in and protected is now littered with connections and services that exist OUTSIDE of your control! And it’s not just remote users, but remote applications as well. Connections to SaaS and other clouds, remote access and BYOD users form a critical component of our going-forward plan, and yet many of us are still throwing 2005-vintage protection schemes at our corporate borders.

In the Good Ol’ Days, massive commercial security breaches were something we rarely heard of at the corporate level, because companies simply didn’t allow much external access, and when they did, they used VPN-like technologies and felt safe and secure, since the endpoints were considered as good as ‘inside’ the corporate structure. Today, nearly every day, we hear about another major retailer, bank, government agency or telecom disclosing ‘issues’ with unauthorized data access. What changed? Are these companies simply not spending enough on firewalls or IDS? If only it were that simple. In fact, IDC says they spent nearly $10 BILLION on protection systems this year. Money is not the problem. They want to protect, but modern threats have matured so much that old-school ‘signature’-based technologies (the ones deployed by most companies today) are dramatically ineffective. Today it doesn’t really matter how many ‘signatures’ those old-school devices have built in. We need to think different.

It’s about behavior. With $3.7 trillion of new spend on the line, the forward thinkers are realizing that detection signatures describe the past, whereas behavior defines the future. What do I mean by behavior? Well, the autopsy of a typical breach goes like this: 1) a simple system like a desktop, laptop or web server is hacked and some form of malware control app is placed upon it; 2) the malware becomes the ‘agent’ on that box and can be instructed to do anything the outside hacker wishes; 3) the hacker is typically looking for something specific, usually sensitive data, so the control agent is told to seek that data out and then, once found, package it up and use a familiar, friendly protocol (like HTTP) to send it back to hacker central; and 4) these agents are usually instructed to find similarly breachable peer systems so that the process can be repeated.

With this behavior in mind, it’s just a matter of designing new protection systems from the ground up that identify this flow. They are designed to focus on zero-day threats (those never seen before) as well as all kinds of Advanced Persistent Threats (APTs). These newer protection systems understand the zillion variants of behavior, and the best of them actually get smarter over time: they test their initial analysis and even try running certain payloads seen traversing the wire.
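A toy illustration of the difference: rather than matching payloads against known signatures, watch each host’s outbound behavior and flag the ones that drift far from their own baseline. This is a teaching sketch, not a description of any vendor’s engine, and the numbers are made up:

```python
# Toy behavioral check: flag hosts whose outbound traffic jumps far above
# their own historical baseline -- the exfiltration step in the breach story.
baseline_mb_per_day = {"desktop-17": 40, "web-03": 900, "hr-laptop-2": 25}
observed_mb_today   = {"desktop-17": 38, "web-03": 950, "hr-laptop-2": 4200}

THRESHOLD = 10   # "10x normal" is an arbitrary illustrative trigger

for host, baseline in baseline_mb_per_day.items():
    if observed_mb_today[host] > THRESHOLD * baseline:
        print(f"ALERT: {host} sent {observed_mb_today[host]} MB "
              f"(baseline {baseline} MB) -- investigate possible exfiltration")
```

No signature of the malware was needed; the behavior gave it away.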

The point of this whole story? The Good Ol’ Days are fun to talk about, and even to share a few stories about over a cup of Starbucks or a cocktail, but OUR real business needs going forward all hinge on solving the security challenge for a highly connected world. Frank Gens at IDC estimates that by 2018 HALF of ALL our IT spending on computing and storage will be public-cloud based. That means all of that computing will sit at the end of a wire you have little or no control over, outside your comfortable brick-and-mortar walls, with its own set of security mechanisms.

We will find our footing, and we will collectively reach a consistent definition of what is expected from the new, secured world of IT. This is Data Center Transformation in the making. But as we saw in Las Vegas, it will come with a primary desire to think about the new task at hand, and to solve for that, rather than building on the structure we have inherited. Time to put on our big-boy pants…


Dynamic Workload Management, Physical to Virtual to Cloud – and back!

Enabling workload migration between physical, virtual and Cloud providers

Now that we are all becoming ‘experts’ on the transformation under way in the world of computing, we should be realizing that there is an exciting opportunity in front of us: leveraging the foundational platform innovations that have occurred. “Hybrid computing” is a phrase many of you use to describe the various combinations of traditional servers, virtualized servers and the various flavors of cloud used to conduct work. To add a second dimension to the phrase, each of these computing platforms offers a number of major vendor choices, each with a particular set of capacities and specific dependencies. For servers, major choices include HP, Dell, Lenovo and a slew of Tier-2 vendors like Intel, Supermicro and Quanta. For virtualization we see VMware, Microsoft, Xen and KVM. And the cloud is dominated by players like Amazon, SoftLayer, NTT, Rackspace, VMware, Sungard and CenturyLink.

Now, wearing your newly acquired IT business hat (XL, courtesy of your CFO), keep in mind that your goal is to find the most suitable platform for each of your applications, and that the definition of “most suitable” changes from application to application and as a function of time. The actual work to be performed doesn’t change in this process; it is just the foundational means of delivering that computing that may change. The catalyst for moving your workloads to another platform could be performance or capacity, it could be economics, or it could be disaster-recovery or compliance needs.

While many of us have a great understanding of each of the discrete platforms, the ability to move workloads between traditional servers, virtualized servers and the cloud (public and private) is less well understood, and yet it has already become a critical success factor in managing the costs of doing work and the reliability of IT. As it turns out, moving workloads is conceptually very easy. Simply package up each of your applications as a ‘workload’ (carefully identifying the dependencies each unit of work has), and then characterize the various platforms on their ability to deliver those dependencies. To put a workload on a particular platform, you grab the unit of work, add the specific target-platform wrapper, load it on the target platform and press “GO”. In the reverse direction it works the same way. Want to move work between Cloud-A and Cloud-B? No problem: grab the workload, strip off the Cloud-A wrapper, add the Cloud-B wrapper, and you are all set.
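In code form, the mechanics reduce to a descriptor plus a re-wrap step. A very rough sketch of the concept (the fields and platform names are illustrative; real migration tools handle drivers, storage and networking with far more care):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """The unit of work, described independently of where it runs."""
    name: str
    dependencies: list          # e.g. ["postgres", "2 vCPU", "8 GB RAM"]
    wrapper: str = "unpackaged" # platform-specific packaging currently applied

def migrate(workload, target_platform):
    """Strip the old platform wrapper, apply the new one, ready to deploy."""
    workload.wrapper = f"{target_platform}-image"
    return workload

crm = Workload("crm-app", ["postgres", "2 vCPU", "8 GB RAM"], wrapper="vmware-ova")
migrate(crm, "aws")            # now packaged for Cloud-B instead of Cloud-A
print(crm.wrapper)             # aws-image
```

The work itself never changes; only the wrapper, and therefore the platform, does.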

Conceptually, it’s about separating the application doing the work from the platform intended to run that work. There are a handful of startups, like Silicon Valley-based RackWare and RightScale from Santa Barbara, that have set their sights on doing this, and a few of them have become experts at it. They make short work of migrating workloads between public clouds, private clouds, physical servers and virtual instances: basically, any source can be migrated to any destination.

These are really exciting times, and those IT pros who fully embrace hybrid computing and workload migration as a function of value and performance will be rewarded over and over again. So while we have spent the last 25 years talking about doing work in ‘the data center’, going forward we should probably start using a term like ‘the data function’, or some other less restrictive phrase that encompasses computing that may also occur outside of our original four walls.
