40% of Your IT Services are Delivered by Someone Else (in the Cloud)!

Amount of IT Spending OUTSIDE of IT

Today, nearly 40% of all spending on IT services happens outside of your plan and view: by people who don’t work for your company, in infrastructures you have no control over, using policies and security mechanisms that someone else designed, and at a cost that may be 30-60% higher than what you could deliver the same services for yourself (if you wanted to). Welcome to the era of the Cloud!

Yes, of the $2.7 TRILLION that Gartner estimates was spent on delivering IT services worldwide in 2016, nearly 40% of that figure is consumed by service providers outside of the IT organization.

Whew! This is mind-boggling. At a time when IT organizations have every opportunity to take control of their own destiny and shine, with all of the new toys that the technologists can bring, their constituents are migrating in droves to delivery sources which simply have less drama. And the trend is UP, year over year, with no end in sight.

So what happened? Lack of accountability and vision. Depressed innovation and competitiveness. Sure, the current IT leaders have some kind of plan and always “get the proper signatures”, but IT organizations still behave as if they are the only game in town. They still deliver projects in months rather than minutes, and still have technical foundations so fragile that they have few other choices.

So in 2017, there is a huge opportunity to break the chain. Nearly all private cloud (a.k.a. Software-Defined Data Center, SDDC) technologies can be had for a fraction of what their rigid equivalents cost just 3-4 years ago. Storage, Computing and Networking can all be virtualized and deployed as general resources which can be carved out with the touch of a button, or with any of the orchestration frameworks already heavily deployed.

You just GOTTA-WANNA! IT business leaders need to re-think how they are measured and consider the rising usage of the cloud as an indictment of their own prowess in delivering the services they are chartered to deliver. And keep in mind that 80% of all non-IT corporate leaders admit that they purchase their IT services without even telling the IT folks that they have done so.

So what do you do? Your job. Rather than focusing on all of the technical and organizational reasons that you can’t deliver new services in minutes like the outside cloud providers, focus on how you could do so in 2017… and then start planning to get there. Act with commitment and a sense of urgency. If your house was on fire, you would figure out a way to put it out fast, right? Well, consider that your IT services delivery capabilities are on fire… and they will burn to the ground unless you find a way to put the fire out.

Does this matter in the long run? You bet it does. While the cloud became popular nearly overnight, it comes at a price. You lose almost all control over what capabilities will be available over time. You become a ticket and escalation process in a much larger pool of users. The security of your data and the availability of services are defined by people you have no control over. And don’t forget the premium in spending required to enjoy IT services delivered by the cloud.

Time to Get Back in the IT Driver’s Seat

Now is the time to ring in the new year with a new IT plan and get back in the driver’s seat, rather than sitting in the passenger’s seat watching the scenery roll by… IT is perhaps the most strategic asset your digital-centric business relies on, and now is not the time to take your foot off the gas or your hands off the wheel. Now is the time to get a plan. Plan for a hybrid arrangement, with capacity in-house for your essential needs, and bursting capacity or specialized applications in the cloud. Remember, this is not a one-size-fits-all situation, and that is where IT leadership comes in.

Your opportunity is here, NOW!

Are My 25 Years of IT Experience Valuable Today?

Are Experienced IT Workers Relevant?

Who would have thought that in 2016 many of us would be wrestling with the question of whether our entire 25+ year career of IT experience is worth anything to modern businesses? We always assumed that our deep experience was valued, and that the more experience we had, the more valued we would be. Right?

According to the US Labor Dept, the average IT worker is 42 years old, which means that half of the IT folks we work with started their careers in the mid-1990s. They started building file and printer sharing networks (remember NetWare?) and have basically been building bigger and faster versions of their computing environments ever since. Sure, there were challenges over those years: we all worked through the compatibility and translation issues in both hardware and software, we found ways to build clusters and share storage, we added security to the mix, we conquered the scale and resiliency topics like gladiators, and in the past 5 years we even figured out ways to embrace the ‘Co-Lo’ and ‘Cloud’ things without upsetting our emotional apple carts too much.

But now many IT workers are at a point where they are asking the question: in today’s always-connected world, where everything talks to everything, everything is standardized, and the location of the pieces really just doesn’t matter… am I personally relevant?

The short answer is YES. In fact, a HUGE YES! But you have to step back and think about WHAT you actually learned in those 25 years. Sure, you probably have a hundred stories about writing programs when you were just starting out, the time you spent all night building out a new network during a company move, or bringing an e-commerce site online, but those are really just stories, since the details no longer matter. They make you smile and make you proud, but WHAT you did is not what counts today.

HOW you did it matters. Today, it’s all about business, and business is all about PROBLEM SOLVING. Yup, for the same reason that many of us went to undergrad school, we find our biggest value in today’s market is our ability to think through complex problems and, using a healthy dose of discipline, create action plans and take action. We have learned how to solve complex problems using a litany of data points drawn from those years of technology battles to guide our efforts. We can translate business to technology better, we can estimate project time-frames better, we can look at costing models, and we can even gauge the impacts of technology on staffing.

So are YOU relevant? Absolutely! Do you need to show people how to program a bubble-sort in COBOL, or explain how to connect FDDI networks to Ethernet? Or even how to patch an SSL library on Linux? Nope. Those types of minutiae are now handled in a much more elegant and automated fashion, and IT users need not worry about discrete build-outs… their intelligent infrastructure options can do that for them.

Your value is your experience in solving business problems using technology as the enabler. When you focus on the business aspects and metrics of IT, your 25 years of experience shines. Those around you will appreciate your metrics, methods and approaches. They will listen carefully to your business guidance and your articulation of the fiscal impacts of technology. Remember that Cloud and Web-scale (and their little brothers, Private Clouds and Converged Infrastructures) have essentially solved all the technology challenges (the very detailed stuff that we all wrestled with for most of our careers) for us, so it is your experience in aligning business needs with IT’s ability to deliver the right amount of work processing that matters. Even players like Google and Facebook have huge amounts of this “adult supervision” in their flip-flop filled hallways.

Make no mistake, YOUR 25 years of experience are desperately needed to run every modern business. Think business, think value and offer your guidance.

A Funny Thing Happened on the Way to DCIM

DCIM and Automation

The IT world around us changed! The very way we approach delivering IT services was re-imagined and re-invented. While the underlying technologies themselves got faster, smaller and lower in cost, each of those pieces became more commoditized and virtualized, adding a layer of abstraction that made the physical componentry found in the data center even less important. At the same time, the Public Cloud went from curiosity to contender, and enterprise use of co-location space became the norm. When was the last time you heard about an enterprise building a new brick-and-mortar data center?

10 Years and Counting!

When DCIM was started ten years ago, times were simpler. Much simpler. We lived in a world where IT organizations delivered grand projects in bespoke data centers. New business initiatives resulted in long planning cycles and purpose-built projects which spanned months or years to deploy. Each project resulted in custom topologies that had to fit delicately into the structure already in place. The world looked to DCIM as the management solution for change, and two camps of first generation DCIM players formed: those that addressed the constant change seen with IT assets, and those that addressed the optimization of the building itself and its energy usage. A few companies tried to do both, but failed to deliver on the utopia they promised.

So in the days when a stream of custom IT solutions was being created, first generation DCIM as an asset manager made a lot of sense. “Where is the best place to put my 6 new servers to run a new ERP?” was a great question in 2006, and DCIM’s asset management capabilities could answer it handily. Every data center was different, and every project needed a unique combination of hardware devices to implement the required functions at the required scale. Deciding where to put servers and how to connect everything to available resources was a project-centric process, and pioneers in the DCIM adoption camp realized just how powerful DCIM could be at shortening the time it took to react to changing business needs. And first generation DCIM shined at allowing individual devices to be located and serviced.

Capacity is now a Business Function

In 2016, all that has changed. IT has to run much faster and provide instant gratification. To do this, IT has become a planning function which assures that the data center always has enough resources for the next 6 months of growth. Those resources must be sitting in a capacity pool in ANTICIPATION of the business, ready to be carved out with the touch of a button. In 2016, applications are virtualized and run on resources which are also virtualized, so these pooled resources can be used for any application, and the specific device where an application (or a part of an application) runs doesn’t really matter. All that matters is that suitable levels of resources are always available, which is a business planning function, not a technology exercise. This is similar to the Public Cloud story, and in fact those companies that are embracing the Public Cloud in a big way cite the elimination of the need to care about physical structure elasticity as a main driver for their choice. That said, very few companies today are wholly Public Cloud based, so in-house data centers and co-location facilities are the mainstay of all of our livelihoods.

New Approaches and New Challenges

So what does this mean to you as an IT professional? You may be considering a way to leverage the Public Cloud, and you likely are trying to balance Public Cloud and in-house resources to maximize value. For your in-house structure, if you are deploying new gear in response to specific business applications that have already become a requirement, then you are ‘kicking it old school’. You are delivering IT services in a manner that was state of the art 10 years ago. Over the subsequent years, the Public Cloud providers proved that IT could be built as a pool of resources and THEN utilized for ANY application to realize instant gratification. Pushbutton IT was delivered by public cloud providers and proved that IT agility was possible, highly valued and very cost-effective. According to Gartner, more than 37% of all IT services are delivered without the involvement of the IT organization due to this desire for instant gratification (they call it “Shadow IT”). Today, in-house resources simply need to be transformed into Private Clouds.

The Business of Growth

The biggest role of IT has become managing these pools of resources to make sure there is just enough at every point in time. Putting the Public Cloud portions aside for a minute, the details of which server or which switch are far less important, because everything looking forward is cookie-cutter, virtualized and modular. As resources are consumed, the business planning function results in more capacity being brought online in manageable increments. An analogy would be a typical municipal water district, which is chartered to deliver water to residents and must negotiate long-term wholesale contracts to do so. When they contract water, they look at the statistical growth in their service areas and plan accordingly, not at the specific houses under construction at any point in time. These contracts span 25 years or more, similar to the lifespan of a data center itself.
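To make the planning-function idea concrete, here is a minimal sketch (in Python, with entirely made-up numbers and hypothetical function names, not drawn from any DCIM product) of how a capacity pool might be managed against statistical growth rather than individual projects:

```python
import math

def months_until_exhausted(capacity_kw, used_kw, growth_kw_per_month):
    """How many months of runway remain in the pool at the observed growth rate."""
    return (capacity_kw - used_kw) / growth_kw_per_month

def plan_increments(capacity_kw, used_kw, growth_kw_per_month,
                    horizon_months=6, increment_kw=100):
    """How many fixed-size capacity increments keep at least
    `horizon_months` of anticipated growth covered by the pool."""
    shortfall = used_kw + growth_kw_per_month * horizon_months - capacity_kw
    if shortfall <= 0:
        return 0  # the pool already covers the planning horizon
    return math.ceil(shortfall / increment_kw)

# A 1 MW pool with 900 kW consumed, growing 50 kW per month:
print(months_until_exhausted(1000, 900, 50))  # 2.0 months of runway left
print(plan_increments(1000, 900, 50))         # bring 2 x 100 kW increments online
```

Notice that nothing in the sketch mentions a specific server or rack; the planning decision operates on the pool as a whole, just as the water district contracts against service-area growth.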

Door #1 or Door #2?

So what does this mean for DCIM? Remember I said there were two camps? The first camp deals with managing in-house asset life cycles. The need for individual asset management in this new macro and virtualized world is at best tactical, and at worst irrelevant. We no longer focus heavily on the life cycle of any single device. We don’t really care about one server or one port or one Rack-Unit of space. We do need an absolute ‘as built’ model of what is in the data center, and first generation DCIM is very good at this, but the change management granularity is now at a much larger scale (i.e. entire rooms are changed for tech refresh purposes, not individual servers). We all need to think bigger.

Gratuitous Mention of IoT?

No, not really. Data center automation is turning out to be the secret sauce. Delivering IT services cost-effectively in a virtualized (or software-defined) data center requires comprehensive instrumentation and action-oriented automation. The science of capturing instrumentation and then massaging it into policies that can be automatically executed comes from the new generation of data science practitioners. A whole new crop of people are learning how to deal with millions of data points that arrive in real-time, and to turn them into rules and policies that can be fed into an orchestration engine. For example, to accurately change the set point of a CRAC which services a specific pod, there may be more than 1000 data points that need to be understood. The new data scientists who practice IoT today can apply this same science to automating a data center.
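As a hedged illustration of what such a policy might look like once the data science is done, here is a toy rule (all thresholds, names and step sizes are invented for illustration, not taken from any real CRAC controller) that reduces a pod’s inlet-temperature readings to a single set-point decision:

```python
from statistics import mean

def next_crac_setpoint(inlet_temps_c, current_setpoint_c=20.0,
                       target_c=24.0, guard_c=27.0, step_c=0.5):
    """Nudge the CRAC set point toward an energy-saving target,
    unless any sensor shows a hot spot near the safety guard."""
    if max(inlet_temps_c) >= guard_c:
        return current_setpoint_c - step_c   # hot spot somewhere: cool harder
    if mean(inlet_temps_c) < target_c:
        return current_setpoint_c + step_c   # pod running cold: save energy
    return current_setpoint_c                # within band: hold steady

# In practice 1000+ readings feed this; three samples show two outcomes:
print(next_crac_setpoint([22.0, 22.5, 23.0]))  # cold pod: raise to 20.5
print(next_crac_setpoint([24.0, 25.0, 28.0]))  # hot spot: lower to 19.5
```

A real orchestration engine would of course fold in far more signals (airflow, workload placement, even weather), but the shape of the problem (many data points in, one actionable policy decision out) is the point.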

Today, Gartner estimates that more than 86% of all servers are virtualized, and moving workloads around the data center is fully automated by the virtualization vendors. And as applications move, so go their storage and networking. Essentially the IT function is already automated, and the work being processed above the “raised floor” is already more dynamic than you can imagine. So what about the data center facilities structure itself?

“DCIM-nextgen” is Automation

Automation is the key to everything in a modern data center, modular or otherwise. Custom automation approaches have already been used for years by the co-location providers. And now that embedded instrumentation is widely available on most IT devices, and the science to interpret and act on it exists, the most forward-looking DCIM players will adapt their offerings to not only SHOW how the data center is built, but to make precise decisions about what it takes to operate that data center, and then automatically do it! As IT workloads increase and decrease, or as virtualization technologies move these workloads from one aisle to another, an automated data center will optimize its own power and cooling most effectively. Over time, “IoT” style data science will make these feedback loops tighter and increasingly more efficient.

Where DCIM is Going…

That is where DCIM is going. DCIM will become the plan of record for what is on the data center floor, in-house or co-lo. And DCIM-nextgen will take a much bigger role in the coordination and application of facility resources to align with the automated movement of IT workloads in those centers. The days when actual humans are involved in the ‘management’ of a data center are coming to a close. I look forward to the day when you can almost hear the buzzing of business when walking through a data center, and feel the varying breeze as you move from pod to pod, knowing full well that the business is optimized and running cost-effectively.

Best Bet DCIM Providers for 2016

Now that 2016 is upon us, I find that the DCIM industry is really starting to settle out. Most of you know that I joined the DCIM industry before it was actually called “DCIM” (in fact, for a brief period of time we all called it “PRIM”, for Physical Resource Infrastructure Management, but through a combination of coincidences at both Gartner and Forrester, the term migrated to DCIM). I worked for many of the players in this space, and became deeply involved with all of the key vendors attempting to participate in this solution area, and in the process I even became friends with the vast majority of the pundits in the industry.

The DCIM Dust is Settling in 2016

While I have ‘mostly’ left the DCIM category professionally in my ‘day job’, I still stay tightly connected to what is going on. In fact, I still get quite a few calls from researchers and other vested parties regarding what is happening in the industry or with one company or another. And as you know, I have my opinions and am happy to share.

First, let me say that DCIM is real. It can be the answer to real questions that are on the table now. Data Centers themselves are changing quickly, but the need to understand more about what is happening inside those centers has never been more urgent. Whether you own the cement or not, the “data center” is still effectively yours.

Did I say ‘urgent’? Yup, I did. Let me explain.

Assertion: It is urgent to get your data center house in order. With easy access to rented transactions and services through providers like AWS and Google, your challenge is to figure out how those cloud providers do the magic they do, and then either subscribe to them in a big way, or learn from them ASAP! You can choose either path, but time is ticking by, and if you don’t make that strategic choice, YOUR REPLACEMENT WILL! Getting your house in order is not just about new ways to assure availability and capacity; it focuses heavily on understanding the fiscal impact on the business of every choice you make. Specifically, on reducing your cost per unit of work. If you don’t know what YOUR cost per unit of work is, then your house is NOT in order. DCIM can help you understand unit-of-work costing, but you need to understand what you are missing to get there.

Unit of Work? What the heck is that?

It’s the reason you exist. It’s the reason IT exists. It’s the way you provide capabilities and services to your business and it’s the plan you signed up for when you became an IT professional. If you are the IRS, perhaps the unit of work you care about is processing an individual 1040 tax form.  If you are eBay, then your unit of work might be displaying a page in its marketplace. At the end of the day, you want to know the cost for that unit of work (which for example at eBay is currently running about three-quarters of a penny). Tip: Thinking about cost per unit of work shows that you have traded your technologist hat for a business hat. And it is just common sense that over time your goal MUST BE to reduce the cost per unit of work.
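The arithmetic itself is trivial; the hard part is gathering a defensible fully-loaded cost. A quick sketch using the eBay-style page example above (the monthly dollar and volume figures are invented for illustration, chosen only so the result lands near that three-quarters-of-a-penny mark):

```python
def cost_per_unit_of_work(total_monthly_cost_usd, units_per_month):
    """Fully-loaded monthly IT spend divided by units of work delivered."""
    return total_monthly_cost_usd / units_per_month

# Hypothetical: $450,000/month of fully-loaded cost to serve
# 60 million marketplace page views:
cost = cost_per_unit_of_work(450_000, 60_000_000)
print(f"${cost:.4f} per page")  # $0.0075, i.e. three-quarters of a penny
```

If you can fill in those two numbers for your own shop, and defend them, then your house is on its way to being in order.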

So, which DCIM is right for you to help you get there?

Great question, and one I have been wrestling with for 10 years! Here’s the industry’s dirty little secret: there is no single answer! “DCIM” does not define a single set of capabilities; it refers to the whole roster of solutions that offer insight and/or control over the physical components found in a data center (or co-location facility, same story). Those physical components can be facilities-oriented (CRAC, PDU, genset, etc.) or IT-oriented (servers, switches, storage, etc.). And to date, no single vendor has proven that they can address all (or even most) of the needs of the entire data center with any amount of credibility. (There has been a ton of posturing and wild vendor claims, but the reality is each vendor does one or two things very well, and the rest of YOUR actual and tangible needs will need to be met by additional vendor solutions.)

So here’s my 2016 “DCIM GoTo” list of vendors that can deliver EXACTLY what they say and who spend a lot LESS time posturing about what they really do…

  1. Want great facilities monitoring of well-behaved (network-enabled) mechanical and electrical gear? Then FieldView is a great choice. They collect data, normalize it, and then present it in a clean set of views and reports. Their technology is installed in hundreds of the world’s biggest data centers, and their deployment process is an afternoon project. (Well, it gets a bit more complex when you have older equipment which communicates using ancient protocols, RS232 interfaces, or 20mA signaling, but they can get you there too.) Update: FieldView is now part of Nlyte, so I would expect their sales messages and pricing to be more attractive when combined. I would also expect that the actual technical integrations will follow over the course of the coming year.
  2. Feel like high quality and intelligent power at the rack level is part of the insight you seek? You probably need to get Legrand or ServerTech PDUs. They can offer instrumentation at the outlet level (if you care) or at the branch level (which you need), and each supplies highly capable management software to leverage their hardware PDUs’ intelligence. Above all, they are solid citizens (that won’t fail to deliver on Job #1: providing clean power over long stressful periods of time). Keep in mind that dense racks exceed 20kW these days, so delivering solid power under these high loads is not an easy task. Legrand and STI understand this and can help you too. (And if you happen to have any other brand of intelligent PDU already, head on over to Sunbird for their PowerIQ vendor-neutral software solution to manage ANY brand of intelligent network-attached PDU.)
  3. Want to understand the 1000s of mini workflows associated with all of the devices in a data center and all of the related asset provisioning and decommissioning? Want to see where every IT device is installed and connected at any point in time? Or maybe where the next new server is best placed? Nlyte would be a good choice, and their solution is focused on the contents and context of each rack. At any point in time, they can allow you to visualize all of the racks you maintain in near perfect fidelity, as they exist today or over the course of future projects once completed. Update: See the FieldView notes above, as Nlyte purchased FieldView in February 2016.
  4. How about 3D fly-through views of your data center, with the ability to turn layers on and off to allow focused visibility into the physical subsystems in production? Commscope’s iTracs makes sense. For a facilities and cabling oriented DCIM goal, this could be an essential element, and it is visually stunning to watch. Note: “real work” in a data center gets done in 2D, so be careful not to get enamored by the art of iTracs, and try to stay focused on the value you seek. iTracs is an amazing solution for the maintainers of cabling (data and power) and the associated flooring subsystems.
  5. Feel like working deep through the fiscal model of your data center and studying the impacts of your various proposed changes on Opex and Capex? Want to quantify the TCO in a defendable way? CFO breathing down your neck to translate your tech-speak into dollars and timeframes? Think Romonet, as they understand every aspect of the data center business and allow each component to be included in their fiscal modelling.
  6. And what about those environmental sensors? So many data centers are still running blind when it comes to heat and humidity sensing, and yet the technology is cheap and effective. Temperature and humidity may no longer be a major factor in wide-scale equipment failure (since the manufacturers have widened the acceptable ranges quite a bit), but those issues play havoc with energy COSTS. RFcode does a great job of delivering sensors that work hard and feed into very visual, at-a-glance dashboards. They are installed in seconds and begin reporting immediately. They are low-cost, and the included batteries last for years! Even better, their sensor data feeds are natively supported by most monitoring solutions on the market today.
  7. And finally, Discovery. This is an age-old problem. It would be great if some magical software could figure out WHERE a server or switch was installed, but thanks to the 80-year-old 19-inch rack standard, we are not going to see that in any non-proprietary fashion anytime soon. There simply is no agreed mechanism to do so at the rack level, and even the new OCP rack spec ignores this physical placement need once again. That said, No Limits Software is currently the deepest and least intrusive way to figure out WHAT is installed in the rack. It digs deep inside the device operating software and paints a clear picture, right down to firmware version numbers. Discovery can play a huge part in profiling what you have in the data center (firmware versions, installed software, etc.) and, when fully executed, can provide an accurate inventory of everything you have.

So what about Emerson and Schneider? It is still not clear what their end-game is. They DO provide value to their customers, but tend to have their maximum value within their own installed base. The result: many end users consider Emerson and Schneider as element managers for users of their own equipment. Most exciting, as data centers become more software-defined, both Emerson and Schneider have indicated that they have a solid set of offerings and a longer-term vision for delivery of automated and self-adjusting data centers over time (Emerson has a video on YouTube which puts that timeframe around the year 2025). And what happened to CA? Hmm… After they backed away from the whole DCIM wave recently, my expectation is that they will simply extend their existing ITSM and ITAM tools to include some of the extended asset information being sought.

Make 2016 the year that you challenge yourself to be an active listener when talking to DCIM providers. If you listen carefully you’ll hear them say what they do REALLY well, and you’ll hear what they consider a minor area of interest. Try not to put words in their mouths. They want to say “yes” to most everything, so be smarter this year. Understand that for the foreseeable future, you’ll need multiple tools to understand the physical layer and the resulting business metrics that go with it.

Moore’s Law – It’s about Embracing the Business Opportunity

Gordon Moore’s 1965 Graphic about technology doubling

I love pondering the last 50 years of computing innovation. Although I knew nothing about technology in the mid-sixties when Gordon Moore observed that the number of components in integrated circuits doubles every 12 months, his observation has been a guideline influencing literally millions of subsequent business choices made by vendors and end-users alike for much of that period.

Now the curious thing is that Gordon Moore changed his projected timeframe to 24 months in the mid-seventies, at the very beginning of the first generation of the multi-purpose CPU revolution (think of the general-purpose Intel 8080). He realized that building multi-purpose CPUs was a much bigger undertaking than the function-level integrated circuits (single-function chips like the 7400 Series) that were the state of the art until that point.

Wait. In 1965 Gordon Moore said component counts double every “12 months”, and then, when big bad chips (like the 8080) were in their infancy ten years later, he said the doubling rate had slowed to “24 months”. Yet everything you and I read today quotes “Moore’s Law” (which really isn’t a LAW at all) as a doubling every “18 months”. What gives? Well, marketing does. Some clever marketing soul realized that the only way to make the facts and the fiction ‘kind of align’ was to take the average: 18 months in this case. It was believable, defendable, and has stood the test of time (with just a bit of hand-waving required).
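The gap between those three figures is easy to see with a little compounding arithmetic (a quick sketch, nothing more):

```python
def growth_factor(years, doubling_period_months):
    """How many times capacity multiplies over `years` for a given doubling period."""
    return 2 ** (years * 12 / doubling_period_months)

# Over one decade:
for months in (12, 18, 24):
    print(f"{months}-month doubling -> {growth_factor(10, months):,.0f}x")
# 12-month doubling -> 1,024x
# 18-month doubling -> 102x
# 24-month doubling -> 32x
```

Note that the “average” 18-month period lands nowhere near the arithmetic middle of the other two outcomes, which is exactly why the figure only works with a bit of hand-waving.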

Transistor counts for CPUs have loosely followed Gordon Moore’s observation

So, does it really matter which number is more accurate? No, not really. The point is that every year or two, most technology things double in capacity AND halve in cost at the component level. Servers become twice as capable every couple of years. Network transport doubles too. And when you compound this effect over any reasonable period of time, it becomes staggering. In fact, we store more information in one day today than we did in all of the 1980s. Most importantly, we don’t build technology for technology’s sake; we do so to access the VALUE of all of this information, which doubles too!

And with technologies like the Internet of Things and Software-Defined networking and storage, the rate of this doubling is accelerating. We as an industry are like a voracious animal, feeding on information with nothing but opportunity and creativity to guide us. The social experience is getting your 3-year-old daughter and your 93-year-old grandmother into the game too. And all of it is made possible by the new generation of Information Technology, which is doubling per the curve. Not the IT that existed when Gordon made his observations, but the IT that sits in your hand right now and is connected to the world. Keep in mind that the Facebook main screen you probably looked at this morning during your first cup of coffee actually consists of a hundred or more applications working together, each driving some portion of your experience. Each app communicates with the others to bring you a rich, fun and VALUABLE experience. That is why we all do what we do in the tech industry, that’s where it all shines, and that is why this doubling concept is so essential.

At the end of the day, massive transformations of nearly every sector of business are happening to take advantage of this new IT. Finally, the business is driving technology. Finance and Education, Government and Aerospace, Entertainment and Internet… the most successful businesses are re-tooling themselves to embrace and leverage these new technologies, knowing that everything they do today will be HALF of what their opportunity is next year.

Thanks Gordon…

DCIM Facts versus Myth – Time for a Reality Check!

Facts versus Myths for DCIM

Last week we conducted an online webinar devoted to discussing the common misunderstandings and myths associated with DCIM. Over 400 people registered for the webinar, and we had a ton of questions and comments afterwards. It’s very clear that DCIM is a brand new category of solution for many of the attendees, and there are many assumptions and incorrect data points preventing many end-users from realizing the benefits of DCIM.

I have selected a handful of the more popular myths we explored during the webinar, and present them here along with a more detailed narrative about the reason for each “myth” and the informed facts that should be considered instead. My goal is to provide the necessary DCIM facts for your consideration, and to seed your thought processes as you begin your DCIM journey. Read the whole article on the Nlyte Blog.

Exposing IT Value to the Business

The IT industry is currently experiencing an amazing transformation. Whereas most long-term IT professionals have spent their careers creating and supporting increasingly complex IT structures with primary metrics of availability and uptime, the new CIO mantra has become service delivery at the right cost. In effect, CIOs are formalizing many previous efforts and creating service products that can be delivered upon request, with a keen understanding of the costs incurred to deliver those services. CIOs today think about these Service Portfolios as the means to set expectations on how technology will be adopted, how it will be supported, and at what cost those technologies will come. See more here.
