Tuesday, February 02, 2010

The Open Group's Cloud Work Group Advances Understanding of Cloud-Use Benefits for Enterprises

Transcript of a BriefingsDirect podcast on The Open Group's efforts to help IT and businesses understand how to best exploit cloud computing.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group. Follow the conference on Twitter: #OGSEA.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the ongoing activities of The Open Group’s Cloud Computing Work Group. We'll meet and talk to the new co-chairmen of the Cloud Work Group, learn about their roles and expectations, and get a first-hand account of the group’s 2010 plans.

We'll look at the evolution of cloud, how businesses are grappling with that, and how they can learn to best exploit cloud-computing benefits, while fully understanding and controlling the risks. The Open Group's Architecture Practitioners and Security Practitioners conferences are this week in Seattle.

In many ways, cloud computing marks an inflection point for many different elements of IT, and forms a convergence of other infrastructure categories that weren’t necessarily working in concert in the past. That makes cloud interesting, relevant, and potentially dramatic in its impact. What has been less clear is how businesses stand to benefit. What are the likely paybacks, and how can enterprises prepare for the best outcomes?

We're here with an executive from The Open Group, as well as the new co-chairmen of the Cloud Work Group, to look at the business implications of cloud computing and how to get a better handle on the whole subject.

Please join me in welcoming David Lounsbury, Vice President for Collaboration Services at The Open Group. Welcome, David.

David Lounsbury: Thank you, Dana. Happy to be here.

Gardner: We're also here with Karl Kay, IT Architecture Executive with Bank of America, and one of the co-chairmen of The Open Group’s Cloud Work Group. Welcome to the show, Karl.

Karl Kay: Thank you, Dana.

Gardner: We're also here with Robert Orshaw, IBM Cloud Computing Executive, and also the co-chair of the Cloud Work Group. Welcome to the show, Robert.

Robert Orshaw: Hi, everyone. Thanks for inviting us.

Gardner: Let's start out with a look at cloud generally and take stock of the state of the art -- not necessarily the state of the art of the technology, but of its adoption. Let's start with you, David Lounsbury. What's being done with cloud adoption, and where are there some gaps in understanding or even in expectations of paybacks?

Lounsbury: One of the things that everybody has seen in cloud is that there has been a lot of take-up by small to medium businesses, which benefit from the low capital expenditure and scalability of cloud computing, and also a lot by individuals who use software as a service (SaaS). We've all seen Google Docs and things like that. That's fueled a lot of the discussion of cloud computing up to now, and it's a very healthy part of what's going on there.

But, as we get into larger enterprises, there's a whole different set of questions that has to be asked about return on investment (ROI) and how you merge things with the existing IT infrastructure. Is it going to meet the security, privacy, and regulatory needs of my corporation? It's an expanded set of questions that might not be asked by smaller companies. That's an area where The Open Group is trying to focus some of its activities.

Gardner: Robert Orshaw, congratulations on being named to the group as a co-chair. How do you think things are different now than what people expected a few years ago in terms of how cloud is rolling out and being adopted?

We're there

Orshaw: A few years ago, there was a tremendous amount of hype, and the dynamics, flexibility, and pricing structures weren't there. It's an exciting time, because now, from a flexibility, dynamics, and pricing standpoint, we're there. That's true in both the private-cloud and the public-cloud sectors -- and we'll probably get into more detail about the offerings around that.

A tremendous amount has happened over the past few years to improve the market adoption and overall usability of both public and private clouds.

Gardner: Karl Kay, as an architect, what is it about cloud computing that appeals to you specifically, and what do you need to do in order to convince the business side of some of those benefits?

Kay: Certainly, the leading items, like cost savings and time to market, are two of the big motivators that draw us to cloud. In a lot of cases, our businesses are driving IT to adopt cloud, as opposed to the opposite. It's really a matter of how we blend the cloud environment in with all of our security and regulatory requirements, and how we make it fit within the enterprise suite of platform offerings.

Gardner: David Lounsbury, that’s an interesting observation -- that it's the business side that wants to do this. What do you suppose is holding back the IT side? What do they need to put in place around security, ROI, or spending requirements?

Lounsbury: This is interesting, because I've actually wondered about, and welcome Karl’s view on, whether this is replicating the adoption curve we saw, way back when, in the PC days. People had enterprise IT suites and then said, "I could do the same thing on my laptop or on my personal computer" and it came in that way.

Of course, we have all had interactions with Google Docs -- or name your favorite cloud computing thing -- and have said, "How can I use that at work?" Of course, good business people think about, "There is this new capability out there. How do I turn it into a competitive advantage for my company?"

So, you bring that in, but there is a whole different scale that has to occur when you go into an enterprise, where you have got to think of all the users in the enterprise. What does it take to fund it? What does it take to secure it, protect the corporate assets and things like that, and integrate it, because you want services to be widely available?

The questions that those bring are: Are there new kinds of cost and ROI decisions that you need to make? Do we have the tools out there to say how to do an ROI analysis for a cloud service, in the same way we would be able to do an ROI analysis for investing in a new set of blade servers within our company? That’s one dimension.

The second question that we have seen from our members is, "What are the security questions I should be asking? Are they different from the ones that I've used before?" Cloud, almost necessarily, particularly if you have a hybrid or public cloud involvement, isn't going to be subject to the same level of perimeter security and privacy controls that you've put on your own IT infrastructure. So what is the right set of questions for that?

New interfaces

The third, of course, is architectural. Cloud brings new technologies and new interfaces to those technologies and new business processes to use them, provision them, and things like that. How do I knit those into my corporate IT governance infrastructure?
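
The first of those dimensions -- ROI analysis for a cloud service versus in-house hardware -- lends itself to simple modeling. The sketch below is one hedged illustration in Python; every figure in it (capital cost, operating cost, instance count, hourly rate) is a hypothetical placeholder rather than a number from this discussion, and a real analysis would add utilization, migration, and staffing terms.

```python
# A minimal, illustrative ROI comparison. All figures are hypothetical
# placeholders, not benchmarks from this podcast or from any vendor.

def on_premises_cost(years, server_capex=250_000, annual_ops=60_000):
    """Total cost of buying and running an in-house blade-server pool."""
    return server_capex + annual_ops * years

def cloud_cost(years, avg_instances=30, hourly_rate=0.50, hours_per_year=8760):
    """Total pay-as-you-go cost for an equivalent cloud footprint."""
    return avg_instances * hourly_rate * hours_per_year * years

for years in (1, 3, 5):
    onprem, cloud = on_premises_cost(years), cloud_cost(years)
    winner = "cloud" if cloud < onprem else "on-premises"
    print(f"{years} yr: on-prem ${onprem:,.0f} vs cloud ${cloud:,.0f} -> {winner}")
```

With these made-up inputs, the cloud option wins over short horizons and the in-house option wins at five years -- exactly the kind of break-even question a standardized ROI template is meant to answer.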

Those are the kinds of questions that are being asked by corporations as they move up. Now, I'll ask Robert, because he's on the side that's providing many of these services, and we can see whether he's hearing some of those same questions from his perspective as well.

Orshaw: Yes. In fact, in a former life, I was CIO of a large industrial manufacturing company that had 49 separate business units.

Cloud today can be an issue in the beginning for CIOs. For example, at that large manufacturing company, in order for a business unit to provision new development and test environments, or production environments for implementing new applications and new systems, it would have to go through an approval process, which could take a significant amount of time.

Once approved, we would have centralized data centers and outsourced data centers. We would have to go through and see if there was existing capacity. If there wasn’t, we would then go ahead and procure that and install it. So, we're talking weeks, and perhaps even a few months, to provision and get a business unit up and running for their various projects.

These autonomous business units, which weren't very happy with that internal service to begin with, are now finding it very easy to go out with a credit card or a local purchase order to Amazon, IBM, and others and get these environments provisioned for them in minutes.

This is creating a headache for a lot of CIOs, where there is a proliferation of virtual cloud environments and platforms being used by their business units, and they don’t even know about it. They don’t have control over it. They don’t even know how much they're spending. So, the cloud group can have a significant effect on this, helping improve that environment.
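
To make "provisioned in minutes" concrete, here is a minimal sketch of that self-service path using the boto library for Amazon EC2. The region, AMI ID, and key-pair name are hypothetical placeholders, and this is one plausible call pattern rather than a recommended setup.

```python
# Illustrative only: the AMI ID and key name are placeholders, and boto
# reads AWS credentials from the environment or its configuration file.
import time
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# One API call stands in for what used to be weeks of approvals,
# capacity checks, and hardware procurement.
reservation = conn.run_instances(
    "ami-12345678",            # hypothetical machine image
    instance_type="m1.small",  # a small development-class instance
    key_name="dev-team-key",   # hypothetical SSH key pair
)

instance = reservation.instances[0]
while instance.state != "running":
    time.sleep(5)              # poll until the instance is live
    instance.update()

print("Development server ready at", instance.public_dns_name)
```

That same ease is precisely what creates the visibility and spending problem described above, which is why governance has to be part of the model.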

Gardner: Let's learn a little bit more about the Cloud Work Group. David, could I ask you to briefly describe The Open Group, its heritage, and its role, for those listeners who might not be that familiar with it?

Lounsbury: The Open Group is a member-based consortium with the vision of boundaryless information flow: getting the right information to the right people at the right time.

And we also have a byline of making standards work, and that, for me, is in the DNA of The Open Group. We want to consider things, not just from a technical perspective, but also from how businesses are going to adopt the capabilities and technology that are delivered by open standards and emerging standards like cloud.

Number of activities

There are a number of activities inside The Open Group. Enterprise architecture is a very large one, but also real-time and embedded systems for control systems and things of that nature. We've got a very active security program, and also, of course, we've got some more emerging technologically focused areas like service oriented architecture (SOA) and cloud computing.

We have a global organization with a large number of industrial members. As you've seen, from our cloud group, we always try to make sure that this is a perspective that’s balanced between the supply side and the buy side. We're not just saying what a vendor thinks is the greatest new technology, but we also bring in the viewpoint of the consumers of the technology, like a CIO, or as Karl represents on the Cloud Group, an architect on the design side. We make sure that we're balancing the interests.

Gardner: So, as you cross the chasms between these different constituencies and groups, it seems that with cloud we're now, in a sense, crossing the chasm between the expectations and requirements on the business side, and what IT now needs to bring to the table in terms of making cloud computing safe and reliable for what they consider to be mission critical or enterprise ready.

Could any of you give me a quick history of how the Cloud Work Group came about, and perhaps an encapsulation of its mission and goals?

Lounsbury: As I mentioned, The Open Group is a member-led consortium. Our members, over the past year or so, have been growing in interest in cloud. We did a number of presentations reaching back to our Seattle conference about a year ago on cloud computing. We've reached out to other organizations to work with them to see if there is interest in working together on cloud activities. We've staged a series of presentations.

The members decided in mid-2009 to form a work group around cloud computing. The work group is a way that we can bring together all aspects of what's going on in The Open Group, because cloud computing touches a lot of areas: security, architecture, technology, and all those things. Also, as part of that we've reached out to other communities to open a nonmember aspect of the Cloud Work Group as well.

The work group was formed in 2009 and, towards the end of the year, we went through the necessary formation steps, setting up the governance and, as you have seen, electing the co-chairs. From October 2009 onwards, we've gotten about 500 participants virtually, and that represents about 85-90 companies participating.

They went through a fast exercise to organize themselves into groups, and that's happened. We've now got four activities approved within the Cloud Work Group: one on business artifacts, a business use-cases work group, our SOA and service-oriented infrastructure (SOI) architecture merger work group -- I know that's not quite the right name -- and also a group that's starting to look at security in the cloud.

Gardner: Karl Kay, what are your expectations? What are your hopes for what can be accomplished in the near term with the work group?

Kay: All the work groups are really focused on trying to deliver some short-term value and get those items out. The business use-cases group is trying to define a clear set of business cases and financial models, to make it easier to understand how to evaluate cloud in certain scenarios -- how do you determine whether it makes sense -- and to build consistency across those evaluations. They're working not only within their own group, but also with groups like the Google Use Case Group and some of the other use-case groups that are out there.

The cloud architecture group is looking to deliver a reference architecture in 2010. One of the things we've discovered is that there are a lot of similarities between the reference architecture that we believe we need for cloud and what already has been built in the SOA reference architectures. I think we'll see a lot of alignment there. There are probably some other elements that will be added, but there's a lot of synergy between the work that’s already going on in SOA and SOI and the work that we are doing in cloud.

Gardner: Robert, do you have any further comments on your expectations and where you think the group can go in the next year or two?

Interrelated groups

Orshaw: I'm excited about the way we've formatted this, because all of the groups are interrelated. We have a steering committee that brings these groups together to define the parallel points and the collision points between them.

For example, on all of these, we're starting with a business use case. Why, from a business perspective, would you use public? Why would you use private? What are the business benefits around that? And then, what are the reference architectures to achieve that? What are the security models necessary to achieve that? What's the SOA model associated with all of that?

At the end of this, we'll have a complete model for both public and private cloud. It's an exciting endeavor by the team, and I'm excited to see the outcome. We'll have short-term milestones, where we'll produce, document, and publish results every two months or so. We hope, towards the end of the year, to have all of these wrapped up into these global models that I described.

Gardner: How about the skill sets? As I've been listening to you describe some of the challenges, it strikes me that perhaps we are talking about different skill sets. Or, perhaps we're looking at skill sets we apply to architecture or other frameworks and can now apply to cloud. Is there a distinct cloud skill set, or are we really continuing on some sort of a maturation of the role of architect and IT leadership?

Orshaw: We have a great example of that in the work group and even with the co-chairs. I come from a business background. I ran an application service provider business. I ran IBM’s hosting and applications management business, and I'm a cloud business executive. Karl is a leader on the cloud architecture side, the more technical side. So, as co-chairs, we bring both sides to it. Then, throughout the subcommittees, we have varying skill sets that make up these committees.

On the business use cases, we have people from both the business side and the technical side, and that's scattered throughout the rest of the teams as well. It's a very nice balance. Karl, do you want to add a few comments to that?

Kay: We're seeing a skill-set change on the technical side, in that, if you look at the adoption of cloud, you shift from being able to directly control your environments and make changes from a technical perspective, to working with a contractual service level agreement (SLA) type of model. So it's definitely a change for a lot of the engineers and architects working on the technical side of the cloud.
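
That shift can be illustrated with a small sketch: rather than logging into a server to fix it, the engineer measures delivered availability against the contracted target. The SLA figure and outage log below are hypothetical placeholders, not terms from any real agreement.

```python
# Hypothetical outage log for one month, in minutes of downtime per incident.
outages_minutes = [12, 45, 7]

MINUTES_IN_MONTH = 30 * 24 * 60
SLA_TARGET = 99.9  # hypothetical contractual availability target, in percent

downtime = sum(outages_minutes)
availability = 100.0 * (MINUTES_IN_MONTH - downtime) / MINUTES_IN_MONTH

print(f"Delivered availability: {availability:.3f}%")
if availability < SLA_TARGET:
    print("SLA breached -- pursue the contractual remedies, not the server")
else:
    print("SLA met")
```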

Gardner: David, do you have anything further on what's needed in the field in terms of skills, certification, or other advancement or changes?

Lounsbury: So many technological innovations start out as a bit of a "wild west." One of the things you have to think about is the body of knowledge that you need to have available to you in order to make effective business use of this. That’s why you see the emphasis on some of the artifacts that are being produced by the cloud group. We've got the business use-case template and financial templates under production, adoption strategy work, and some metadata to help you analyze and categorize stuff.

We're starting to build up that body of knowledge and separate the wheat from the chaff in terms of real business value and hype. That’s necessary. But, then you're also going to face the issue of how to determine the people who have that body of knowledge. That’s something for downstream, but it's something that every business person must be thinking about. I'm sure that every consultant out there just added "cloud computing expert" to their resume. How do you know who those people are?

But, that’s a thing for the future. Right now, we have to focus on getting that body of knowledge in place for business people to use and assess what's going on in cloud computing.

Gardner: I know it’s a bit early, but do we have any examples of enterprises that have already dabbled in cloud computing, experimented, and then adopted it at a certain level? Do we have any metrics of success or paybacks that we can take away from that as an indicator or bellwether of where others might be heading?

Wireless network example

Orshaw: We have almost 200 examples here, but I'll highlight one. SK Telecom, Korea's largest wireless provider, has created a public cloud for its partners, where the partners can develop, and then put into production, WAP services for wireless devices on that wireless network. It's completely a public cloud that offers both a development platform and a SaaS model to the WAP devices and to their customers. That's a terrific, terrific model.

There are examples of several large banks now signing up for the SaaS model of email and collaboration. Several very large corporations in the Fortune 100 are starting to use cloud for non-production environments of all types. As opposed to purchasing hardware and building it on their own data centers in the old traditional way, they're signing alliances with various cloud providers for non-production development platforms.

Gardner: Karl, do you have any favorite examples that perhaps illustrate in your mind the potential for cloud computing?

Kay: That would be the development environment that Robert mentioned. Among most of our peers in the Fortune 100, almost everybody has some development project out there, and they're seeing pretty quick return on investment in terms of time to market, getting things up and running, flexibility, and not expending capital on short-term hardware. That’s a pretty powerful use case where it's easy to demonstrate value.

Lounsbury: Dana, if I could add one, the one thing we don't want to ignore here is the ability of cloud computing to enable new lines of business on a scale that might not have been feasible if you had to have your own dedicated infrastructure.

There was a great example from our Hong Kong conference -- which is available on The Open Group's website, www.opengroup.org -- of a security company that put up web-enabled security cameras, very low-cost items. They put them in premises that somebody wanted to monitor. Then, they put the imagery from the security cameras up in the cloud. In the cloud, they could do analysis on motion, sound, and things like that to assess whether there was an intrusion or not.

It could be done at a much more sophisticated level than it could be in any single small security device. Of course, the cloud also made it much easier for them to make the feed available to anybody around the world who was allowed to monitor those premises.
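
The motion analysis mentioned here can be approximated surprisingly simply. Below is a hedged sketch of frame differencing with NumPy on synthetic data; a production service would decode real camera frames and use far more robust detectors, which is exactly the sophistication that cloud-side compute makes affordable.

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, threshold=25, min_changed=0.01):
    """Flag motion when enough pixels change significantly between frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = np.count_nonzero(diff > threshold) / diff.size
    return changed_fraction > min_changed

# Two synthetic 8-bit grayscale frames standing in for camera uploads.
rng = np.random.default_rng(0)
frame_a = rng.integers(0, 50, size=(240, 320), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[100:140, 150:200] = 255  # a bright region appears: the "intruder"

print("Motion detected:", motion_detected(frame_a, frame_b))
```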

There are lots of examples where that global scale delivery of more sophisticated service has really been enabled by the fact that there are computing resources and globally reachable infrastructure out there.

That’s going to be an area that you will see enterprises increasingly take advantage of, not so much for managing ROI or capital expenditure, but for having the technology available to put these new business models and new business capabilities out on a global basis.

Gardner: That’s an interesting point, David. Perhaps many people approach this from an idea of efficiency, of repaving cow paths a little bit better, cutting costs, and maybe reducing IT by outsourcing certain aspects of it. But, you were also talking about being able to do things that couldn’t have been done before.

There is an extended process and innovation capacity here. Experimenting and getting ready will now put you in a position where you can take advantage of some of these new business models and do things that, as you say, couldn’t have been done before. Do you have any thoughts along those lines, some of the future implications of cloud computing?

Ability to scale

Lounsbury: Certainly, cloud brings the ability to scale, and scale quickly, that we haven’t had, at least not at a cost-effective level. There are a lot of opportunities to tackle problem sets that we wouldn't even tackle before, because it was cost-prohibitive. Now, with cloud, there's an opportunity to take on those problems, use those resources, and then release those resources back into the pool.

Orshaw: That’s a good point, because the fact that you've got that scalability without capital expenditure really lowers the risk of trying out a new innovative business model.

Lounsbury: Google is a perfect example. Their whole technology model doesn't work without massive scale. Other businesses have problems to which they can now apply those same economies of scale. We can tackle those problems.

Gardner: So it's an opportunity to really reduce the risk from financial exposure, when you can try out new business models, but without necessarily having to build out the underlying infrastructure to do so.

Lounsbury: Right.

Gardner: David Lounsbury, do you have any other thoughts about relaying what’s going to be happening at The Open Group’s conference in Seattle in February, in terms of the work group, and perhaps let people know how they might learn more or even get involved?

Lounsbury: The best thing to do is go to www.opengroup.org, where you'll see the Seattle conference prominently featured. We've got some great presenters there. We've got Peter Coffee from Salesforce.com and Tim Brown from CA. We've got an interesting formal debate on "Is the cloud more or less secure than enterprise IT?" between Peter Coffee and the CISO of the University of Washington. We've got some technical discussions on cloud taxonomies from Hewlett-Packard and Fujitsu. So it's going to be a really exciting conference.

We also have a "Cloud Camp" in the evening, so that people can come and discuss their cloud directions and needs in a more unstructured way. That is open to members and non-members. So, I just invite everybody in the area to make sure that they check out the site and sign up for it.

We have a public list for our Cloud Work Group. If you want to see what's going on in the Cloud Group, we have what I call our "cloudster's list," and you can sign up for it from that site.

Gardner: Very good. I want to thank you very much for participating. We've been talking about the ongoing activity of The Open Group’s Cloud Work Group. Joining us has been David Lounsbury, Vice President of Collaboration Services at The Open Group. Thank you very much, David.

Lounsbury: You're welcome. Thank you for the invitation.

Gardner: We've also been joined by Karl Kay, IT Architecture Executive at Bank of America, and one of the new co-chairs of the work group. Thank you, Karl.

Kay: Thank you for the opportunity.

Gardner: And also, Robert Orshaw, IBM Cloud Computing Executive and the other co-chair of the work group. I appreciate your input, Robert.

Orshaw: Yes, indeed. Thank you very much.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group. Follow the conference on Twitter: #OGSEA.

Transcript of a BriefingsDirect podcast on The Open Group's efforts to help IT and businesses understand how to best exploit cloud computing. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

Security, Simplicity and Control Ease Make Desktop Virtualization Ready for Enterprise Uptake

Transcript of a BriefingsDirect podcast on the future of desktop virtualization and how enterprises can benefit from moving to this model.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we provide a sponsored podcast discussion on the growing interest and value in PC desktop virtualization strategies and approaches. Recently, a lot has happened technically that has matured the performance and economic benefits of desktop virtualization and the use of thin-client devices.

In desktop virtualization, the workhorse is the server, and the client assists. This allows for easier management, support, upgrades, provisioning, and control of data and applications. Users can also take their unique desktop experience to any supported device, connect, and pick up where they left off. And, there are now new offline benefits too.

At the same time as this functional maturity has improved, we're approaching an inflection point in a market that is accepting of new clients and new client approaches, like desktop virtualization.

Indeed, the latest desktop virtualization model empowers enterprises with lower total costs, greater management of software, tighter security, and the ability to exploit low-cost, low-energy thin client devices. It's an offer that more enterprises are going to find hard to refuse.

Here now to help us learn more about the role and outlook for desktop virtualization, we're joined by Jeff Groudan, vice president of Thin Computing Solutions at HP. Welcome to the show, Jeff.

Jeff Groudan: Thanks for having me, Dana.

Gardner: As I mentioned, there's a lot happening in the market that supports more interest in virtualization generally. We see server, storage, and network virtualization, and now this desktop variety, really catching on. I think it's because of the economics.

Market drivers

Groudan: There certainly are some things in the market that are driving a potential inflection point here. Coming out of the recession, a lot of customers are open to re-examining deployments that they may have delayed, or specific IT projects that they had put on hold.

In addition, there has been an ongoing desire to increase security, and there are a lot of new compliance requirements that customers have to address. And, as they look for ways to save on costs, they are constantly looking for more efficient ways to manage their distributed PC environments. All of these things are driving the high level of interest in client virtualization.

Gardner: With regard to this pent-up demand issue, we've certainly seen the Windows desktop environment, the operating system, now coming out with a very important upgrade and improvement in Windows 7. We've also seen, of course, some improvements in the hypervisor market for desktop virtualization. Do you have any sense of where this pent-up demand is really going to lead in terms of growth?

Groudan: In addition to the market drivers, we're seeing technology drivers that also are going to help line up for a real uptick in the size and rate of deployments on client virtualization.

You touched on the operating-system trends. There was some pause in operating-system upgrades with Vista, as companies waited for Windows 7. With that now out, in addition to Server 2008 R2 from Microsoft and updates from other virtualization software providers, you're seeing a maturing of the client virtualization software in conjunction with the maturing of the next-generation Microsoft operating systems, and that is a catalyst here.

You're also seeing better performance on the hardware side and the infrastructure side. That's helping bring the cost per seat of a client virtualization deployment down into ranges that are a lot more interesting for large deployments. Last, and near and dear to my heart, you're seeing more powerful, yet cost-effective, thin clients that you can put on the desk and that really ensure those end-users get the experience that you want them to get.

Gardner: It seems like enterprises are going to be faced with some major decisions about their client strategies, and if you are going to be facing this inflection point you might as well look at the full panoply of options at your disposal.

Groudan: Absolutely. Just to put it into context, there was recently some data from Gartner. They feel like there are well over 600 million desktop PCs in offices today. Their belief is that over the next five years, upwards of 15 percent of those could be replaced by thin clients. So that's quite a number of redeployments and quite an inflection point for client virtualization.

Gardner: I suppose another motivation for IT departments and enterprises is that they're looking at security, compliance, and regulatory issues that also make them re-evaluate their management approach as to how data and applications are delivered.

Security nightmare

Groudan: Absolutely. There are a variety of areas that are relevant for customers to look at right now. On security, you're absolutely right. Every IT manager's nightmare scenario is to have their company on the front page of The Wall Street Journal, talking about a lost laptop, a hack, or some other way that personal data, patient data, or financial data somehow got out of their control into the wrong hands.

One of the key benefits of client virtualization is the ability to keep all the data behind the firewall in the data center and deploy thin clients to the edge of the network. Those thin clients, by design, don't have any local data.

Gardner: I suppose another relevant aspect of this is that it's not necessarily rip-and-replace. You are not going to take 600 million PCs and put in thin clients, but you can start working at the edge to identify certain classes of users, certain application sets, perhaps a call center environment, and start working on this on a graduated basis.

Groudan: You certainly can. Our general coaching to customers is that it's not necessary for everyone, for every user group, or for every application set. But it certainly fits environments where you need to make things more manageable and you need more flexibility.

You need higher degrees of automation to manage a high number of distributed PCs, with the benefits of centralized control, reduced labor costs, and the ability to manage remote or hard-to-reach locations -- things like branches, where you don't have local IT. Those are great targets for early client virtualization deployments.

Gardner: I suppose another big issue in the marketplace now is how to increase automation. When you control the desktop experience from a server or data-center infrastructure, you've got that opportunity to automate these processes and get off that treadmill of trying to deal with each and every end point physically or at least through a labor approach.

Groudan: Exactly. When you think about the cost savings of client virtualization, some of the savings come from lower long-term acquisition costs. Because the lifecycle of these solutions is closer to four or five years, you aren't acquiring the same amount of equipment on the same cadence.

But, the big savings come from the people savings. The automation and the manageability mean you need fewer people dedicated to managing distributed PCs and the break-fix and help desk associated with that.

You can do two things with those efficiencies. You can cut some cost, which, at some point, is the right approach. Increasingly, though, what we see is that rather than just cut cost, people redeploy resources toward value-generating activities, instead of running a cost center that exists just to manage PCs. You can take resources and focus them on value-add projects that add to the bottom line from a business-efficiency perspective, versus just cost.

Gardner: That raises an interesting point in another way, because the total solution here has to involve the data-center operators, the architects, and then the PC edge-client folks. These may have been separate groups in some organizations, but what's HP's advice? Are you encouraging more collaboration and cooperation to strategize between the client group and the side delivering the infrastructure?

Think beyond technical

Groudan: You really need to. That's been one of the inhibitors to earlier growth on client virtualization -- figuring out the business processes to get the data center guys and the edge of the network guys working on a combined plan. One key to success is clearly to be thinking beyond simply the technical architecture to how the business processes inside a company need to change.

All of a sudden, the data-center guys need to be thinking about the end-user. The end-user guys need to be thinking about the data center. Roles and responsibilities need to be hammered out. How do you charge the capital expense versus operational expense? What gets budgeted where? My advice is: as you're thinking about the technical architecture and all of the savings end-to-end, you need to also be thinking about the internal business processes.

Gardner: What that tells me is that this is not just about buying components and slapping in thin clients. This is really something you need to look at from a total-solutions perspective. Do some planning; the more total the approach you take, the bigger the economic payoff will be.

Groudan: That's absolutely right.

Gardner: Let's go back quickly to security. I remember when I first started hearing about desktop virtualization, somebody mentioned to me that all those agencies in Washington with the three-letter acronyms, the spooky guys, are all using desktop virtualization, because they can lock down the device and close off the USB port.

When that thing is shut off or that user logs out, there is no data and no inference. Nothing is left on the client. Everything is on the server. It's how you can really manage security. We are talking about taking that same benefit now to your enterprise users, your road warriors, and perhaps even remote branches. Right?

Groudan: That's absolutely correct. One of the beautiful things about a thin client is that when you unplug it from the network, it's basically a paperweight, and, from a security perspective, thin clients are getting pretty small too. People could take that thin client, put it in their briefcase, walk out with it, and they have nothing. They have no IT assets, no personal data, no R&D secrets, or whatever else there may be.

From a security perspective, they're very, very low power, designed to be remotely managed, and designed to be plug-and-play replaceable. From a remote IT perspective, on the very rare chance that a thin client breaks, you take one from the storage closet where you keep a couple of spares, plug it in, and you're up and running in five or 10 minutes.

Gardner: So, even if all things were equal in terms of the cost of operating and deploying these, just the savings from securing your data and applications seems like a pretty worthwhile incentive?

Groudan: It really does. Not every customer may have that kind of burning need to secure data, but it's a drop-dead simple way of ensuring that there is no data out there on the edge of the network that you don't know about. It really gives you confidence that you know where the data is and that there are limited ways to get at it. If you put the right security processes in place, you know they're going to work, independent of whether thousands of end-users follow all the processes, which is hard to mandate.

Gardner: What does HP mean by desktop virtualization? There has been some looseness around the topic. Some people focus on a business to consumer (B2C) approach, highly scaling, perhaps a limited number of apps, and through a telecom provider. Other folks are now in the market with solutions that are business to employee (B2E), that is your employee-focused solutions. Where does HP come down on this? What do you think is the most important approach and how do you define it in the market?

Views of the market

Groudan: We look at this market in two ways: in the context of client virtualization, and in the broader context of thin computing. Zeroing in on client virtualization -- at HP we call it client virtualization, but it's desktop virtualization; it's the same animal.

We look at it as a specific set of technologies and architectures that disaggregate the elements of a PC, which allows customers to more easily manage and secure their environment. What we're really doing is taking advantage of a lot of the software capabilities that matured on the server side, from a server virtualization and utilization perspective. We're now able to deploy some of those technologies, hypervisors, and protocols on the client side.

We still see it as a fairly B2E-focused paradigm. You can certainly draw up on a whiteboard other models for broader audiences, but today we see most of the attraction and interest in more of a B2E model. As you touched on earlier, it's generally targeted at specific user groups and specific applications, versus everybody in your environment.

Our specific objective is figuring out how to simplify virtualization, so that customers get past the technology, and really start to deliver the full benefit of virtualization, without all the complexity.

Gardner: There is a significant integration aspect of this. We talked about how you've got different groups within IT that are going to be affected, but you've got to be able to integrate component software, hypervisors, and management of data. It's a shift.

Groudan: We were an early entrant in client virtualization, so we've got quite a track record behind us. What we learned led us to focus on a few things.

The first is that you don't want to have customers having to figure out how to architect the stuff on their own. If you think about PCs 20-25 years ago, customers didn't know how to architect a distributed PC environment. In 25 years, everybody has gotten good at it. We're still at the early stages on client virtualization.

So our focus is to deliver more complete, integrated solutions, end to end from the desktop to the data center, and to lay it all out in reference designs, so customers can very comfortably understand how to build out a deployment. They certainly may want to customize it, but we want to get them 80-90 percent of the way there just by telling them what our learnings have been.

The second thing we try to do is give them best-in-class platforms. From a thin-client perspective, this is important, because you need to make sure that end-users actually get the experience they're used to. One of the best ways to stall a deployment is having the end-users say, "Hey, I've got a better experience on my desktop." Having thin clients that are designed from the ground up to deliver a desktop-class experience is really critical.

Last, we need to make sure we've got the right ease-of-use and manageability tools in place, so that IT complexity can be removed. IT staff know they can manage the virtual environments, the physical environments, and the remote thin clients. We don't want to make these things too complex for the IT guys to actually deploy and manage.

Some trepidation

Gardner: Now, there has been some trepidation in the market. People say, "Is this ready for prime-time?" Let's focus a little bit on what's been holding people up. I don't think it's necessarily the software.

When I talk to Microsoft people, they seem to be jazzed about desktop virtualization. Of course, you're still getting a license to use that desktop, and perhaps it's even aligned with a lot of the other server-side products and services that Microsoft provides.

So, there is alignment by the software community. What's been holding up people, when they think of this desktop virtualization?

Groudan: There's been a handful of things. In the early days, there were still some gaps in the experience that end-users would get -- multimedia, remoting, USB peripherals, and those kinds of things. HP and the broader industry ecosystem have done a lot in the past year or two to close those gaps with specific pieces of software, high-performing thin clients, and so on. We're at a point now where you can feel pretty good that end-users are going to get a very relevant experience, compared to a desktop.

Second, the solutions are complicated -- or we let them be complicated -- because we put a lot of components in front of our customers, rather than complete solutions. By delivering more reference-design models and tools, you take away some of the complexity around the design, setup, and configuration that customers were facing in the early days.

Third, management software. Earlier, you didn't have a single tool that would let you manage both the physical and the virtual elements of the desktop virtualization environment. HP and others have closed those gaps, and we have very powerful management tools that make this easy on an IT staff.

Last, it was initially hard to quantify where some of the cost savings would come from. Now, there are total cost of ownership (TCO) analysis tools for understanding where the savings can come from and how you can take advantage of them. It's a lot better understood, and customers are more comfortable that they understand the return on investment (ROI).
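
A TCO comparison of this sort reduces to a small amount of arithmetic. The sketch below is illustrative only -- every number is a hypothetical placeholder, not an HP figure -- but it shows where the hardware-lifecycle and support-labor savings discussed earlier enter the model.

```python
def per_seat_tco(hardware_cost, lifecycle_years, annual_support,
                 annual_infrastructure_share=0.0):
    """Annualized per-seat cost: hardware amortization plus yearly costs."""
    return (hardware_cost / lifecycle_years
            + annual_support
            + annual_infrastructure_share)

# Hypothetical inputs: a desktop PC replaced every three years with heavy
# deskside support, versus a thin client on a five-year cycle carrying a
# per-seat share of the back-end virtualization infrastructure.
pc = per_seat_tco(hardware_cost=900, lifecycle_years=3, annual_support=450)
thin = per_seat_tco(hardware_cost=350, lifecycle_years=5,
                    annual_support=150, annual_infrastructure_share=200)

print(f"Traditional PC:  ${pc:,.0f} per seat per year")
print(f"Thin client/VDI: ${thin:,.0f} per seat per year")
print(f"Indicative saving: ${pc - thin:,.0f} per seat per year")
```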

Gardner: Are there certain types of enterprises that should be looking at this? In my mind, if you've already dived into virtualization, you're getting comfortable with that and you're getting some expertise on it. If you're also thinking about IT shared services in a service bureau approach to IT, your culture and organization might be well aligned to this. Are there any other factors that you can think of, Jeff, that might put up a flag that says, "We're a good candidate for this?"

Groudan: There are opportunities for just about every industry. We've seen certain verticals on the cutting edge of this. Financial services, healthcare, education, and public sector are a few examples of industries that have really embraced this quickly. They have two or three themes in common. One is an acute security need. If you think about healthcare, financial services, and government, they all have very acute needs to secure their environments. That led them to client virtualization relatively quickly.

Parallel needs

Financial services and education both have some consistency around having large groups of knowledge workers in small locations, which lends itself very well to client virtualization deployments. Education and healthcare both have large, remote, campus-type environments, where they need a lot of PCs or desktop virtualization seats across a mobile campus. That's another sort of environment and use case that lends itself very well to these kinds of architectures.

Gardner: As I said earlier, it seems like an offer that's hard to refuse; it's just a matter of getting everything lined up. There are so many rationales that support this. But, in this economy, it's the dollars and cents that are the top concern, and will be for a while.

Do you have any examples of companies that have taken a plunge, done some desktop virtualization, perhaps with a certain class of user, perhaps in a call center environment or remote branch? What's been the experience and what are the paybacks at least economically?

Groudan: I'll give you two examples. The first is out of the education environment. They were trying to figure out how to increase reliability, while improving student access and increasing the efficiency of their IT staff, because schools are always challenged to have sufficient IT resources.

They deployed desktop virtualization with HP infrastructure and thin clients. They felt that they would lower the total cost, increase uptime for the students in the classroom, and increase teacher productivity, because teachers are able to teach instead of trying to maintain classroom PCs that weren't necessarily working. They also freed up their IT staff to go work on other value-added projects.

And, most important for a school, they increased the access and productivity of the students. To make that very real for you: students may only have one or two hours in front of a computer a day in school, and they may be doing many, many different things. So, they don't get that much time on any one application or project in school.

The solution that this Hudson Falls School deployed let the students access those applications from home. So, they could spend two or three hours a night from home on those applications getting very comfortable with them, getting very productive with them, and finishing their projects. It was a real productivity add for the students.

The second example is with Domino's Pizza. Many of us are familiar with them. They were struggling with the challenges of having a lot of remote sites and a lot of terminals that are shared. Supporting those remote sites, trying to maintain reliability, and keeping customer data secure were their burning needs, and they were looking for an alternative solution.

They deployed client virtualization with HP thin clients and they found they could lower their costs on an annual basis by $400 per seat, and they've gotten much longer life out of the terminals. They increased the up-time of the terminals and, by extension, limited the support required on site.

Then, by using this distributed model, where the data is back in a data center somewhere, they really secured customer data, credit card information, and those kinds of things. They're able to rest easy that that kind of information isn't going to somehow get out into the public domain.

Gardner: A couple of things jump out at me from this. All that data back on the server is really going to benefit your business intelligence (BI), analytics, auditing, and reporting activities, compared to having it all out on clients, where you can't easily get to it or manage it.

Value of data mining

Groudan: For any company that has a lot of customer data, the ability to mine that data for trends, information, opportunities, or promotions is incredibly valuable.

Gardner: The other thing that jumped out at me is that this brings up the notion that if this works for PCs and thin clients, what about kiosks? What about public-facing visual interfaces of some kind? Can you give us a hint of what the future holds, if we take this model a step further?

Groudan: Sure, it brings up one of the themes I want to talk about. HP's unique vision is that client virtualization is just one of many ways of using thin computing to enable a lot of different models beyond just replacing the traditional desktop. As you mentioned, anywhere that's hard to get to, hard to maintain, or hard to support is a perfect opportunity to deploy thin computing solutions.

Kiosks and digital signage are generally in remote locations. They can be up on a wall somewhere. The best answer for them is to be connected remotely, so you can manage them from a centralized location.

We certainly see kiosks and signage as a great opportunity for thin computing. We also see opportunities to bring thin computing into the home and into small-to-medium business through cloud applications and services -- we've all seen some of those trends. To me, thin computing ultimately is going to be much broader than the B2E client virtualization models that we're probably most familiar with.

Gardner: Obviously, HP has a lot invested here, a good stake in the future for you. Anything we should expect in the near future in terms of additional innovation, particularly on the B2E side?

Groudan: Yeah, well, I can't talk about it too much, but we certainly have some very exciting launches coming up in the next couple of months, where we're really focused on total cost per seat -- how we let people deploy these kinds of solutions and continue to get further economic benefits, delivering better, tighter integration from the desktop to the data center.

Deployment of these solutions will get easier and easier, and then there are the ease-of-use and manageability tools. They allow the IT guys to roll out large client virtualization deployments with as little touch and as little complexity as we can possibly manage. We're trying to automate these kinds of solutions, and we're very excited about some of the things we'll be delivering to our customers in the next couple of months.

Gardner: Okay, very good. We've been talking about the growing interest and value in PC desktop virtualization strategies and approaches. I've learned quite a bit. I want to thank our guest today, Jeff Groudan, vice president of Thin Computing Solutions at HP. Thanks for joining, Jeff.

Groudan: My pleasure, Dana. Thanks for having us.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the future of desktop virtualization and how enterprises can benefit from moving to this model. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Monday, February 01, 2010

Technology, Process and People Must Combine Smoothly to Achieve Strategic Virtualization Benefits

Transcript of a BriefingsDirect podcast on how to take proper planning, training and management steps to avoid virtualization sprawl and achieve strategic-level benefits.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on planning and implementing data-center virtualization at the strategic-level in enterprises.

Because companies generally begin their use of server virtualization at a tactical level, there is often a complex hurdle in expanding the use of virtualization. Analysts predict that virtualization will support upwards of half of server workloads in just a few years. Yet, we are already seeing gaps between an enterprise’s expectations and their ability to aggressively adopt virtualization without stumbling in some way.

These gaps can involve issues around people, process, and technology -- and often all three in some combination. Process refinement, proper methodology, and swift problem management provide proven risk reduction and surefire ways of avoiding pitfalls as virtualization use moves to higher scale.

The goal becomes one of a lifecycle orchestration and governed management approach to virtualization efforts so that the business outcomes, as well as the desired IT efficiencies, are accomplished.

Areas that typically need to be part of any strategic virtualization drive include sufficient education, skills acquisition, and training. Outsourcing, managed mixed sourcing, and consulting around implementation and operational management are also essential. Then, there are the usual needs around hardware, platforms, and systems, as well as software, testing, and integration.

So, we’re here with a panel of Hewlett-Packard (HP) executives to examine in-depth the challenges of large-scale, successful virtualization adoption. We’ll look at how a supplier like HP can help fill the gaps that can hinder virtualization payoffs.

Please join me in welcoming our panel: Tom Clement, worldwide portfolio manager in HP Education Services. Welcome to BriefingsDirect, Tom.

Tom Clement: Thank you, Dana. Great to be here.

Gardner: We're also here with Bob Meyer, virtualization solutions lead with HP Enterprise Business. Hey, Bob.

Bob Meyer: Hey, Dana.

Gardner: And we’re here with Dionne Morgan, worldwide marketing manager at HP Technology Services. Hello, Dionne.

Dionne Morgan: Hello, Dana.

Gardner: Ortega Pittman, worldwide product marketing, HP Enterprise Services, joins us. Hello, Ortega.

Ortega Pittman: Hi, Dana.

Gardner: And lastly, Ryan Reed, worldwide marketing manager at HP Enterprise Business. Hello, Ryan.

Ryan Reed: Hi, Dana, thanks for having me.

Gardner: I want to start by looking at this notion of a doubling of the workload supported by virtualization in just a few years. Why don’t we start with Bob Meyer? Bob, tell me why companies are aggressively approaching the move from islands of servers to now oceans of servers.

Headlong into virtualization

Meyer: Yeah, it's interesting. People, had they known an economic downturn was coming, might have thought that it would slow down like the rest of IT spending, but the downturn really forced anybody who was on the fence to go headlong into virtualization. Today, we're technically ahead of where we were a year or two ago with virtualization experience.

Everybody has experience with it. Everybody has significant amounts of virtualization in the production environment. They’ve been able to get a handle on what it can do and to see what the real results and tangible benefits are. They can see, especially on the capital-expenditure side, what it can do for their budgets and what benefits it can deliver.

Now, looking forward, people realize the benefits, and they're not looking at it just as an endpoint. They're looking down the road and saying, "Okay, this technology is foundational for cloud computing and some other things." Rather than slowing down, we’ll see those workloads increase.

They went from just single percentage points a year and a half ago to 12-15 percent now. Within two years, people are saying it should be about 50 percent. The technology has matured. People have a lot of experience with it. They like the results they see, and, rather than slowing down, it's bringing efficiency to things like new service models.

Gardner: Ortega Pittman, do you see any other issues around these predictions? The expansion of virtualization seems to be outstripping the skill sets that are available to support it.

Pittman: That's where HP Enterprise Services adds value in meeting customers' needs around skills. Many times, small, medium, and large organizations have the needs, but might not have the skills on hand. In providing our outsourcing services, we have the experienced professionals who can step right in and immediately begin the work and the strategic path toward their business outcomes.

Meeting that skill demand with the ability to get started instantly is something we take a lot of pride in, and a global track record of doing that very well is something HP Enterprise Services brings from an outsourcing perspective.

Gardner: Dionne Morgan, what are some of the risks if folks start embarking on this without necessarily thinking it through at a lifecycle level? Are there some examples you have experienced where the hoped-for benefits -- economic and otherwise, such as agility, flexibility, and elasticity -- ended up being imperiled by a lack of preparation?

Morgan: Many people have probably heard the term "virtual machine sprawl" or "VM sprawl," and that's one of the risks. Part of the reason VM sprawl occurs is that there are no clearly defined processes in place to keep the virtualized environment under control.

Virtualization makes it so easy to deploy a new virtual machine or a new server that, if you don’t have the proper processes in place, you could have more and more of these virtual machines being deployed until you lose control. You lose track of them.

That's why it's very important for our clients to think about not only how they're going to design and build this virtualization solution, but how they're going to continue to manage it on an on-going basis, so they keep it under control and they prevent that VM sprawl from occurring.
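[Editor's Note: To make the sprawl risk concrete, here is a minimal sketch of the kind of inventory review a team might automate as part of those ongoing processes. It is illustrative only: the CSV export, its column names, and the 90-day idle threshold are assumptions for this example, not part of any HP or VMware tooling discussed here.]

```python
import csv
from datetime import datetime, timedelta

# Hypothetical inventory export with columns: name, owner, last_power_on
INVENTORY_FILE = "vm_inventory.csv"
IDLE_CUTOFF = datetime.now() - timedelta(days=90)  # assumed idle threshold

def find_sprawl_candidates(path):
    """Flag VMs with no recorded owner or no recent power-on."""
    orphaned, idle = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["owner"].strip():
                orphaned.append(row["name"])   # nobody is accountable for it
            last_on = datetime.strptime(row["last_power_on"], "%Y-%m-%d")
            if last_on < IDLE_CUTOFF:
                idle.append(row["name"])       # likely forgotten
    return orphaned, idle

if __name__ == "__main__":
    orphaned, idle = find_sprawl_candidates(INVENTORY_FILE)
    print(f"{len(orphaned)} VMs with no owner: {orphaned}")
    print(f"{len(idle)} VMs idle 90+ days: {idle}")
```

The point is the process, not the script: a report like this only curbs sprawl if someone actually reviews it and decommissions what it surfaces.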

Gardner: We’ve talked about this people, process, and technology mixture that needs to come together well. Tom Clement, from the perspective of education, are there things about virtualization that are dramatically or significantly different from what we might consider traditional IT operations or implementation?

Clement: Certainly, there are. When you talk about people, process, and technology, you hit upon the key elements of virtualization project success. There is no doubt in my mind that HP provides best-in-class virtualization technology to our clients, hands down. But our 30-plus years of experience in providing customer training have shown, time and time again, that technology investments by themselves don’t ensure success.

The business results that clients want in virtualization won’t be achieved until those three elements you just mentioned -- technology, process, and people -- are all addressed and aligned.

That's really where training comes in. Our education team can help address both the people and process parts of the equation. Increasing the technical skills of our customers' people is often one of the most effective ways for them to grow, increase their productivity and boost the success rates of their virtualization initiatives.

In fact, an interesting study just last year from IDC found that 60 percent of the factors leading to general success in the IT function are attributable to the skills of the people involved. In that regard, in addition to a suite of technical training, we also offer training in service management, project management, and business analysis, all with an eye to helping customers improve their virtualization projects and connect them to better processes -- just as Dionne was speaking about a moment ago -- and to better process management.

Of course, we have stable and experienced instructors, whose practical, hands-on expertise provides clients with valuable tips and tricks they can use immediately when back on the job. So, Dana, you hit it right on the head. It's when all three of those components -- people, process, and technology -- are addressed, especially in virtualization situations, that customers will maximize the business results they get from their virtualization solutions.

Gardner: We’ve also seen in the field that, as people embark on virtualization and move from the tactical to the strategic, it forces a rethinking of what is core and what might be tangential or commoditized.

Ryan Reed, are we seeing folks who, as they explore virtualization, start also to explore their sourcing options? What are some of the trends that you're seeing around that?

Seeing a shift

Reed: Thank you for asking that question. We do see a shift in the way IT organizations consider what is strategic to their end business function. A lot of that is driven by the analysis that goes into planning a virtual server environment.

When planning something like a virtual server environment, IT organizations have to take a step back and analyze whether this is something they have the core competency to support. Oftentimes, they come to the conclusion that they don’t have the right set of skills, resources, or locations to support those virtual servers, in terms of their data-center location, as well as where those resources are sitting.

So, during the planning of virtual server environments, IT organizations will choose to outsource the planning, the implementation, and the ongoing management of that IT infrastructure to companies like HP.

We apply our best practices and our standard offerings, delivered either from HP data centers or from data centers owned by our clients -- what would be considered an on-premise type of virtual server environment. The infrastructure is then managed by the IT professionals Ortega Pittman mentioned earlier, in either an on-shoring or off-shoring scenario, whichever is the best-case scenario for the IT organization looking for that skilled expertise.

It's definitely a good opportunity for IT organizations to take a step back and look at how they want to have that IT infrastructure managed, and often times outsourcing is a part of that conversation.

Gardner: It also sounds like that rethinking allows them to focus on the things that are most important to them, their applications, their business logic, and their business processes and look to someone else to handle the plumbing. In the analogy of a utility, somebody else provides electricity, while they build and manage the motors. Is that fair?

Reed: That's a very fair statement. Choosing a partner to manage that internal plumbing, as you referred to it, allows the IT organization to get back to basics and to understand how best to provide best-in-class, lowest-cost service to its end users -- increasing business productivity and helping maximize the return on the IT investment. This powers the business outcomes that their end users are looking for.

Gardner: I'm intrigued by this notion that these organizations are going to encounter virtualization sprawl as they try to expand their use of it, and that they'll be exercising different strengths and weaknesses along the way. What are some of the typical gaps? What do we usually see in the field that creates a stumbling block to the wider adoption of virtualization?

Pittman: One of the things we observe in the industry is that many customers will start with a kind of phase one of virtualization. They'll consolidate their servers and maybe stop right there. They get that front-end benefit, but that exhausts the internal plumbing you referred to in a lot of different ways, and it can actually cause challenges and complexities they didn't expect. So, it's a challenge to think that you're going to start with virtualization and not go beyond the hypervisor.

The starting point

We’d like to work with our customers to understand that consolidation is the starting point, but there is a lot more in the broader ecosystem to consider as they think about optimizing their environment.

One of HP’s philosophies is the whole concept of converged infrastructure. That means thinking about the infrastructure more holistically -- addressing the applications, as you said, as well as the server environments -- and not doing one-offs, but looking at the whole to get the full benefit.

Moving forward, that's something we can certainly help customers do from an outsourcing standpoint, enabling all of the parts so there aren’t gaps that cause bigger problems than the one hiccup that started the whole notion of virtualization in the first place.

Gardner: Does anyone else have some observations from the field about what gaps these organizations are encountering as they try to expand virtualization use?

Clement: One of the good things for our clients is the fact that, within HP, we have a great deal of experience and knowledge regarding virtualization. Through no fault of their own, many clients don’t understand or realize the breadth and depth of the virtualization options and alternatives available to them.

The good news is that we at HP have a wide range of training services, ways that we can work with a client to help them figure out what the best implementation options are for them, and then for us to help them make sure that those options are implemented with excellence and truly do result in the business benefits that they desire.

Gardner: Now that you’ve mentioned some of the strengths HP brings to the table, how do you get those to work in concert? It seems that it's a hurdle for these organizations themselves to look at things holistically. When they go to a supplier that has so many different strengths and offerings, how do you customize those offerings individually for each organization? How do they get started?

Morgan: We think about this in terms of their life cycle. We like to start with a strategy discussion, where we have consultants sit down with the client to better understand what they’re trying to accomplish from a business objective perspective. We want to make sure that the customers are thinking about this first from the business perspective. What are their goals? What are they trying to accomplish? And, how can virtualization help them accomplish those goals?

Then, we can also help them with the actual return on investment (ROI) analysis, and we have ROI tools we can use to help them develop that analysis. We have experts to help them with the business justification. We try to take a business approach first and then design the right virtualization solution to help them accomplish those goals.
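[Editor's Note: As a rough illustration of what such an ROI analysis weighs, here is a back-of-the-envelope sketch. Every figure in it -- server counts, consolidation ratio, per-server run cost, migration cost -- is an invented placeholder for this example, not HP data or an HP tool.]

```python
# Back-of-the-envelope ROI for a server-consolidation project.
# All numbers below are illustrative assumptions.

physical_before = 200              # servers before virtualization (assumed)
consolidation_ratio = 8            # VMs per physical host after (assumed)
hosts_after = -(-physical_before // consolidation_ratio)   # ceiling division

cost_per_server_year = 4_000       # power, cooling, space, support ($/yr, assumed)
migration_cost = 150_000           # one-time planning and migration ($, assumed)

annual_savings = (physical_before - hosts_after) * cost_per_server_year
payback_years = migration_cost / annual_savings

print(f"Hosts after consolidation: {hosts_after}")
print(f"Annual run-rate savings:   ${annual_savings:,}")
print(f"Payback period:            {payback_years:.2f} years")
```

A real analysis would also weigh license costs, management tooling, and the staffing questions discussed above, but the structure -- one-time cost against run-rate savings -- stays the same.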

Gardner: It sounds like there's a management element here. As we pointed out a little earlier, IT departments themselves have been divvied up by the type of infrastructure that they were responsible for. That certainly makes a lot of sense, and it follows the development of these different technologies at different times in the past.

Now, we're asking them, as we virtualize, to take an entirely different look, which is more horizontal across this converged infrastructure. Is there a management gap that needs to be filled or at least acknowledged and adjusted to in terms of how IT departments run?

Blurring the connections

Meyer: What it brings into focus is that one thing virtualization does very nicely is blur the connections between the various pieces of infrastructure, and the technology has developed quite a bit to allow that to ebb and flow with the business needs.

And, you're right. The other side of that is getting the people to actually work and plan together. We always talk about virtualization not as an endpoint, but as an enabling technology to get you there.

If you put what we’re talking about in context, the next thing people want to do is perhaps build a private-cloud service delivery model. Those types of things will depend on that cooperation. It's not just virtualization that's driving this; it's really the newer service delivery models. Where people are heading with their services absolutely requires management and a look at new processes as well.

Gardner: In many cases, that requires a third party of some sort to be involved, at least, to get that management shift or acknowledgment under way.

Which of you can offer an example of how an organization moved to a higher level of virtualization and got those payoffs people find so enticing -- a much lower number of servers, a smaller footprint, lower carbon and energy use, lower total cost, and so on? Can you provide an example of an organization that has done that and has also bitten the bullet on some of the management issues that enable that economic benefit?

Morgan: I can give one example. There's an organization called Intrum Justitia, a financial services organization in Europe. We worked with them as they were embarking on their virtualization journey. The challenge they had was that they had multiple organizations and multiple data centers across Europe, and they wanted to consolidate from 40 different locations around Europe into two data centers.

At the same time, they wanted to improve the service level they were providing back to their business. They decided to virtualize, because that would help, of course, with the ability to consolidate and to improve on those service levels.

The way we helped them was by first having that strategy discussion. Then, we helped them design the solution, which included HP BladeSystem servers, VMware software, and EVA storage, as well as other hardware and software products. We went through the full lifecycle with them, helping with the strategy and the design.

We helped them build the solution. We managed their project office. We managed the migration from the 40 locations. Then, once everything was transitioned, we were able to help set them on the right path to managing it going forward. Among the results, they were able to complete that consolidation to the two data centers, and they're beginning to see some of the benefits now.

Gardner: Let me put you on the spot. What do you think HP brought to the table in this example that Intrum wouldn’t have been able to find anywhere else?

For more information on HP's Virtual Services, please go to: www.hp.com/go/virtualization and www.hp.com/go/services.

Wide expertise

Morgan: There are a couple of things. One is that we actually have the expertise, not only in the HP hardware products, but also in the software products. We have the expertise, of course, for the BladeSystem servers and the EVA storage, but also the expertise around VMware.

So, they had hardware and software expertise from one vendor -- from HP. We also have the expertise across the lifecycle, so they could come to one place for strategy, design, development, and the ultimate migration and implementation. It's expertise, as well as a comprehensive, focused lifecycle approach.

Gardner: Are there any other examples of a large-scale, top-tier organization that has moved aggressively into virtualization and had success?

Pittman: Yes, Dana, HP Enterprise Services worked with the Navy/Marine Corps Intranet (NMCI), which is the world’s largest private network, serving and supporting sailors, marines, and civilians in more than 620 locations worldwide.

They were experiencing business challenges around productivity, innovation, and security. Our approach was to consolidate 2,700 physical servers down to 300, reducing outage minutes by almost half. This decreased NMCI’s IT footprint by almost 40 percent and cut carbon emissions by almost 7,000 tons.

Virtualizing the servers in this environment enabled them to eliminate carbon emissions equivalent to taking 3,600 cars off the road for one year. So, there were tremendous improvements in that area. We minimized their downtime, controlled costs, and improved transfer times, transparency, and overall performance.

All of this was done through the outsourced virtualization support of HP Enterprise Services, and we're really proud that it had a huge impact. They were recognized with an award as a result of this virtualization improvement, which was pretty outstanding. We talked a little earlier about the broader benefits customers can expect and the services that help make all of this happen.

Within HP's full portfolio, that includes server management services, data-center modernization, network application services, storage services, web hosting services, and network management services. All combined, they made this happen successfully. We're really proud of that, and it's an example of a very large-scale impact that's reaping a lot of benefit.

Gardner: We've talked about how this can scale up. I suppose it's also interesting that, in the future, as more companies look to virtualization and think about services and infrastructure as a service (IaaS), this could start moving down-market as well. Does anyone have thoughts about how a company like HP, perhaps through its outsourcing capabilities, could bring that same value to an organization smaller than the Navy and Marine Corps?

Mission-critical systems

Reed: What's interesting about NMCI is that, as Ortega mentioned, it is a very large, complex, and mission-critical system. Thousands of servers were virtualized, with a major impact on how the service is delivered. The missions performed on such an infrastructure remain mission-critical. You can't have a much more demanding implementation, because lives actually depend on the successful missions performed on this infrastructure.

Now, if you take that and scale it down to smaller implementations of virtual server environments, the lessons learned, the best practices, the technology, the people, the processes, and the skills are all absolutely relevant to small- and medium-sized businesses.

That's because the standardized procedures for managing this type of infrastructure are documented for our service delivery organizations around the world to take advantage of. They’re repeatable, standardized, and consistently delivered.

Gardner: As we get into the future, and the use of virtualization becomes integral to more companies -- not an island, but more of the ocean they're sailing on -- this changes the way companies function. They'll become more IT-services and service-management oriented. Perhaps they'll have more services orientation in terms of their architecture.

Does anyone have any thoughts about where this leads next, if you bite the bullet, become holistically adept at virtualization, and partner with companies like HP to use the skills and understanding they have and to learn the lessons of the past? What are the next stages or steps? Bob, any thoughts?

Meyer: We mentioned this in the beginning. Virtualization becomes a foundational element for the next set of service delivery models people are looking at. So, from an IT provider’s perspective, if you get virtualization right, if you get the converged infrastructure Ortega was talking about, and you get the right mix and close the skill gaps, you get a strong foundation to move on to things like private cloud, and it really opens up your options for different service delivery models.

With this notion of pushing virtualization out more broadly, the next step puts you in a good place to build on top of those delivery models and ultimately lower the cost and increase the quality of the services you deliver to the business.

Pittman: You asked how it all fits in moving to the future. A recent Gartner report included some key findings. One item reported was that mid-sized businesses are seeking a much more intimate relationship with IT providers. There is a perception out there that they can have a closer relationship with smaller vendors as opposed to large ones.

[Editor's Note: “The penetration of virtual machines in the market at year-end 2008 was 12%; by year-end 2012, it will be nearly 50%.” Source: Gartner, October 7, 2009. Research title: Virtual Machines and Market Share Through 2012. Research ID #G00170437.]

One thing I’d like to put out there for anyone in the IT community who may be thinking about virtualization is that HP offers solutions for small, medium, and large organizations. The way we are set up, in terms of account support with our account leaders, we certainly can meet the needs of small, medium, and large customers alike. We are set up to engage, support, and be that trusted advisor at all three of those levels.

Just to dispel any misconception of "They’re large, and I'm not sure I'm going to get the attention": we're ready, and we have the products and services to deliver the outcomes they're looking for at all levels.

Gardner: Sort of a "have it your way" opportunity.

Pittman: Exactly.

Expertise and flexibility

Clement: Just to follow on to that point, which I think is a great one: as we've been hearing here, it boils down to expertise and flexibility. Does HP have the expertise strategically to help clients of any size? Do we have the expertise from a service delivery perspective, from an instructor perspective, from a course development perspective? The answer is, we do.

Do we provide these services, these products, and these training classes in a variety of flexible ways, and are we willing to tailor them to our clients? The answer, again, is a resounding yes, we are.

Gardner: I wonder if we could offer some concrete ways to get started. Are there some places people can go, some Google searches they should do, as they think about virtualization, about expanding its use, and about managing the risk?

Morgan: There is definitely HP.com. We have many pages on HP.com that talk about virtualization and our virtualization offerings. So, that is one area. They could also contact their local HP representative. If they work with HP-authorized channel partners, they can have discussions with those channel partners as well.

Meyer: There's a very simple way to find out more about virtualization solutions. You can just type in www.hp.com/go/virtualization, and it will take you to the virtualization home page. If you specifically want to find out more about services, it's www.hp.com/go/services. That shortcut will take you right to the relevant information.

Gardner: Well, very good. We've been here with a panel of HP executives examining in-depth the challenges of successful, large-scale virtualization adoption. We looked at some of the ways HP has worked with customers to help them make that leap successfully. I want to thank our panel today. We've been talking with Tom Clement, worldwide portfolio manager in HP Education Services. Thank you, Tom.

Clement: You're most welcome, Dana. Again, thanks for having me.

Gardner: Bob Meyer, virtualization solutions lead, HP Enterprise Business. Thank you Bob.

Meyer: Thank you.

Gardner: Dionne Morgan, worldwide marketing manager, HP Technology Services. Thank you, Dionne.

Morgan: You're welcome.

Gardner: Ortega Pittman, worldwide product marketing, HP Enterprise Services. Thank you.

Pittman: Thank you for having me.

Gardner: And, Ryan Reed, worldwide marketing manager, HP Enterprise Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how to take proper planning, training and management steps to avoid virtualization sprawl and achieve strategic-level benefits. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

For more information on virtualization and how it provides a foundation for private clouds, plan to attend the HP Cloud Virtual Conference in March. Register now for this event:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4
