
Friday, June 17, 2011

Discover Case Study: Holistic ALM Helps Blue Cross and Blue Shield of Florida Break Down Application Inefficiencies, Redundancy

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how Blue Cross and Blue Shield of Florida gains better visibility into application lifecycles for improved operational efficiency and reliability.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas. We're here on the Discover show floor the week of June 6 to explore some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions.

We're now going to focus on Blue Cross and Blue Shield of Florida and a case study about how they’ve been able to improve their applications' performance -- and even change the culture of how they test, provide, and operate their applications.

We're here today with Victor Miller, Senior Manager of Systems Management at Blue Cross and Blue Shield of Florida in Jacksonville. Welcome. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Victor Miller: Thank you.

Gardner: Tell me a little bit about this cultural dynamic. When you shift from one way of doing applications, you do employ technology and products, and there are methodologies and processes, but I'm interested in how you changed your vision of how applications should be done.

Miller: The way we looked at applications was by their silos. It was a bunch of technology silos monitoring and managing their individual ecosystems. There was no real way of pulling information together, and it didn't represent what the customer was actually feeling inside the applications.

One of the things we started looking at was that we had to focus on the customers, seeing exactly what they were doing in the application and bringing that information back. We were looking at the performance of the end-user transactions, or what the end-users were doing inside the app, versus what the Oracle database was doing, for example.

When you start pulling that information together, it allows you to get full traceability of the performance of the entire application across development, test, staging, performance testing, and production. You can actually compare that information to understand exactly where you're at. You also break down those technology silos when you do that, and you move toward a proactive, transactional monitoring perspective.
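
To make that end-user, transaction-level view concrete, here is a minimal sketch of a synthetic transaction probe -- the sort of measurement Miller is describing, not the HP tooling itself. The URL and threshold are hypothetical stand-ins.

```python
# A synthetic probe times one user-visible transaction end to end,
# rather than sampling the database or server underneath it.
# The URL and threshold are hypothetical.
import time
import urllib.request

THRESHOLD_SECONDS = 2.0  # acceptable end-user response time (assumed)

def probe_transaction(url: str) -> float:
    """Time one synthetic end-user transaction against the app."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # consume the body, as a browser would
    return time.monotonic() - start

elapsed = probe_transaction("https://example.com/")  # stands in for a login or claim lookup
status = "OK" if elapsed <= THRESHOLD_SECONDS else "SLOW"
print(f"transaction took {elapsed:.2f}s -> {status}")
```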

Gardner: It sounds as if you started looking at the experience of the application, rather than the metrics or the parts. Is that fair?

Miller: That's correct. We're looking at how the users are using it and what they're doing inside the applications, like you said, instead of the technology around it. The technology can change -- you can add or remove resources -- but really it comes down to the end-users and how the apps perform for them.

Overcome hurdles

Gardner: In order to make this shift and to enjoy better performance and experience with your applications, you had to overcome some hurdles. Maybe you could explain what Blue Cross and Blue Shield of Florida is. I think I have a pretty good idea, but you can probably do a better job than I. After we learn a bit about your organization, what were some of the hurdles you had to overcome to get toward this improved culture?

Miller: Blue Cross and Blue Shield of Florida is one of the 39 independent Blue Cross companies throughout the United States. We're based out of Florida and have been around since about 1944. We're an independent licensee of the Blue Cross and Blue Shield Association. One of our main focuses is healthcare.

We do sell insurance, but we also have our retail environment, where we're bringing in more healthcare services. It's really about the well-being of our Florida population. We do things to help Florida as a whole, to make everyone healthier where possible.

Gardner: Let’s look at that problem set. In order to have a better experience for the health and welfare of your clients and constituents, what was the problem? What did you need to change?

Miller: Well, when we started looking at things, we thought we were doing fine, until we actually brought the data together and understood what was really going on. Our customers weren't happy with the performance or the availability of their applications.

We started looking at the technology silos and bringing them together in one holistic perspective. We started seeing that, from an availability perspective, we weren't looking very good. So, we had to figure out what we could do to resolve that. In doing that, we had to break down the technology silos and focus on the whole picture of the application, not just its individual components.

Gardner: So this sounds like you had to go deeper into the network, looking at the ecosystem of the applications. What did you have to do to start to get that full picture?

Miller: Our previous directors reorganized our environment and brought in a systems management team. Its responsibility is to monitor and help manage the infrastructure, centralize the tool suites, and understand exactly which capabilities we're going to use. We created a vision of what we wanted to do, and we've been driving that vision for several years to make sure it stays on target and focused on solving this problem.

Gardner: And how did you go about choosing the products and the management capabilities you're going to employ?

Miller: We were such early adopters that we actually chose best-of-breed. We had an agent-based monitoring environment, and we moved to agent-less. At the time, we adopted Mercury SiteScope. Then, we also brought in Mercury's BAC and a lot of the Topaz technologies, with diagnostics and things like that. We had other capabilities like Bristol Technology's TransactionVision.

Umbrella of products

HP purchased all those companies and brought them into one umbrella of product suites. It allowed us to combine the best-of-breed. We bought technologies that didn't overlap, solved a problem, and integrated well with each other. That gave us more traceability inside those spaces, so we can get really good information about the performance and availability of the applications we're focusing on.

Gardner: In addition to adopting these products, I imagine you also had to change some of your processes and methodologies, like ITIL. Tell me about the combination of products and processes that led you to some pretty impressive results.

Miller: One of the major things was that it was people, process, and technology that we focused on to make this happen. On the people side, we moved our command center from our downtown office to our corporate headquarters, where all the admins are, so they could be closer to the command center. If there were a problem, the command center could contact them directly and they could go right down there.

We instituted what I guess I'd like to refer to as "butts in the seat." I can't come up with a better name for it, but when a person is on call, they work in the command center. They do their regular operational work, but they do it in the command center, so if there is an incident, they are there to resolve it.

With the agent-based technologies, we were monitoring thousands of measurement points. But you have to be very reactive, because you come in after the fact trying to figure out which one triggered. Moving to agent-less technology is a different perspective on getting the data: you focus on the key areas inside those systems that you want to pay attention to, versus the everything model.
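
A minimal sketch of that "key areas versus the everything model" idea, with hypothetical systems, metric names, and thresholds; it illustrates the concept rather than any actual SiteScope configuration.

```python
# The "key areas" model: each system names the few metrics that matter,
# instead of collecting thousands of agent-based measurement points.
# Systems, metrics, and thresholds below are hypothetical.
KEY_MONITORS = {
    "claims-db": {"replication_lag_s": 30, "used_tablespace_pct": 85},
    "member-portal": {"p95_response_ms": 1500, "error_rate_pct": 2},
}

def evaluate(system: str, samples: dict) -> list:
    """Return alerts for any key metric that breaches its upper limit."""
    alerts = []
    for metric, limit in KEY_MONITORS.get(system, {}).items():
        value = samples.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{system}: {metric}={value} exceeds {limit}")
    return alerts

print(evaluate("member-portal", {"p95_response_ms": 2100, "error_rate_pct": 0.4}))
```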

In doing that, our admins were challenged to be a little bit more specific as to what they wanted us to pay attention to from a monitoring perspective to give them visibility into the health of their systems and applications.

Gardner: I imagine this has translated back into your development, earlier into the requirements. Is there a feedback loop of sorts now that you can look to, one that perhaps you didn't have in the past?

Miller: Yeah, there is a feedback loop and the big thing around that is actually moving monitoring further back into the process.

We've found that if we fix something in development, it may cost a dollar. If we fix it in testing, it might cost $10. If we fix it in staging, it may cost $1,000. It could be $10,000 or $100,000 when it's in production, because the fix goes back through the entire lifecycle again, and more people are involved. So moving things further back in the lifecycle has been a very big benefit.
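
Taking Miller's ratios at face value, a quick sketch shows how shifting defect discovery left changes the total cost; the defect counts are made up purely for illustration.

```python
# Miller's illustrative cost ratios, applied to a hypothetical batch of
# ten defects: catching them earlier shrinks the total fix cost.
COST_PER_FIX = {"development": 1, "testing": 10, "staging": 1_000, "production": 10_000}

def total_cost(defects_by_stage: dict) -> int:
    """Sum fix costs over the stages where defects are caught."""
    return sum(COST_PER_FIX[stage] * n for stage, n in defects_by_stage.items())

late = {"development": 2, "testing": 3, "staging": 2, "production": 3}
early = {"development": 6, "testing": 3, "staging": 1, "production": 0}
print(total_cost(late), "->", total_cost(early))  # 32032 -> 1036
```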

It also involved working with the development and testing staffs to understand that you can't throw an application over the wall and say, "Monitor my app, because it's in production." We may have no idea what your application is, or we might say it's monitored because we're monitoring the infrastructure around it, while not monitoring a specific component of the application itself.

Educating people

The challenge there is reeducating people and making sure they understand that they have to develop their apps with monitoring in mind. Then we can give them visibility back into the application when there's an incident, so they can get to the root cause faster.

Gardner: This is all well and good, and it sounds fabulous for a handful of apps. But I imagine you have to scale this. How do you take what you’ve been describing in terms of this journey, but make it for dozens or hundreds of applications? What is it that you rely on to automate this?

Miller: We've created several different processes around this, and we focused on monitoring every single technology. We still monitor those from a siloed perspective, but we also added a few transactional monitors on top, inside those silos -- for example, transaction scripts that run the same database query over and over again to pull information out.

At the same time, we had to make some changes. We started leveraging the Universal Configuration Management Database (UCMDB), or Run-time Service Model, to build business services out of this data and show how all these things relate to each other. The UCMDB behind the scenes is one of the cornerstones of the technology. It brings all that silo-based information together to create a much better picture of the apps.
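
As a toy illustration of what a UCMDB-style model buys you, the sketch below links hypothetical configuration items (CIs) from different silos into a business service and traces which services a low-level failure would impact. The CI names and relationships are invented for illustration; this models the idea, not the product.

```python
# Configuration items from different silos, linked so a low-level
# failure can be traced up to the business services it supports.
# All names and edges below are hypothetical.
DEPENDS_ON = {
    "claims-service": ["portal-app"],            # business service -> app
    "portal-app":     ["web-host-1", "claims-db"],
    "claims-db":      ["db-host-1", "san-lun-7"],
}

def impacted_services(failed_ci: str) -> set:
    """Walk the dependency graph upward from a failed CI."""
    impacted = set()
    for parent, children in DEPENDS_ON.items():
        if failed_ci in children:
            impacted.add(parent)
            impacted |= impacted_services(parent)
    return impacted

# A SAN LUN failure rolls up through the database and app to the service.
print(impacted_services("san-lun-7"))  # {'claims-db', 'portal-app', 'claims-service'}
```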

Gardner: Some people call that a system of record.

Miller: That’s correct. We don’t necessarily call it the system of record. We have multiple systems of record. It’s more like the federation adapter for all these records to pull the information together. It guides us into those systems of record to pull that information out.

Gardner: What does this get for you? Are there any metrics or examples you can point to that validate how effective this can be?

Miller: About eight years ago, when we first started this, we had incident meetings where between 15 and 20 people went over 20-30 incidents per week. We had those every day of the week. On Friday, we would review all the ones from the first four days of the week. So, we were spending a lot of time doing that.

Out of those meetings came what I call "the monitor of the day." If an incident occurred in the infrastructure and was not caught by some type of monitoring technology, we would then have it monitored. We'd bring that back and close the loop to make sure it would never happen again.
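
That "monitor of the day" loop can be pictured as a simple gap analysis: compare the incidents that occurred against the CIs that actually raised alerts, and anything uncaught becomes a candidate for new monitoring. A sketch, with made-up data:

```python
# Closed-loop monitoring gap analysis: any incident that no alert
# preceded becomes a candidate for a new monitor. Data is hypothetical.
incidents = [
    {"id": "INC-101", "ci": "claims-db"},
    {"id": "INC-102", "ci": "fax-gateway"},
]
alerted_cis = {"claims-db", "member-portal"}  # CIs that raised alerts this week

monitoring_gaps = [i for i in incidents if i["ci"] not in alerted_cis]
for incident in monitoring_gaps:
    # In practice this would open a task to add a monitor for the CI.
    print(f"{incident['id']}: no monitor caught the failure on {incident['ci']}")
```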

Another thing we did was improve our availability. We were taking five or six hours to resolve some of these major incidents. We looked at the 80:20 rule and got to where we solve 80 percent of the problems in a very short amount of time. Now, we have six or seven people resolving incidents, and our command center staff is there 24 hours a day to do this type of work.

Additional resources

When they need additional resources, they just pick up the phone and call the resources down. So, it's a level 1 or level 2 type person working with one admin to solve a problem, versus having all hands on deck, with 50 admins in a room resolving incidents.

I'm not saying that we don't have those now. We do, but when we do, it's a major problem, not something very small. It could be firmware on a blade enclosure going down, which takes an entire group of applications down. It's not something you can plan for, because you're not making changes to your systems. It's just old hardware or things like that causing an outage.

Another thing this has done for us is that those 20 or 30 incidents we had per week are down to one or two. Knock on wood on that one, but it's really a testament to a lot of the things our IT department has done as a whole. They're putting a lot of effort into reducing the number of incidents occurring in the infrastructure. And we're partnering with them to get the monitoring in place, to give them visibility into the applications, and to throw alerts on trends or symptoms, versus throwing the alert on the actual error that occurs in the infrastructure.
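
Alerting on a trend or symptom, rather than on the eventual error, might look like the following sketch: flag a metric whose recent average is drifting upward before it breaches a hard limit. The window sizes and sample series are hypothetical.

```python
# Alert on a symptom (a rising trend) instead of the eventual error:
# flag a metric whose recent average outpaces its earlier baseline.
# Window size, ratio, and samples are hypothetical.
from statistics import mean

def trending_up(samples, window=5, ratio=1.5):
    """True if the latest window's mean is `ratio` times the prior window's."""
    if len(samples) < 2 * window:
        return False
    recent, prior = samples[-window:], samples[-2 * window:-window]
    return mean(recent) > ratio * mean(prior)

queue_depth = [10, 11, 9, 12, 10, 14, 18, 22, 27, 33]  # e.g., message backlog
if trending_up(queue_depth):
    print("symptom alert: queue depth trending upward")  # fires before a hard failure
```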

Gardner: Now, we started talking earlier about your philosophy and the experience of the user. Are there any metrics or anecdotes from the welfare and benefit of your end-customers that have developed from the way that you’ve been able to improve your applications?

Miller: Customer satisfaction with IT is a lot higher now than it used to be. IT is being called in to support and partner with the business, versus the business saying, "I want this," and then IT doing it in a vacuum. It's more of a partnership between the two entities. Operations is creating dashboards and visibility into business applications for the business, so they can see exactly how their own departments are performing, versus just from an IT perspective. We can get the data down to specific people now.

Gardner: Because these activities are a journey, you never perhaps get to an end destination. What are you looking forward to next? What’s the roadmap for improving even beyond where you are now?

Miller: Some of the big things I'm looking at are closed-loop processes. I've started working with our change management team to change the way we make changes in our environment, so that everything is configuration item (CI) based. Doing that allows for complete traceability of an asset, or a CI, through its entire lifecycle.

You understand every incident, request, and problem that ever occurred on that asset, but you can also see financial information, inventory information, and location information, and start bringing it all together to make smart decisions based on the data you have in your environment.

Gardner: That sounds like it could lead to some significant cost savings in the long run?

Miller: That's my hope. The really big thing is to help reduce the cost of IT to our business and do whatever we can to cut our costs and keep a lean ship going.

Gardner: Well, great. We've been hearing a user case study about how Blue Cross and Blue Shield of Florida has improved application performance and the user experience, ultimately providing better visibility for IT and a better perception of IT, along with an overall reduction in total cost. We've been hearing this story from Victor Miller, Senior Manager of Systems Management at Blue Cross and Blue Shield of Florida in Jacksonville. Thank you.

Miller: Thank you.

Gardner: And thanks to our audience for joining this special BriefingsDirect podcast coming to you from the HP Discover 2011 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of user experience discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how Blue Cross and Blue Shield of Florida gains better visibility into application lifecycles for improved operational efficiency and reliability. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Tuesday, July 14, 2009

Rethinking Virtualization: Why Enterprises Need a Sustainable Virtualization Strategy Over Hodge-Podge Approaches

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization that spans general implementations.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Download a pdf of this transcript.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on rethinking virtualization. We’ll look at a series of three important considerations when moving to enterprise virtualization adoption.

First, we'll investigate the ability to manage and control how interconnections impact virtualization. Interconnections play a large role in allowing physical servers to support multiple virtual servers, which themselves need multiple network connections. The connections themselves can be virtualized, and we are going to learn how HP Virtual Connect is being used to solve these problems.

Second, we're going to examine the role and importance of configuration management databases (CMDBs) in deploying virtualized servers in production. When we scale virtualized instances of servers, we need to think about centralized configuration; it really helps bring management to the crucial task of preventing server sprawl and the unwieldy complexity that can often inflate the cost of virtualization projects.

Last, we're going to dig into how outsourcing, in a variety of forms, configurations, and values, can help organizations get the most bang for their virtualization buck. That is to say, how they think about virtualization not only in terms of placement, but also in terms of where the data center -- or even hybrid data centers -- will reside and be managed.

Here to help us dig into these essential ingredients of successful and cost-effective virtualization initiatives are three executives from Hewlett-Packard (HP).

We're going to be speaking with Michael Kendall, worldwide Virtual Connect marketing lead. We're also going to be joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions. And last, we're going to discuss outsourcing with Ryan Reed, a product manager for EDS Server Management Services.

First, I want to talk a little bit about how organizations are moving to virtualization. We've certainly seen a lot of "ready, set, go," but when organizations start looking at the complexity, when they think about scale, and when they think about doing virtualization for the economic payoff, rather than simply moving one shell around from physical to virtual or from on-premises to off-premises, the complexity of the issue starts to sink in.

Let me take our first question to Shay Mowlem. Shay, what is it that we're seeing in terms of how companies can make sure that they get a pay-off economically from this, and that it doesn’t become complexity-for-complexity's sake?

Shay Mowlem: The allure of virtualization is quite great. Certainly, many companies today have recognized that consolidating their infrastructure through virtualization can reduce power consumption and space utilization, and can really maximize the value of the infrastructure that they’ve already purchased.

Just about everybody has jumped on the virtualization bandwagon, and many companies have seen tremendous gains in their development in lab environments, in managing what I would consider to be non-mission-critical production systems. But, as companies have tried to apply virtualization to their Tier 2 and Tier 1 mission-critical systems, they're discovering a whole new set of issues that, without effective management, really run counter to the cost benefits.

The fact that virtualized infrastructure has more interdependencies means there’s more of a risk profile because of the services that are supported. The real challenge for those companies is putting in place the right management platform in order to be able to truly recognize those gains for those production environments.

Gardner: So, when we talk about rethinking virtualization, I suppose that it really means planning and anticipating how this is going to impact the organization and how they can scale this out?

Mowlem: Yeah. That’s exactly right.

Looking at connections

Gardner: First, we're going to look at the connections, some of the details in making physical servers become virtual servers, and how that works across the network. Mike Kendall is here to tell us about HP’s Virtual Connect technology.

It’s designed to help bridge the gap between the physical world and virtual world, when it comes to the actual nitty-gritty of making networks behave in conjunction with increased numbers of virtualized server instances. This is important when we start rethinking virtualization in terms of actually getting an economic payback from the investments and the expectations that enterprises are now supporting around virtualized activities.

So, let me take it to you, Mike. When we go to virtualized infrastructures from traditional physical ones, what's different about migrating when it comes to these network connections?

Michael Kendall: There are a couple of things. When you consolidate a lot of different application instances that normally run on multiple servers, each of which has a certain number of I/O connections for data and storage, and you put them all on one server, that does consolidate the number of servers you have.

Interestingly, people have found that as you do that, it tends to expand the number of network interface controllers (NICs) you need, the number of connections you need, the number of cables you need, and the number of upstream switch ports you need to accommodate all the extra workload going through that server.

So, just because you can set up a new virtual machine or migrate virtual machines in a matter of minutes, it isn't as easy on the connection side. You have to add capacity for networks and storage, add host bus adapters (HBAs), or add NICs. And even when you move a virtual machine, you have to tear down and re-set up those network connections. Doing that harmoniously is more challenging in a virtual machine environment.

Gardner: So, it’s not quite as easy as simply managing the hypervisor. We have to start thinking about managing the network. Perhaps you could tell us more about how the Virtual Connect product itself does that.

Basic rethinking


Kendall: Absolutely. Virtual Connect is a great example of how HP helps you achieve the full potential of setting up virtual machines on a server and consolidating all those workloads.

We did some basic rethinking around how to remove some of these interconnect bottlenecks. HP Virtual Connect virtualizes the physical connections among the server, the data network, and the storage network. Virtualizing these connections allows IT managers to set up, move, replace, or upgrade blade servers and the workloads on them without having to involve the network or storage folks, and without impacting the network or storage topologies.

Rather than taking hours, days, or even weeks to set up, add, or move virtual or physical machines, we're able to take that down literally to minutes. The result is that most deployments or moves can be accomplished a whole lot faster.

Another part of this is our new Flex-10 technology. It takes a 10-gigabit Ethernet connection and allocates it across four NIC connections. This eliminates the need for additional physical NICs in the form of mezzanine or stand-up cards, additional cables, or additional switches when setting up all of the extra connections required for virtual machines.

The average hypervisor is looking for anywhere from three to six NIC connections, and approximately two storage network connections.

If you add that all up, that can be up to a total of six to eight NICs, along with the associated cables and switch ports. The same thing is true with the two storage network connections as well.

With Flex-10, on an average two-port NIC, each of those ports can present four NICs, for a total of eight, without adding any stand-up cards, switches, or cables. As a result, from a cost standpoint, you can save up to 66 percent in additional network equipment cost over competing technology. So, with Virtual Connect you can wire everything once and then add, replace, or recover servers a whole lot faster.
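
To illustrate the Flex-10 arithmetic, here is a sketch of carving one 10-gigabit port into up to four FlexNICs whose combined allocation cannot exceed the physical capacity. The per-NIC figures and names are hypothetical; the code models the partitioning idea, not the actual management interface.

```python
# One 10Gb port is carved into at most four FlexNICs whose combined
# allocation cannot exceed the physical 10Gb. Values are hypothetical.
PORT_CAPACITY_GB = 10.0

def validate_allocation(flexnics: dict) -> None:
    """Check a proposed FlexNIC carve-up of a single 10Gb port."""
    total = sum(flexnics.values())
    if len(flexnics) > 4:
        raise ValueError("a Flex-10 port presents at most four FlexNICs")
    if total > PORT_CAPACITY_GB:
        raise ValueError(f"allocated {total}Gb exceeds the {PORT_CAPACITY_GB}Gb port")
    print(f"OK: {total}Gb of {PORT_CAPACITY_GB}Gb across {len(flexnics)} FlexNICs")

validate_allocation({
    "vm-production": 4.0,
    "vm-migration": 2.0,
    "management": 0.5,
    "backup": 3.5,
})
```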

Gardner: And, of course, not doing this in advance would erode your ability to save when it comes to these more utilized server instances.

Kendall: That’s also correct. If you can put this technology in place ahead of time, then you can save not only the purchase cost of all this additional hardware, but the operational complexity that goes along with having a lot of extra equipment to have to set up, manage, and run.

Gardner: One of the things that folks like about virtualization is an automated approach to firing off instances of servers to support an application -- for example, a database. Does that automated elasticity of generating additional server instances follow through with the Virtual Connect technology, so that it's, in a sense, seamless?

Seamless technology

Kendall: I'm glad you added in the Virtual Connect part, because if you had said "using standard switch technology," the answer to that would be no.

With standardized switch technology and standardized NIC and storage area network (SAN) HBA technology, you generally have to set up all these connections individually. Then, you have to manage them individually. Then, if you set up, add to, or migrate virtual machine instances from the virtual machine (VM) side of it, you can automate a lot of that through a hypervisor manager, but that does not extend to the attributes of the actual server connection, or the virtual machine connection.

Virtual Connect, because it virtualizes those connections and the way you manage them, makes it very straightforward to migrate the server connections and their profiles, not only with the movement of virtual machines, but also with the movement of whole hypervisors across physical machines. It spans the physical and the virtual, and handles the automation and migration of all those connection profiles.
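
Conceptually, a Virtual Connect server profile keeps the network identity with the workload rather than the hardware. The sketch below, with invented values, shows why moving a profile moves the connections without rewiring; it models the idea, not the actual Virtual Connect interface.

```python
# A server profile owns the network identity (MACs, WWNs, VLANs), so a
# workload moves by reassigning the profile, not by rewiring. All
# values below are hypothetical.
profile = {
    "name": "web-frontend-01",
    "nic_macs": ["02:16:00:00:00:10", "02:16:00:00:00:11"],
    "san_wwns": ["50:06:0b:00:00:c2:62:00"],
    "vlans": [110, 240],
    "assigned_bay": "enclosure1:bay4",
}

def move_profile(profile: dict, new_bay: str) -> None:
    """Reassign the profile to another bay; identities travel with it."""
    old = profile["assigned_bay"]
    profile["assigned_bay"] = new_bay
    print(f"{profile['name']}: moved {old} -> {new_bay}; MACs/WWNs unchanged")

move_profile(profile, "enclosure2:bay1")
```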

Gardner: So, we're gaining some speed here. We’re gaining mobility. We're able to maintain our cost efficiencies from the virtualization, because of our better management of these network issues, but don’t such technologies as soft switches pretty much accomplish the same thing?

Kendall: Soft switches can be an important part of the infrastructure you put together around virtual machines. One of the things about soft switches is that it’s really important how you use them. If you use soft switches combined with some of the upstream switches to do all this right here, then you can also add latency to an already complex network. If you use Virtual Connect, which is based upon industry-standard protocols together with a soft switch operating in a simple pass-through type of mode, then you don’t have the latency problem. You maintain the flexibility of Virtual Connect.

The other thing you need to be careful of is that some of the new soft switches out there use proprietary protocol extensions to accomplish the ability to track the movement of the virtual machine, along with its associated connection protocol. These proprietary protocol extensions sometimes require upstream products that can accept them, which means new hardware, switches, and management tools. That can add a lot to the cost of upgrading an infrastructure.

Gardner: Thank you, Michael. We're now going to look at another important issue around virtualization, and that is configuration and management. This has become quite an issue in terms of complexity. Managing physical servers in large numbers is, in itself, complex. When we add virtualization and dynamic provisioning, and look to recover costs from energy and utilization, we add yet another dimension to the complexity.

We're going back to Shay Mowlem to talk a little bit about this notion of data collection, management, configuration, and automation. Visibility into what's going on in the virtualization instances, the data centers, and across the infrastructure becomes critical. How are companies gaining better visibility across the virtualized data center, compared to what they were perhaps doing with purely physical ones?

Mowlem: IT infrastructures really are becoming more ambiguous. With the addition of virtual machines to data centers that are already leveraging other virtualization technologies in their storage area networks -- virtual LANs and so on -- all of that makes knowing where a problem exists much harder to identify and fix. That has an impact on management cost and service quality.

Proof for the business

For IT to realize the large-scale cost benefits of virtualization in their production environments they need to prove to the business that the service performance and the quality are not going to be lost, as they incorporate virtualized servers and storage to support the systems. We've seen that the ideal approach should include a central vantage point, from which to detect, isolate, and prevent service problems across all infrastructure elements, heterogeneous servers, spanning physical and virtual network storage, and all the subcomponents of a service.

It also needs to include the ability to monitor the health of the infrastructure, but also from the perspective of the business service. In other words, be able to monitor and understand all of the infrastructure elements, how they relate to one another, servers, networked storage, and then also be able to monitor the health and the performance of the service from the perspective of the business user.

It's sort of a bottom-up and top-down view if you will, and this is an area that HP Software has invested in very heavily. We provide tools today that offer native discovery and dependency mapping of all infrastructure, physical and virtual, and then store that information in our central universal configuration management database (UCMDB), where we then track the make-up of a business service, all of the infrastructure that supports that service, the interdependencies that exists between the infrastructure elements, and then manage that and monitor that on an ongoing basis.
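
A toy sketch of the discovery-and-dependency-mapping step Mowlem describes: observed connections between hosts become dependency edges stored centrally, with a plain dictionary standing in for the UCMDB. The hosts, ports, and connections are hypothetical.

```python
# Observed connections are turned into dependency edges and stored
# centrally (a dict stands in for the UCMDB). Data is hypothetical.
observed_connections = [
    ("web-host-1", "app-host-1", 8080),   # (source, destination, port)
    ("app-host-1", "db-host-1", 1521),
    ("app-host-1", "san-gw-1", 3260),
]

dependency_map = {}
for source, destination, port in observed_connections:
    dependency_map.setdefault(source, []).append((destination, port))

# The top-down view then asks what each element sits on.
for upstream, downstream in dependency_map.items():
    print(f"{upstream} depends on {downstream}")
```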

We also track what has changed over time, what was implemented, and who made those changes. Then, we can leverage that information very carefully to answer important questions about how a particular service has been behaving over time.

We can retrieve core metrics about performance and behavior on all layers of the virtualization stack, for example. Then, we can use this to provide very accurate and fast problem detection and isolation and deep application diagnostics.

This can be quite profound. We found, through a return on investment (ROI) model we worked on based on data from IDC, that effective use of HP's Discovery and Dependency Mapping technology, with the results stored in a central UCMDB, can on average help reduce the mean time to repair outages by 76 percent, which is a massive benefit from effective consolidation of this important data.

Gardner: Maybe I made a mistake that other people commonly make, which is to think of managing virtualized instances as separate and different. But, I suppose virtualization nowadays is becoming like any other system across the IT infrastructure.

Mowlem: Absolutely. It’s part of a mix of tools and capabilities that IT has that, in production environments, are ultimately there to support the business. Having an understanding of and being able to monitor all these systems, understanding their interdependencies, and managing them in an integrated way with the understanding of that business outcome, is a key part of how companies will be able to truly recognize the value that virtualization has to offer.

Gardner: Okay, I think we understand the problems around this management issue, in trying to scale it and bring it in line with the way the entire data center is managed. What about the solutions? What in particular should organizations consider when approaching this total configuration issue?

Business service management

Mowlem: We offer a host of solutions that help companies manage virtualized environments end to end. As we look at monitoring -- and essentially a configuration database tracks all of the core interdependencies of infrastructure and their configuration settings over time -- we talk about the business service management portfolio of HP Software. This includes the Discovery and Dependency Mapping product I talked about earlier, the UCMDB as a central repository, and a number of tools that allow our customers to monitor their infrastructure at the server level and the network level, but also at the service level, to ensure the ongoing health and performance of their environment.

Gardner: You mentioned these ROI figures. Typically, how do organizations start down the virtualization path, and how can they then begin to recover more cost and cut their total cost by adopting some of these solutions?

Mowlem: We offer a very broad portfolio of solutions today that manage many different aspects of virtualization, from testing to ensuring that the performance of a virtualized environment in fact meets the business service level agreements (SLAs). We've talked about monitoring already. We have automation as part of our portfolio to achieve efficiency in provisioning and change execution. We have a solution to manage assets, so that software licenses are tracked carefully and properly.

We also have a market-leading solution in backup and recovery with our Data Protector offering, to help customers scale their backup and recovery capabilities across their virtualized servers. What we've found in the course of our discussions is that there are many customers who recognize that all of these are critical and important areas for them to be able to effectively incorporate virtualization into their production environments.

But, generally, there are one or two very significant pain areas. It might be the inability to monitor all of their servers -- physical and virtual -- through one single pane of glass, or it may be related to compliance enforcement, because there are so many different elements out there. So, the answer isn't always the same. We find that companies choose to start down the path of effective management through some of these initial product areas, and then expand from there.

Gardner: Well, I suppose it's never too late to begin. If you're even partially into a virtualization initiative, or maybe even deep in and starting to have problems, there are ways to bring in management features at any point in that maturity.

Mowlem: We definitely support a very modular offering that allows people to focus on where they’re feeling the biggest pain first, and then expand from there as it makes sense to them.

Gardner: Let's now move over to Ryan Reed at EDS. As organizations get deeper into virtualization, and as they consider, on a larger scale, their plans for modernization, consolidation, and the overall cost efficiency of their resources, how do they approach this problem of placement? It seems that moving toward virtualization almost forces you to think about your data center from a more holistic, long-term, and strategic perspective.

Raising questions

Ryan Reed: Right, Dana. For a lot of companies, when they consider large-scale virtualization and modernization projects, it often raises questions that help them devise the plan and the strategy for how they're going to create a virtual infrastructure and where that infrastructure is going to be located.

Some of the questions I see are around the physical data center itself. Is the data center meeting the needs of the business? Is it designed and built for resiliency, and does it provide the greatest value to the business services?

You'll also find that a lot of times that's not the case for data centers built 10 or 15 years ago. Business services today demand higher levels of uptime and availability. Those data centers, if they were to fail due to a power outage or some other source of failure, are no longer able to meet the uptime requirements for those types of business services. So, it's one of the first questions that a virtual infrastructure program raises for the program manager.

Another question that often comes up is around the storage network infrastructures. Where are they located physically? Are they in the right place? Are they available at the right times? A lot of organizations may be required by legislative or regulatory requirements to keep their data within a particular state, country, or region. A lot of the time, when people are planning for virtual server infrastructures, that becomes a pretty prominent discussion.

Another one is around the internal skill sets of the enterprise. Does the company or the organization have the skill set necessary in-house to do large-scale virtualization and data center modernization projects? Oftentimes they don't, and if they don't, what's their remedy? How are they going to close that skill gap?

Lastly, a lot of companies doing virtualization projects start to question whether all of the activity around managing the infrastructure is actually core to their business. If it's not, then maybe it's something they don't have to be doing themselves anymore.

Taking all of that into consideration helps drive a conversation around planning and creating the right type of process. Oftentimes, it leads to a discussion around outsourcing. EDS, which is an HP company, provides organizations and enterprises with full IT management and IT infrastructure management. That includes everything from implementation to ongoing management of virtual, as well as non-virtual, infrastructure environments.

The client data center -- or on-premises, as you called it, Dana -- is an option for a lot of enterprises that have already invested heavily in their current data-center facility and infrastructure. They don't necessarily want to move it to an outsourcer-supplied data center. So on-premises is a business model that's available and becoming common for some of the larger virtualization projects.

The traditional outsourcing model is one where enterprises realize that the data center itself is not a strategic asset to the business anymore. So, they move the infrastructure to an outsourcer data center where the services provider, the outsourcing company, can provide the best services with virtual infrastructures during the design and plan phase.

Making the most sense

This makes the most sense for these types of organizations, because you’re going to be doing a migration from physical to virtual anyway. So, you might as well take advantage of the skills that are available from the outsourcing services provider to move that to their data center, and have them apply best-in-breed practices and technology to manage that infrastructure.

Then there's also what would be considered a hybrid model, where virtual and non-virtual infrastructure can be managed from either the client's own data center or the services provider's data center. There are various models to consider, and a lot of the questions that go into planning this type of virtual infrastructure also lead into a conversation about where an outsourcer can add the most value.

Gardner: Is there anything about virtualizing your data center, and more and more servers, that makes outsourcing easier, or an option that some people hadn't considered in the past and should?

Reed: Sure. Outsourcers nowadays are very skilled at providing infrastructure services for virtual server environments. That would include things like profiling, analysis and planning, mapping of targets to source servers, and creating a business case to understand how it's going to impact the business in terms of ROI and total cost of ownership (TCO).

Doing the actual implementation, the ongoing management of the operating systems, both virtual and non-virtual, for guests and hosts, patching the systems, monitoring to make sure the systems are up and running, responding to events, escalating events, and then doing things like backup and restore activities are really core to an outsourcing services provider's business. That's what they do.

We don’t expect our clients to have the same level of expertise as EDS does. We’ve been doing this for 45 years, and it’s really the critical piece of what we do. So, there are many things to consider when choosing an outsourcing provider, if that’s the way to go. Benefits can range dramatically from reducing your TCO to increasing levels of availability within the infrastructure, and then also being able to expand and use the services provider, global delivery service centers that are available around the world.

Choose the right partner, and they can grow with you. As your business grows and as you expand your market presence, choosing the services provider that has the capability and capacity to deliver in the areas that you want to grow makes the most sense.

Additionally, you can take advantage of things like low-cost delivery centers that the services provider has built up over the years -- services centers that are from low-cost regions. EDS considers this to be the best strategy. Having resources available in low-cost countries to provide the greatest value to clients is important when it comes to understanding the best approach to selecting a good services provider.

Gardner: So, for those organizations that are looking at these various options for sourcing, how do they get started? What’s a good way to begin that cost benefit analysis?

Reed: Well, there's information available through the eds.com website. Go there and search on "virtualization," and the first result that comes back has lots of information about what to expect in terms of an engagement, as well as examples of virtualization projects done with other organizations facing the same situations a lot of industries face out there.

You can see a comparison of like-for-like scenarios to determine whether a client engagement would make sense, based on the case studies and success stories that are available out there as well. There are also industry tools available from our partner organizations. HP has tools available. VMware has tools available to help our clients understand where savings can come from. And, of course, EDS is also available to provide those types of services for our clients too.

Gardner: Okay. We've been looking at three important angles to consider when moving to virtualization: being aware, at a detailed level, of how network interfaces and interconnects work, and moving toward a more virtualized approach to interconnects. We also looked at the management issues -- configuration not only in terms of how virtualized servers stand alone, but how they're managed in total, as part of the larger IT mix. And we looked at how to weigh different sourcing options in terms of cost, skills, availability of resources, energy costs, and a general track record of being competent and proven with virtualization.

I want to thank our three guests today. We’ve been joined by Michael Kendall, worldwide Virtual Connect marketing lead at HP. We've been joined by Shay Mowlem, strategic marketing lead for HP Software and Solutions, and Ryan Reed, product manager for EDS Server Management Services.

This is Dana Gardner, principal analyst at Interarbor Solutions. I also want to thank the sponsor of our podcast discussion today, Hewlett-Packard, for underwriting its production. Thanks for listening, and come back next time.

Attend a virtual web event from HP on July 28-30, "Technology You Need for Today's Economy." Register for the free event.

Download a pdf of this transcript.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the key elements of successful and cost-effective virtualization that spans general implementations. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.