Monday, July 06, 2009

Consolidation, Modernization, and Virtualization: A Triple-Play for Long-Term Enterprise IT Cost Reduction

Transcript of a BriefingsDirect podcast on how IT departments can provide better services with greater efficiency.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on combining some major efforts in IT administration and deployment, in order to both cut costs in the near term and also to put in place greater efficiencies, agility, enterprise business benefits, and long-term cost benefits.

We’re going to be talking about how consolidation, modernization, and virtualization play self-supporting roles alone and in combination for enterprises looking to improve how they deliver services to their businesses. Yet they also play a role in reducing labor and maintenance cost, and can have much larger benefits -- including producing far better server utilization rates -- that ultimately cut IT costs in total.

Here to help us dig into the relationship between a modern and consolidated approach to IT data centers and total cost, we welcome John Bennett. He’s the worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard (HP). Welcome to the show, John.

John Bennett: Thank you, very much. It's nice to be with you today.

Gardner: As I mentioned, cost is always an issue with organizations, and IT departments are among those facing a lot of pressure nowadays to justify their expenses, show improvements in cost cutting, and, at the same time, improve productivity. John, I wonder if you could help us understand this. We know well enough the cost pressures and economic environment that we're in, but what has changed in terms of what can be brought to this problem set from the perspective of technology and process?

Bennett: Cost itself is both easy and complex to deal with. It's easy to say, "reduce costs." It's very difficult to understand what types of costs I can reduce and what kind of savings I get from them.

When we look at reducing cost, one of the keys is to get a handle on what costs you're really looking to address and how you can address them. It turns out that many of the cost dimensions can be addressed through a common and integrated approach, building on recent advances in technology and in management and automation tools, on virtualization, and on the investments that companies like HP have been making in enhancing the energy efficiency and the manageability of the servers and infrastructure we provide to customers.

This is why, in my mind, the themes of consolidation, which people have been doing forever; modernization, very consciously making decisions to replace existing infrastructure with newer infrastructure for gains other than performance; and virtualization, which has a lot of promise in terms of driving cost out of the organization, can increase aspects like the flexibility and agility that you mentioned earlier on. It's the ability to respond to growth quickly, to respond to a competitive opportunity or threat very quickly, and the ability for IT to enable the business to be more aggressive, rather than becoming a limiting factor in the roll-out of new products or services.

Gardner: We’re certainly well aware of what’s changed in the macroeconomic climate over the last year or so, but what’s different from two or three years ago, in terms of what we can bring to the table to address these general issues about cost. In particular, how we can modernize, consolidate, and get those energy benefits?

Other issues pop up

Bennett: Besides the macro factors around economics that have come into play, we’ve seen some other issues pop up in the last several years as well. One of them is an increasing focus on green, which means a business perspective on being green as an organization. For many IT organizations, it means really looking to reduce energy consumption and energy-related costs.

We've also seen in many organizations, as they move to a bladed infrastructure and to denser environments, that data center capacity and energy constraints -- the amount of energy available to a data center -- are also inhibiting factors. It's one of the reasons we really advise customers to take a look at doing consolidation, modernization, and virtualization together.

As I briefly touched on earlier, this has been enhanced by a lot of the improvements in the products themselves. They are now instrumented for increasing manageability and automation. The products are integrated to provide management support not just for availability and performance, but also for energy. They're instrumented to support the automation of the environment, including the ability to turn off servers that aren't needed. This is all further amplified by advances in virtualization. A lot of people are doing virtualization.

What we’re doing as a company is focusing on the management and the automation of that environment, because we see virtualization really stressing data center and infrastructure management environments pretty substantively. In many cases, it's impacting governance of the data center.

This is why we look at them together. By combining them and taking an integrated approach, you not only avoid the issues that some others may be experiencing, but you can use them to address a broad set of issues and realize aspects of a data center transformation by approaching these things in an orderly and planned way.

Gardner: We’ve talked about how energy issues are now coming to be much more prominent, cost being a critical issue. Is there anything different about the load, about the characteristic of what we’re asking data centers to do now, than perhaps 5, 10, or 15 years ago that plays into why I would want to modernize and not just look to cut cost?

Bennett: The increasing density of devices in the data-center environment -- racks and racks of servers, for example -- has not only increased the demand for power to run them but, in many cases, has created issues related to cooling and heat in the environment. That is a trend that has exposed people to energy-related risk factors they hadn't experienced before, when they had standalone servers or mainframes in the environment.

With virtualization, we also see increasing density and concentration of devices, because you're really separating the assets -- servers, storage, and the networking environment -- from the business services they are providing. It becomes a shared environment, and your shared environment is just more productive and more flexible if it's one shared environment instead of 3, 4, 5, or 10 shared environments. That increases the density, and it goes back to these other factors that we talked about. That's clearly one of the more recent trends of the last few years in many data centers.

Gardner: I see. So, where we may have had standalone hardware and software applications, siloed or on a mainframe, when you virtualize, you're able to distribute the load and therefore have a much greater ability to increase your utilization generally, rather than on a hit-or-miss basis.

Bennett: Absolutely. I don’t think I could have said it better myself.

Gardner: Tell us a little bit more about green. If we can increase utilization rates through what we're doing with consolidation and virtualization, we also have to look at our total electricity consumption and what that means in terms of carbon footprint. Isn't it possible that we could be looking at ceilings, or even regulations, on what we can do there?

Capacity is an issue

Bennett: You run into both aspects. Capacity is clearly an issue that has to be addressed, and so are increasing regulation and governance. In the last few months, we saw the Data Center Code of Conduct emerge in Europe as a standard recommending best practices for data centers.

We see an increasing focus in countries like the UK on regulation around energy. There are predictions that that's going to accelerate in a number of places around the world. So those become part of the environment that data center managers have to deal with, and they can have severe implications for organizations if they are not compliant.

Gardner: Those have really gone beyond "nice to have" or a way to reduce cost to a "must have."

Bennett: In many cases, that’s very true. Also, there are organizations that had made decisions to be green, where the senior executives and board of directors have made that decision. It’s a management directive and one you have to comply with, independent of government regulations. So, they're coming at you from all sides.

Gardner: I suppose another aspect of this is when you’ve modernized, consolidated, and virtualized your data centers over time, you're further able to automate. You're reducing the amount of labor and manual processes. This strikes me as something that provides an opportunity to manage change better.

Bennett: Yes. When you move to a shared infrastructure environment, the value of that environment is enhanced the more you have standardized it. That makes it much easier not only to manage the environment with a smaller number of sysadmins, but gives you a much greater opportunity to automate processes and procedures.

What we see is the infrastructure enabling this. As I mentioned earlier, we're making significant investments in management, business service management, and automation tools to not only integrate infrastructure management with business service management, but also to have an integrated view of physical and virtual resources with line of sight from the infrastructure and the devices all the way up into the business services being provided.

So, you really have full control, insight, and governance over everything taking place in the data center. Many of those are very new capabilities in the HP product suite. Many of these have been announced within the last 12 months.

Gardner: Then, being able to get better automation and standardization across my data center, I should be able to react to business requirements more quickly. You scale up, scale down, or even shift course better than we would have done in the past.

Bennett: Yes, we use the marketing phrases "flexibility and agility" for that, but what it means is that I no longer have the infrastructure and the assets tied to specific business services and applications. If I have unexpected growth, I can support it by using resources that are not being used quite as much in the environment. It’s like having a reserve line of troops that you can throw into the fray.

If you have an opportunity and you can deploy servers and assets in the matter of hours instead of a matter of days or months, IT becomes an enabler for the business to be more responsive. You can respond to competitive threats, respond to competitive opportunities, roll out new business services much more quickly, because the processes are much quicker and much more efficient. Now, IT becomes a partner in helping the business take advantage of opportunities, rather than delaying the availability of new products and services.

Gardner: These are very important parts of the picture: the energy issue, the upfront cost reduction, the ability to be more fleet and agile, and improving the role and responsibility that IT can provide. You won't have trouble getting people interested in solving these problems, but then we get to the question of how to get to a solution, where we can bring these new technological innovations to bear. How do you get started? Where do you focus?

Experience is important

Bennett: How you get started and where you focus really depend on the individual customer and their organization: their capabilities, their staff resources, and their experience. Many people are well experienced at doing consolidation projects, and they've been doing virtualization. They have staff very experienced at looking at things from a business-service perspective. For many of them, modernizing the infrastructure more aggressively than they have in the past, on top of what they've already been doing, may be the step to take.

There are certainly tools and capabilities, like the Discovery and Dependency Mapping software, to help keep an eye on assets and asset configurations, but we are seeing value in being more aggressive about modernizing infrastructure. Typically, people replace servers on a four-to-five-year cycle -- some as aggressively as every three years, but typically four to five.

In some of the generations of servers that we’ve released, we see 15 to 25 percent improvements from a cost perspective and an energy consumption perspective, just based on modernizing the infrastructure. So, there are cost savings that can be had by replacing older devices with newer ones.

People who have been growing through acquisitions or mergers, or for whom individual lines of business control assets on their own, may need to be a little more methodical in building up a picture of just what they have, and whether they have any of what some in the industry refer to as ghost or zombie servers.

Ken Brill of the Uptime Institute, for example, figures that most people have about 15 percent of their servers not doing anything. The question is how you find out which ones they are.


If you're going to do consolidation, how do you find out which things are connected to which? For people in that kind of situation, the Discovery and Dependency Mapping software is a wonderful way to go. That’s available for purchase, of course, or it can be delivered from HP services.

They identify all of the assets in the environment, the applications, software they're running, and the interdependencies between them. In effect, you build up a map of the infrastructure and know what everything is doing. You can very quickly see if there are servers, for example, not doing anything.
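As a rough sketch of what that kind of discovery map enables -- the data model below is purely illustrative, not the actual Discovery and Dependency Mapping schema -- candidate ghost servers are simply the inventoried servers that no discovered business service references:

```python
# Hypothetical sketch: given a discovered dependency map (which business
# services depend on which servers), flag candidate "ghost" servers that
# no service references. The data model is illustrative, not HP's schema.

from typing import Dict, List, Set

def find_unreferenced_servers(all_servers: Set[str],
                              service_deps: Dict[str, List[str]]) -> Set[str]:
    """Return servers that appear in inventory but in no service's
    dependency list -- candidates for investigation and retirement."""
    referenced = {s for deps in service_deps.values() for s in deps}
    return all_servers - referenced

inventory = {"web01", "web02", "db01", "app01", "legacy07"}
dependencies = {
    "order-entry": ["web01", "app01", "db01"],
    "email": ["web02"],
}
print(find_unreferenced_servers(inventory, dependencies))  # {'legacy07'}
```

In practice the flagged servers would be investigated rather than retired outright, since a server can be busy without appearing in any mapped service.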

Gardner: I suppose from that perspective, you can say, "We're going to take a couple of spot projects where we know we're going to get a big hit in terms of our return and savings," or "Because of our medium-level solution approach, we're going to start taking out full application sets or sets of services, based on some line-of-business or geographic definition." Or, we might even go whole hog, if that's what we're looking at -- more of a data-center modernization to the next generation. All of those seem possible.

Bennett: Our recommendations to many customers would be, first of all, if you identify assets that aren’t being used at all, just get rid of them. The cost savings are immediate. You reduce software license cost, maintenance cost, energy consumption, etc. After that, there are several approaches you can take. You can do a peer consolidation.

If I've got 10 servers doing a particular application and I can support the environment by using 3 of those servers, I get rid of 7. I can also modernize the environment: if I had 10 servers doing this work before, and consolidation alone gives me the opportunity to go to only 6 or 7, then by modernizing I might be able to reduce it to 2 or 3.

On top of that, I can explore virtualization. Typically, in environments not using virtualization, server utilization rates, especially for industry-standard servers, are under 10 percent. That can be driven up to 70 or 80 percent, or even higher, by virtualizing the workloads. Now, you can go from 10 to 3 to perhaps just 1 server doing the work. Ten to 3 to 1 is just an example. In many environments, you may have hundreds of servers supporting web-based applications or email, and the number of servers that can be eliminated can be pretty phenomenal.
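The back-of-the-envelope arithmetic behind that example can be sketched as follows. The utilization and packing targets are illustrative assumptions, not measured figures; under these particular assumptions the estimate comes out 10 to 3 to 2, close to the 10-to-3-to-1 illustration in the discussion:

```python
# Back-of-the-envelope consolidation estimate, using illustrative
# figures in the spirit of the discussion (not measured data).
import math

def servers_needed(workloads: int, per_server_util: float, target_util: float) -> int:
    """Servers required once workloads are packed onto shared capacity
    up to a target utilization level."""
    total_demand = workloads * per_server_util  # aggregate demand, in whole-server units
    return max(1, math.ceil(total_demand / target_util))

standalone = 10  # 10 standalone servers, each about 10 percent utilized
# Consolidation and modernization: pack workloads to a modest 35 percent target.
consolidated = servers_needed(standalone, per_server_util=0.10, target_util=0.35)
# Virtualization: drive shared utilization up to 75 percent.
virtualized = servers_needed(standalone, per_server_util=0.10, target_util=0.75)

print(standalone, consolidated, virtualized)  # 10 3 2
```

The same arithmetic scaled to hundreds of web or email servers is what makes the reductions Bennett describes plausible.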

Gardner: And, all the while, we're reducing physical footprint or the amount of labor required, and we're cutting the energy footprint.

Laying the groundwork

Bennett: All of the above -- and also laying the groundwork for a next-generation data center. We call it an adaptive infrastructure, but the idea is to have a shared-resource environment that is virtualized, automated, and capable of shifting assets to where they're needed, when they're needed, and of supporting growth dynamically and pretty seamlessly.

If you take an integrated approach to this by looking at consolidation, modernization, and virtualization together, you actually lay the foundation for that adaptive infrastructure. That’s a real long-term benefit that can come on top of all of the short- and near-term benefits that come with cost reductions and energy savings.

Gardner: We've certainly heard about energy, utilization, and moving to virtualization reducing the number of actual servers and therefore the number of people. Do you have examples of organizations that have gone after these benefits, and what sort of experience have they had?

Bennett: We have a lot of examples with people looking to save money. What’s more interesting is to look at a couple of examples of people who have had other objectives, and how they realized those objectives through consolidation, modernization, and virtualization.

An example is a company called MICROS-Fidelio. They provide integrated IT solutions for the hotel industry. They also were looking to improve their competitive advantage and they very specifically were looking at accommodating business growth, even though they had severe limitations in terms of data center space and power capacity. They really didn't want to be investing money in either of those two areas.

They standardized and virtualized their environment using HP BladeSystem and HP Insight Dynamics. In terms of business benefits, they saw a 45 percent reduction in missed service-level agreement (SLA) objectives, which reduced the penalties they were paying to their customers by being more predictive and providing better quality of service.

Gardner: In fact, immediate payback.

Bennett: Immediate payback, and not just in terms of cost savings, but in terms of brand reputation. They also had a 50 percent annual growth rate in the data center, which was supported with just a 25 percent increase in IT staff.

They didn't provide us an absolute dollar figure, but they saved "six figures a year" in personnel cost, avoided by being able to do rolling updates to the environment instead of static updates. They also deployed new servers three times faster. Again, it was a pretty comprehensive set of benefits -- not just cost savings, but agility and flexibility, and dealing with space and energy constraints -- by taking a systematic and integrated approach to consolidation, modernization, and virtualization.

Gardner: Before we wrap up, John, I'm really fascinated by this notion of additional automation -- the more modern and virtualized the systems are, the more ability you have to bring in management capabilities that allow automation to almost take off on a hockey-stick curve. Not that we want to take people out of the equation, but we want those people to be well utilized themselves. So, what does the future have in store for us in terms of moving the needle even further?

Tight control

Bennett: You'll see improvements in a number of areas. Clearly, at the infrastructure level, we continue to make sure we're doing everything possible to ensure that the assets themselves are instrumented to be controlled as tightly or as loosely as an organization would like.

We're making a lot of investments in ensuring that physical and virtual assets are managed in a consistent and integrated way, because from a business-service perspective, the business service doesn't care where it's running. But if you have issues with quality of service, you need to be able to track them down through the environment, and for that, an integrated view of both is necessary.

Third, we see an increasing focus on automating standard procedures in business processes and in business service management and automation. That has to stretch from the business service down to infrastructure management, down into the virtual resources, and down into the physical resources. So, it's an ongoing investment in integrating those capabilities, extending the capabilities of the software portfolios, and making sure that control extends down into the depths of the hardware.

We also continue to make ongoing investments in improving the energy efficiency of the servers, the storage, and the networking devices in the data center. Our first patents in this area go back 11 or 12 years now, and with each new generation of blade system, for example, we continue to see pretty substantive improvements in energy consumption and energy demands.

Gardner: Well, great. We've been discussing how organizations should consider consolidation, modernization, and virtualization as a tag team, or a combo team. The short-, medium-, and long-term payoffs from these approaches are rather substantial. They're both immediate and have longer-term strategic benefits baked in.

We’ve been discussing this with John Bennett. He is a worldwide solution manager for Data Center Transformation Solutions at Hewlett-Packard. I truly appreciate your insights, John.

Bennett: Well, thank you very much. I encourage all of those listening to this to take a look at what they can do in their own environments. The potential is pretty significant.

Gardner: Well, great. I also want to thank the sponsor of this podcast, Hewlett-Packard, for underwriting its production. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and come back next time.


Transcript of a BriefingsDirect podcast on how IT departments can provide better services with greater efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, June 29, 2009

T-Mobile Ramps Up Quality-Based Business Rewards from Applications Testing Improvements

Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas during the week of June 15, 2009.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas. We’re here in the week of June 15, 2009 to explore the major enterprise software and solutions trends and innovations that are making news across the global HP ecology of customers, partners and developers.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and I'll be your host throughout this special series of HP Sponsored Software Universe live discussions.

This customer interview is with another HP Software and Solutions Excellence Award winner, T-Mobile USA. Please join me in welcoming Michael Cooper, director of enterprise quality management at T-Mobile. Welcome back, Michael.

Michael Cooper: Good afternoon, Dana.

Gardner: When you're serving over 33 million mobile customers, you have a lot of apps that those customers need or will need. They become pretty mission critical. You also have a lot of internal apps. Your enterprise resource maintenance applications, of course, are also classified as mission critical.

In order to get apps, customized apps, and new apps out the door in good shape, so that you don’t have downtime, the testing and quality assurance process is pretty important. Tell us a little bit about how you wanted to improve that process and what were the problems that you needed to address?

Cooper: You’re absolutely right. The problem that we needed to address was that testing was expensive. It took a lot of time, and, because we were doing it manually, the tests were not always consistent and repeatable. What we wanted to get to was an automated framework and we decided to focus on the business process that was important to our customers.

Gardner: We often hear that people, process, and product are what all come together to make these things repeatable, more efficient, and effective. Of course, being more efficient these days is top of mind. Tell me little bit more about the test methodologies. When you looked for solutions, what were your requirements or criteria?

Cooper: We were looking for something that was easy to use and that was an industry standard. We were looking for something that would give us good traceability. And, we were looking for something that would allow us to automate and be reusable. So we chose the Business Process Testing (BPT) framework.

Gardner: Tell me more about how that works?

Automating business processes

Cooper: We thought about what our real business processes are -- for example, ordering a phone, activating a phone, sending out bills. We organized components that describe these business processes. We extended those by automating them and used them for our regression testing primarily.
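A minimal sketch of that component idea -- hypothetical code, not HP BPT's actual framework -- might chain small, reusable steps into one automated business-process test, so the same components can be recombined across the regression suite:

```python
# Illustrative sketch of component-based business-process testing.
# Each function models one reusable step of a real business process,
# mirroring the examples given: ordering, activating, billing.

def order_phone(state):     # reusable component: place an order
    state["ordered"] = True
    return state

def activate_phone(state):  # reusable component: activate the ordered phone
    assert state.get("ordered"), "cannot activate before ordering"
    state["active"] = True
    return state

def send_bill(state):       # reusable component: bill the active line
    assert state.get("active"), "cannot bill an inactive line"
    state["billed"] = True
    return state

def run_process(*components):
    """Chain components into one automated business-process test."""
    state = {}
    for step in components:
        state = step(state)
    return state

# A regression suite reuses the same components in different processes.
result = run_process(order_phone, activate_phone, send_bill)
print(result)  # {'ordered': True, 'active': True, 'billed': True}
```

The payoff Cooper describes comes from this reuse: once a component is automated, every process that includes it inherits a consistent, repeatable step.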

Gardner: So, in order to accomplish that, what actual products did you put in place?

Cooper: The actual products we put in place were BPT, for both manual and automated testing, and Quality Center -- all the modules of Quality Center. We extended that to leverage those scripts for monitoring with Business Availability Center (BAC). In some cases, where we had a service-oriented architecture (SOA), we used Service Tester, and for our performance testing we used Performance Center and LoadRunner.

Gardner: What were some of the results? Did you have any metrics of success that stick out in your mind as worthy of mentioning?

Cooper: The success metrics were really around time savings. We saved about 50 percent of each regression cycle each month. We cut the testing time in half. The second thing, and this is probably the most important one, is that we reduced post-production defects by 75 percent. The benefit of that is that it reduced our cost of fixing those defects, plus our operations cost.
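To see how those two figures could combine into a monthly saving, here is a purely illustrative calculation; every baseline number below is a made-up assumption, and only the 50 percent and 75 percent improvements come from the discussion:

```python
# Illustrative only: hypothetical baseline figures showing how the quoted
# improvements (50% shorter regression cycles, 75% fewer post-production
# defects) might translate into monthly cost savings.

baseline_cycle_hours = 400   # assumed manual regression effort per month
baseline_defects = 40        # assumed post-production defects per month
cost_per_hour = 75.0         # assumed loaded labor rate
cost_per_defect = 2000.0     # assumed fix-plus-operations cost per defect

after_cycle_hours = baseline_cycle_hours * 0.50  # cycle time cut in half
after_defects = baseline_defects * 0.25          # defects down 75 percent

monthly_savings = ((baseline_cycle_hours - after_cycle_hours) * cost_per_hour
                   + (baseline_defects - after_defects) * cost_per_defect)
print(monthly_savings)  # 75000.0 under these assumptions
```

Note that with these assumptions the defect reduction, not the cycle-time saving, dominates the total, which matches Cooper's emphasis on defect prevention.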

Gardner: And that also translates into a lot of more satisfied customers, less churn, and that’s the name of the game, right?

Cooper: Exactly.

Gardner: What advice would you offer to others who are looking to move from manual, siloed, or at least inconsistent approaches to app testing, and who are looking for a more holistic, complete, repeatable, and methodologically consistent approach?

Cooper: I would focus on defect prevention rather than defect detection. I would automate your test for reusability and consistency. And, I would like to say that HP has been a great partner in this journey.

Gardner: You mentioned SOA. One of the tenets of that is repeatability and reuse. Did you find that using scripts across this more consistent environment saved you money, because those scripts could be used again and again and perhaps across multiple application-development activities?

Cooper: You’re absolutely right. Not only did we use them with each release, it allowed us to use it for monitoring as well.

Gardner: Great. We've been talking about moving to more efficient development and test, and therefore better post-production application quality. We've been discussing that with a winner of the Awards of Excellence competition here at HP Software and Solutions: T-Mobile USA, represented by Michael Cooper, director of enterprise quality management. Thanks, Michael.

Cooper: Thank you, Dana.

Gardner: Thanks for joining us for this special BriefingsDirect podcast, coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas.

I'm Dana Gardner, principal analyst at Interarbor Solutions, your host for this series of HP-sponsored Software Universe Live Discussions. Thanks for listening and come back next time.


Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas during the week of June 15, 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, June 26, 2009

IT Financial Management Provides Required Visibility into Operations to Reduce Total IT Costs

Transcript of a BriefingsDirect podcast on how IT departments should look deeply in the mirror to determine and measure their costs and how they bring value to the enterprise.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on bringing improved financial management capabilities to enterprise IT departments. The global economic downturn has accelerated the need to reduce total IT cost through identification and elimination of wasteful operations and practices. At the same time, IT departments need to better define and implement streamlined processes for operations and also for proving how new projects begin and unfold.

Knowing the true costs and benefits of complex and often sprawling IT portfolios quickly helps improve the financial performance of IT operations. Gaining real-time visibility into dynamic IT cost structures provides a powerful tool for reducing cost while maintaining and improving overall performance. Holistic visibility across an entire IT portfolio also yields the visual analytics that can help probe for cost improvements and uncover waste.

Here to help us understand the relationship between IT, financial management, and doing more for less in tough times are two executives from Hewlett-Packard (HP). Please help me welcome Ken Cheney, director of product marketing for IT Financial Management at HP Software and Solutions. Welcome, Ken.

Ken Cheney: Thanks, Dana, I appreciate the opportunity.

Gardner: We’re also joined by John Wills. He’s a practice leader for the Business Intelligence Solutions Group at HP Software and Solutions. Welcome, John.

John Wills: Hi, thank you, Dana.

Gardner: Ken, let's start with you. We've heard for quite some time that IT needs to run itself more like a business, to do more with less, and to provide better visibility for the bean counters. But now that we're in a tough economic climate, this is perhaps a more pressing concern. Give me a sense of what's different about running IT as an organization and a business now versus two years ago.

Cheney: Dana, the economy has definitely changed the game in terms of how IT executives are operating. I and the others within HP are hearing consistently from IT executives that cost-optimization, cost-containment, and cost-reduction initiatives are the top priority being driven from the business down to IT.

IT organizations, as such, have really shifted their focus from a raft of new initiatives driving innovation to dealing with situations such as how to manage merger and acquisition (M&A) processes, how to better leverage existing IT assets, and how to provide better decision-making capabilities in order to effectively control cost.

The landscape has changed in such a way that IT executives are being asked to be much more accountable about how they’re operating their business to drive down the cost of IT significantly. As such, they're having to put in place new processes and tools in order to effectively make those types of decisions.

Gardner: Now, John, tell me about the need for better visibility. It seems that you can’t accomplish what Ken's describing, if you don’t know what you have.

Wills: Right, Dana. That’s absolutely correct. If all of your information is scattered around the IT organization and IT functions, and it’s difficult to get your arms around, then you’re exactly right. You certainly can’t do a good job managing going forward.

A lot of that has to do with being able to look back and to have historical data. Historical data is a prerequisite for knowing how to go forward and to look at a project’s cost and where you can optimize cost or take cost down and where you have risk in the organization. So, visibility is absolutely the key.

Gardner: It’s almost ironic that the IT department has been helping other elements of the enterprise do the exact same thing -- to have a better sense of their data and backwards visibility into process and trends. Business intelligence (BI) was something that IT has taken to the business and now has to take back to itself.

Wills: It is ironic, because IT has spent probably the last 15 years taking tools and technologies out into the lines of business, helping people integrate their data, helping lines of business integrate their data, and answering business questions to help optimize, to capture more customers, reduce churn in certain industries, and to optimize cost. Now, it’s time for them to look inward and do that for themselves.

Gardner: When we start to take that inward look, I suppose it’s rather daunting. Ken, tell us a little bit about how one gets started. What is the problem that you need to address in order to start getting this visibility that can then provide the analytics and allow for a better approach to cost containment?

From managed to siloed

Cheney: If you look at the situation IT is in, businesses actually had better management systems in place in the 1980s than IT has in place today. The visibility and control across the investment lifecycle were there for the business in the 1980s, with the likes of enterprise resource planning (ERP) and corporate performance management capabilities. Today, IT operates in a very siloed manner, where the organization does not have a holistic view across all its activities.

The processes IT is driving are often ad hoc rather than consistent. The reporting methods grow up through these silos and, as such, the data tends to be worked within a manual process and tends to be error-prone. There's a tremendous amount of latency there.

The challenge for IT is how to develop a common set of processes that are driving data in a consistent manner that allows for effective control over the execution of the work going on in IT as well as the decision control, meaning the right kind of information that the executives can take action on.

Gardner: John, in getting to understand what’s going on across these silos in IT, is this a problem that’s about technology, process, people, or all three? What is the stumbling block to automating some of that?

Wills: That’s a great question. It’s really a combination of the three. Just to be a little bit more specific, when you look at any IT organization, you really see that a lot of the cost is around people and labor. But then there is a set of physical assets -- servers, routers, all the physical assets that are involved in what IT does for the business. And there is a financial component that cuts across both of those two major areas of spend.

As Ken said, when you look back in time and see how IT has been maturing as an organization and as a business, you have a functional part of the organization that manages the physical assets, a functional part that manages the people, manages the projects, and manages the operation. Each one of those has been maturing its capability operationally in terms of capturing their data over time.

Industry standards like the Information Technology Infrastructure Library (ITIL) have been driving IT organizations to mature. They have an opportunity, as they mature, to take advantage and take it to the next level of extracting that information, and then synthesizing it to make it more useful to drive and manage IT on an ongoing basis.

Gardner: Ken, you can’t just address new technology at this juncture. You can’t just say, "We’re going to change our processes." You can’t just start ripping people out. So, how do you approach this methodologically? Where do you start on all three?

Cheney: IT organizations are going to be starting in multiple places to address this problem. The industry is in a good position to address this problem. Number one, as John mentioned, process standardization has occurred. Organizations are adopting standards like ITIL to help improve the processes. Number two, the technology has actually matured to the point where it’s there for IT organizations to deploy and get the type of financial information they need.

We can automate processes. We can drive the data that they need for effective decision-making. Then, there is also the will, in terms of the pressure to better control cost. IT spend these days comprises about 2 to 12 percent of most organizations’ total revenue, a sizable component.

Gardner: I suppose there also has to be a change in thinking here at a certain level. If you’re going to start defining costs and behaving like a business rather than a cost center, you have to make a rationale for each expenditure, both operationally and on a capital-expenditure basis. That requires a cultural shift. Can you get into that a little bit?

Speaking the language of business

Cheney: It sure does. IT traditionally has done a very good job communicating with the business in the language of IT. It can tell the business how much a server costs or how much a particular desktop costs. But it has a very difficult time putting the cost of IT in the language of the business -- being able to explain to the business the cost of a particular service that the business unit is consuming.

For example, quote to cash: How much is a particular line of business spending on quote to cash, or how much does email cost based on actual usage per employee? These are some of the questions that the business would love to know, because they're trying to drive business initiatives, and these days an IT initiative is really part of a business initiative.
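To make the per-employee service-cost question concrete, the kind of usage-based allocation Cheney describes can be sketched in a few lines. This is a hypothetical illustration with invented numbers, not logic from any HP product:

```python
# Hypothetical illustration: allocate a service's total monthly cost
# to business units in proportion to each unit's actual usage.
def cost_per_unit(total_cost, usage_by_unit):
    """Return each consumer's share of total_cost, weighted by usage."""
    total_usage = sum(usage_by_unit.values())
    return {unit: total_cost * usage / total_usage
            for unit, usage in usage_by_unit.items()}

# Invented example: a $30,000/month email service and three departments'
# mailbox counts standing in for "actual usage per employee."
email_usage = {"sales": 500, "engineering": 300, "finance": 200}
shares = cost_per_unit(30_000, email_usage)
print(shares)  # {'sales': 15000.0, 'engineering': 9000.0, 'finance': 6000.0}
```

The point of the exercise is the translation step: the inputs are IT-side measurements, but the output is a number a line-of-business owner can compare against business value.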

In order to effectively assess the value of a particular business initiative, it’s important to know the actual cost of that initiative or the process it supports. IT needs to step up and provide that information, so that the business as a whole can make better investment decisions.

Gardner: I suppose business services are increasingly becoming the coin of the realm, and defining costs in terms of processors, numbers of cores, or licenses per user or seat doesn’t really translate into what a business service costs. John, how does BI come onto the scene here and help gather all of the different aspects of the service so that it can be accounted for?

Wills: It all ties together very strongly. Listening to what Ken was saying about tying to the investment options and providing that visibility ties directly to what you are asking about BI. One of the things that BI can help with at this point is to identify the gaps in the data that’s being captured at an operational level and then tie that to the business decision that you want to make.

So again, Dana, back to one of your earlier questions about whether it's a people, process or technology issue, my answer would be that it's really all of the above. BI comes along and says, "Well, gee, maybe you’re not capturing enough detailed information about business justification on future projects, on future maintenance activity, or on asset acquisition or the depreciation of assets."

BI is going to help you collect that and then aggregate that into the answers to the central question that a CIO or senior IT management may ask. As Ken said, it’s very important that BI, at the end of that chain or sequence of activities, helps communicate that back in terms that business can understand so they can do an apples-to-apples comparison of where they would like IT to satisfy their needs with a given budget at hand. Dana, that goes back to again one of your earlier questions. That’s one of the keys in helping IT shift from just being a cost center to being an innovator to help drive the business.

Gardner: I suppose that as we move from manual processes in IT toward these visualization and analytics tools, there's also a cultural shift. A printout might work well in the IT department, but if you take that to your decision maker on the business side, they're going to say, "Where's my dashboard? Where are my dials?" What’s the opportunity here to make this into more of a visual benefit in terms of understanding what’s going on in IT and its costs? Why don’t we take that to Ken?

Getting IT's house in order

Cheney: In terms of the opportunity, it’s really around helping IT get its own house in order. We look at the opportunity as being one of helping IT organizations put in place the processes in such a way that they are leveraging best practices, that they're leveraging the automation capabilities that we can bring to the table to make those processes repeatable and reliable, and that we can drive good solid data for good decision-making. At least, that’s the hope.

By doing so, IT organizations will, in effect, cut through a lot of the silo mentality, the manual error-prone processes, and they'll begin operating much more as a business that will get actionable cost information. They can directly look at how they can contribute better to driving better business outcomes. So, the end goal is to provide that capability to let IT partner better with the business.

Gardner: Tell me a little bit more about the solutions and services that you'll be announcing at HP’s Software Universe event?

Cheney: At Software Universe this year, we rolled out a new solution to help customers with IT financial management. For quite some time, we’ve been in the business of doing project portfolio management (PPM) with HP Project Portfolio Management Center, as well as in the business of helping organizations better manage their IT assets with HP Asset Manager.

We have large customer bases that are leveraging those tools. With our customers who are using the PPM product as well as the Asset Management product from HP, we can effectively capture the labor cost. If you look at PPM, it really tracks what people are working on, effectively managing the resources and capturing the time and cost associated with what those resources are doing. Then there's the non-labor cost: all of the assets out there -- physical, logical, and virtual. We're talking about servers and software, so that we can pull that together to understand the total cost of ownership.
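The labor-plus-asset aggregation described here can be roughly sketched as follows. The record formats and figures are invented for illustration; they are not the schemas of the HP PPM or Asset Manager products:

```python
# Hypothetical sketch: total cost of ownership (TCO) for one service
# = labor (hours x rate, as a PPM-style time-tracking tool might record)
# + non-labor (asset costs attributed to the service).
labor_records = [
    {"service": "email", "hours": 120, "rate": 75.0},
    {"service": "email", "hours": 40, "rate": 95.0},
]
asset_records = [
    {"service": "email", "asset": "server", "monthly_cost": 1_200.0},
    {"service": "email", "asset": "software license", "monthly_cost": 800.0},
]

labor = sum(r["hours"] * r["rate"] for r in labor_records)
non_labor = sum(r["monthly_cost"] for r in asset_records)
print(f"TCO for email: ${labor + non_labor:,.2f}")  # TCO for email: $14,800.00
```

The two sums correspond to the two data streams Cheney names: what people are working on, and what assets the service consumes.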

We’ve brought together what we’re doing within PPM as well as within Asset Management with a new product called HP Financial Planning and Analysis. This product effectively allows IT organizations to consolidate their budgets, as well as their costs. It can allocate the cost as appropriate to who they view as actually consuming those particular services that IT is delivering.

Then, we provide the analytic reporting capabilities on top of that to allow IT organizations to better communicate with the business, and to better control, optimize, and make decisions around cost. They can effectively drive that decision making right down to the execution of the work that’s occurring within the various processes of IT. That’s a big part of what we’re delivering with our IT financial management capability.

Gardner: So, tell us about the products and solutions that are coming to market.

Cheney: We have a new solution that we’re announcing as part of the HP Financial Planning and Analysis offerings.

Gardner: Does that have several modules, or are there certain elements to it -- or more details on how that is rolling out?

Service-based perspective

Cheney: The HP Financial Planning and Analysis product allows organizations to understand costs from a service-based perspective. We’re providing a common extract, transform, load (ETL) capability, so that we can pull information from data sources. We can pull from our PPM product and our asset management product, but we also understand that customers are going to have other data sources out there.

They may have other PPM products they’ve deployed. They may have ERP tools that they're using. They may have Excel spreadsheets that they need to pull information from. We'll use the ETL capabilities to pull that information into a common data warehouse where we can then go through this process of allocating cost and doing the analytics.
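As a rough, hypothetical sketch of that extract-transform-load flow -- pulling cost rows from disparate sources into one common table for analysis -- with SQLite standing in for the data warehouse:

```python
# Hypothetical ETL sketch: extract cost rows from heterogeneous sources,
# normalize them to one schema, and load them into a single "warehouse"
# table (SQLite stands in for the common data warehouse).
import sqlite3

def extract_spreadsheet():
    # Stand-in for rows pulled from an Excel export or another PPM tool.
    return [("2009-06", "email", "labor", 12_800.0),
            ("2009-06", "email", "assets", 2_000.0)]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE it_costs
                (period TEXT, service TEXT, category TEXT, amount REAL)""")
conn.executemany("INSERT INTO it_costs VALUES (?, ?, ?, ?)",
                 extract_spreadsheet())

# Analytics over the consolidated data: total cost per service per period.
for row in conn.execute("""SELECT period, service, SUM(amount)
                           FROM it_costs GROUP BY period, service"""):
    print(row)  # ('2009-06', 'email', 14800.0)
```

Once every source feeds the same schema, the allocation and analytics steps described next operate on one consistent dataset instead of scattered spreadsheets.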

Gardner: So, John, going back to that BI comparison, it sounds a lot like what people have been doing in trying to get a single view of a customer in terms of application data.

Wills: It really is. It’s a single view, except in this case we’re getting a single view of cost across many different dimensions. As Ken said, it’s really important that we formalize the way cost data is brought in from all of those Excel spreadsheets and Access databases that sit under somebody’s desk. Somebody keeps the monthly numbers in their own spreadsheets in a different department, and they’re spread around in all of these different systems. We really want to formalize that.

As to your previous question about visualization, it’s not only about formalizing it and pulling it all together. It’s about offering very powerful visualization tools to be able to get more value and to see immediately where you could take advantage of cost opportunities in the organization.

Part of Financial Planning and Analysis is Cost Explorer, a very traditional BI capability in terms of visualizing data, applied here to IT cost. You can search through the data and look at it from many different dimensions, color-coding, looking at variances, and having this information pop out at you.

Gardner: It’s one thing to be able to gather, visualize, and put all this information in the context of a cost item or a service, but the larger payback comes from actually being able to associate that cost with the user or some organization that’s consuming these services at some level or another. How do we get from the position of visibility and analytics to a chargeback mechanism?

Cheney: Most customers that I talk to these days are very keen on jumping immediately to the charge back and value side of the equation. I like to say, "Let’s start by walking before we run," with the full understanding that the end goal really is being able to show the value that IT is delivering and be able to charge back for the services that are actually being consumed.

Most organizations haven’t even put in place the processes they need, which is why, when we talk about what we’re doing with IT financial management, we want to make sure customers understand that it’s a complete solution, where the underlying processes are the foundation for understanding value and doing chargeback. To get to that nirvana of understanding the value of IT, customers need to put in place those processes around capturing labor cost and non-labor assets and effectively managing the IT investment lifecycle end to end.

On top of that, by doing the cost aggregation and the analytics that we are doing with the Financial Planning and Analysis offering, you get the cost visibility. Once you understand the cost, you can then go through the process of pricing out what your services are. At that point, once you are able to actively price your services, you're able to charge back for the consumption of those services.
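The cost-to-price-to-chargeback sequence Cheney outlines can be sketched with invented numbers. This illustrates the general idea, not any product's allocation engine:

```python
# Hypothetical sketch of the sequence: aggregate cost -> derive a
# unit price -> charge consumers back according to their usage.
total_service_cost = 15_000.0     # invented: aggregated monthly service cost
units_consumed = 1_000            # invented: e.g., mailboxes in the service
unit_price = total_service_cost / units_consumed  # cost-recovery price

usage = {"sales": 500, "engineering": 300, "finance": 200}
chargeback = {unit: count * unit_price for unit, count in usage.items()}
print(chargeback)  # {'sales': 7500.0, 'engineering': 4500.0, 'finance': 3000.0}
```

Note the ordering the speakers stress: pricing only becomes possible once the aggregated cost number exists, and chargeback only once the price does.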

IT underappreciated

Gardner: Over the years, I've heard from a number of IT folks about their frustration at not being appreciated. People don’t understand what goes into being able to provide these services. This perhaps opens up an opportunity or a door for the IT department to explain itself better and perhaps be better appreciated. Does that bear fruit in some of the uses that you’ve come across so far?

Cheney: Absolutely. That is really what we are driving for -- to help IT organizations be much more credible in front of the business, for business to understand what it is that they are actually paying for, and for IT to react much more nimbly to the requests that are coming in from the business.

Wills: You are certainly being more transparent. Put the question of charge back to the side for a moment. Without question, you're able to be more transparent in what the costs are. You're able to use the same terminology, very consistent terminology, that the business understands, which is a huge leap forward for most organizations. When you have that transparency, when you have a common set of terminology in the way that you communicate things, it’s a huge boost for IT to be able to justify how they are spending their budget money.

Gardner: Let me ask an interesting question. Who in IT is responsible for this? Is there a "chief visibility officer," if you will, within the IT department? Who is generally the sign-off on the purchase order for these sorts of products?

Wills: The chief sign-off officer, the chief visibility officer, is the CIO. There is no question. The CIO is the one. It’s really interesting. When we talk to accounts, the one who has the burning issues, the one who is most often in front of the business justifying what IT does, is the CIO -- at the highest level, obviously.

That's always interesting, because the CIO has the most immediate pain. Oftentimes, people one or two levels beneath him are grinding through, manually pulling data together month after month and sending that data upstairs, so to speak. They don't have the same levels of interaction with the end customers to feel that acute pain, but the CIO is definitely the one who sees it on a daily basis. Would you agree, Ken?

Cheney: I would. Many CIOs have created, essentially, an IT finance arm, and they may have a role, such as a CFO for IT or IT finance, that takes all the information rolling up from those folks lower down in the organization and tries to make sense of it. This is a manual, very error-prone process these days. So, for that particular organization, charged with making sense of IT finances and associating the actual cost of IT with the services consumed, it is a big challenge. As such, it makes the job of the CIO very difficult when it comes to going out and communicating with the business.

Gardner: Let’s see if we can quickly identify some examples. Do you have case studies or customers that you can describe for us who have undertaken some of this, and what have they found? Did they get actual significant savings? Did they see an improvement in their trust and sense of validation before the business, or perhaps are they looking more for efficiency and are improving the productivity of their IT departments? Any metrics of success for how these products are being used?

Cheney: In terms of how customers are being successful, we’ve seen customers these days who are very focused on quick results, meaning that when they deploy what we’re bringing to the table, they do it in a very targeted manner. We recommend that customers do exactly that: tackle this problem in what I call bite-size chunks, aiming for a win within a few months at most. We have customers who will start, for example, with a base level of control over their IT processes.

One great starting point would be strategic portfolio management. We recently did a survey of about 200 IT executives and found that 43 percent of those executives said they have no form of portfolio rigor in place today. We also did a benchmark study with a group of our customers through the Gantry Group, a third party that does return on investment (ROI) analysis, and we found that, on average, the strategic portfolio work our customers do could save 2 to 15 percent of their total IT budget. That's an area where we can have a very quick, impactful win, and it's a good example.

Another area would be asset, inventory, and utilization, where we have customers who will get started just by understanding what they have out there in terms of their servers and desktop software and getting a grip on that. There are immediate savings to be had with that type of initiative as well.

A look at the future

Gardner: That brings up looking at the future. We’ve heard a lot about virtual desktop infrastructure (VDI), bringing a lot of what was done locally back to the data center, but with cost issues being top of mind around that. Then, we’re also hearing quite a bit about cloud computing. It seems to me that we’re going to have to start doing some really serious cost-benefit analysis of what it costs to maintain my current client distribution architecture versus going to a VDI or a desktop-as-a-service (DaaS) approach.

I am also going to need to start comparing and contrasting cloud-based services, applications and/or infrastructure against what it costs and what we're doing internally. Do you see some trends in the field, some future outlook, in terms of what the role of IT is going to move into in terms of being able to do these financial justifications?

Cheney: Absolutely. This is an area that we're seeing customers having to grapple with on a consistent basis, and it’s all about making effective sourcing decisions. In many respects, cloud computing, software as a service (SaaS), and virtualization all present great opportunities to effectively leverage capital. IT organizations really need to look at it through the lens of what the intended business objectives are and how they can best leverage the capital that they have available to invest.

Gardner: John, something further to offer?

Wills: There is a huge inflection point right now. Virtual computing, cloud computing, and some of these trends that we see really point toward the time being now for IT organizations to get their hands around cost at a detailed level and to have a process in place for capturing those costs. The world, going forward, obviously doesn’t get simpler. It only gets more complex. IT organizations are really counted on to use capital wisely. They’re looked to as the decision makers for where to allocate that capital, and some of it’s going to be outside the four walls.

We've seen that on the people side of the business with outsourcing for quite some time. Now, it’s happening with the hardware and software side of the business as well. But these decisions are very strategic for the enterprise overall. The percentage of spend, the IT spend-to-revenue, for a lot of these organizations is very large. So, it’s absolutely critical for the enterprise that they get their hands around a process for capturing cost and analyzing cost, if they're going to be able to adapt and evolve as this market continues to change so rapidly.

Gardner: If an outside provider can walk in and say this application or this infrastructure is going to cost this much per employee per month, that’s pretty concrete. If the business decision maker goes back to the IT department and says, "How much is that going to cost from your perspective," they have got to have an answer, right?

Wills: Right. You'd better have an answer for what your fully loaded costs are across every dimension of your business and you'd better understand things like your direct cost, indirect cost, fixed cost, and variable cost. It’s really about looking into the future and predicting not only risk, but opportunity, advising the board and CEO, and saying, "These are our choices and this is the best use of capital."
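A minimal sketch of what "fully loaded" means here -- rolling up direct, indirect, fixed, and variable components and expressing the result per employee -- with purely illustrative figures:

```python
# Hypothetical sketch: a "fully loaded" monthly service cost rolling up
# direct, indirect, fixed, and variable components, per employee.
cost_components = {
    "direct_labor": 12_800.0,      # staff time charged to the service
    "indirect_overhead": 3_000.0,  # facilities and management allocation
    "fixed_assets": 2_000.0,       # depreciation and licenses
    "variable_usage": 1_500.0,     # per-use charges (storage, bandwidth)
}
employees = 500

fully_loaded = sum(cost_components.values())
print(f"Fully loaded: ${fully_loaded:,.0f}/month "
      f"(${fully_loaded / employees:.2f} per employee)")
# Fully loaded: $19,300/month ($38.60 per employee)
```

Omitting any one of those buckets is how an internal cost ends up looking artificially cheaper than an outside quote.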

Gardner: I was just having a chat about cloud computing with Frank Gillett of Forrester Research a few weeks ago. He was saying that when you take a hard look at these costs, in many cases, doing it on-premises internally is actually quite a bit more attractive than going to the cloud, but you have to come up with the numbers to actually justify that.
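Gillett's point reduces to a simple comparison once a fully loaded internal number exists. The figures below are invented solely to illustrate the shape of the calculation:

```python
# Hypothetical comparison with invented figures: fully loaded internal
# cost per employee per month vs. a cloud vendor's quoted price.
def per_employee_month(total_monthly_cost, employees):
    return total_monthly_cost / employees

on_prem = per_employee_month(19_300.0, 500)  # invented internal cost
cloud_quote = 45.0                           # invented vendor price
cheaper = "on-premises" if on_prem < cloud_quote else "cloud"
print(f"on-prem ${on_prem:.2f} vs. cloud ${cloud_quote:.2f} -> {cheaper} wins")
# on-prem $38.60 vs. cloud $45.00 -> on-premises wins
```

The arithmetic is trivial; the hard part, as the speakers note, is producing a defensible internal number to feed into it.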

Wills: You have to be able to justify it. There is also the dimension of customer satisfaction. You talk about service-level agreements (SLA), and you must factor that in. So, you start to get into some of the soft aspects of costing things out and looking at opportunity cost, but you have to factor those in as well. It does show some of the complexity here with the problem that’s at hand.

We really believe this is the time for the organizations to seriously get their arms around this.

Gardner: Well, I'm afraid we're about out of time. We've been learning how improved financial management capabilities can help reduce total IT cost through the identification and elimination of wasteful operations. Gaining visibility into actual IT cost structures can also help organizations justify themselves to the business, find the right balance in budgets and future projects, and, looking ahead, be in a good position to compare and contrast against virtualization, cloud computing, and some of the other new opportunities for IT acquisition.

I want to thank our panel for getting deeply into this conversation. It’s really been fun. I also want to thank our sponsor for today’s discussion, HP Software and Solutions, for underwriting its production. We’ve been joined by Ken Cheney, director of product marketing for IT Financial Management at HP Software and Solutions. Thanks, Ken.

Cheney: Great. Thank you.

Gardner: And also, John Wills, practice leader for the Business Intelligence Solutions Group at HP Software and Solutions. I appreciate your input, John.

Wills: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how IT departments should look deeply in the mirror to determine and measure their costs and how they bring value to the enterprise. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.