Thursday, April 15, 2010

Information Management Takes Aim at Need for Improved Business Insights From Complex Data Sources

Transcript of a sponsored BriefingsDirect podcast on how companies are leveraging information management solutions to drive better business decisions in real time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Get a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today's sponsored podcast discussion delves into how to better harness the power of information to drive and improve business insights.

We’ll examine how the tough economy has accelerated the progression toward more data-driven business decisions. To enable speedy, proactive business analysis, information management (IM) has emerged as an essential ingredient in making the business intelligence (BI) behind these decisions pay off.

Yet IM itself can become unwieldy, as well as difficult to automate and scale. So managing IM has become an area for careful investment. Where then should those investments be made for the highest analytic business return? How do companies better compete through the strategic and effective use of their information?

We’ll look at some use case scenarios with executives from HP to learn how effective IM improves customer outcomes, while also identifying where costs can be cut through efficiency and better business decisions.

To get to the root of IM best practices and value, please join me in welcoming our guests. First, Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP. Welcome, Brooks.

Brooks Esser: Hi, Dana. How are you today?

Gardner: I’m great. We’re also here with John Santaferraro, Director of Marketing and Industry Communications for BI Solutions at HP. Hello, John.

John Santaferraro: Hi, Dana. I’m glad to be here, and hello to everyone tuning into the podcast.

Gardner: And also, we’re here with Vickie Farrell, Manager of Market Strategy for BI Solutions at HP. Welcome to the show.

Vickie Farrell: Hi, Dana, thanks.

Gardner: Let me take our first question out to John. IM and BI in a sense come together. It’s sort of this dynamic duo in this era of cost consciousness and cost-cutting. What is it about the two together that you think is the right mix for today’s economy?

Santaferraro: Well, it’s interesting, because the customers that we work with tend to have very complex businesses, and because of that, very complex information requirements. It used to be that they looked primarily at their structured data as a source of insight into the business. More recently, the concern has moved well beyond traditional business intelligence to a combination of unstructured data, text data, and other content. There are just a whole lot of different sources of information.

Enterprise IM

The idea that they can have some practices across the enterprise that would help them better manage information and produce real value and real outcomes for the business is extremely relevant. I’d like to think of it as actually enterprise IM.

Very simply, first of all, it’s enterprise, right? It’s looking across the entire business and being able to see across the business. It’s information -- all types of information: structured data, unstructured documents, scanned documents, video assets, and other media assets.

Then it’s the management, the effective management of all of those information assets to be able to produce real business outcomes and real value for the business.

Gardner: So the more information you can manage to bring into an analytics process, the higher the return?

Santaferraro: I don’t know that it’s exactly just "more." If you look at the information worker, the person who has to make decisions on the front line, the truth is that most of them need more than just data and analysis. In a lot of cases, they will need a document, such as a contract. They need all of those different kinds of data to give them different views, to be able to make the right decision.

Gardner: Brooks, tell me a little bit about how you view IM. Is this a life cycle we’re talking about? Is it a category? Where do we draw the boundaries around IM? Is HP taking an umbrella concept here?

Esser: We really are, Dana. We think of IM as having four pillars. The first is the infrastructure, obviously -- the storage, the data warehousing, and the information integration that ties the infrastructure together. The second piece, which is very important, is governance. That includes things like data protection, master data management, compliance, and e-discovery.

The third, to John’s point earlier, is information processes. We start talking about paper-based information, digitizing documents, and getting them into the mix. Those first three pillars taken together really form the basis of an IM environment. They’re really the pieces that allow you to get the data right.

The fourth pillar, of course, is the analytics -- the insight that business leaders can get from the information. The two, obviously, go hand in hand. A rugged information infrastructure with weak analytics isn’t any better than a poor infrastructure with solid analytics. Getting both pieces right is very, very important.

Gardner: Vickie, if we take that strong infrastructure and those strong analytics and we do it properly, are we able to take the fruits of that out to a wider audience? Let’s say we are putting these analytics into the hands of more people who can take action.

Very important

Farrell: Yes, it is very important that you do both of those things. A couple of years ago, I remember, a lot of pundits were talking about BI becoming pervasive, because tools have gotten more affordable and easier to use. Therefore anybody with a smartphone or PDA or laptop computer was going to be able to do heavy-duty analysis.

Of course, that hasn’t happened. It's more than the tools themselves that limits the wide use of BI. One of the biggest issues is the integration of the data, the quality of the data -- having a data foundation and an environment where users can really trust it and use it to do the kind of analysis they need to do.

What we’ve seen in the last couple of years is serious attention on investing in that data structure -- getting the data right, as we put it. It's establishing a high level of data quality and a level of trust in the data for users, so that they are able to make use of those tools and really glean from that data the insight and information they need to better manage their business.
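[Editor's note: To make "getting the data right" concrete, here is a minimal, hypothetical sketch of the kind of quality checks Farrell describes -- measuring completeness and duplication before data reaches BI users. The record layout and metrics are illustrative, not from HP's tooling.]

```python
# Illustrative data-quality check: score a batch of records for
# completeness and duplication before loading them into a warehouse.
records = [
    {"customer_id": "C001", "region": "EMEA", "revenue": 1200.0},
    {"customer_id": "C002", "region": None,   "revenue": 540.0},   # incomplete
    {"customer_id": "C001", "region": "EMEA", "revenue": 1200.0},  # duplicate
]

def quality_report(rows):
    """Return simple trust metrics: completeness and duplication rates."""
    total = len(rows)
    complete = sum(1 for r in rows if all(v is not None for v in r.values()))
    unique = len({tuple(sorted(r.items())) for r in rows})
    return {
        "rows": total,
        "completeness": complete / total,   # share of fully populated rows
        "duplication": 1 - unique / total,  # share of redundant rows
    }

print(quality_report(records))
# {'rows': 3, 'completeness': 0.666..., 'duplication': 0.333...}
```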

Esser: We can’t overemphasize that, Dana. There's a great quote by Mark Twain, of all people, who said it isn’t what you don’t know that gets you into trouble -- it’s what you know for certain that just isn’t so. That really speaks to the point Vickie made about quality of data and the importance of having high-quality data in our analytics.

Gardner: We’re defining IM fairly broadly here, but how do we then exercise what we might consider due diligence in the enterprises -- security, privacy, making the right information available to people and then making sure the wrong people don’t have it? How do you apply that important governance pillar, when we’re talking about such a large and comprehensive amount of information, Brooks?

Esser: I think you have to define governance processes as you’re building your information infrastructure. That’s the key to everything I talked about earlier -- the pillars of a solid IM environment. One of the key ones is governance, which covers protecting data, quality, compliance, and the whole idea of master data management -- limiting access and making sure that the right people have access to input data and that the data is of high quality.

Farrell: In fact, we recently surveyed a number of data warehouse and BI users. We found that 81 percent of them either have a formal data governance process in place or they expect to invest in one in the next 12 months. There's a lot of attention on that, as Brooks was talking about.

Gardner: Now, as we also mentioned earlier, the economy is still tough. There is less discretionary spending than we’ve had in quite some time. How do you go to folks and get the rationale for the investment to move in this direction? Is it about cost-cutting? Is it about competitiveness? Is it about getting a better return on their infrastructure investments? John, do you have a sense of how to validate the market for IM?

Santaferraro: It’s really simple. By effectively using the information they have and further leveraging the investments they’ve already made, there are going to be significant cost savings for the business. A lot of it comes out of just having the right insight to be able to reduce costs overall. There are even efficiencies to be had in the processing of information. It can cost a lot of money to capture data, store it, and cleanse it.

Cleansing alone can be up to 70 percent of the cost of the data, and then there's working out your retention strategies. All of that is very expensive. Obviously, the companies that figure out how to streamline the handling and the management of their information are going to see major cost reductions overall.

Gardner: What about the business outcomes? Brooks, do we have a sense of what companies can do with this? If they do it properly, as John pointed out, how does that further vary the profitability, their market penetration, or perhaps even their dominance?

The way to compete

Esser: Dana, it’s really becoming the way that leading-edge companies compete. I’ve seen a lot of research suggesting that CEOs are becoming increasingly interested in leveraging data more effectively in their decision-making processes. It used to be fairly simple: you would identify your best customers, market like heck to them, and try to maximize the revenue derived from them.

Now, what we’re seeing is an emphasis on getting the data right and applying analytics to the entire customer base, trying to maximize revenue from a broader set of customers. We’re going to talk about a few cases today where organizations got the data right, now serve their customers better, reduced costs at the same time, and increased their profitability.

Gardner: We’ve talked about this at a fairly high level. I wonder if we could get a bit more specific. I’m curious about what is the problem that IM solves that then puts us in a position to leverage the analytics, put it in the hands of the right people, and then take those actions that cut the costs and increase the business outcome. I’m going to throw this out to anybody in our panel. What are the concrete problems that IM sets out to solve?

Esser: I’ll pick that up, Dana. Organizations all over the world are struggling with an expansion of information. In some companies, you’re seeing data doubling year over year. It’s creating problems for the storage environment. Managers are looking at processes like de-duplication to try to reduce the quantity of information.
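[Editor's note: De-duplication, which Esser mentions as a storage-side response to data growth, is easy to illustrate. The sketch below is a hypothetical whole-object version in Python; production systems typically de-duplicate at the block level.]

```python
# Content-based de-duplication: identical content is stored once,
# keyed by its hash, and referenced thereafter.
import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}   # sha256 digest -> content (stored once)
        self.names = {}   # document name  -> digest

    def put(self, name, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self.blobs.setdefault(digest, content)  # no-op if already stored
        self.names[name] = digest
        return digest

    def get(self, name) -> bytes:
        return self.blobs[self.names[name]]

store = DedupStore()
store.put("report_v1.txt", b"quarterly results ...")
store.put("report_copy.txt", b"quarterly results ...")  # same bytes
print(len(store.blobs))  # 1 -- the duplicate costs no extra storage
```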

Lots of information is still on paper. You’ve got to somehow get that into the mix, into your decision-making process. Then you have things like RFID tags and sensors adding to the expansion of information. There are legal requirements, too. When you think about the fact that most documents, even instant messages, are now considered business records, you’ve got to figure out a way to capture all of that.

The challenge for a CIO is that you’ve got to balance the cost of IT, the cost of governance and risk issues involved in information, while at the same time, providing real insight to your business unit customer.



Then, you’re getting pressure from business leaders for timely and accurate information to make decisions with. So, the challenge for a CIO is that you’ve got to balance the cost of IT, the cost of governance and risk issues involved in information, while at the same time, providing real insight to your business unit customer. It’s a tough job.

Santaferraro: If I could throw another one in there, Dana, I recently talked to a couple of senior IT leaders, and both of them were in the same situation. They’ve been doing BI and IM for 10-plus years in their organization. They had fairly mature processes in place, but they were concerned with trying to take the insight that they had gleaned and turn it into action.

Along with all of the things that were just described by Brooks, there are a lot of companies out there that are trying to figure out how to get the data that last mile to the person on the front line who needs to make a decision. How do I get it to them in a very simple format that tells them exactly what they need to do?

So, it’s turning that insight into action -- getting it to the teller in a bank, to the clerk at the point of sale, to the ATM, or to the web portal, when somebody is logging onto a banking system or a retail site.

Along with all of that, there is this new need to find a way to get the data that last mile to where it impacts a decision. For companies, that’s fairly complex, because that could mean millions of decisions every day, as opposed to just getting a report to an executive.

That whole world of the information worker and the need to use the information has changed as well, driving the need for IM.

Analyze the data

Farrell: Dana, you asked what the challenges are, and one that we see a lot is that people need to analyze the data, so they'll traipse from data mart to data mart and pull data together manually. It’s time-consuming, it’s expensive, and it’s fraught with error. The fact that you have data stored in all these different data marts just means that you’re going to have redundant data that’s going to be inconsistent.

Another problem is that you’ll end up with reports from different people and different departments, and they won’t match. They will have used different calculations, different definitions for business terms. They will have used different sources for the data. There is really no consistent reconciliation of all of this data and how it gets integrated.

This causes really serious problems for companies. That’s really what IM is going to help people overcome. In some cases, it doesn’t really cost as much as you’d think, because when you do IM properly, you’re actually going to see savings from correcting some of the things I just talked about.

Gardner: It also seems to me, if you look at a historic perspective, that many of these information workers we're talking about didn’t even try to go after this sort of analytic information. They knew that it wasn’t going to be available to them. They’d probably have to wait in line.

But, if we open the floodgates and make this information available to them, it strikes me that they are going to want to start using it in new and innovative ways. That’s a good thing, but it could also tax the infrastructure and the processes that have been developed.

Without that close alignment between business and IT, a tie of the IT project to real business outcomes, and that constant monitoring by that group, it could easily get out of hand.



How do we balance an expected increase in the proactive seeking of this information? I guess we're starting to talk about the solution side of IM. If we're good at it and people want it, how do we scale it? How do we ramp it up? What about that, John? How do we start in on the scaling and automation aspects of IM?

Santaferraro: With our customers, some of the strategy and planning that we do up front helps them define IM practices internally and create things like an enterprise information competency center where the business is aligned with IT in a way that they are actually preparing for the growth of information usage. Without that close alignment between business and IT, a tie of the IT project to real business outcomes, and that constant monitoring by that group, it could easily get out of hand. The strategy and planning upfront definitely helps out.

Farrell: I'll add to that. The more effectively you bring together the IT people and the business people and get them aligned, the better the acceptance is going to be. You certainly can mandate use of the system, but that’s really not a best practice. That’s not what you want to do.

By making the information easily accessible and relevant to the business users and showing them that they can trust that data, it’s going to be a more effective system, because they are going to be more likely to use it and not just be forced to use it.

Esser: Absolutely, Vickie. When you think about it, it really is the business units within most enterprises that fund these activities, via an internal tax or however they manage to pay for these things. Doing it right means having those stakeholders involved from the very beginning of the planning process to make sure they get what they need out of any kind of IT project.

Access a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Gardner: It strikes me that we have a real virtuous cycle at work here: the more people get access to better information, the more action they can take, the more value is perceived in the information, the more demand there is for the information, the more the IT folks can provide it, and so on.

Has anybody got an example of how that might show up in the real world? Do we have any use cases that capture that virtuous adoption benefit?

Better customer service

Farrell: Well, one comes to mind. It’s an insurance company that we have worked with for several years. It’s a regional health insurance company faced with competition from national companies. They decided that they needed to make better use of their data to provide better services for their members, the patients as well as the providers, and also to create a more streamlined environment for themselves.

And so, to bring the IT and business users together, they developed an enterprise data warehouse that would be a common resource for all of the data. They ensured that it was accurate and they had a certain level of data quality.

They had outsourced some of the health management systems to other companies. Diabetes was outsourced to one company. Heart disease was outsourced to another company. It was expensive. By bringing it in-house, they were able to save money, but they were also able to do a better job, because they could integrate the data from one patient and have one view of that patient.

That improved the aggregate wellness score overall for all of their patients. It enabled them to share data with the care providers, because they were confident in the quality of that data. It also saved them some administrative cost, and they recouped the investment in the first year.

Gardner: Any other examples, perhaps examples that demonstrate how IM and HP’s approach to IM come together?

More real-time applications and more mission-critical applications are coming and there is not going to be the time to do the manual integration.



Farrell: Another thing that we're doing is working with several health organizations in states in the US. We did one project several years ago and we are now in the midst of another one. The idea here is to integrate data from many different sources. This is health data from clinics, schools, hospitals, and so on throughout the state.

This enables you to do many things like run programs on childhood obesity, for example, assess the effectiveness of the program, and assess the overall cost and the return on the investment of that program. It helps to identify classes of people who need extra help, who are at risk.

Doing this gives you the opportunity to bring together and integrate, in a meaningful way, data from all these different sources. Once that’s been done, it can serve not only these systems, but also some of the more real-time systems that we see coming down the line, like emergency surveillance systems that would detect terrorist threats, bioterrorism threats, pandemics, and things like that.

It's important to understand and be able to get this data integrated in a meaningful way, because more real-time applications and more mission-critical applications are coming and there is not going to be the time to do the manual integration that I talked about before.

Gardner: It certainly sounds like a worthwhile thing. It sounds like the return on investment (ROI) is strong and that virtuous adoption is very powerful. So, John Santaferraro, what is it that HP does that could help companies get into the IM mode?

Obviously, this is not just something you buy and drop in. It's more than just methodologies as well. What are the key ingredients, and how does HP pull them together?

Bringing information together

Santaferraro: We find that a lot of our customers have very disconnected sets of intelligence and information. So, we look at how we can bring that whole world of information together for them and provide a connected-intelligence approach. We are actually a complete provider of enterprise-class, industry-specific IM solutions.

There are a lot of areas where we drill down and bring in our expertise. We have expertise around several business domains, like customer relationship management, risk, and supply chain. We go to market with specific solutions for 13 different industries. As a complete solution provider, we provide everything from infrastructure to financing.

Obviously, HP has all of the infrastructure that a customer needs. We can package their IM solution in a single finance package that hits either CAPEX or OPEX. We've got software offerings. We've got our consulting business that comes in and helps them figure out how to do everything from the strategy that we talked about upfront and planning to the actual implementation.

We can help them break into new areas where we have practices around things like master data management or content management or e-discovery.

Across the entire IM spectrum, we have offerings that will help our customers solve whatever their problems are. I like to approach our customers and say, "Give us your most difficult and complex information challenge and we would love to put you together with people who have addressed those challenges before and with technology that’s able to help you do it and even create innovation as a business."

Everyone in the IM market partners with other firms to some extent.



When we've come in and laid the IM foundation for our customers and given them a solid technology platform -- Neoview is a great example -- we find that they begin to look at what they've got. It really triggers a whole lot of brand-new innovation for companies that are doing IM the right way.

Gardner: Given these vertical industries, I imagine there are some partners involved there, a specialist in specific regions as well as specific industries. Brooks, is there an ecosystem at work here as well, and how does that shape up?

Esser: Absolutely, Dana. Everyone in the IM market partners with other firms to some extent. We've chosen some strategic partners that complement our capabilities as well. For example, we team with Informatica for our data integration platform and SAP BusinessObjects and MicroStrategy for our BI platform.

We work with a company called Clearwell, and we leverage their e-discovery platform to deliver a solution that helps customers leverage the information in their corporate email systems. We work with Microsoft to deliver HP Enterprise Content Management Solution. So we really have an excellent group of go-to-market partners to leverage.

Gardner: We've talked about the context of the market, why the economy is important, and we looked at some of the imperatives from a business point of view, why this is essential to compete, what problems you need to overcome, and the solution.

So, in order to get toward this notion of a payback, it's important to know where to get started, and there seem to be so many possible inception points. Let me take this to you, John. How do you take a holistic, comprehensive approach while at the same time breaking this into manageable parts?


Best practices

Santaferraro: One of the things that we have done is made our best practices available and accessible to our customers. We actually operationalize them. A lot of consulting companies will come in, plop a big fat manual on the desk, and say, "We have a methodology."

We've created an offering called the Methodology Navigator, which walks customers through the entire project in an interactive environment. Depending on which step of the project they are in, they can click on a little box that represents that step and quickly access templates, accelerators, and best practices that are directly relevant to that particular step.

We look at this holistic approach, but we also break it down into best practices that apply to every single step along the way.

Gardner: This whole thing sounds like a no-brainer to me. I don’t know whether I am overly optimistic, but I can see applying more information to your personal life and your small business, as well as to your department and, of course, your whole enterprise.

I think we're entering a data-driven decade. The more data, the better the decisions, and the more productivity. It's how you grow. Brooks, why do you think it’s a no-brainer? Am I overstating the case?

It's how leading-edge companies are going to compete, particularly in a tough and volatile economy.



Esser: I don’t think you are, Dana. It's how leading-edge companies are going to compete, particularly in a tough and volatile economy such as we have seen over the last five to eight years. It's really simple. Better information about your customers can help you drive incremental revenue from your existing customer base. The cool part about it is that better information can also help you prevent the loss of customers you already have. You know them better and know how to keep them satisfied.

Every marketer knows that it's a lot less expensive to keep a current customer than it is to go out and acquire a new one. So the ROI for IM projects can be phenomenal and, to your point, that makes it kind of a no-brainer.

Gardner: Vickie, we apply this to customers, we apply it to patients, payers, and end-users, but are there other directions to point this at? Perhaps supply chain, or perhaps cloud computing and the many sources of social media metadata about processes, customers, and suppliers. Are we only scratching the surface, in a sense, of how we apply IM?

Farrell: I think we probably are. I don't know of any industry that can't make use of better organizing its data, better analyzing its data, and acting on the insight gained to make better decisions. In fact, across the board, one of the biggest issues that people have is making better decisions.

In some cases, it's providing information to humans through reports or queries, so that they can make the decisions. What we're going to be seeing -- and this gets to what you were talking about -- is that when data is coming in in real time from sensors and things like that, it has location context. It's very rich data, and it provides you with a lot of information and a lot of variables, so you can make the best decisions based on everything that is happening at that time.

Where once we were maybe developing a handful of possible scenarios and picking the closest one, we don’t have to do that anymore. We can really make use of all of that information and make the absolute best decision right then and there. I don’t really think that there are any industries or domains that can't make use of that kind of capability.

Capturing more data

Santaferraro: Dana, I love what we are doing in the oil and gas industry. We have taken the sensors from our printers -- they are some of the most sensitive sensors in the world -- and we are doing a project with Shell Oil, where we are actually embedding our sensors at the tip of a drill head.

As it goes down, it's going to capture seismic data that is 100 times more accurate than anything that's been captured in the past. It's going to send that up through a thing called IntelliPipe -- a five-megabyte feed that goes up through the drill pipe and back to the wellhead, where we will be capturing it in real time.

Seismic data tends to be dirty by nature. It needs to be cleansed. So, we're building a real-time cleansing engine to cleanse that data, and then we are capturing it on the back-end in our digital oil field intelligence offering. It's really fun to see that, as the world changes, there are all these new opportunities for collecting and using information, even in industries that tend to be a little more traditional and mechanical.
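[Editor's note: HP has not published the details of the cleansing engine Santaferraro describes, but the general idea of cleansing a noisy sensor feed in real time can be sketched. The hypothetical filter below substitutes a rolling median for readings that deviate wildly from recent history.]

```python
# Illustrative real-time cleansing: replace implausible spikes in a
# sensor stream with the rolling median of recent readings.
from collections import deque
from statistics import median

def cleanse(stream, window=3, tolerance=3.0):
    """Yield readings, substituting the rolling median for outliers."""
    recent = deque(maxlen=window)
    for reading in stream:
        if len(recent) == window and abs(reading - median(recent)) > tolerance:
            reading = median(recent)  # treat the spike as noise
        recent.append(reading)
        yield reading

noisy = [10.1, 10.3, 10.2, 98.7, 10.4, 10.2, 10.5]  # 98.7 is sensor noise
print(list(cleanse(noisy)))
# [10.1, 10.3, 10.2, 10.2, 10.4, 10.2, 10.5] -- the spike is smoothed out
```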

Gardner: That's a very interesting point -- the more precise we get with instrumentation, the more data we have and the more opportunity there is to work with it. Crunching that data in real time gives us a predictive stance rather than a reactive one.

As I said, it's been compelling and a no-brainer for me. John, you mentioned an on-ramp to this -- really, the methodological approach. Are there any resources, any places people can go to get more information, to start working out where in their organization they will get their highest returns, perhaps focus there, and then start working outward toward that more holistic benefit?

It's really up to the customers in terms of how they want to start out.



Let me go to you first, Brooks. Where can people go for more information?

Esser: Of course, I'm going to tell folks to talk to their HP reps. In the course of our discussion today, it's pretty obvious that IM projects are huge undertakings, and we understand that. So, we offer a group of assessment and planning services. They can help customers scope out their projects.

We have a couple of ways to get started. We can start with a business value assessment service. This is a service that sets people up with a business case and tracks ROI, once they decide on a project. The interesting piece of it is that they can choose to focus on data integration, master data management, or what have you.

You look at a particular element of IM and build a project around that. This assessment service allows people to identify the element of their current IM environment that will give them the best ROI. Or, we can offer them a master planning service, which generates a really comprehensive IM plan covering everything from data protection and information quality to advanced analytics.

So, it's really up to the customers how they want to start out: taking a look at one element of their IM environment or, if they want us to come in and look at the entire environment, we can say, "Here's what you need to do to really transform the entire IM environment."

Obviously, you can get details on those services and our complete portfolio for that matter at www.hp.com/go/bi and www.hp.com/go/im.

Gardner: Vickie, any sense of where you would point people when they ask do I get started, where can I get more information?

Farrell: Well, I think Brooks covered it. All of our information is at www.hp.com/go/bi. We also have another site that's www.hp.com/go/neoview. There is some specific information about the Neoview Advantage enterprise data warehouse platform there.

Gardner: Very well. John Santaferraro, how about from a professional services and solutions perspective; any resources that you have in mind?

Santaferraro: Probably the hottest topic that I have heard from customers in the last year or so has been the development of the BI competency center. Again, if you go to our BI site, you will find some additional information there about the concept of a BICC.

And the other trend that I am seeing is that a lot of companies want to move beyond just the BI space with that kind of governance. They want to create an enterprise information competency center -- expanding beyond BI to include all of IM.

We've got some great services available to help people set those up. We have customers that have been working in that kind of governance environment for three or four years, and the beautiful thing is that those companies are doing transformational things for their business.

They are really closely tied to the business mission, vision, and objectives, versus other companies that are doing a bunch of one-off projects. One customer had spent $11 million on a project over the last year and was still trying to figure out where they were going to get value out of it.

Again, head over to our BI website, type in BICC, and do a search. There is some great documentation there that I think you will find helpful in setting up the governance side.

Gardner: Well, great. We've been talking about a natural progression toward data-driven business decisions and using IM to scale that and bring more types of data and content into play. I want to thank our guests for today's podcast. We've been joined by Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP. Thank you, Brooks.

Esser: Thanks very much for having me, Dana.

Gardner: John Santaferraro. He is the Director of Marketing and Industry Communications for BI Solutions. Thank you, John.

Santaferraro: Thanks, Dana. Glad to be here.

Gardner: And also, Vickie Farrell, Manager of Market Strategy for BI Solutions. Thanks so much.

Farrell: Thank you, Dana. This is a pleasure.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Access a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Transcript of a sponsored BriefingsDirect podcast on how companies are leveraging information management solutions to drive better business decisions in real time. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, April 13, 2010

Fog Clears on Proper Precautions for Putting More Enterprise Data Safely in Clouds

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Defenses and strategies based on cloud computing security solutions should therefore be a priority, and should prompt even more enterprise data to be stored, shared, and analyzed in clouds under strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don’ts of cloud computing and corporate data. Please join me in welcoming Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Welcome back, Christian.

Christian Verstraete: Thank you.

Gardner: We’re also here with Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's now working on a new book, The Concise Guide to Cloud Computing. Welcome back to the show, Archie.

Archie Reed: Hey, Dana. Thanks.

Gardner: It strikes me that companies around the world are already doing a lot of their data and applications activities in what we could loosely call "cloud computing," cloud computing being a very broad subject and the definition being rather flexible.

Let me take this first to you, Archie. Aren’t companies already doing a lot of cloud computing? Don’t they already have a great deal of transactions and data that’s being transferred across the Web, across the Internet, and being hosted on a variety of either internal or external servers?

Difference with cloud

Reed: I would certainly agree with that. In fact, if you look at the history that we’re dealing with here, companies have been doing those sorts of things for some time with outsourcing models, sharing with partners, or indeed community-type environments. The big difference with this thing we call cloud computing is that the vendors advancing the space have not developed comprehensive service level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies that are offering cloud services get those capabilities, at least today, are by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Gardner: Christian, it seems as if we have more organizations that are saying, "We can provide cloud services," even though those services have been things that have been done for many years by other types of companies. But we also have enterprises seeking to do more types of applications and data-driven activities via these cloud providers.

So, we’re expanding the universe, if you will, of both types of people involved with providing cloud services and types of data and applications that we would use in a cloud model. How risky is it, from your perspective, for organizations to start having more providers and more applications and data involved?

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs and the very limited pieces of information that you receive, on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data got public in one way, form, or shape, and assess what the implications are.

As companies are required to work more closely with the rest of their ecosystem, cloud services are an easy way to do that. It’s a concept that is reasonably well-known under the label of community cloud, and it's one that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really understanding, on one hand, what you get into and assessing what makes sense and what doesn’t make sense, what’s really critical for you and what is less critical.

Gardner: Archie, it sounds as if we’re in a game of catch-up, where the enticements of the benefits of cloud computing have gotten ahead of the due diligence and managing of the complexity that goes along with it. If you subscribe to that, then perhaps you could help us in understanding how we can start to close that gap.

People are generally finding that as they realize they have risk, more risk than they thought they did, they’re actually stepping back a little bit and reevaluating things.



To me, one recent example was at the RSA Conference in San Francisco, where the Cloud Security Alliance (CSA) came out with a statement that said, "Here’s what we have to do, and here are the steps that need to be taken." I know that HP was active in that. Tell me if you think we have a gap and how the CSA thinks we can close it.

Reed: We’re definitely in a situation where a number of folks are rushing toward the cloud on the promise of cost savings and things like that. In fact, in some cases, people are generally finding that as they realize they have risk, more risk than they thought they did, they’re actually stepping back a little bit and reevaluating things.

A prime example of this was just last week, a week after the RSA Conference, the General Services Administration (GSA) here in the U.S. actually withdrew a blanket purchase order (BPO) for cloud computing services that they had put out only 11 months before.

They gave two reasons for that. The first reason was that technology had advanced so much in that 11 months that their original purchase order was not as applicable as it was at that time. But the second reason, perhaps more applicable to this conversation, was that they had not correctly addressed security concerns in that particular BPO.

Take a step back

In that case, it shows we can rush toward this stuff on promises, but once we really start to get into the cloud, we see what a mess it can be and we take a step back. As far as the CSA, HP was there at the founding. We did sponsor research that was announced at RSA around the top threats to cloud computing.

We spoke about what we called the seven deadly sins of cloud. Just fortuitously, we came up with seven at the time. I will point out that this analysis was focused more on the technical than on specific business risk. One of the threats was data loss or leakage. Under that, you have examples such as insufficient authentication and authorization, but also lack of encryption or inconsistent use of encryption, operational failures, and data center reliability. All these things point to how to protect the data.

One of the key things we put forward as part of the CSA was to try and draw out key areas that people need to focus on as they consider the cloud and try and deliver on the promises of what cloud brings to the market.

Gardner: Correct me if I am wrong, but one of the points that the CSA made was the notion that, by considering cloud computing environments, methodologies, and scenarios, you can actually improve your general control and management of data by moving in this direction. Do you subscribe to that?

Reed: Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

You need to understand what regs, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches, and then be able to prove that you did the right thing.



When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources, which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third part, validating compliance, is the traditional governance, risk, and compliance management aspect. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

So this seems to make sense whether you're talking to a CEO, a CIO, or a developer. And it also makes sense whether you are talking about internal resources or going to the cloud. Does that make sense?

Gardner: Sure, it does. So getting it right means that you have more options in terms of what you can do in IT?

Reed: Absolutely.

Gardner: That seems like a pretty obvious direction to go in. Now, Christian, we talked a little bit about the technology standards and methods for approaching security and data protection, but there is more to the cloud computing environment. What I'm referring to is compliance, regulation, and local laws. It strikes me that there is a gap -- maybe even a chasm -- between where cloud computing allows people to go and where the current laws and regulations are.

Perhaps you could help us better understand this gap and what organizations need to consider when they are thinking about moving data to the cloud vis-a-vis regulation.

A couple of caveats

Verstraete: Yes, it's actually a very good point. If you really look at the vision of the cloud, it's, "Don't care about where the infrastructure is. We'll handle all of that. Just get the things across and we'll take care of everything."

That sounds absolutely wonderful. Unfortunately, there are a couple of caveats, and I'll take a very simple example. When we started looking at the GS1 Product Recall service, we suddenly realized that some countries require information related to food that is produced in that country to remain within the country's boundaries.

That goes against the vision of clouds in which location becomes irrelevant. There are a lot of examples, particularly around privacy aspects and private information, that make it difficult to implement that complete vision of dematerialization, if I can put it that way, of the whole power that sits behind the cloud.

Why? Because the EU, for example, has very stringent rules around personal data and only allows countries that have similar rules to host their data. Frankly, there are only a couple of countries in the world, besides the 27 countries of the EU, where that's applicable today.

This means that if, for example, I use a global cloud with some data centers in the US and some data centers in Europe, and I want to put some private data in there, I may have some issues. How does that data proliferate across the multiple data centers that the service actually uses? What is the guarantee that all of the data centers that will host my data -- and its replications and backups -- are within the geographical boundaries that are acceptable under European legislation?

The bottom line is that data can be classed as global, whereas legislation is generally local. That's the basis of the problem here.



I'm just taking that as an example, because there is other legislation in the US that is state-based and has the same type of approach and the same type of issues. So, on the one hand, we are still faced with very locally oriented legislative bodies, and on the other, a globally oriented vision for the cloud. In one way, form, or shape, we'll have to address the dichotomy between the two for the cloud to really be able to take off from a legal perspective.
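[Editor's note: In practice, the residency constraint Verstraete describes ends up enforced as policy in code. Below is a minimal, hypothetical sketch -- the region names and the rule that EU personal data stays in EU regions are placeholders for illustration, not legal guidance.]

```python
# Residency policy check: block replication of tagged data to regions
# outside its allowed jurisdiction.
ALLOWED_REGIONS = {
    "eu_personal": {"eu-west", "eu-central"},              # must stay in the EU
    "public":      {"eu-west", "eu-central", "us-east"},   # unrestricted here
}

def check_replication(classification, target_regions):
    """Raise if replication would copy data to a disallowed region."""
    violations = set(target_regions) - ALLOWED_REGIONS[classification]
    if violations:
        raise ValueError(
            f"{classification} data may not be replicated to {sorted(violations)}"
        )

check_replication("eu_personal", ["eu-west", "eu-central"])   # fine
try:
    check_replication("eu_personal", ["eu-west", "us-east"])  # blocked
except ValueError as err:
    print(err)  # eu_personal data may not be replicated to ['us-east']
```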

Reed: Dana, if I may, the bottom line is that data can be classed as global, whereas legislation is generally local. That's the basis of the problem here. One of the ways in which I would recommend folks consider this -- when you start talking about data loss, data protection, and that sort of stuff -- is having a data-classification approach that lets you determine, or at least deploy, certain logic and rules about how you're going to use the data and in what way.

If you go to the military, the government, public sector, education, and even energy, they all have very structured approaches to the data that they use. That includes understanding how this might be used by third parties and things like that. You also see some recent stuff.

Back in 2008, I think it was, the UK came up with a data handling review, which was in response to public sector data breaches. As a result, they released a security policy framework that contains guidance and policies on security and risk management for the government departments. One of the key things there is how to handle data, where it can go, and how it can be used.

Trying to streamline

What we find is that, despite this conflict, there are a lot of approaches being put into play. The goal of anyone going into this space, and what we are trying to promote with the CSA, is to streamline that stuff and, if possible, influence the right people, so as to avoid creating conflicting approaches and conflicting classification models.

Ultimately, when we get to the end of this, hopefully the CSA or a related body that is either more applicable or willing will create something that will work on a global scale or at least as widely as possible.

Gardner: So, for those companies interested in exploring cloud, it's by no means a cakewalk. They need to do their due diligence in terms of technology and procedures, governance and policies, as well as regulatory compliance and, I suppose you could call them, localization types of issues.

Is there a hierarchy that appears to either of you about where to start -- which types of data are safe, which types of applications are safer or easier -- that allows you to move toward some of these principles, which you probably should be following already, while enjoying some of the rewards and mitigating the risks?

Reed: There are two approaches there. One of the things we didn't say at the outset was that there are a number of different versions of cloud. There are private clouds and public clouds. Whether or not you buy into the private cloud as a model, the general idea is that you can have more protections around it, more controls, and more understanding of where things are physically.

If it's unprotected, if it's publicly available, then you can put it out there with some reasonable confidence that, even if it is compromised, it's not a great issue.



That's one approach to understanding, or at least achieving, some level of protection around the data. If you control the assets, you're allowed to control where they're located. If you go into the public cloud, then those data-classification things become important.

If you look at some of the government standards, like classified, restricted, or confidential, once you start to understand how to apply the data models and the classifications, then you can decide where things need to go and what protections need to be in place.

Gardner: Is there a progression, a logical progression, that appears to you about how to approach this, given that there are still disparities in the field?

Reed: Sure. You start off with the simplest classification of data. If it's unprotected, if it's publicly available, then you can put it out there with some reasonable confidence that, even if it is compromised, it's not a great issue.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for the enterprise and what things, if they leak out or get known, are not too bad. It's not great in any case, but it's not too bad. And that data classification that Archie was just talking about is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to public clouds.

I've seen too many companies jumping in without that step and getting burnt in one way, form, or shape. It's sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go of, that are absolutely critical? And, on the other hand, what are the things that I frankly don't care too much about?" It's building that understanding that is actually critical.
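[Editor's note: The classification exercise Reed and Verstraete describe usually ends in a simple mapping from data class to handling rule. This is a hypothetical three-tier sketch; real schemes have more tiers and more nuance.]

```python
# Hypothetical handling policy: each classification tier decides whether
# data may leave the premises and whether it must be encrypted first.
POLICY = {
    "public":       {"cloud_ok": True,  "encrypt": False},
    "internal":     {"cloud_ok": True,  "encrypt": True},   # keys stay in-house
    "confidential": {"cloud_ok": False, "encrypt": True},   # never leaves premises
}

def placement(tier):
    """Translate a classification tier into a placement decision."""
    rule = POLICY[tier]
    if not rule["cloud_ok"]:
        return "keep on premises"
    return "public cloud (client-side encrypted)" if rule["encrypt"] else "public cloud"

for tier in POLICY:
    print(f"{tier}: {placement(tier)}")
# public: public cloud
# internal: public cloud (client-side encrypted)
# confidential: keep on premises
```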

Gardner: Perhaps there is an instance that will illustrate what we're talking about. I hear an awful lot about platform as a service (PaaS), which is loosely defined as doing application development activities in a cloud environment. I talk to developers who are delighted to use cloud-based resources for things like testing and to explore and share builds and requirements in the early stages.

At the same time, they're very reluctant to put source code in someone else's cloud. Source code strikes me as just a form of data. Where is the line for safe, sound cloud practices in application development, and when does it become appropriate to start putting source code in there as well?

Combination of elements

Verstraete: There are a number of answers to your question, and they're related to a combination of elements. The first is gaining as much understanding as you can -- which is not easy -- of the protection mechanisms that exist in the cloud service.

Today, because of the term "cloud," most of the cloud providers are getting away with providing very little information and setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major infrastructure-as-a-service (IaaS) and PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers can know, and that's going to make it easier for them to hack the environment and to limit our security."

There is a point there, but that makes it difficult for people who really do care about their source code, as in your example. It matters, because you have source code that's not too sensitive and source code that's very critical. To put that source code in the cloud without knowing what's actually being done to protect it is worse than being able to make a very clear risk assessment. Then, at least, you know what level of risk you are taking. Today, in many situations, you don't know.

Gardner: Alright, Archie.

Reed: There are a couple of things or points that need to be made. First off, when we think about things like source code or data like that, there is this point where data is stored and it sits at rest. Until you start to use it, it has no impact, if it's encrypted, for example.

So, if you're storing source code up there, it's encrypted, and you hold the keys -- which is one of the key tenets that we would advocate for anyone thinking about encrypting stuff in the cloud -- then maybe there is a level of satisfaction and meeting compliance that you have with that type of model.

Putting the source code into the cloud, wherever that happens to be, may or may not actually be such a risk as you're alluding to, if you have the right controls around it.
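As a sketch of that hold-your-own-keys tenet, the snippet below encrypts source code locally before anything leaves the building, using the third-party Python cryptography package; the upload step is a hypothetical stand-in for whatever storage API a provider offers.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # stays on-premise; the provider never sees it
cipher = Fernet(key)

source = b"int main(void) { return 0; }"   # stand-in for real source code
ciphertext = cipher.encrypt(source)

# upload_to_cloud(ciphertext)  # hypothetical call -- only ciphertext leaves

# At rest in the cloud the code is unreadable; only the key holder recovers it.
assert cipher.decrypt(ciphertext) == source
```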

The second thing is that we're also seeing a very nascent set of controls and guarantees and SLAs and those sorts of things. This is very early on, in my opinion and in a lot of people's opinion, in the development of this cloud type environment, looking at all these attributes that are given to cloud, the unlimited expansion, the elasticity, and rapid provisioning. Certainly, we can get wrapped around the axle about what is really required in cloud, but it all ultimately comes down to that risk analysis.

If you have the right security in the system, if you have the right capabilities and guarantees, then you have a much higher level of confidence about putting data, such as source code or some sets of data like that, into the cloud.

Gardner: To Christian's point that the public cloud providers are basically saying buyer beware -- or, in this case, cloud practitioner beware -- the onus for good privacy, security, compliance, and best practices falls back on the consumer, rather than the provider.

Community clouds

Reed: That's often the case. But, also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy cloud computing infrastructure, and we created RACE, the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources with a community of users in a secure manner, and they store all sorts of things in it. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. The visceral reaction that the cloud is insecure is not necessarily correct. It's insecure in certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go: evolving into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. We're already seeing that in these examples.

Gardner: So, for that cloud practitioner, as an organization, if they take those steps towards good cloud computing practices and technologies, it’s probably going to benefit them across the board in their IT infrastructure, applications, and data activities. But does it put them at a competitive advantage?

If you do this right, if you take the responsibility yourself to figure out the risks and rewards and implement the right approach, what does that get you? Christian, what's your response to that?

Verstraete: It gives you the capability to use the elements that the cloud really brings with it, which means having an environment in which you can execute a number of tasks on a pay-per-use basis.

But, to come back to the point that Archie was making, one of the things that we often have a tendency to forget -- and I'm as guilty as anybody else in that space -- is that cloud means a tremendous number of different things. What's important for customers who want to move and put data in the cloud is to identify what all of those different types of clouds provide as security and protection capabilities.

The more you move away from the traditional public cloud -- and when I say the traditional public cloud, I'm thinking about Amazon, Google, Microsoft, that type of thing -- toward community clouds and private clouds, the more you have it under your own control to ensure that you have the appropriate security layers, security levels, and compliance levels that you feel you need for the information you're going to use, store, and share in those different environments.
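A tiny sketch of that spectrum: a placement policy that narrows the allowed deployment models as the classification tightens. The table is an invented example of the kind of rule an enterprise might set for itself, not a prescription.

```python
# Invented policy: each classification lists its acceptable deployment models.
PLACEMENT_POLICY = {
    "public":       ["public cloud", "community cloud", "private cloud"],
    "internal":     ["community cloud", "private cloud"],
    "confidential": ["private cloud"],
    "restricted":   [],   # stays on-premise entirely
}

def allowed_clouds(classification):
    return PLACEMENT_POLICY.get(classification) or ["on-premise only"]

print(allowed_clouds("internal"))    # ['community cloud', 'private cloud']
print(allowed_clouds("restricted"))  # ['on-premise only']
```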

Gardner: Okay, Archie, we're about out of time, so the last question is to you, and it's the same question. If you do this well, if you do it right, if you take the responsibility, perhaps partner with others in a community cloud, what do you get? What's the payoff? Why would that be a competitive advantage, a cost advantage, or an energy advantage?

Beating the competition

Reed: We’ve been through a lot of those advantages. I’ve mentioned several times the elasticity, the speed of provisioning, the capacity. While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you're in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But, you also get faster access to all that stuff. You also get capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation -- and by that I mean that you, as an individual organization, are going out and looking for those cloud resources -- then you're going to get the ability to expand well beyond what your internal IT department could provide on its own.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, not only has the opportunity to deliver and better manage what it's doing in terms of providing services for the organization, but also has a responsibility to do this right, understand the security implications, and represent those appropriately to the company, such that it can deliver that accelerated capability.

Gardner: Very good. We’ve been discussing how to manage risks and rewards and proper placement of enterprise data in cloud-computing environments. I want to thank our two panelists today, Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Thank you, Christian.

Verstraete: You’re welcome.

Gardner: And also, Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's working on a new book, The Concise Guide to Cloud Computing. Thank you, Archie.

Reed: Hey, Dana. Thanks for taking the time to talk to us today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining us, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Wednesday, April 07, 2010

Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the huge drive for improvement around enterprise data centers. Many enterprises, if not nearly all, are involved nowadays with some level of data-center transformation either in the planning stages or in outright execution. The heightened activity runs the gamut from retrofitting and designing new data centers to then building and occupying them.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, and where completely green-field data centers -- with modern design and facilities -- are coming online.

These are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners, as well as the business leadership and line of business managers. The payoffs are potentially huge, as we'll see, from doing data center design properly, but the risks are also quite high, if things don't come out as planned.

The latest definition of the data center focuses on being what's called fit-for-purpose: using best practices and assessments of existing assets, and correctly projecting future requirements, to get that data center just right -- productive, flexible, efficient, and well-understood and managed.

Today, we're going to examine the lifecycle of data-center design and fulfillment through migration and learn about some of the payoffs when this goes as planned. We're going to learn more about a successful project at Valero Energy Corp. The goal through these complex undertakings at these data centers is to radically improve how IT can deliver its services and be modern, efficient, and flexible.

We're here with two executives from Hewlett-Packard to look at proper planning and data center design, as well as build and migration. And we'll learn from an IT leader at Valero how they managed their project.

Please join me in welcoming our guests today. We're here with Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP. Welcome to the show, Cliff.

Cliff Moore: Thanks, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Hello, John.

John Bennett: Hi, Dana.

Gardner: We're also here with John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. Welcome to the show, John.

John Vann: Hello, Dana. Thanks a lot.

Gardner: Let's go to you, John Bennett. Tell us why data center transformation is at an inflection point, where data centers are in terms of their history, and what the new requirements are. It seems to be somewhat of a perfect storm, in that there's a need to move, and the status quo really isn't acceptable.

Modern and efficient

Bennett: You're right on that front. I find it just fascinating that if you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point you made about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization, not only helping them shift their spending away from management and maintenance and into business projects and priorities, but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

The issue that organizations have in trying to address those mandates, of course, is that the legacy infrastructure and environments they have, the applications portfolio, the facilities, etc., all hinder their ability to execute on the things they would like to do.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

It turns out that one of the things that gets in the way, both from a cost perspective and from a supporting-the-business perspective, is the data centers themselves. Customers can find themselves, as HP did, having a very large number of data centers. We had 85 around the world, because we grew through acquisition, we grew organically, and we had data centers for individual lines of business.

We had data centers for individual countries and regions. When you added it up, we had 85 facilities and innumerable server rooms, all of them requiring administrative staff, data center managers, and a lot of overhead. As part of our own IT transformation effort, we've brought that down to six.

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. One wag has characterized this as the "$15 million server," where you keep needing to grow and support the business. All of a sudden, you discover that you're bursting at the seams.

Or, you can be in California or the U.K., where the energy supply they have today is all they'll ever have in their data center. If they have to support business growth, they're going to have to deal with it by addressing their infrastructure strategies, but probably also by addressing their facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

Gardner: John, it also strikes me that the timing is good, given the economic cycle. The commercial market for land and facilities is a buyer's market, and that doesn’t always happen, especially if you have capacity issues. You don’t always get a chance to pick when you need to rebuild and then, of course, money is cheap nowadays too.

Bennett: If you can get to it.

Gardner: The capital markets are open for short intervals.

Signs of recovery

Bennett: We certainly see, and hope to see, signs of recovery here. Data center location is an interesting conversation, because of some of the factors you named. One of the things that is different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural cooling of their environment. Of course, you might have volcano risk.

You have people who are attracted to very boring places, like the center of the United States, which don't have earthquakes, hurricanes, wildfires and things that might affect facilities themselves. Or, as I think you'll discover with John at Valero, you can choose to build the data center right near corporate headquarters, but you have a lot of flexibility in it.

The issue is not so much access to capital markets as it is that any facilities project is going to have to go through not just the senior executives of the company, but probably the board of directors. You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost, and in terms of the returns on investment (ROI) expected by the business, as they make choices about how to manage and source funds as well.

So, it's a good time from the viewpoint of land being cheap, and it might be a good time in terms of business capital being available. But it might not be a good time in terms of investment funds being available, as many banks continue to be more reluctant to lend than it appears.

Gardner: The variables now for how you would consider, plan, and evaluate are quite different than even just a few years ago.

Bennett: It's certainly true, and I probably would look to Cliff to say more about that.

Gardner: Cliff Moore, what's this notion of fit-for-purpose, and why do you think the variables for deciding to move forward with data center transformation or redesign activities are different nowadays? Why are we in a different field in terms of decisions around these issues?

Moore: Obviously, there's no such thing as a one-size-fits-all data center. It's just not that way. Every data center is different. The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

No growth modeling

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. So that's a major consideration. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical as well.
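That energy claim is easy to sanity-check with rough numbers. The figures below are illustrative assumptions, not measurements from any particular facility.

```python
server_draw_kw = 0.5    # assumed average draw of a loaded rack server
pue = 2.0               # assumed facility overhead: one watt of cooling per IT watt
price_per_kwh = 0.10    # assumed commercial electricity rate, USD
years_in_service = 4

annual_kwh = server_draw_kw * pue * 24 * 365
lifetime_energy = annual_kwh * price_per_kwh * years_in_service
print(f"${lifetime_energy:,.0f}")  # about $3,504 -- comparable to the server's purchase price
```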

Gardner: We should explain the notion of fit for purpose upfront for those folks who might not be familiar with it.

Bennett: With fit for purpose, the question in mind is the strategic one: what is the right data center strategy for a particular organization? If you think about the business services that are being provided by IT, it's not only what those business services are, but how they should be sourced. If they're being provided out of entity-owned data centers, how many and where? What's the business continuity strategy for those?

It needs to take into account, as Cliff has highlighted, not only what I need today, but that buildings typically have an economic life of 15 to 25 years. Technology life cycles for particular devices are two or three years, and we have ongoing significant revolutions in technology itself, for example, as we moved from traditional IT devices to fabric infrastructures like converged infrastructure.

You have these cycles upon cycles of change taking place. The business forecasts drive the strategy and part of that forecasting will be sizing and fit for purpose. Very simply, are the assets I have today capable of meeting my needs today, and in my planning horizon? If they are, they’re fit for purpose. If they’re not, they’re unfit for purpose, and I'd better do something about it.
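A minimal sketch of that fit-for-purpose test, assuming simple compound growth in IT load; the load, growth rate, and facility capacity are hypothetical.

```python
def fit_for_purpose(load_kw, annual_growth, horizon_years, capacity_kw):
    """True if projected demand stays within facility capacity over the horizon."""
    projected = load_kw * (1 + annual_growth) ** horizon_years
    return projected <= capacity_kw

# 800 kW of IT load today, growing 15% a year, in a 2 MW facility:
print(fit_for_purpose(800, 0.15, horizon_years=10, capacity_kw=2000))  # False -- unfit
```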

Gardner: We're in a bit of a time warp, Cliff. It seems that, if many data centers were built 15 years ago and we still don't have a sense of where we'll be in 5 or 10 years, we're caught between a past that no longer fits and a future we can't quite know. How do you help people smooth that out?

Moore: Obviously, we've got to find out first off what they need -- their space, power, and cooling requirements. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well. This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board.
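For illustration, the commonly cited Uptime Institute availability figures can be turned into a simple lookup from required availability to minimum tier; a real tier assessment considers far more than this one number.

```python
# Commonly cited availability figures per Uptime Institute tier.
TIER_AVAILABILITY = [
    ("Tier I",   99.671),
    ("Tier II",  99.741),
    ("Tier III", 99.982),
    ("Tier IV",  99.995),
]

def minimum_tier(required_pct):
    for tier, availability in TIER_AVAILABILITY:
        if availability >= required_pct:
            return tier
    return "requirement exceeds Tier IV"

print(minimum_tier(99.9))   # Tier III
print(minimum_tier(99.99))  # Tier IV
```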

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you've got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for, before they will commit to spending that kind of money.

Gardner: What does HP bring to the table? How do you start a process like this and make it a lifecycle, where that end goal and reduced risk play out to deliver the big payoffs those boards of directors are interested in?

Moore: Well, our group within Critical Facilities Services actually comes to the table with a company's executives and looks not only at their space, power, and cooling requirements, but at the strategies of the business. What are the criticality levels of the various mission-critical applications they run? What are their plans for the future? What are their mergers and acquisitions plans, and so on? We help them collaboratively develop that data center strategy for the next 10 to 15 years.

Gardner: It was pointed out earlier that one size doesn't fit all. From your experience, Cliff, what are the top one or two reasons you're seeing customers go after a new data center design and spend that large sum of money?

Power and cooling

Moore: Probably the biggest reason we're seeing today is power and cooling. Of course, cooling goes along with power. We see more of that than anything else. People are simply running out of power in their data centers. The facilities that were built 5, 10, or 15 years ago just do not support the levels of density in power and cooling that clients are asking for going into the future, specifically for blades and higher levels of virtualization.

Gardner: So higher density requires more energy to run the servers and more energy to cool them, but you get higher efficiency, utilization, and productivity as the end result, in terms of delivering on the requirements. Is there a way of designing the data center that allows you to cut costs and increase capacity, or is that asking too much of this process?

Moore: There certainly are ways to do that. We look at all of those different ways with the client. One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers based, for instance, on the efficiency of gathering free cooling from the environment. It was mentioned that Iceland might be a good location. You'd get a lot of free cooling there.

Gardner: What are some of the design factors? What are the leading factors that people need to look at? Perhaps, you could start to get us more familiar with Valero and what went on with them in the project that they completed not too long ago.

Moore: I'll defer to John for some of that, but the leading factors we're seeing, again, are space, power, and cooling, coupled with the tier level requirement. What is the availability requirement for the facility itself? Those are the biggest factors we're seeing.

Marching right behind that is energy efficiency. As I mentioned before, the cost of energy is exorbitant when it comes to running a data center. Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive, as I'm sure John would tell you. One of the things that Valero is accomplishing is lower energy costs, as a result of building its own.

Gardner: Before we go to Valero, I have one last question on the market and some of the drivers. What about globalization? Are we seeing emerging markets, where there is going to be many more people online and more IT requirements? Is that a factor as well?

Moore: Absolutely.

Bennett: There are a number of factors. First of all, you have increasing access to the Internet and the increasing generation of complex information types. People aren't just posting text anymore, but pictures and videos. And, they're storing those things, which is feeding what we characterize as an information explosion. The forecast for storage consumption over the next 5 to 10 years is just phenomenal.

Perfect storm

On top of that, you have more and more organizations and businesses providing more of their business services through IT-based solutions. You talked about a perfect storm earlier with regard to the timing for data centers. Most organizations are in a perfect storm today of factors driving the need for ongoing investments and growth out of IT. The facilities have got to help them grow, not limit their growth.

Gardner: John Vann, you’re up. I'm sorry to have left you off on the sidelines there for so long. Tell us about Valero Energy Corp., and what it is that drove you to bite off this big project of data-center transformation and redesign?

Vann: Thanks a lot, Dana. Just a little bit about Valero. Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels and renewables, and we're one of the largest ethanol producers. We have a wind farm up in northern Texas, around Amarillo, that generates enough power to fuel our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different from others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. That puts water and other potentially disruptive issues close to the data center -- which was concerning, considering where the data center is located.

We began to look for alternative places. We were also really fortunate in the timing of our data center review. HP was just beginning its build of the six big facilities that it ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began designing and building our own data center.

So, we really were fortunate to have experts give us some advice and counsel. We did look at colocation. We also looked at other buildings, and we even looked at building another data center on our campus.

The problem with colocation back in those days of 2006, 2007, and 2008 was that there was a premium on space. As we did our economics, it was just better for us to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Gardner: What, in your opinion, was more impactful -- the planning, the execution, the migration? I guess the question should be, what ended up being more challenging than you expected initially? Where do you think, in hindsight, you'd put more energy and more planning, if you had to do it all again?

Solid approach

Vann: I think our approach was solid. We had a joint team of HP and the Valero Program Management Office. It went really well the way that was managed. We had design teams. We had people from networking architecture, networking strategy and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn’t have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

If we had one thing to do over again, we would probably change the way we did our IP renumbering. That was a very complex exercise, and we didn’t start that soon enough. That was very difficult.
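Renumbering is largely painstaking bookkeeping: every host in an old subnet needs a counterpart in the new one, plus updates everywhere an address was hard-coded. Here is a tiny sketch of the mapping step, with invented subnets, using Python's standard ipaddress module; Valero's actual plan is not described in the podcast.

```python
import ipaddress

old_net = ipaddress.ip_network("10.1.0.0/24")     # hypothetical old range
new_net = ipaddress.ip_network("172.16.5.0/24")   # hypothetical new range

def renumber(old_ip):
    """Preserve the host offset while swapping the network portion."""
    offset = int(ipaddress.ip_address(old_ip)) - int(old_net.network_address)
    return ipaddress.ip_address(int(new_net.network_address) + offset)

print(renumber("10.1.0.42"))  # 172.16.5.42
```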

We'd also probably put more project managers on the project, rather than using technical people to manage it. Technical folks are really good at putting the technology in place, but they really struggle with putting good, solid plans in place. But overall, I'd just say that the migration was probably the most complex part.

Gardner: Thank you for sharing that. How old was the data center that you wanted to replace?

Vann: It's about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical equipment inside the data center, we lost square footage.

Gardner: Do you agree, John, that some of the variables or factors that we discussed earlier in the podcast have changed, say, from just as few as six or seven years ago?

Vann: Absolutely. Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to balance your data center across cooling and power, along with UPS systems, generators, and things like that. So, building a new one really put us at the forefront.

Gardner: Can you give us some sense of the metrics now that this has gone through and been completed? Are there some numbers that you can apply to this in terms of the payback and/or the efficiency and productivity?

Potential problems

Vann: Not yet. We've seen some recent things that have happened here on campus to our old data center, because of weather and just some failures within the building. We’ve had some water leaks that have actually run into the data center floor. So that's a huge problem that would have flooded our production data center.

You can see the aging data center beginning to have failures. We've had some air-conditioner failures and some coolant leaking. I think our timing was just right. Even though we had been maintaining the old data center, things were just beginning to fail.

Gardner: So, certainly, there are some initial business continuity benefits there.

Vann: Exactly.

Gardner: Going back to Cliff Moore. Does anything you hear from John Vann light any light bulbs about what other people should be considering as they step up to the plate on these data center issues?

Moore: They certainly should consult John's crystal ball regarding the issues he's had in his old data center, and move quickly. Don't put it off. I tell people that these things do happen, and they can be extremely costly when you look at the cost of downtime to the business.

Gardner: Getting started, we talked about migration. It turns out that we did another podcast that focused specifically on data-center migration, and we can refer folks to that easily. What is it about planning and getting started, as you say, when people recognize that time might not be on their side? What are some of the initial steps, and how might they look to HP for some guidance?

Moore: We focus entirely on discovery early on. You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications, especially the mission-critical applications.

Typically, a move like John's requires multiple of what we call move groups. John's company had five or six, I believe. You simply cannot divide your servers up into these move groups without knowing what you might break by dividing them up. Those dependencies are critical, and that's probably the main failing point.
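Put another way, a valid move group has to contain every server its applications depend on: the groups are the connected components of the dependency graph uncovered in discovery. A minimal sketch with an invented inventory:

```python
from collections import defaultdict

# Invented (server, server) dependencies found during discovery.
dependencies = [
    ("erp-app", "erp-db"),
    ("erp-app", "ldap"),
    ("web-01", "web-db"),
    ("archive-01", "archive-01"),  # standalone server
]

graph = defaultdict(set)
for a, b in dependencies:
    graph[a].add(b)
    graph[b].add(a)

seen, move_groups = set(), []
for node in graph:                 # each connected component is one move group
    if node in seen:
        continue
    group, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in group:
            group.add(n)
            stack.extend(graph[n])
    seen |= group
    move_groups.append(sorted(group))

print(move_groups)
# [['erp-app', 'erp-db', 'ldap'], ['web-01', 'web-db'], ['archive-01']]
```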

Vann: We had five move groups. Knowing what applications go with what is a real chore in making sure that you have the right set of servers to move on a particular weekend. We also balanced it against downtime for the end customers, to make sure that we were not in the middle of a refinery turnaround or a major closing. Balancing those weekends so we had enough time to make the migration work was quite a challenge.

Gardner: John Vann, did you take the opportunity to not only redesign and upgrade your data center facilities, but at the same time, did you modernize your infrastructure or your architecture? You said you did quite a bit with virtualization already, was this a double whammy in terms of the facilities as well as the architecture?

Using opportunities

Vann: Yes. We took the opportunity to upgrade the network architecture. We also took the opportunity to go further with our consolidation. We recently finished moving servers from refineries into San Antonio. We took the opportunity to do more consolidation and more virtualization, upgrade our blade farm, and just do a lot more work around improving the overall infrastructure for applications.

Gardner: I'd like to take that back to John Bennett. I imagine you're seeing that one of the ways you can rationalize the cost is that you're not just repaving a cow path, as it were. You're actually re-architecting and therefore getting a lot greater efficiency, not only from the new facility, but from the actual reconstruction of your architecture, or the modernization and transformation of your architecture.

Bennett: There are several parts to that, and getting your hands around it can really extend the benefits you get from these kinds of projects, especially if you are making the kind of investment we are talking about in new data center facilities. Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

It can be a big step forward in terms of standardizing your IT environment, which is recommended by many industry analysts now in terms of preparing for automation or to reduce management and maintenance cost. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

John talked about dealing with the IP addresses, but the physical networking infrastructure in a lot of old data centers is a real hodgepodge that's grown organically over the years. I guess you can blame some of our companies for having invented Ethernet a long time ago. But, it's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost. They all come in there.

I actually have a question for John Vann as well. Because they had a pretty strong focus around governance, especially in handling change requests, I'm hoping he might talk a little bit about that part of the design and build project.

Vann: Our goal was to hold scope creep to a minimum. We had an approval process, where there had to be a pretty good reason for a change or for a server not to move. We fundamentally used the word "no" as much as we could, to make sure we got the right applications in the right place. Any kind of approval had to go through me. If I disagreed, and they still wanted to escalate it, we went to my boss. Escalation was rarely used. We had a pretty strong change-management process.

Gardner: I can see where that would be important right along the way, not something you want to think about later or adding onto the process, but something to set up right from the beginning.

We've had a very interesting discussion about the movement in enterprise data centers, where folks are doing a lot more transformation -- moving and relocating their data centers, modernizing them, and finding ways to eke out efficiencies -- but also trying to reduce the risk of moving in the future and looking at those all-important power and energy consumption issues as well.

I want to thank our guests. We've been joined today by Cliff Moore, America's PMO Lead for Critical Facilities Consulting at HP. Thank you, Cliff.

Moore: Thanks, Dana. Thanks, everybody.

Gardner: John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Thank you, John.

Bennett: Thank you, Dana.

Gardner: And lastly, John Vann, Vice President, Technical Infrastructure and Operations at Valero Energy. John, I really appreciate your frankness in sharing your experience, and I certainly wish you well in all of that.

Vann: Thank you very much, Dana. I appreciate it.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
