Friday, December 16, 2011

Stone Bond's Metadata Virtualization and Orchestration Improves Enterprise Data Integration Response Time and ROI

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide metadata-based data integration and management.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion on the need to make sense of the deluge and complexity of the data and information that is swirling in and around modern enterprises. Most large organizations today are able to identify, classify, and exploit only a small portion of the total data and information within their systems and processes.

Perhaps half of those enterprises actually have a strategy for improving on this fact. But business leaders are now recognizing that managing and exploiting information is a core business competency that will increasingly determine their overall success. That means broader solutions to data distress are being called for.

We'll now then look at how metadata-driven data virtualization and improved orchestration can help provide the inclusivity and scale to accomplish far better data management. Such access then leads to improved integration of all information into an approachable resource for actionable business activities.

With us now to help better understand these issues -- and the market for solutions to these problems -- are our guests, Noel Yuhanna, Principal Analyst at Forrester Research. Welcome to BriefingsDirect, Noel.

Noel Yuhanna: Thanks.

Gardner: We're also here with Todd Brinegar, Senior Vice President for Sales and Marketing at Stone Bond Technologies. Welcome, Todd. [Disclosure: Stone Bond is a sponsor of BriefingsDirect podcasts.]

Todd Brinegar: Dana, how are you? Noel, great to hear you, too.

Gardner: Welcome to you both. Let me start with you, Noel. It's been said often, but it’s still hard to overstate, that the size and rate of growth of data and information is just overwhelming the business world. Why should we be concerned about this? It's been going on for a while. Why is it at a critical stage now to change how we're addressing these issues?

Yuhanna: Well, data has been growing significantly over the last few years because of different application deployments, different devices, such as mobile devices, and different environments, such as globalization. These are obviously creating a bigger need for integration.

We have customers who have 55,000 databases, and they plan to double this in the next three to four years. Imagine trying to manage 55,000 databases. It’s a nightmare. In fact, they don’t even know what the count is actually.

Then, they're dealing with unstructured data, which is more than 75 percent of the data. It’s a huge challenge trying to manage this unstructured data. Forget about the intrusions and the hackers trying to break in. You can’t even manage that data.

Then, obviously, we have challenges of heterogeneous data sources, structured, unstructured, semi-structured. Then, we have different database types, and then, data is obviously duplicated quite a lot as well. These are definitely bigger challenges than we've ever seen.

Different data sources

Gardner: We're not just dealing with an increase in data, but we have all these different data sources. We're still dealing with mainframes. We're still adding on new types of data from mobile devices and sensors. It has become overwhelming.

I hear many times people talking about big data, and that big data is one of the top trends in IT. It seems to me that you can’t just deal with big data. You have to deal with the right data. It's about picking and choosing the correct data that will bring value to the process, to the analysis, or whatever it is you're trying to accomplish.

So Noel, again, to you, what’s the difference between big data and right data?

Yuhanna: It’s like GIGO, Garbage In, Garbage Out. A lot of times, organizations that deal with data don’t know what data they're dealing with. They don’t know that it’s valuable data in the organization. The big challenge is how to deal with this data.

The other thing is making business sense of this data. That's a very important point. And right data is important. I know a lot of organizations think, "Well, we have big data, but then we want to just aggregate the data and generate reports." But are these reports valuable? Fifty percent of the time they're not, and they've just burned away 1,000 CPU cycles for this big data.

That's where there's a huge opportunity for organizations that are dealing with such big data. First of all, you need to understand what this big data means, and ask are you going to be utilizing it. Throwing something into the big data framework is useless and pointless, unless you know the data.

Gardner: Todd, reacting to what Noel just said about this very impressive problem, it seems that the old approaches, the old architectures, the connectors and the middleware, aren't going to be up to the task. Why do we have to think differently then about a solution set when we face this deluge, and also getting to the right data rather than just all the data regardless of its value?

Brinegar: Noel is 100 percent correct, and it is all about the right data, not just a lot of data. It’s interesting. We have clients that have a multiplicity of databases. Some they don’t even know about or no longer use, but there is relevant data in there.

Dana, when you were talking about the ability to attach to mainframes and all legacy systems, as well as incorporate them into today's environments, that's really a big challenge for a lot of integration solutions and a lot of companies.

So the ability to come in, attach, and get the right data, and to make that data actionable and make it matter to a company, is really key and critical today. And being able to do that with the lowest cost of ownership in the market and the fastest time to value -- so that companies aren't creating a huge amount of tech on top of the tech that they already have to get at this right data -- that's really the key, critical part.

Gardner: Noel, thinking about how to do this differently, I remember it didn’t seem that long ago when the solution to data integration was to create one big, honking database and try to put everything in there. Then that's what you'd use to crunch it and do your queries. That clearly was not going to work then, and it’s certainly not going to work now.

So what’s this notion about orchestrating, metadata, and data virtualization? Why are some of these architectural approaches being sought out, especially when we start thinking about the real-time issues?

Holistic data set

Yuhanna: You have to look at the holistic data set. Today, most organizations or business users want to look at the complete data sets in terms of how to make business decisions. Typically, what they're seeing is that data has always been in silos, in different repositories, and different data segregations. They did try to bring this all together like in a warehouse trying to deliver this value.

But then the volumes of data, the real-time data needs are definitely a big challenge. Warehouses weren't meant to be real-time. They were able to handle data, but not in real time.

So this whole data virtualization layer delivers an even better framework for getting real-time data and the right data to consumers, to processes, and to applications -- whether it's structured, semi-structured, or unstructured data, all coming together from different sources, not only on-premise but also off-premise, such as partner data and marketplace data -- and providing that framework across different elements.

We talked about this many years ago and called it the information fabric, which is basically data virtualization that delivers this whole segregation of data in that layer, so that it could be consumed by different applications as a service, and this is all delivered in a real-time manner.

Now, an important point here is that it's not just read-only; you can also write back through this virtualized layer, so that changes reach the underlying data.
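To make that read/write point concrete, here is a minimal, illustrative Python sketch of a virtualization layer that federates reads across several sources and routes writes back to the owning system of record. It is not Stone Bond's or Forrester's actual implementation; the in-memory "sources" simply stand in for real databases and connectors.

    # Illustrative read/write data virtualization layer (simplified sketch).
    class SourceAdapter:
        """Wraps one underlying data source (simulated here with a dict)."""
        def __init__(self, name, records):
            self.name = name
            self.records = records  # key -> dict of fields

        def read(self, key):
            return self.records.get(key)

        def write(self, key, fields):
            # Write-back: the change lands in the source system, not in a copy.
            self.records.setdefault(key, {}).update(fields)

    class VirtualLayer:
        """Federates reads across sources and routes writes to the owner."""
        def __init__(self, sources):
            self.sources = sources

        def read(self, key):
            merged = {}
            for src in self.sources:
                rec = src.read(key)
                if rec:
                    merged.update(rec)  # later sources enrich the combined view
            return merged or None

        def write(self, key, fields, owner_name):
            owner = next(s for s in self.sources if s.name == owner_name)
            owner.write(key, fields)  # write-through to the system of record

    crm = SourceAdapter("crm", {"C1": {"name": "Acme", "phone": "555-0100"}})
    erp = SourceAdapter("erp", {"C1": {"credit_limit": 50000}})
    fabric = VirtualLayer([crm, erp])

    print(fabric.read("C1"))                         # combined view from both sources
    fabric.write("C1", {"phone": "555-0199"}, "crm")
    print(crm.read("C1"))                            # the CRM itself was updated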

Definitely, things have changed with this new framework, and there are solutions out there that offer the whole framework -- not just accessing and integrating data, but also metadata, security, integration, and transformation.

Gardner: How about that, Todd Brinegar? When we think about a fabric, when we think about trying to access data regardless of where it lives, and get it closer to real time, what are the architectural approaches that you think are working better? What are you putting in place yourselves to try to solve this issue?

Brinegar: It's a great lead in from Noel, because this is exactly the fabric and the framework that Enterprise Enabler, Stone Bond’s integration technology, is built on.

What we've done is look at it from a different approach than traditional integration. Instead of taking old technologies and modifying those technologies linearly to effect an integration and bring that data into a staging database and then do a transformation and then massage it, we've looked at it three-dimensionally.

We attach with our AppComms, which are our connectors, to the metadata layer of an application. We don't put an agent within the application. We get at the data about the data. We separate that data from multiple sources -- unlimited sources -- and orchestrate it into a view that a client has. It could be Salesforce.com, SharePoint, a portal, Excel spreadsheets, or anything that they're used to consuming that data in.
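The pattern Brinegar describes can be illustrated with a short sketch: each connector publishes metadata about what its source exposes, and a view definition maps source fields directly into the shape a portal or spreadsheet expects, with no staging database in between. This is a generic, assumed illustration of metadata-driven orchestration, not Stone Bond's actual AppComm API.

    from typing import Callable, Dict, List

    class Connector:
        """A hypothetical connector: how to fetch records, plus field metadata."""
        def __init__(self, name: str, fetch: Callable[[], List[dict]], fields: List[str]):
            self.name = name
            self.fetch = fetch      # pulls live records from the source
            self.fields = fields    # the "data about the data" this source exposes

    # View definition: target column -> (connector name, source field)
    order_view = {
        "Customer": ("crm", "account_name"),
        "Order ID": ("orders", "id"),
        "Amount":   ("orders", "total"),
    }

    def orchestrate(view: Dict[str, tuple], connectors: Dict[str, Connector], join_key: str):
        """Join live records from each source on join_key, shaped as the view asks."""
        for col, (src, fld) in view.items():
            assert fld in connectors[src].fields, f"{src} does not expose {fld}"
        data = {name: {r[join_key]: r for r in c.fetch()} for name, c in connectors.items()}
        shared_keys = set.intersection(*(set(d) for d in data.values()))
        return [{col: data[src][key][fld] for col, (src, fld) in view.items()}
                for key in shared_keys]

    connectors = {
        "crm": Connector("crm", lambda: [{"cust_id": 1, "account_name": "Acme"}],
                         ["cust_id", "account_name"]),
        "orders": Connector("orders", lambda: [{"cust_id": 1, "id": "O-9", "total": 1200}],
                            ["cust_id", "id", "total"]),
    }
    print(orchestrate(order_view, connectors, join_key="cust_id"))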

Actionable data

Gardner: Just to be clear, Todd, your architecture and solution approach is not only for access for analysis, for business intelligence (BI), for dashboards and insights -- but this is also for real-time running application sets. This is actionable data?

Brinegar: Absolutely. With Enterprise Enabler, we're not only a data-integration tool, we're an applications-integration tool. So we are EAI/ETL. We cover that full spectrum of integration. And as you said, it is the real-time solution, the ability to access and act on that information in real time.

Gardner: We described why this is a problem and why it's getting worse. We've looked at one approach to ameliorating these issues. But I'm interested in what you get if you do this right.

Let's go back to Noel. For some of the companies that you work with at Forrester, that you are familiar with, the enterprises that are looking to really differentiate themselves, when they get a better grasp of their data, when they can make it actionable, when they can pull it together from a variety of sources, old and new, on-premises and off-premises, how impactful is this? What sort of benefits are they able to gain?

Yuhanna: The good thing about data virtualization is that it's not just a single benefit. There are many, many benefits of data virtualization, and there are customers who are doing real-time BI with data virtualization. As I mentioned, there are drawbacks and limitations in some of the older approaches, technologies, and architectures we've used for decades.

We want real-time BI, in the sense that you can’t just wait a day for this report to show up. You need this every hour or every minute. So these are important decisions you've got to make for that.

Real-time BI is definitely one of the big drivers for data virtualization, but also having a single version of the truth. As you know, more than 30 percent of data is duplicated in an organization. That’s a very conservative number. Many people don’t know how much data is duplicated.

And you have different duplication of data -- customer data, product data, or internal data. There are many different types of data that are duplicated. Then the data has a quality issue, because you may change customer data in one of the applications, which touches one database, but the other database is not synchronized. What you get is inconsistent data, and customers and other business users don't really value the data anymore.

A single version of the truth is a very important deliverable from these solutions -- something that has never really been possible before unless you had one single database, and most organizations have multiple databases.
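As a concrete illustration of the single-version-of-the-truth idea, the sketch below reconciles the same customer record held in two applications by letting the most recently updated value win. This is only one possible reconciliation policy, shown here as an assumption, not a description of any vendor's product.

    from datetime import date

    # The same customer, duplicated in two systems with conflicting values.
    billing = {"cust_42": {"email": "old@acme.com", "updated": date(2011, 3, 1)}}
    support = {"cust_42": {"email": "new@acme.com", "updated": date(2011, 11, 20)}}

    def golden_record(key, *sources):
        """Last-writer-wins reconciliation across duplicated records."""
        candidates = [s[key] for s in sources if key in s]
        return max(candidates, key=lambda rec: rec["updated"])

    print(golden_record("cust_42", billing, support))  # the fresher support record wins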

There's also dashboarding. You want to get data from different sources and be able to present business value to the consumers, to the business users, what have you. And in other cases, like enterprise search, you want to be able to search data very quickly.

Simpler compliance

Imagine an auditor walks into an organization and wants to look at data for a particular event, or an activity, or a customer, searching across a thousand resources. It could be a nightmare. A compliance initiative built on data virtualization becomes a lot simpler.

Then, you're doing things like content-management applications, which need to federate and integrate data from many sources to present more valuable information. Also, smartphones and mobile devices want data from different systems, all tied together for their consumers and business users, effectively.

So data virtualization has quite a strong value proposition and, typically, organizations get the return on investment (ROI) within six months or less with data virtualization.

Gardner: Todd, at Stone Bond, when you look to some of your customers, what are some of the salient paybacks that they're looking for? Is there some low-hanging fruit, for example? It sounds from what Noel said that there are going to be payoffs in areas you might not even have anticipated, but what are the drivers? What are the ones that are making people face the facts when it comes to data virtualization and get going with it?

Brinegar: With Stone Bond and our technology, Enterprise Enabler, the ability to virtualize, federate, and orchestrate, all in real time, is a huge value. The biggest thing is time to value, though. How quickly can they get the software configured and operational within their enterprise? That is really the key that is driving a lot of our clients' actions.

When we do an installation, a client can be up and operational doing their first integration transformations within the first day. That’s a huge time-to-value benefit for that client. Then, they can be fully operational with complex integration in under three weeks. That's really astounding in the marketplace.

I have one client that on one single project calculated $1.5 million cost savings in personnel in the first year. That’s not even taking into account a technology that they may be displacing by putting in Enterprise Enabler. Those are huge components.

Gardner: How about some examples Todd, use cases? I know sometimes you can name companies and sometimes you can't, but if you do have some names that you can share about what the data virtualization value proposition is doing for them, great.

Brinegar: HP is a great example. HP runs Enterprise Enabler in their supply chain for their Enterprise Server Group. That group provides data to all the suppliers within the Enterprise Server Group on an on-time basis.

They are able to build on demand and take care of their financials in the manufacturing of the servers much more efficiently than they ever have. They were experiencing, I believe, a 10-times return on investment within the first year. That’s a huge cost benefit for that organization. It's really kept them a great client of ours.

We do quite a bit of work in the oil business and the oil-field services business, and each one of our clients has experienced a faster ROI and a lower total cost of ownership (TCO).

We just announced recently that most of our clients experienced a 300 percent ROI in the first year that they implemented Enterprise Enabler. CenterPoint Energy is a large client of Stone Bond and they use us for their strategic transformation of how they're handling their data.

How to begin

Gardner: Let’s go back to Noel. When it comes to getting started, because this is such a big problem, many times it’s trying to boil the ocean, because of all the different data types, the legacy involvement. Do you have a sense of where companies that are successful at doing this have begun?

Is there a pattern, is there a methodology that helps them get moving toward some of these returns that Todd is talking about, that data virtualization is getting these assets into the hands of people who can work with them? Any thoughts about where you get started, where you begin your journey?

Yuhanna: One approach is taking an issue, like an application-specific strategy, and building blocks on that; another is going out and looking at an enterprise-wide strategy. For the enterprise-wide strategy, I know that some of the large organizations in financial services, retail, and so forth are starting to embark on looking at all of this data in a more holistic manner:

"I've got customer data that is all over the place. I need to make it more consistent. I need to make it more real-time." Those are the things that I'm dealing with, and I think those are going to be seen more in the coming years.

Obviously, you can’t boil the ocean, but I think you want to start with some data which becomes more valuable, and this comes back to the point that you talked about as the right data. Start with the right data and look at those data points that are being shared and consumed by many users, business users, and that’s going to be valuable for the business itself.

The important thing is also that you're building the solution block by block. You can definitely leverage some existing technologies if you want to, but I would definitely recommend looking at newer technologies, because they definitely are faster. They do a lot of caching. They do a lot of faster integration.

As Todd was mentioning, quicker ROI is important. You don't have to wait a year trying to integrate data. So I think those are critical for organizations going forward. But you also have to look at security, availability, and performance. All of these are critical when you're making decisions about what your architecture is going to look like.

Gardner: Noel, you do a lot of research at Forrester. Are there any reports, white papers, or studies that you could point to that would help people as they are starting to sort through this to decide where to start, where the right data might be?

Yuhanna: We've actually done extensive research over the last four or five years on this topic. If you look at Information Fabric, this is a reference architecture we've told customers to use when building data virtualization themselves. You can build the data virtualization yourself, but obviously it will take a couple of years to build. It's a bit complex to build, and I think that's why vendor solutions are better at that.

But Information Fabric reports are there. Also, information as a service is something that we've written about -- best practices, use cases, and also vendor solutions around this topic of discussion. So information as a service is something that customers could look at and gain understanding.

Case studies

We have use cases or case studies that talk about the different types of deployments, whether it's real-time BI implementations, single-version-of-the-truth projects, fraud detection, or any of the other types of environments they're doing. So we definitely have case studies as well.

There are case studies, reference architectures, and even product surveys, which talk about all of these technologies and solutions.

Gardner: Todd, how about at Stone Bond? Do you have some white papers or research, reports that you can point to in order to help people sort through this and perhaps get a better sense of where your technologies are relevant and what your value is?

Brinegar: We do. On our website, stonebond.com, we have our CTO Pamela Szabó's blog, which offers a great perspective on data, big data, and the changing face of data usage and virtualization.

I wish everybody would explore the different opportunities and the different technologies that there are for integration and really determine not just what you need today -- that's important -- but what you will need tomorrow. What's the tech that you're going to carry forward, and how much is the TCO going to be as you move forward? Really make that value decision beyond that one specific project, because you're going to live with the solution for a long time.

Gardner: Very good. We've been listening to a sponsored podcast discussion on the need to make sense of the deluge and the complexity of data and information swirling in and around modern enterprises. We've also looked at how better data access can lead to improved integration of all information into approachable resources for actionable business activities and intelligence.

I want to thank our guests, Noel Yuhanna, Principal Analyst at Forrester Research. Thanks so much, Noel.

Yuhanna: Thanks a lot.

Gardner: And also Todd Brinegar, the Senior Vice President of Sales and Marketing at Stone Bond Technologies. Thanks to you too, Todd.

Brinegar: Much appreciated. Thank you very much, Dana. Thank you very much, Noel.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide metadata-based data integration and management. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Wednesday, December 14, 2011

Case Study: How SEGA Europe Uses VMware to Standardize Cloud Environment for Globally Distributed Game Development

Transcript of a BriefingsDirect podcast on how SEGA Europe has moved to a more secure and scalable VMware cloud solution for its worldwide development efforts.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how a major game developer in Europe is successfully leveraging the hybrid cloud model.

We’ll learn how SEGA Europe is standardizing its cloud infrastructure across its on-premises operations, as well as with a public cloud provider. The result is a managed and orchestrated hybrid environment to test and develop multimedia games, one that dynamically scales productively to the many performance requirements at hand.

We’re joined by a systems architect with SEGA in London to learn more about how the hybrid approach to multiple, complementary cloud instances is meeting SEGA’s critical development requirements in a new way. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Please join me now in welcoming Francis Hart, Systems Architect at SEGA Europe. Welcome to the podcast, Francis.

Francis Hart: Hi.

Gardner: We’re all very familiar with the amazing video games that are being created nowadays. And SEGA of course is particularly well-known for the Sonic the Hedgehog franchise going back a number of years, and I have to tell you, Francis, my son is a big fan of those games.

But I'm curious about how, behind the scenes, these games are made. How they come into being and what are some of the critical requirements that you have from a systems architecture perspective when developing these games?

Hart: We have a lot of development studios across the world. We're working on multiple projects. We need to ensure that we supply them with a highly scalable and reliable solution in order to test, develop, and produce the game and the code in time.

Gardner: And how many developers are you working with there at SEGA Europe?

Hart: We have a number of different development studios. We’re probably looking at thousands of individual developers across the world.

Gardner: For those folks who are not familiar with the process, there is the creation of the code, there is the test and debug, and builds. It's quite complicated. There's a lot going on, many different moving parts. How did you start approaching that from your IT environment, from building the right infrastructure to support that?

Targeting testing

Hart: One of the first areas we targeted very early on was the last process in those steps, the testing, arguably one of the most time-consuming processes within the development cycle. It happens pretty much all the way through as well to ensure that the game itself behaves as it should, it’s tested, and the customer gets the end-user experience they require.

The biggest technical goal that we had for this was being able to move large amounts of data -- uncompiled code -- from different testing offices around the world to the staff. Historically, we had some major issues in securely moving that data around, and this is why we started looking into cloud solutions.

Gardner: How did you use to do it? What was the old-fashioned way?

Hart: For very, very large game builds -- and we're talking game builds above 10 gigabytes -- they ended up being couriered within the country and sent by overnight file transfer outside of the country. So, very old-school methods.

We needed both to tighten that up, to make sure we understood where the game builds were, and also to understand exactly which version each of the testing offices was using. So it's about gaining control, but also providing more security.
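A simple way to picture that control is a build manifest: record which build each testing studio was sent, along with a checksum of the file, so there is one authoritative answer to "which version is studio X actually testing?" The sketch below is a generic illustration with hypothetical names, not SEGA's actual distribution system.

    import hashlib

    def checksum(path, chunk=1 << 20):
        """SHA-256 fingerprint of a build archive, read in 1 MB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    manifest = {}  # studio -> {"build": build id, "sha256": fingerprint}

    def register_delivery(studio, build_id, build_path):
        manifest[studio] = {"build": build_id, "sha256": checksum(build_path)}

    def studios_out_of_date(current_build):
        return [s for s, info in manifest.items() if info["build"] != current_build]

    # Example (hypothetical paths and build IDs):
    # register_delivery("uk_testing", "build_1042", "/builds/build_1042.tar")
    # studios_out_of_date("build_1042")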

Gardner: Clearly one of the requirements here is to manage large files rapidly across geographic distances, but with security and management control, governance, and so forth. But as I understand, you're also dealing with this sort of peak-and-trough issue about the infrastructure itself. You need to ramp up a lot of servers to do the build, but then they sit there essentially unproductive between the builds. How did you flatten that out or manage the requirements around the workload support?

Hart: Typically, in the early stages of development, there is a fair amount of testing going on, and it tends to be quite small -- the number of staff involved in it and the number of build iterations. Later on, when the game reaches the end of its product life-cycle, we're talking multiple game iterations a day, and the game size has gotten very large at that point. The number of people involved in the testing, to meet the deadlines and get the game shipped on date, is into the hundreds and hundreds of staff.

Gardner: How has virtualization and moving your workloads into different locations evolved over the years?

Hart: We work on the idea of having a central platform for a lot of these systems. Using virtualization to do that allowed us to scale off at certain times. Historically, we always had an on-premise VMware platform to do this. Very recently, we've been looking at ways to use that resource within a cloud to cut down on some of the Capex loading, but also remain a little bit more agile with some of the larger titles, especially the online games that are coming around.

Gardner: Right. So we're seeing a lot more of the role-playing game (RPG) types of titles, and games themselves in the cloud. That must influence what you're doing in terms of thinking about your future direction.

Hart: Absolutely. We’ve been looking at things like the hybrid cloud model with VMware as a development platform for our developers. That's really what we're working on now. We've got a number of games in the pipeline that have been developed on the hybrid cloud platform. It gives the developers a platform that is exactly the same and mirrored to what it would eventually be in the online space through ISPs like Colt, which should be hosting the virtual cloud platform.

Gardner: So if the end destination for the runtime, or the operational runtime, for the game is going to be the cloud, it makes sense to live "of, for, and by" the cloud, I suppose. It’s more complementary. It’s always going to be there, right?

Gaining cost benefits

Hart: Yes. And one of the benefits we're seeing in the VMware offering is that, regardless of which data center in the world it's in, it's the same standard platform. It also allows us to leverage multiple ISPs, and hopefully gain some cost benefits from that.

Gardner: Francis, tell me a little bit about the pilot project. No one is going to jump up and put their mission-critical activities into a cloud environment, especially a hybrid environment, overnight. So the crawl-walk-run approach seems to be the most prudent way. Tell me a little bit about what your goals were and what you've been able to attain even in a pilot setting?

Hart: Very early on, we were in discussions with Colt and also VMware to understand what technology stack they were bringing into the cloud. We started doing a proof of concept with VMware and a professional services company, and together we were able to come up with a proof of concept to distribute our game testing code, which previously relied on a very old-school distribution system. So anything better would improve the process.

There wasn't too much risk to the company. So we saw the opportunity to have a hybrid cloud set up to allow us to have an internal cloud system to distribute the codes to the majority of UK game testers and to leverage high bandwidth between all of our sites.

For the game testing studios around Europe and the world, we could use a hosted version of the same service which was up on the Colt Virtual Cloud Director (VCD) platform to supply this to trusted testing studios.

Gardner: When you approach this hybrid cloud model, it’s one thing to be able to technically do that, to have the standardization and to have the products in place that will support the workloads and the virtualization continuity, the similar environment. But what about managing that? What about having a view into what’s going on so that you know what aspects of the activity and requirements are being met and where? It must involve quite a bit of management?

Hart: Yes. Also the virtual cloud environment of vCloud Director has a web portal that allows you to manage a lot of this configuration in a central way. We’re also using VMware Cloud Connector, which is a product that allows you to move the apps between different cloud data centers. And doing this allows us to manage it at one location and simply clone the same system to another cloud data center.

In that regard, the configuration very much was in a single place for us in the way that we designed the proof of concept. It actually helped things, and the previous process wasn’t ideal anyway. So it was a dramatic improvement.

Gardner: Well, let’s dig into that a bit. What were some of the metrics of success, even on your pilots? I understand that you’re going to be expanding on that, but are there data points that we can look to whether it’s reduction in cost for servers, operation, security, time to development and test? What were some of the salient paybacks of doing development in this manner?

Hart: One of the immediate benefits was around the design process. It's very obvious that we were tightening up security within our build delivery to the testing studios. Nothing was with a courier on a bike anymore, but within a secured transaction between the two offices.

Risk greatly reduced

Also from a security perspective, we understood exactly what game assets and builds were in each location. So it really helped the product development teams to understand what was where and who was using what, and so from a risk point of view it’s greatly reduced.

In terms of stats and the amount of data throughput, it’s pretty large, and we’ve been moving terabytes pretty much weekly nowadays. Now we’re going completely live with the distribution network.

So it’s been a massive success. All of the UK testing studios are using the build delivery system day to day, and for the European ones we’ve got about half the testing studios on board that build delivery system now, and it’s transparent to them.

Gardner: Francis, in moving to a hybrid environment, in practical terms, was there anything that appeared, that crept in, that you weren’t anticipating? Was there something about this that caught you by surprise -- either good or bad?

Hart: Not particularly. VMware was very good at allowing us to understand the technology and that's one of the benefits of working with a professional services reseller. In terms of gotchas, there weren't too many. There were a lot of good surprises that came up and allowed us to open the door to a lot of other VMware technologies.

Now, we're also looking at automating a lot of processes within vCenter Orchestrator and other VMware products. They really gave us a good stepping stone into the VMware catalogue, rather than just vSphere, which we were using previously. That was very handy for us.

Gardner: I’d like to just pause here for a second. Your use of vSphere -- and I believe you’re on 4.1 if my notes are correct -- has gotten you to a fairly high level of virtualization. That must have been an important stepping stone to be able to have the dynamic ability to ramp up and down your environments, your support infrastructure, but also skills. I imagine there must have been a comfort zone with virtualization that you needed to have in order to move into the cloud level, too.

Hart: Absolutely. We already have a fair footprint in Amazon Web Services (AWS), and it was a massive skill jump that we needed to train members of the staff in order to use that environment. With the VMware environment, as you said, we already have a large amount of skill set using vSphere. We have a large team that supports our corporate infrastructure and we've actually got VMware in our co-located public environment as well. So it was very, very assuring that the skills were immediately transferable.

Gardner: Let’s get back to what you’re going to be doing, now that this pilot has been successful. You’ve had some success with meeting your requirements, also getting some benefits that you weren't anticipating and that all important security control and governance aspect. What’s the next step? Where did you go with your initial stepping stone into hybrid cloud? How are you going to get into that run mode now that you've sort of walked and crawled?

Game release

Hart: As I mentioned before, the first part was dealing with the end of the process, and that was the testing and the game release process. Now, we’re going to be working back from that. The next big area that we’re actively involved in is getting our developers to develop online games within the hybrid environment.

So they’re designing the game and the game’s back-end servers to be optimal within the VMware environment. And then, also pushing from staging to live is a very simple process using the Cloud Connector.

Gardner: Well, that sounds a lot like what we know in the business as platform as a service (PaaS) where you are actually accomplishing much, if not all, of the development, test and deploy cycle -- the life-cycle of the applications in the cloud.

Hart: Absolutely. We're restructuring and redesigning the IT systems within SEGA to be more of a development operations team to provide a service to the developers and to the company.

Gardner: Great. I really appreciate your sharing your story with us, Francis. Now that you've done this a bit, any words of wisdom, 20/20 hindsight, that you might share with others who are considering moving more aggressively into private cloud, hybrid cloud, and ultimately perhaps the full PaaS value?

Hart: Just get some hands-on experience and play with the cloud stack from VMware. It’s inexpensive to have a go and just get to know the technology stack.

Gardner: Thanks. You've been listening to a sponsored podcast discussion on how a major game developer, SEGA, is leveraging the hybrid cloud model using the VMware cloud stack.

I’d like to thank our guest, Francis Hart, System Architect at SEGA Europe, based in London. Thanks again so much, Francis.

Hart: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for joining us as well, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how SEGA Europe has moved to a more secure and scalable VMware cloud solution for its worldwide development efforts. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Monday, December 12, 2011

Efficient Data Center Transformation Requires Consolidation and Standardization Across Critical IT Tasks

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure. You don’t need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.

One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding with perhaps even growing budgets.

With us now to explain how these solutions can drive successful data center transformation is our panel, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.

Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that’s important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.

Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.

The other key benefit is that when you can manifest these quick wins in terms of some specific return on investment (ROI) business outcome, that also translates very nicely as well and gets a lot of key attention, which I think has some downstream benefits that actually help out the team in multiple ways.

Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.

Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.

One of the benefits in that is that you can actually break down these projects just in terms of some specific type of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.

Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.

Campbell: That's exactly right. A virtuous cycle is well put. That really allows the team to get the additional green light to go to the next step in terms of the blueprint that they're trying to execute on. It also gets a green light in terms of additional dollars and, in some cases, additional headcount to add to the team as well.

What this does is, and I like this term the virtuous cycle, not only allow you to attract key talent, but it really allows you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the ability for the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.

TCO savings

Campbell: That’s right. A good example is where we have been able to see a significant total cost of ownership (TCO) type of savings with one of our customers, McKesson, that in fact was taking one of these consolidated approaches with all their development tools. They saw a considerable savings, both in terms of dollars, over $12.9 million, as well as a percentage of TCO savings that was upwards of 50 percent.

When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.

Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.

Randy, why is it so important to know your baseline tracks and then measure them each and every step along the way?

Randy Lawton: Thank you, Dana. Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.

So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.

It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.

From that, we bridge into the second phase, which is architect and validate, where we begin to solution out and develop the strategies for a future-state design that includes the standardization and consolidation approaches, and on that begin to assemble the business case. In a detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that are required to be tracked to show progress of the application teams and infrastructure teams that contribute to the program in order to guarantee success and provide visibility to all the stakeholders as part of the program, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.
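A scorecard of the kind Lawton describes can be as simple as a rollup of task status by phase and by team, so progress stays visible to every stakeholder. The sketch below is an assumed, simplified illustration, not HP's actual tooling; the phases and teams are placeholders.

    from collections import defaultdict

    tasks = [
        # (phase, team, status) -- status is "done", "in_progress", or "blocked"
        ("architect_validate", "storage",  "done"),
        ("architect_validate", "network",  "in_progress"),
        ("implementation",     "app_team", "blocked"),
    ]

    def scorecard(task_list):
        rollup = defaultdict(lambda: {"done": 0, "total": 0, "blocked": 0})
        for phase, team, status in task_list:
            for bucket in (phase, team):        # roll up by phase and by team
                rollup[bucket]["total"] += 1
                if status == "done":
                    rollup[bucket]["done"] += 1
                elif status == "blocked":
                    rollup[bucket]["blocked"] += 1
        return {name: {**counts, "pct_complete": round(100 * counts["done"] / counts["total"])}
                for name, counts in rollup.items()}

    for name, stats in scorecard(tasks).items():
        print(name, stats)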

Gardner: As we heard from Duncan about why it’s important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.

Complex engagements

Lawton: That’s very true. In these complex engagements, it’s normally some time before there are quick-win type of achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years back through 2008, we were building six new data centers so that we could consolidate 185 worldwide. So it was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep the movement and momentum of the program very visible to management.

Gardner: Randy, I know that many organizations are diligent about the scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?

Lawton: It absolutely is, Dana. Often in these kinds of programs, there are business activities and projects going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to fit together at some level.

What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it’s typically from the infrastructure perspective where we pull together all of the application and infrastructure activities and sometimes the business metrics as well.

We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or that the business may be running separate metrics and then application teams and infrastructure are running the IT level metrics that all get rolled together into some consolidated reporting on some level.

Gardner: And that, of course, ensures that IT isn’t the odd man out, when it comes to being on time and in alignment with these other priorities. That sounds like a very nice addition to the way things may have been done five or 10 years ago.

Lawton: Absolutely.

Gardner: Any examples, Randy, either with organizations you could name, or use cases where you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?

Cloning applications

Lawton: A very notable example is one of our telecom customers we worked with during the last year and finished a program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery for stakeholders in the program, there were nine different companies represented. There were some outsourced vendors from the application support side in the acquiree’s company, outsourcers in the application side for the acquiring company, and outsourcers in the data centers that operated data center infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies, all of which needed to be executed in less than 96 hours in order to meet the downtime window required by the acquiring company's executive management.

It was the detailed scorecarding and operating war rooms to keep those scorecards up to date in real-time that allowed us to be able to accomplish that. There’s just no possible way we would have been able to do that ahead of time.
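The arithmetic behind a real-time cutover war room can be reduced to a burn-rate check: given how many tasks have been completed so far, is the remaining work achievable in the hours left in the outage window? The numbers below are placeholders for illustration, not figures from the actual program.

    def on_track(tasks_remaining, hours_remaining, tasks_done, hours_elapsed):
        """True if the observed completion rate is fast enough to finish in time."""
        if hours_elapsed == 0:
            return True                                  # nothing to measure yet
        burn_rate = tasks_done / hours_elapsed           # tasks completed per hour
        required = tasks_remaining / hours_remaining     # rate needed to hit the window
        return burn_rate >= required

    # e.g. 900 of 2,500 tasks done after 30 of the 96 hours:
    print(on_track(tasks_remaining=1600, hours_remaining=66, tasks_done=900, hours_elapsed=30))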

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

I think that HP was very helpful in working with the customer and bringing that perspective into the program very early on, because there had been a failed attempt to operate this program prior to that, and with our assistance and with developing these tools and capabilities, we were able to successfully achieve the objectives of that program.

Gardner: One thing that jumped out at me there was your use of the words real time. How important is it to capture this data and adjust it and update it in real-time, where there’s not a lot of latency? How has that become so important?

Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.

In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics. So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.

We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.

Gardner: So as is often the case, good information in, good results back.

Lawton: Absolutely.

Organizing infrastructure

Gardner: Let’s move now to our third panelist today. We're going to hear about why organizing facilities and infrastructure planning in conjunction in relationship to one another is so important.

Now to Larry Hinman. Larry, let’s go historical for a second. Has there usually been a completely separate direction for facilities planning in IT infrastructure? Why was that the case, and why is it so important to end that practice?

Larry Hinman: Hi, Dana. If you look over time and over the last several years, everybody has data centers and everybody has IT. The things that we've seen over the last 10 or 15 years are things like the Internet and criticality of IT and high density and all this stuff that people are talking about these days. If you look at the ways companies organized themselves several years ago, IT was a separate organization, facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to try to get IT groups and facilities organizations to talk to and work with each other, there is still this gap in truly how to glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically will model IT and data centers, and even when they are attempting to glue them together, they only try to look at power requirements.

One of the things that we spotted a few years ago was that when companies do this, the risk of over provisioning or under provisioning is very high. We tried to figure out a way to back this up a few notches.

How can we remedy this problem and how can we bring some structure to this and bring some, what I would call, sanity to the whole equation, to be able to have something predictable over time? What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.

So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.

One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.

Getting synchronized

Look at that, look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what the risk profiles and tolerances for risk are from an IT perspective and how the business is run, gluing that together with an IT infrastructure strategy, and then gluing all of that into a data center facility strategy.

What we found over time is that we were able to take this complex program and have something predictable, scalable -- all of the groovy stuff that people talk about these days -- and something that I could really manage. If you're called into the boss’s office, as I and others have been many times over the years, and asked what the data center is going to look like over the next five years, at least I would have some hope of answering that question.

That is kind of the secret sauce here: the way we developed our framework was by breaking this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it requires culture change. It's a threat to the way the organization itself is structured from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together, collaboratively and collectively, to begin to glue this together.

Gardner: You mentioned earlier the issues around energy and the ongoing importance of its cost structure. I suppose it's not just fitting these together, but making them fit for purpose -- that is, keeping IT and facilities fit for purpose on an ongoing basis.

It’s not really something that you do once and then sit still, as would have been the case several years ago, or in a past generation of computing. This is something that's dynamic. So how do you make a fit-for-purpose goal for data center facilities something that you can maintain over time, even as your requirements change?

Hinman: You just hit a very important point. One of the big lessons learned for us over the years has been that it's not enough to provide this kind of modeling and predictability for clients and customers just once. We had to get out of the mode of doing this once and putting it on a shelf: deploying a future-state data center framework, getting the client pointed in the right direction, and leaving it at that.

The data, as you said, gets archived, and they pick it up every few years and do it again and again, finding that a lot of times there's an "aha" moment in those gaps between one effort and the next.

One thing that we have learned is to not only have this deliberate framework and break it into these four simple areas, where we can manage all of this, but also to redevelop and re-hone our tools and our focus, so that we can use this as a dynamic, ongoing process to get the client pointed in the right direction. Build a data center framework that truly is right-sized, integrated, aligned, and all that stuff. But then have something very dynamic that the client can manage over time.

That's what we've done. We've taken all of our modeling tools and integrated them with common databases, so that now we can start to glue in even the operational piece -- data center infrastructure management (DCIM), architecture and infrastructure management, facilities management, and so on -- and the client can have this real-time, long-term view, what we call a 10-year view, of the overall operation.

So now, you do this. You get it pointed in the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we've been heading with all of our tools and processes over the last two to three years.
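
[As an illustration of the "common databases" idea Hinman describes -- separate modeling tools feeding one shared store so a long-range view can be queried in one place -- here is a minimal Python sketch. It is an assumption, not HP's actual tooling; the table, column, and tool names are all hypothetical.]

    # A minimal sketch of several models writing to one common store.
    # Hypothetical schema and data, for illustration only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE projections (
            source TEXT,     -- which model produced the row (IT, facilities, DCIM)
            year   INTEGER,  -- projection year
            metric TEXT,     -- e.g. 'rack_count', 'kw_load'
            value  REAL
        )
    """)

    # Each modeling tool feeds the same store instead of its own spreadsheet.
    rows = [
        ("it_growth_model",  2012, "rack_count", 120),
        ("it_growth_model",  2016, "rack_count", 210),
        ("facilities_model", 2012, "kw_load",    450.0),
        ("facilities_model", 2016, "kw_load",    800.0),
    ]
    conn.executemany("INSERT INTO projections VALUES (?, ?, ?, ?)", rows)

    # One query now gives a combined long-term view across IT and facilities.
    for source, year, metric, value in conn.execute(
        "SELECT source, year, metric, value FROM projections ORDER BY year, source"
    ):
        print(year, source, metric, value)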

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various facility sourcing options, as we call them -- and PODs are certainly one of those these days -- we've also been very careful to make sure that our framework is completely unbiased when it comes to any specific sourcing option.

What that means is that, over the last 10-plus years, most people were really focused on building new green-field data centers. It was all about space, then it became all about power, then about cooling, but we were still in the brick-and-mortar age. Now modularity and scalability are driving everything.

With PODs coming on the scene, along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make our framework almost generic, so that we can complete all of the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.

We find these days that the POD is actually a very nice fit for a lot of our clients, because it provides high-density server farms, it can be implemented very quickly, and it brings power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take hold with a lot of customers.
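
[For reference, power usage effectiveness is total facility energy divided by the energy delivered to the IT equipment, so lower is better. The short Python sketch below works through that arithmetic; the numbers are made up for illustration and are not measurements from any POD deployment.]

    # PUE = total facility power / IT equipment power (lower is better).
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        return total_facility_kw / it_equipment_kw

    # Hypothetical comparison: a traditional room versus a containerized/POD
    # deployment serving the same 1,000 kW of IT load.
    print(round(pue(total_facility_kw=2000, it_equipment_kw=1000), 2))  # 2.0
    print(round(pue(total_facility_kw=1200, it_equipment_kw=1000), 2))  # 1.2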

Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, and these methods even more productive, when we start to factor in the movement toward private cloud, the need to support a mobile tier of devices, and the fact that we're, of course, looking for even more savings on long-term energy and operating costs.

Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: Yes, Dana. In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also across separate companies and suppliers working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data-driven and supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating in cloud environments simplifies things from a customer perspective, but it does add some additional complexity to the infrastructure and operations of the organization. All of those complexities add up, which means that even more attention needs to be paid to the details of the program and to where responsibilities lie among stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure. So perhaps more large data centers to support more types of applications to even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.

It could be that, based on a specific client’s business requirements and IT strategy, it will require a couple of large-scale core data centers and multiple remote sites, or it could just be a number of smaller facilities.

It really depends on how the business is being run and supported by IT and the application suite, and what the tolerances for risk are -- whether it's high availability, synchronous, all the groovy stuff -- and then coming up with a framework that matches and integrates all of those requirements.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to have something that you can manage -- to control cost, control this whole framework, and manage to a future-state business requirement -- before you can really start to deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.

Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on how quick and proven ways to attain productivity can significantly improve IT operations and efficiency.

This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.

I'd like to thank our guests, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB; Randy Lawton, Practice Principal in the Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
