Friday, October 30, 2009

Business and Technical Cases Build for Data Center Consolidation and Modernization

Transcript of a sponsored BriefingsDirect podcast on how data center consolidation and modernization helps enterprises reduce cost, cut labor, slash energy use, and become more agile.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how data-center consolidation and modernization of IT systems helps enterprises reduce cost, cut labor, slash energy use, and become more agile.

We'll look at the business and technical cases for reducing the number of enterprise data centers. Infrastructure advancements, standardization, performance density, and network services efficiencies are all allowing for bigger and fewer data centers that can carry more of the total IT requirements load.

These strategically architected and located facilities offer the ability to seek out best long-term outcomes for both performance and cost -- a very attractive combination nowadays. But, to gain the big payoffs from fewer, bigger, better data centers, the essential list of user expectations for performance and IT requirements for reliability need to be maintained and even improved.

Network services and Internet performance management need to be brought to bear, along with the latest data-center advancements to produce the full desired effect of topnotch applications and data delivery to enterprises, consumers, partners, and employees.

Here to help us better understand how to get the best of all worlds -- that is high performance and lower total cost from data center consolidation -- we're joined by our panel. Please join me in welcoming James Staten, Principal Analyst at Forrester Research. Welcome, James.

James Staten: Thanks for having me.

Gardner: We're also joined by Andy Rubinson, Senior Product Marketing Manager at Akamai Technologies. Welcome, Andy.

Andy Rubinson: Thank you, Dana. I'm looking forward to it.

Gardner: And, Tom Winston, Vice President of Global Technical Operations at Phase Forward, a provider of integrated data management solutions for clinical trials and drug safety, based in Waltham, Mass. Welcome, Tom.

Tom Winston: Hi, Dana. Thanks very much.

Gardner: Let me start off with James. Let's look at the general rationale for data-center modernization and consolidation. What are the business, technical, and productivity rationales for doing this?

Data-center sprawl

Staten: There is a variety of them, and they typically come down to cost. Oftentimes, the biggest reason to do this is because you've got sprawl in the data center. You're running out of power, you're running out of the ability to cool any more equipment, and you are running out of the ability to add new servers, as your business demands them.

If there are new applications the business wants to roll out, and you can't bring them to market, that's a significant problem. This is something organizations have been facing for quite some time.

As a result, if they can start consolidating, they can start moving some of these workloads onto fewer systems. This allows them to reduce the amount of equipment they have to manage and the number of software licenses they have to maintain and lower their support costs. In the data center overall, they can lower their energy costs, while reducing some of the cooling required and getting rid of some of those power drops.

Gardner: James, isn't this sort of the equivalent of Moore's Law, but instead of at silicon clock-speed level, it's at a higher infrastructure abstraction? Are we virtualizing our way into a new Moore's Law era?

Staten: Potentially. We've always had this gap between how much performance a new CPU or a new server could provide and how much performance an application could take advantage of. It's partly a factor of how we have designed applications. More importantly, it's a factor of the fact that we, as human beings, can only consume so much at so fast a rate.

Most applications actually end up consuming, on average, only 15-20 percent of the server's capacity. If that's the case, you've got an awful lot of headroom to put other applications on there.
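
To make that headroom concrete, here is a minimal sketch of the consolidation arithmetic. The numbers are illustrative, not from the discussion; in particular, the 70 percent utilization ceiling is an assumed safety margin.

```python
# If each workload averages 15-20% utilization, several can share one host
# while staying under a safety ceiling.

def workloads_per_host(avg_util: float, target_ceiling: float = 0.70) -> int:
    """How many workloads of a given average utilization fit on one host
    without exceeding the target utilization ceiling."""
    return int(target_ceiling // avg_util)

def hosts_after_consolidation(n_workloads: int, avg_util: float,
                              target_ceiling: float = 0.70) -> int:
    """Physical hosts needed once workloads are stacked via virtualization."""
    per_host = max(1, workloads_per_host(avg_util, target_ceiling))
    return -(-n_workloads // per_host)  # ceiling division

# 1,000 one-app-per-box servers at 15% utilization, consolidated to a 70% ceiling:
print(workloads_per_host(0.15))               # 4 workloads per host
print(hosts_after_consolidation(1000, 0.15))  # 250 hosts
```

By this rough math, a 15 percent average utilization supports roughly a 4:1 consolidation ratio before the ceiling is reached.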

We were isolating applications on their own physical systems, so that they would be protected from any faults or problems with other applications that might be on the same system and take them down. Virtualization is the primary isolating technology that allows us to do that.

Gardner: I suppose there are some other IT industry types of effects here. In the past, we would have had entirely different platforms and technologies to support different types of applications, networks, storage, or telecommunications. It seems as if more of what we consider to be technical services can be supported by a common infrastructure. Is that also at work here?

Unique opportunity

Staten: That's mostly happening as well. The exception to that rule is definitely applications that just can't possibly get enough compute power or enough contiguous compute power. That creates the opportunity for unique products in the market.

More and more applications are being broken down into modules, and, much like the web services and web applications that we see today, they're broken into tiers. Individual logic runs on its own engine, and all of that can be spread across more commoditized, consistent infrastructure. We're learning these lessons from the dot-coms of the world, and now the cloud-computing providers of the world, and applying them to the enterprise.

Gardner: I've heard quite a few numbers across a very wide spectrum about the types of payoffs that you can get from consolidating and modernizing your infrastructure and your data centers. Are there any rules of thumb that are typical types of paybacks, either in some sort of a technical or economic metric?

Staten: There's a wide range, because the benefits depend on how bad off you are when you begin and how dramatically you consolidate. On average, across all the enterprises we have spoken to, you can realistically expect to see about a 20 percent cost reduction from doing this. But, if you've got 5,000 servers, and they're all running at 5 percent utilization, there are big gains to be had.

Gardner: The economic payoff today, of course, is most important. I suppose there is a twofold effect as well. If you're facing a capacity issue and you're thinking about spending $40 or $50 million for an additional data center, and if you can reduce the need to do that or postpone it, you're saving on capital costs. At the same time, you could, perhaps through better utilization, reduce your operating costs as well.

Staten: Absolutely. One of the biggest benefits you get from virtualization is flexibility. It's so much easier to patch a workload and simply keep it running, while you are doing that. Move it to another system, but apply the patch, make sure the patch worked, deploy a clone, and then turn off the old version.
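
The patch workflow Staten outlines -- clone, patch, verify, cut over, retire the old version -- can be modeled as a runnable toy. Here the `pool` dictionary stands in for a hypervisor's VM inventory; all names are illustrative, not a real hypervisor API.

```python
# Toy model of patch-by-clone: the original VM stays untouched until the
# patched clone has passed its health check.

def patch_workload(pool: dict, name: str, patch_fn, health_fn) -> str:
    """Patch VM `name` by cloning it; return the name of the live VM."""
    clone_name = name + "-patched"
    pool[clone_name] = dict(pool[name])   # deploy a clone
    patch_fn(pool[clone_name])            # apply the patch to the clone
    if not health_fn(pool[clone_name]):   # make sure the patch worked
        del pool[clone_name]              # failed: discard clone, keep original
        return name
    del pool[name]                        # turn off the old version
    return clone_name

pool = {"app1": {"version": 1}}
live = patch_workload(pool, "app1",
                      patch_fn=lambda vm: vm.update(version=2),
                      health_fn=lambda vm: vm["version"] == 2)
print(live, pool)   # app1-patched {'app1-patched': {'version': 2}}
```

The point of the pattern is that rollback is trivial: a failed patch is just a discarded clone, and production was never modified.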

That's much more powerful, and it gives a lot more flexibility to the IT shop to maintain higher service-level agreements (SLAs), to keep the business up and running, to roll out new things faster, and be able to roll them back more easily.

Gardner: Andy Rubinson, this certainly sounds like a no-brainer: Get better performance for less money and postpone large capital expenditures. What are some of the risks that could come into play while we are starting to look at this whole picture? I'm interested in what's holding people back.

Rubinson: I focus mainly on delivery over the Internet. There are definitely some challenges, if you're talking about using the Internet with your data center infrastructure -- things like performance latency, availability challenges from cable cuts, and things of that nature, as well as security threats on the Internet.

It's about thinking how you can deliver to a global user base with your data center, without necessarily having to build out data centers internationally, and how to do that from a consolidated standpoint.

Gardner: So, for those organizations that are not just going to be focused on employees, or, if they are, that they are a global organization, they need to be thinking the most wide area network (WAN) possible. Right?

Rubinson: Absolutely.

Gardner: Let's go to our practitioner, Tom Winston. Tom, what sort of effects were you dealing with at Phase Forward, when you were looking at planning and strategy around data center location, capacity, and utilization?

Early adopter

Winston: Well, we were in a somewhat different position, in that we were actually an early adopter of virtualization technology, and certainly had seen the benefits of using that to help contain our data-center sprawl. But, we were also growing extremely rapidly.

When I joined the organization, it had two different data centers -- one on the East Coast and one on the West Coast. We were facing the challenge of potentially having to expand into a European data center, and even potentially a Pacific Rim data center.

By continuing to expand our virtualization efforts, as well as leveraging some of the technologies that Andy just mentioned, such as Internet acceleration via Akamai, we were able to forego that data center expansion. In fact, we were able to consolidate to one East Coast data center, which is now our primary hosting center for all of our applications.

So, it had a very significant impact for us by being able to leverage both that WAN acceleration, as well as virtualization, within our own four walls of the data center. [Editor's note: WAN here and in subsequent uses refers to public wide area networks and not private.]

Gardner: Tom, just for the edification of our listeners, tell us a little bit about Phase Forward. Where are your users, and where do your applications need to go?

Winston: We run electronic data capture (EDC) software and pharmacovigilance software for the largest pharmaceutical and medical device makers in the world. They are truly global organizations. So, we have users throughout the world, with more and more of them coming out of the Asia Pacific area.

We have a very large, diverse user base that is accessing our applications 24x7x365, and, as a result, we have performance needs all the time for all of our users.

In an age where, as James mentioned, people are expecting things to be moving extremely quickly and always available, it's very important for us to be able to provide that application all the time, and to perform at a very high level.

One of the things James mentioned from an IT perspective is being able to manage that virtual stack. Another thing that virtualization allows us to do is to provide that stack and to improve performance very quickly. We can add additional compute resources into that virtual environment very quickly to scale to the needs that our users may have.

Gardner: James Staten, back to you. Based on Tom's perspective of the combination of that virtualization and the elasticity that he gets from his data center, and the ability to locate it flexibly, thanks to some network optimization and reliability issues, how important is it for companies now, when they think about data center consolidation, to be flexible in terms of where they can locate?

All over the place

Staten: It's important that they recognize that their users are no longer all in the same headquarters. Their users are all over the place. Whether they are an internal employee, a customer, or a business partner, they need to get access to those applications, and they have a performance expectation that's been set by the Internet. They expect whatever applications they are interacting with will have that sort of local feel.

That's what you have to be careful about in your planning of consolidation. You can consolidate branch offices. You can consolidate down to fewer data centers. In doing so, you gain a lot of operational efficiencies, but you can potentially sacrifice performance.

You have to take the lessons that have been learned by the people who set the performance bar, the providers of Internet-based services, and ask, "How can I optimize the WAN? How can I push out content? How can I leverage solutions and networks that have this kind of intelligence to allow me to deliver that same performance level?" That's really the key thing that you have to keep in mind. Consolidation is great, but it can't be at the sacrifice of the user experience.

Gardner: When you find the means to deliver that user experience, that frees you up to then place your data centers strategically based on things like skills or energy availability or tax breaks, and so forth. Isn't that yet another economic incentive here?

Staten: You want to have fewer data centers, but they have to be in the right location, and the right location has to be optimized for a variety of factors. It has to be optimized for where the appropriate skill sets are, just as you described. It has to be optimized for the geographic constraints that you may be under.

You may be doing business in a country in which all of the citizen information of the people who live in that country must reside in that country. If that's the case, you don't necessarily have to own a data center there, but you absolutely have to have a presence there.

Gardner: Andy, back to you. What are some of the pros and cons for this Internet delivery of these applications? I suppose you have to rearchitect, in order to take advantage of this as well.

Rubinson: There are two main areas on the positive side, and those are the cost efficiency of delivering over the Internet, as well as the responsiveness. From the cost perspective, we're able to eliminate unnecessary hardware. We're able to take some of that load off of the servers and do the work in the cloud, which also helps reduce the server footprint.

A lot of cost efficiencies

There are a lot of cost efficiencies that we get, even as you look to Tom's statement about being able to actually eliminate a data center and avoid having to build out a new data center. Those are all huge areas, where it can help to use the Internet, rather than having to build out your own infrastructure.

Also, in terms of responsiveness, by using the Internet, you can deploy a lot more quickly. As Tom explained, it's being able to reach the users across the globe, while still consolidating those infrastructures and be able to do that effectively.

This is really important, as we have seen more and more users that are going outside of the corporate WANs. People are connecting to suppliers, to partners, to customers, and to all sorts of things now. So, the private WANs that many people are delivering their apps over are now really not effective in reaching those people.

Gardner: As James said earlier, we've got different workloads and different types of applications. Help me understand what Akamai can do. Do you just accelerate a web app, or is there a bit more in your quiver in terms of dealing with different types of loads of media, content, application types?

Rubinson: There are a variety of things that we are able to deliver over the Internet. It includes both web- and IP-based applications. Whether it's HTTP, HTTPS, or anything that's over TCP/IP, we're able to accelerate.

We also do streaming. One of the things to consider here is that we actually have a global network of servers that kind of makes up the cloud or is an overlay to the cloud. That is helping to not only deliver the content more quickly, but also uses some caching technology and other things that make it more efficient. It allows us to give that same type of performance, availability, and security that you would get from having a private WAN, but doing it over the much less expensive Internet.

Gardner: You're looking at the specifics of an application in terms of what's going to be delivered frequently versus infrequently, and you can cache the data and gain efficiency with that local data store. Is that how it works?

Rubinson: A lot of folks think about Akamai as being a content delivery network (CDN), and that's true. There is caching that we are doing. But, the other key area where we have benefit is through the delivery of dynamic data. By optimizing the cloud, we're able to speed the delivery of information from the origin as well. That's where it's benefiting folks like Tom, where he is able to not only cache information, but the information that is dynamic, that needs to get back from the data center, goes more quickly.

Gardner: Let's check in with Tom. How has that worked out for you? What sort of applications do you use with wide area optimization, and what's been your experience?

Flagship application

Winston: Our primary application, our flagship application, is a product called InForm, which is the main EDC product that our customers use across the Internet. It's accelerated using Akamai technology, and almost 100 percent of our content is dynamic. It has worked extremely well.

Prior to our deployment of Akamai, we had a number of concerns from a performance standpoint. As James mentioned, as you begin to virtualize, you also have to be very conscious of the potential performance hits. Certainly, one of the areas that we were constrained with was performance around the globe.

We had users in China who, due to the amount of traffic that had to traverse the globe, were not happy with the performance of the application. Specifically, we brought in Akamai to start with a very targeted group of users and to accelerate the application for them in that region.

It literally cut the problem right out. It solved it almost immediately. At that point, we then began to spread the rest of that application acceleration product across the rest of our domains, and to continue to use that throughout the product set.

It was extremely successful for us and helped solve performance issues that our end users were having. I think some of the comments that James made are very important. We do live in a world where everybody expects every application across the Internet to perform like Google. You want to search and you expect it to be back in seconds. If it's not, people tend to be unhappy with the performance of the application.

Ours is a much more complex application. A lot more is going on behind the scenes -- database calls, whatever it may be. Having an application perform to the level of a Google is something that our end users expect, even though obviously it's a much different application in what it's attempting to solve and what it's attempting to do. So, the benefits that we were able to get from the acceleration servers were very critical for us.

Rubinson: Just to add to that, we recently commissioned a study with Forrester, looking at what is that tolerance threshold [for a page to load]. In the past it had been that people had tolerance for about four seconds. As of this latest study, it's down to two seconds. That's for business to consumer (B2C) users. What we have seen is that the business-to-business (B2B) users are even more intolerant of waiting for things.

It really has gotten to a point where you need that immediate delivery in order to drive the usage of the tools that are out there.

Gardner: I suppose that's just human nature. Our expectations keep going up. They usually don't go down.

Rubinson: True.

Gardner: Back to you, Tom. Tell me a little bit more about this application. Is this a rich Internet application (RIA)? Is this strictly a web interface? Tell us a little bit more about what the technical challenge was in terms of making folks in China get the same experience as those on the East Coast, who were a mile away from your data center.

Everything is dynamic

Winston: The application is one that has a web front-end, but all the information is being sent back to an Oracle database on the back-end. Literally, every button click that you make is making some type of database query or some type of database call, as I mentioned, with almost zero static content. Everything is dynamic.

There is a heavy amount of data that has to go back and forth between the end user and the application. As a result, prior to acceleration, that was very challenging when you were trying to go halfway around the globe. It was almost immediate for us to see the benefits by being able to hop onto the Akamai Global Network and to cut out a number of the steps across the Internet that we had to traverse from one point to our data center.

Gardner: So, it was clearly an important business metric, getting your far-flung customers happy with their response times. How, though, did that translate back when you reverse engineered from the experience to what your requirement would be within that data center? Was there a meeting of the minds between what you now understand the network is capable of and what you then had to deliver through your actual servers and infrastructure?

I guess I'm looking for an efficiency metric or response in terms of what the consolidation benefit was.

Winston: As I mentioned, we had already consolidated from a virtualization standpoint within the four walls of the data center. So, we were continuing to expand in that footprint. But, what it allowed us to do was forego having to put a data center in the Pacific Rim or put a data center in Europe to put the application closer to the end user.

Gardner: Let's look to the future a little bit. James, when people think nowadays about cloud computing, that's a very nebulous discussion and topic set. It seems as if what we're talking about here is that more enterprises are going to have to themselves start behaving like what people think of as a cloud.

Staten: Yes, to a degree. There is obviously a positive aspect of cloud and one that can potentially be a negative.

Operating like a cloud is really operating in this more homogeneous, virtualized, abstracted world that we call server virtualization in most enterprises. You want to operate in this mode, so that you can be flexible and you can put applications where they need to be and so forth.

But, one of the things that cloud computing does not deliver is that if you run it in the cloud, you are not suddenly in all geographies. You are just in a shared data center somewhere in the United States or somewhere in your geography. If you want to be global, you still have to be global in the same sense that you were previously.

Cloud not a magic pill

Rubinson: Absolutely. Just putting yourself in the cloud doesn't mean that you're not going to have the same type of latency issues, delivering over the Internet. It's the same thing with availability in trying to reach folks who are far away from that hosted data center. So, the cloud isn't necessarily the answer. It's not a pill that you can take to fix that issue.

Gardner: Andy, I don't think you can mention names, but you are not only accelerating the experience for end users of enterprise applications like a Phase Forward. You're also providing similar services for at least several of the major cloud providers.

Rubinson: It really is anybody who is using the cloud for delivery. Whether it's a high-tech, a pharma company, or even a hosting provider in the cloud, they've all seen the value of ensuring that their end users are having a positive experience, especially folks like software-as-a-service (SaaS) providers.

We've had a lot of interest from SaaS companies that want to ensure that they are not only able to give a positive user experience, but even from a sales perspective, being able to demonstrate their software in other locations and other regions is very valuable.

Gardner: Now, James, when a commercial cloud provider offers an SLA to their customers, they need to meet it, but they also need to keep their costs as low as possible. More and more enterprises are trying to behave like service providers themselves, whether it's through ITIL adoption, IT shared services, or service-oriented architecture (SOA). Over time, we're certainly seeing movement toward a provider-supplier, consumer-subscription relationship of some kind.

If we can use this acceleration and the ability to use the network for that requirement of performance to a certain degree, doesn't this then free up the folks who have to meet those SLAs in terms of what they need to provide? I'm getting back to this whole consolidation issue.

Staten: To some degree. Obviously, by using the best practices that we've adopted to have blazing fast websites and applying them to make sure that all of your applications, consumed by everyone, are still blazing fast means that you don't have to reinvent the wheel. Those practices work for your website. You just apply them to more areas.

If you're applying practices you already know, then you can free up your staff to do other things to modernize the infrastructure, such as deploying ITIL more widely than you have so far. You can make sure that you apply virtualization to a larger percentage of your infrastructure and then deal with the next big issue that we see in consolidation, which is virtual machine (VM) sprawl.

Can get out of control

This is where you are allowing your enterprise customers, whether they are enterprise architects, developers, or business units, to deploy new VMs much more quickly. Virtualization allows you to do that, but you can quickly get out of control with too many VMs to manage.

Dealing with that issue is what is front and center for a lot of enterprise IT professionals right now. If they haven't applied the best practices for performance to their application sets and to their consolidation practices, that's one more thing on their plate that they need to deal with.

Gardner: So, this also can relate to something that many of us are forecasting. Not much of it is happening yet, but it's this notion of a hybrid approach to cloud and sourcing, where you might use your data center up to a certain utilization, and under certain conditions, where there is a spike in demand, you could just offload that to a third-party cloud provider.

If enterprises are assured by the WAN services that the experience is going to be the same, regardless of the sourcing, they are perhaps more likely to pursue such a hybrid approach. Is that fair to say, James?

Staten: This is a really good point that you're bringing up. We wrote about this in a report we called "Hollow Out The MOOSE." MOOSE is Forrester's term for the Maintenance and Ongoing Operation of Systems and Equipment, which is basically everything you are running in your data center that has already been deployed up to this point.

The challenge most enterprises have is that MOOSE consumes 70 or 80 percent of their entire budget, leaving very little for new innovation and other things. They see things like cloud and they say, "This is great. I'll just move this stuff to the cloud, and suddenly it will save me money."

No. The real answer is that you need to choose the right type of solution for the right problem. We call this Strategic Rightsourcing, which says to take the things that others do better than you and have others do them, but know economically whether that's a positive tradeoff for you or not. It doesn't necessarily have to be cash positive, but it has to be an opportunity to be cost positive.

In the case of cloud computing, if I have something that I have to run myself, it's very unique to how I design it, and it's really best that I run it in my data center, you're not saving money by putting that in the cloud.

If it's an application that has a lot of elasticity, and you want it to have the ability to be on two virtual machines during the evening, and scale up to as many as 50 during the day, and then shrink back down to 2, that's an ideal use of cloud, because cloud is all about temporary capacity being turned on.
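
The elasticity rule described here -- two VMs overnight, up to 50 during the day -- can be sketched as a toy scaling function. The thresholds and the doubling/halving policy are hypothetical, not anything a specific cloud provider implements.

```python
# Grow the VM pool under load, shrink it when idle, bounded between
# a nighttime floor and a daytime ceiling.

MIN_VMS, MAX_VMS = 2, 50

def next_vm_count(current: int, avg_cpu: float) -> int:
    """Scale up when the pool runs hot, scale down when it runs cold."""
    if avg_cpu > 0.75:          # hot: add capacity
        return min(MAX_VMS, current * 2)
    if avg_cpu < 0.25:          # cold: release capacity
        return max(MIN_VMS, current // 2)
    return current              # comfortable: hold steady

print(next_vm_count(2, 0.90))   # daytime spike -> 4
print(next_vm_count(50, 0.10))  # overnight lull -> 25
```

The economics follow from the bounds: capacity is paid for only while it is turned on, which is the "temporary capacity" point being made here.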

A lot of people think that it's about performance, and it's not. Sure, load balancing and the ability to spawn new VMs increases the performance of your application, but performance is experienced by the person at the end of the wire, and that's what has to be optimized. That's why those types of networks are still very valuable.

Gardner: Tom Winston, is this vision of this hybrid and the use of cloud for ameliorating spikes and therefore reducing your total cost appealing to you?

Has to be right

Winston: It is, but I couldn't agree more with what James just said. It has to be for the right situation. Certainly, we've started to look at some of our applications, potentially using them in a cloud environment, but right now our critical application, the one that I mentioned earlier, is something that we have to manage. It's a very complex environment. We manage it and we need to hold it very close to the vest.

People have the idea that, "Gee, if I put it in the cloud, my life just got a lot easier." I actually think the reverse might be true, because if you put it into the cloud, you lose some control that you have when it's inside your four walls.

Now, you lose the ability to be able to provide the level of service you want for your customers. Cloud needs to be for the right application and for the right situation, as James mentioned. I really couldn't agree more with that.

Gardner: So, the cloud is not the right hammer for all nails, but when the nail is right, that hybrid model can perhaps be quite an economic benefit. Andy, at Akamai, are you guys looking at that hybrid model, and is there something there that your services might foster?

Rubinson: This is really something that we are agnostic about. Whether it's in a data center owned by the customer or whether it's in a hosted facility, we are all about the means of delivery. It's delivering applications, websites, and so forth over the public Internet.

It's something we're able to do, if there are facilities that are being used for, say, disaster recovery, where it's the hybrid scenario that you are describing. For Akamai, it's really about how we're able to accelerate that -- how we're able to optimize the routing and the other protocols on the Internet to get content from wherever it's hosted to a global set of end users.

We don't care where they are. They don't have to be on corporate, private WANs. It's really about that global reach and giving the levels of performance to actually provide an SLA. Tell me: who else out there provides an SLA for delivery over the Internet? Akamai does.
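Rubinson's point about optimizing routing over the public Internet can be sketched in miniature: probe several candidate overlay paths from origin to end user and serve over the fastest one. Everything below -- path names and latency figures alike -- is invented for illustration; this is not Akamai's actual algorithm.

```python
# Invented overlay-routing sketch: measure a few candidate paths
# through different edge locations and pick the fastest. Latencies
# are made-up probe samples in milliseconds.
paths = {
    ("origin", "nyc-edge", "user"): [82, 85, 80],
    ("origin", "lon-edge", "user"): [130, 128, 131],
    ("origin", "fra-edge", "user"): [95, 210, 94],
}

def best_path(measurements):
    # Rank by median latency so a single slow probe can't dominate.
    def median(samples):
        ordered = sorted(samples)
        return ordered[len(ordered) // 2]
    return min(measurements, key=lambda p: median(measurements[p]))

print(best_path(paths))  # ('origin', 'nyc-edge', 'user')
```

The median, rather than the mean, keeps one outlier probe (like the 210 ms sample on the Frankfurt path) from disqualifying an otherwise fast route.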

Gardner: Well, we'll have to leave it there. We've been discussing how data center consolidation and modernization can help enterprises cut costs, reduce labor, slash their energy use, and become more agile, while also keeping in mind the performance requirements across wide area networks.

We've been joined by James Staten, Principal Analyst at Forrester Research. Thank you, James.

Staten: Thank you.

Gardner: We were also joined by Andy Rubinson, Senior Product Marketing Manager at Akamai Technologies. Thank you, Andy.

Rubinson: Thank you very much.

Gardner: Also, I really appreciate your input, Tom Winston, Vice President of Global Technical Operations at Phase Forward.

Winston: Dana, thanks very much. Thanks for having me.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Transcript of a sponsored BriefingsDirect podcast on how data center consolidation and modernization helps enterprises reduce cost, cut labor, slash energy use, and become more agile. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, October 29, 2009

Separating Core from Context Brings High Returns in Legacy Application Transformation

Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on separating core from context, when it comes to legacy enterprise applications and their modernization processes. As enterprises seek to cut their total IT costs, they need to identify what legacy assets are working for them and carrying their own weight, and which ones are merely hitching a high cost -- but largely unnecessary -- ride.

The widening cost and productivity divide exists between older, hand-coded software assets, supported by aging systems, and replacement technologies on newer, more efficient standards-based systems. Somewhere in the mix, there are core legacy assets distinct from so-called contextual assets. There are peripheral legacy processes and tools that are costly vestiges of bygone architectures. There is legacy wheat and legacy chaff.

Today we need to identify productivity-enhancing resources and learn how to preserve and modernize them -- while also identifying and replacing the baggage or chaff. The goal is to find the most efficient and low-cost means to support them both, through up-to-date data-center architecture and off-the-shelf components and services.

This podcast is the second in a series of three to examine Application Transformation: Getting to the Bottom Line. We will discuss the rationale and likely returns from assessing the true role and character of legacy applications and their actual costs. The podcast, incidentally, runs in conjunction with some Hewlett-Packard (HP) webinars and virtual conferences on the same subject.

Register here to attend the Asia Pacific event on Nov. 3. Register here to attend the EMEA event on Nov. 4. Register here to attend the Americas event on Nov. 5.

With us to delve deeper into the low cost, high reward transformation of legacy enterprise applications is Steve Woods, distinguished software engineer at HP. Hello, Steve.

Steve Woods: Hello. How are you doing?

Gardner: Good. We are also joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana. Thank you.

Gardner: In the earlier podcast in our series, a case study of a very large education organization in Italy, we talked about transformation and why it's important. We looked at how this can work very strategically and with great economic benefit, but now we're trying to get into a bit more of the how.

Tell us a little bit, Paul, about what the stakes are. Why is it so important to do this now?

Evans: In a way, this podcast is about two types of IT assets. You talked before about core and context. That whole approach to classifying business processes and their associated applications was invented by Geoffrey Moore, who wrote Crossing the Chasm, Inside the Tornado, etc.

He came up with this notion of core and context applications. Core being those that provide the true innovation and differentiation for an organization. Those are the ones that keep your customers. Those are the ones that improve the service levels. Those are the ones that generate your money. They are really important, which is why they're called "core."

Lower cost

The "context" applications were not less important, but they are more for productivity. You should be looking to understand how that could be done in terms of lower cost provisioning. When these applications were invented to provide the core capabilities, it was 5, 10, 15, or 20 years ago. What we have to understand is that what was core 10 years ago may not be core anymore. There are ways of effectively doing it at a much different price point.

As Moore points out, organizations should be looking to build "core," because that is the unique intellectual property of the organization, and to then buy "context." They need to understand, how do I get the lowest-cost provision of something that doesn't make a huge difference to my product or service, but I need it anyway.

A human resources system may not be something that you are going to build your business model on, but you need one. You need to be able to service your employees and all the things they need. But, you need to do that at the lowest-cost provision. As time has gone on, this demarcation between core and context has gotten really confused.

As you said, we're putting together a series of events, and Moore will be the keynote speaker on these events. So, we will elucidate more around core and context.

The other speaker at the event is also an inventor, this time from inside HP, Steve Woods. Steve has taken this notion of core and context and has teamed it with some extremely exciting technology and very innovative thinking to develop some unique tools that we use inside the services from HP, which allow us then really to dive into this. That's going to be one of the sessions that we're also going to be delivering on this series of events.

Gardner: Okay, Steve Woods, we can use a lot of different terms here, "core and context," "wheat and chaff." I thought another metaphor would be "baby and bathwater." What happens is that it's difficult to separate the good from the potentially wasteful in the legacy inventory.

I think this has caused people to resist modernizing. They have resisted tinkering with legacy installations in the past. Why are they willing to do it now? Why the heightened interest at this time?

Woods: A good deal of it has to do with the pain that they're going through. We have had customers who had assessments with us before, as much as a year ago, and now they're coming back and saying they want to get started and actually do something. So, a good deal of the interest is caused by the need to drive down costs.

Also, there's the realization that a lot of these tools -- extract, transform, and load (ETL) tools, enterprise application integration (EAI) tools, reporting tools, and business process management (BPM) tools -- have proven themselves now. We can no longer say that there is a risk in going to these tools. They realize that the strength of these tools is that they bring a lot of agility, solve skill-set issues, and make you much more responsive to the business needs of the organization.

Gardner: This definition of core, as Paul said, is changing over time and also varies greatly from organization to organization. Is there no one-size-fits-all approach to this?

Context not code

Woods: I don't think there really is a one-size-fits-all, but as we use our tools to analyze code, we sometimes find that as much as 65 percent or more of an application may not really be core. It could just be context.

As we make these discoveries, we find that in the organization there are political battles to be fought. When you identify these elements that are not core and that could be moved out of handwritten code, you're transferring power from the developers -- say, of COBOL -- to the users of the more modern tools, like the BPM tools.

So there is always an issue. What we try to do, when we present our findings, is to be very objective. You can't argue with the finding that 65 percent of the application is not doing core work, and you can then focus the conversation on something more productive: what do we do with this? The worst thing you could possibly do is take a million lines of COBOL that's generating reports and rewrite it as hand-written Java or C# code.

We take the concept of core versus context not just to a possible off-the-shelf application, but to the architectural component level. In many cases, we find that this helps them identify legacy code that can be moved very incrementally to these new architectures.

Gardner: What's been the holdup? What's been difficult? You did mention politics, and we will get into that later, but what's been the roadblock from the perspective of these tools? Why has it been receding in terms of the ability to automate and manage these large projects?

Woods: A typical COBOL application -- this is true of all legacy code, but particularly mainframe legacy code -- can be as much as 5, 10, or 15 million lines of code. I think the sheer size of the application is an impediment. There is some sort of inertia there: an object at rest tends to stay at rest, and it's been at rest for years, sometimes 30 years.

So, the biggest impediment is the belief that it's just too big and complex to move, and even too big and complex to understand. Our approach is a very lightweight process, where we go in and answer a lot of questions, remove a lot of uncertainty, and give them some very powerful visualizations and an understanding of the source code and what their options are.

Gardner: So, as we've progressed in terms of the tools, the automation, and the ability to handle large sets of code, the inertia also involves the nontechnical aspects. What do we mean by politics? Are there fiefdoms? Are there territories? Is this strictly a traditional kind of human-nature thing? Perhaps you could help us understand that a bit better.

Doing things efficiently

Woods: The organizations we go into have not been living in a vacuum; many of them have been doing greenfield development, starting out by saying they need a system that primarily does reporting, or one that primarily does data integration. In most organizations those fiefdoms, if you will, have grown pretty robust, and they will continue to grow. The realization is that they can actually do those things quite efficiently.

When you go to the legacy side of the house, you start finding that 65 percent of this application is just doing ETL. It's just parsing files and putting them into databases. Why don't you replace that with a tool? The big resistance there is that, if we replace it with a tool, then the people who are maintaining the application right now are either going to have to learn that tool or they're not going to have a job.

So, there's a lot of resistance in the sense of, "We don't want to lose any more ground to the target-architecture fiefdom, so we're not going to identify this application as having so many elements of context functionality." Our process, in a very objective way, just says that these are the percentages that we're finding. We'll show you the code, you can agree or disagree that that's what it is doing, and then let's make decisions based upon those facts.

If we get the facts on the table, particularly visually, then we find that we get a lot of consensus. It may be partial consensus, but it's consensus nonetheless, and we open up the possibilities and different options, rather than just continuing to move through with hand-written code.

If you look at this whole core-context thing, at the moment, organizations are still in survival mode.



Gardner: Paul, you've mentioned in the past that we've moved from the nice-to-have to the must-have, when it comes to legacy applications transformation and modernization. The economy has changed things in many respects, of course, but it seems as if the lean IT goal is no longer something that's a vision. It's really moved up the pecking order or the hierarchy of priorities.

Is this perhaps something that's going to break this political logjam? Are the business and financial-outcome folks in these organizations simply going to steamroll these political issues?

Evans: Well, I totally think so, and it's happening already. If you look at this whole core-context thing, at the moment, organizations are still in survival mode. Money is still tight in terms of consumer spending. Money is still tight in terms of company spending. Therefore, you're in this position where keeping your customers or trying to get new customers is absolutely fundamental for staying alive. And, you do that by improving service levels, improving your services, and improving your product.

If you stay still and say, "Well, we'll just glide for the next 6 to 12 months and keep our fingers crossed," you're going to be in deep trouble. A lot of people are trying to understand how to use the newer technologies, whether it's things like Web 2.0 or social networking tools, to maintain that customer outreach.

Those of us who went to business school or marketing school remember: it takes $10 to get a customer into your store, but only $1 to keep them coming back. People are now worrying about those dollars. How much do we have to spend to keep our customer base?

Therefore, the line-of-business people are now pushing on technology and saying, "You can't back off. You can't not give us what we want. We have to have this ability to innovate and differentiate, because that way we will keep our customers and we will keep this organization alive."

Public and private sectors

That applies equally to the public and private sectors. The public sector organizations have this mandate of improving service, whether it's in healthcare, insurance, tax, or whatever. So all of these commitments are being made and people have to deliver on them, albeit that the money, the IT budget behind it, is shrinking or has shrunk.

So, the challenge here is, "Last year I ran my IT department on my theoretical $100. I spent $80 on keeping things going, and $20 on improving things." That was never enough for the line-of-business manager. They will say, "I want to make a change. I want it now, or I want it next week. I don't want it in six months time. So explain to me how you are going to do that."

That was tough a year ago, but the problem now is that your $100 IT budget is now $80. Now, it's a bit of a challenge, because now all the money you have got you are going to spend on keeping the old stuff alive. I don't think the line-of-business managers, or whoever they are, are going to sit back and say, "That's okay. That's okay. We don't mind." They're going to come and say that they expect you to innovate more.

This goes back to what Steve was talking about, what we talked about, and what Moore will raise in the event, which is to understand what drives your company. Understand the values, the differentiation, and the innovations that you want and put your money on those and then find a way of dramatically reducing the amount of money you spend on the contextual stuff, which is pure productivity.

The point of the tools is that they allow us to see the code. They allow us to understand what's good and bad and to make very clear, rational, and logical decisions.



Steve's tools are probably the best thing out there today for highlighting to an organization, "You don't need this in handwritten code. You could move this to a low-cost package, running in a low-cost environment, as opposed to running it in COBOL on a mainframe." That's how people save money, and that's how we've seen people get, as we talked about earlier, a return on investment (ROI) of 18 months or less.

So it is possible, it can be done, and it's definitely not as difficult as people think. The point of the tools is that they allow us to see the code. They allow us to understand what's good and bad and to make very clear, rational, and logical decisions.

Gardner: Steve Woods, we spoke earlier about how the core assets are going to be variable from organization to organization, but are there some common themes with the contextual services? We certainly see a lot of very low-cost alternatives now creeping up through software as a service (SaaS), cloud-based, outsourced, mix-sourced, co-located, and lots of different options. Is there some common theme now among what is not core that organizations need to consider?

Woods: Absolutely. One of the things that we find, when we're brought in to look at legacy applications, is that, by virtue of the fact that they are still around, these applications have resisted all the waves of innovation that preceded them. Sometimes, they tend to be of a very definite nature.

A number of them tend to be big data hubs. One of the first things we ask for is the architectural topology diagram, if they have it, or we just draw it on a whiteboard. These applications tend to be big spiders: there's a central hub database, and they start drawing all these different lines to other systems within the organization.

The things that have been left behind -- this is the good news -- tend to be the very things that are amenable to moving to a modern architecture in a very incremental way. It's not unusual to find that 50-65 percent of an application is just doing ETL functionality.
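To make concrete what "just doing ETL" means, here is a minimal sketch of the extract-transform-load pattern that hand-written legacy code often reimplements -- parsing a flat file and loading it into a database. The feed format and table are invented for illustration; a dedicated ETL tool replaces exactly this kind of plumbing.

```python
import csv
import io
import sqlite3

# Hypothetical flat-file feed: in a legacy application, a large share
# of the hand-written code exists just to parse files like this and
# load them into database tables.
feed = io.StringIO("id,name,balance\n1,Acme,100.50\n2,Globex,250.00\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT, balance REAL)")

# Extract and transform: read each record and coerce the field types.
rows = [(int(r["id"]), r["name"], float(r["balance"]))
        for r in csv.DictReader(feed)]

# Load: bulk-insert the cleaned records into the target table.
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
conn.commit()

total = conn.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
print(total)  # 350.5
```

Multiply these twenty lines by hundreds of feeds and formats, and the 50-65 percent figure Woods cites becomes easy to believe.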

A good thing

The real benefit, and this is particularly true in a tough economy, is that if I can identify the 65 percent of the application that's just doing data integration, and I have already established a data integration center of excellence within the organization, already have those technologies, or implement those technologies, then I can incrementally start moving that functionality over to the new architecture. Moving incrementally is a good thing, because it's beneficial in two ways.

It reduces my risk, because I'm doing it a step at a time. It also produces a much better ROI, because the return on the incremental improvements trickles in over time, rather than waiting 18 months or two years for some big-bang improvement. Identifying this context code can give you a lot of incremental ROI opportunities and a much more solid basis for IT investment decisions.
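The incremental-versus-big-bang point can be illustrated with a toy cash-flow model. All figures below are hypothetical, chosen only to show why returns that start trickling in early beat a single payoff after a long build.

```python
# Toy cash-flow comparison; all figures are hypothetical. Both plans
# spend the same budget and, once fully deployed, save the same
# amount per month -- the only difference is when returns start.
COST = 1_200_000          # total modernization budget
MONTHLY_SAVING = 60_000   # savings once everything is migrated

def big_bang(month):
    # Entire cost up front; no savings until an 18-month build ends.
    return -COST + max(0, month - 18) * MONTHLY_SAVING

def incremental(month):
    # Six equal increments, one shipping every three months; each
    # costs a sixth of the budget and delivers a sixth of the savings
    # from the month it lands.
    net = 0.0
    for step in range(6):
        ship = (step + 1) * 3
        net -= COST / 6
        net += max(0, month - ship) * MONTHLY_SAVING / 6
    return net

# After four years the incremental plan is well ahead, purely because
# its savings started accruing early.
print(big_bang(48), incremental(48))  # 600000 1050000.0
```

The incremental plan also caps the downside: if the program is cancelled partway through, the increments already shipped keep paying back, whereas an unfinished big-bang rewrite returns nothing.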

Gardner: So, one of these innovations that's taken place for the past several years is the move towards more distributed data, hosting that data on lower-cost storage architectures, and virtualizing behind the database or the storage itself. That can reduce cost dramatically.

Woods: Absolutely. One of the things that we feel is that decentralizing the architecture improves your efficiency and your redundancy. There is much more opportunity for building a solid, maintainable architecture than there would be if you kept a sort of monolithic approach that's typical on the mainframe.

Gardner: Once we've done this exercise, variable as it may be from organization to organization, separating the core from the non-core, what comes next? What's the next step that typically happens as this transformation and modernization of legacy assets unfolds?

So, if you accept the premise of moving context code to componentized architecture, then the next thing you should be looking for is where is the clone code and how is it arranged?



Woods: That's a very good question. It's really important to understand this leap in logic here. If I accept the notion that a majority of the code in a legacy application can be moved to these model driven architectures, such as BPM and ETL tools, the next premise is, "If I go out and buy these tools, a lot of functionality is provided with these tools right out of the box. It's going to give me my monitoring code, my management code, and in many cases, even some of the testing capabilities are sort of baked into the product."

If that's true, then the next leap of logic is that in my 1.5 million lines of COBOL or my five million lines of COBOL there is a lot of code that's irrelevant, because it's performing management, monitoring, logging, tracing, and testing. If that's true, I need to know where it's at.

The way you find where it's at is identifying the duplicate source code, what we call clone code. Because when you find the clone code, in most cases, it's a superset of that code that's no longer relevant, if you are making this transformation from handwritten code to a model-driven architecture.

What I created at HP is a tool, an algorithm, that can go into legacy code in any language and find the duplicate code -- and not only find it, but visualize it in very compelling ways. That helps us drill down to identify what I call the unintended design. When we find these unintended designs, they lead us to ask very critical questions that are paramount to understanding how to design the transformation strategy.

So, if you accept the premise of moving context code to componentized architecture, then the next thing you should be looking for is where is the clone code and how is it arranged?
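The clone-code idea -- finding duplicated source so it can be discounted or consolidated -- can be approximated in a few lines: normalize each line, hash sliding windows of lines, and group windows that hash identically. This is a toy stand-in, not HP's actual algorithm, which the discussion does not describe in detail; real clone detectors also handle renamed variables and reordered statements.

```python
import hashlib
from collections import defaultdict

def find_clones(source, window=3):
    """Group duplicated sliding windows of normalized lines.

    A toy stand-in for clone detection: hash every `window`-line
    chunk and keep the chunks that occur more than once.
    """
    lines = [ln.strip().lower() for ln in source.splitlines()]
    groups = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window])
        digest = hashlib.sha1(chunk.encode()).hexdigest()
        groups[digest].append(i)  # remember where this window starts
    # Windows seen more than once are the clones.
    return {h: locs for h, locs in groups.items() if len(locs) > 1}

# Invented pseudo-legacy source with one repeated three-line block.
code = """\
open file a
read record
log start
write db
open file b
read record
log start
write db
"""
clones = find_clones(code)
print(list(clones.values()))  # [[1, 5]]
```

The repeated read-log-write block at lines 1 and 5 is exactly the kind of boilerplate that, as Woods notes, a model-driven tool provides out of the box and therefore need not be migrated at all.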

Gardner: Do we have any examples of how this has worked in practice? Are there use cases or an actual organization that you are familiar with? What have been some of the results of going through this process? How long did it take? What did they save? What were the business outcomes?

Viewing the application

Woods: We've often worked with financial services companies and insurance companies, and we have just recently worked with one that gave us an application that was around 1.2 or 1.5 million lines of code. They said, "Here is our application," and they gave us the source code. When we looked into the source code, we found that there were actually four applications, if you looked at just the way the code was structured, which was good news, because it gives us a way of breaking down the functionality.

In this one organization, we found that a high percentage of that code was really just taking files, as I said before, unbundling those files, parsing them, and putting them into databases. So they have kind of let that be the tip of the spear. They said, "That's our start point," because they're often asking themselves where to start.

When you take handwritten code and move it to an ETL tool, there's ample industry evidence that a typical ROI over the course of four years can be between 150 percent and 450 percent. That's just the magic of taking all this difficult-to-maintain spaghetti code and moving it to a very visually oriented tool that gives you much more agility and allows you to respond to changes in the business and the business' needs much more quickly, with skill sets that are readily available.

Gardner: You know, Paul, I've heard a little different story from some of the actual suppliers of legacy systems. A lot of times they say that the last thing you want to do is start monkeying around with the code. What you really want to do is pull it off of an old piece of hardware and put it on a new piece of hardware, perhaps with a virtualization layer involved as well. Why is that not the right way to go?

Evans: Now you've put me in an interesting position. I suppose our view is that there are different strategies. We don't profess one strategy to help people transform or modernize their apps. The first thing they have to do is understand them, and that's what Steve's tools do.

The point is that we don't have a preconceived view of what this thing should run on. That's one thing. We're not wedded to one architectural style.



It is possible to take an approach that says that all we need to do is provide more horsepower. Somebody comes along and says, "Hey, transaction rates are dropping. Users are getting upset because an ATM transaction is taking a minute, when it should take 15 seconds. Surely all we need to do is just give the thing more horsepower and the problem goes away."

I would say the problem goes away -- for 12 months, maybe, or if you're lucky 18 -- but you haven't actually fixed the problem. You've just treated the symptoms.

At HP, we're not wedded to one style of computer architecture as the hub of what we do. We look at the customer requirement. Do we have systems that are equal in performance, if not greater, than a mainframe? Yeah, you bet we do. Our Superdome systems are like that. Are they the same price? No, they are considerably less. Do we have blades, PCs, and normal distributed servers? Yeah.

The point is that we don't have a preconceived view of what this thing should run on. That's one thing. We're not wedded to one architectural style. We look at the customer's requirements and then we understand what's necessary in terms of the throughput TP rates or whatever it may be.

So, there is obviously an approach that people can say, "Don't jig around." It's very easy to inject fear into this and just say to put more power underneath it, don't touch the code, and life will be wonderful. We're totally against that approach, but it doesn't mean that one of our strategies is not re-hosting. There are organizations whose applications would benefit from that.

We still believe that can be done on relatively inexpensive hardware. We can re-host an application by keeping the business logic the same, keeping the language the same, but moving it from an expensive system to a less expensive system.

Freeing up cash

People use that strategy to free up cash very quickly. It's one of the fastest ROIs we have, and they are beginning to save instantly. They make the decision that says, "We need to put that money back in the bank, because we need to do that to keep our shareholders happy." Or, they can reinvest that into their next modernization project, and then they're on an upward spiral.

There are approaches to everything, which is why we have seven different strategies for modernization to suit the customer's requirement, but I think the view of just putting more horsepower underneath, closing your eyes, and hoping is not the way forward.

Gardner: Steve, do you have anything more to add to that, treating the symptom rather than the real issues?

Woods: As Paul said, if you just treat the symptom, we refer to that as a short-term strategy -- a way to save money to reinvest in the business.

The only thing I would really add is that the problem is sometimes not nearly as big as it seems. When you look at the clone code that we find, and all the different areas where we can examine the code, much of it may not be as relevant to the transformation process as you think it is.

The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It's not as big as we thought.



I do a presentation called "Honey, I Shrunk the Mainframe." When you start looking at these different aspects -- the clone code and what I call the asymmetrical transformation from handwritten code to model-driven architecture -- you really start to see it.

We see this, when we go in to do the workshops. The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It's not as big as we thought. There are ways to transform it that we didn't realize, and we can do this incrementally. We don't have to do it all at once.

Once we start having those conversations, those who might have been arguing for a re-host suddenly realize that rearchitecting is not as difficult as they think, particularly if you do it asymmetrically. Maybe they should reconsider the re-host, apply the core-context concept, and start moving the context to these well-proven platforms, such as ETL tools, reporting tools, and service-oriented architecture (SOA).

Gardner: Steve, tell us a little bit about how other folks can learn more about this, and then give us a sneak peek or preview into what you are going to be discussing at the upcoming virtual event.

Woods: That's one of the things we've been talking about -- our tools, called the Visual Intelligence Tools. It's a shame you can't see me, because I'm gesturing with my hands as I talk, and if I had the visuals in front of me, I would be pointing to them. This is something you really have to appreciate -- the images that we give to our customers when we do the analysis. You really have to see it with your own eyes.

We are going to be doing a virtual event on November 3, 4, and 5, and during this you will hear some of the same things I've been talking about, but you will hear them as I'm actually using the tools and showing you what's going to happen with those tools, what those images look like, and why they are meaningful to designing a transformation strategy.

Gardner: Very good. We've been learning more about Application Transformation: Getting to the Bottom Line, and we have been able to separate core from context, and appreciate better how that's an intriguing strategy for approaching this legacy modernization problem and begin to enjoy much greater economic and business benefits as a result.

Helping us weave through this has been Steve Woods, distinguished software engineer at HP. Thanks for your input, Steve.

Woods: Thank you.

Gardner: We've also been joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Paul, you are becoming a regular on our show.

Evans: Oh, I'm sorry. I hope I am not getting too repetitive.

Gardner: Not at all. Thanks again for your input.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, October 26, 2009

Linthicum's Latest Book: How SOA and Cloud Intersect for Enterprise Productivity Benefits

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 45 with consultant Dave Linthicum on the convergence of cloud computing and SOA.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 45. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events with industry analysts and guests comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, and through the support of TIBCO Software.

Our topic this week on BriefingsDirect Analyst Insights Edition, and it is the week of Oct. 12, 2009, centers on Dave Linthicum's new book, Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide. We're here with Dave, and just Dave this time, to dig into the conflation of SOA and cloud computing. Welcome back to the show Dave.

Dave Linthicum: Thank you very much, Dana, thanks for having me.

Gardner: Congratulations. I know producing books like this is a bit like gestating and giving birth, so it may be as close as we guys can come to that experience.

Linthicum: Yeah. I'm already having postpartum depression.

Gardner: So, you’re out with a new arrival and this is part of the Addison-Wesley Information Technology Series.

Linthicum: That's right. It's my fourth book with those guys, starting with the EAI book back in 1997.

Gardner: But, that's still moving off the shelves, right?

Linthicum: It sure is.

Gardner: When is the latest book available? How can you get it and what is it going to set us back?

Linthicum: Cloud Computing and SOA Convergence in Your Enterprise is available now. You can get it on Amazon, of course, for $29.69, and there is a Kindle edition, which, I'm happy to say, is a few bucks less than that. And, I've even seen it on Buy.com for $26. So, get your best deal out there.

Gardner: For those of our listeners out there who might not be familiar with you -- and I have a hard time believing this -- why don't you tell us a little bit about yourself and your background, before we get into the timely tome that you've now developed?

Where Web meets enterprise

Linthicum: I've been a distributed-computing guy for a number of years. I've been a thought leader in this space, including writing the EAI book, which we talked about, back in 1997. I was CTO of Software AG, it was called SAGA then, and also CTO of Mercator, and then CTO of Grand Central.

I was CEO of a company called Bridgeworks and then founded my own consulting company called David S. Linthicum, LLC and ran that for any number of years.

I'm primarily focused on where the Web meets the enterprise, and I've been doing that for the last 10 years. As the Internet appeared on the scene, I realized that it's not just a great asset for information, but a great asset where you can put key enterprise applications and host your enterprise data.

There are lots of reasons -- economies of scale, the ability to get efficiency in reuse, the ability to rapidly provision these systems, and get out of the doldrums of IT, which a lot of companies are in right now.

Cloud computing has the opportunity to make things better. The purpose of this book is to get people to look at that as an architectural option for them. The step-by-step guide in the book provides the steps it takes to understand your own issues, your own information, your own data, and your own processes, and then figure out the right path to the cloud.

Gardner: It seems that cloud has also, just in the nick of time, come along to give service-oriented architecture (SOA) a little bit of a boost and perhaps even more meaning than people could conjure up for it before.

Linthicum: SOA is the way to do cloud. I saw early on that SOA, if you get beyond the hype that's been around for the last two years, is really an architectural pattern that predates the SOA buzzword, or the SOA TLA.

It's really about breaking down your architecture into functional primitives, or to a primitive state of several components, including services, data, and processes. Then, it's figuring out how to assemble those in such a way that you can not only solve your existing problems, but use those components to resolve new problems, as your business changes over time or your mission changes or expands.
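That decomposition-then-assembly idea can be sketched in code. This is a purely illustrative Python sketch; the class names, services, and data entities are invented here and do not come from the book:

```python
from dataclasses import dataclass

# Hypothetical primitives: services, data, and processes,
# broken down first, then assembled into a business process.
@dataclass(frozen=True)
class DataEntity:
    name: str

@dataclass(frozen=True)
class Service:
    name: str
    reads: tuple  # data entities this service depends on

@dataclass
class Process:
    name: str
    steps: list  # ordered services composed into the process

# Break the architecture down into primitives...
customer = DataEntity("customer")
order = DataEntity("order")

lookup_customer = Service("lookup_customer", reads=(customer,))
place_order = Service("place_order", reads=(customer, order))

# ...then assemble those primitives into a process that can be
# re-assembled later as the business changes.
checkout = Process("checkout", steps=[lookup_customer, place_order])
```

Because the primitives are independent of any one process, a new process (say, a returns workflow) can reuse `lookup_customer` without touching the checkout assembly, which is the reuse argument made above.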

Cloud computing is a nice enhancement to that. Cloud doesn't replace SOA, as some people say. Cloud computing is basically architectural options or ways in which you can host your services, in this case, in the cloud.

As we go through reinventing your architecture around the concept of SOA, we can figure out which components, services, processes, or data are good candidates for cloud computing, and we can look at the performance, security and governance aspects of it.

Architectural advantages

We find that some of our services can exist out on the platform in the cloud, which provides us with some additional architectural advantages such as self-provisioning, the ability to get on the cloud very quickly in a very short time without buying hardware and software or expanding our data centers, and the ability to rapidly expand as we need to expand basically on demand.

If we need to go from 10 users to 1,000 users, we can do so in a matter of weeks, without having to buy data-center space, waves and waves of servers, software, hardware licenses, and all those sorts of things. Cloud computing provides you with some flexibility, but it doesn't get away from the core need for architecture. So, really, the book is about how to use SOA in the context of cloud computing, and that's the message I'm really trying to get across.
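The elasticity arithmetic here, going from 10 users to 1,000 on demand, can be made concrete with a toy capacity rule. The per-instance capacity below is an assumption for illustration only, not a figure from the discussion:

```python
# Toy autoscaling rule: size instances to current users instead of
# pre-purchasing hardware for peak load.
USERS_PER_INSTANCE = 100  # assumed capacity per instance, illustrative

def instances_needed(users: int) -> int:
    # Ceiling division; always run at least one instance.
    return max(1, -(-users // USERS_PER_INSTANCE))
```

At 10 users this rule provisions a single instance; at 1,000 users it provisions ten, and the point of the cloud model is that the step from one to the other is a provisioning call, not a hardware purchase.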

Gardner: For some folks, the SOA adoption curve perhaps didn't grow as fast as many expected, because the economic impetus was a bit disconnected. Perhaps, it was too far in the future to make direct connections between the investments you would make in your SOA activities and the actual bottom line of IT. Then, cloud comes along. One of the rationales for cloud is that there is an economic impetus.

Of course, not everyone agrees with this. Not everyone agrees with anything about cloud, but if you do cloud correctly, you can cut your utilization waste, reduce your footprint and energy costs, offload peak demands on an elasticity basis, perhaps to third parties, and you can outsource certain apps or data to third parties. Is there an economic benefit from cloud that helps support the investments needed for good SOA?

Linthicum: There is, because one of the things people got wrapped around the axle on is having to reinvent their existing systems and go through waves and waves of software and hardware purchases. That became economically nonviable. It was very difficult to figure out how to redo your architecture, when you had $15-20 million of hardware and software in the data center, and personnel costs to deal with, in support of the new architecture, even though the architecture provides more of a strategic benefit.

As we move toward cloud computing, there are more economical and cost-effective architectural options. There is also the ability to play around with SOA in the cloud, which I think is driving a lot of the SOA. In fact, I find that a lot of people build their first initial SOA as cloud-delivered systems, be it Amazon, IBM, Azure from Microsoft, and some of the other platforms that are out there.

Then, once they figure out the benefits of that, they start putting pieces of it on premise, as it makes sense, and put pieces of it on the cloud. It has the tendency to drive prototyping on the cheap and to leverage architecture and play around with different technologies without the investment we had to do in the past.

It was very difficult to get around that when SOA, as many of the analysts were promoting it, was a big-bang concept and a huge systemic change in how you architect. Cloud provides a stepwise approach to making that happen. It's much more economical, much more efficient, and it really allows you to build holistic SOA success off of small successes in using the cloud.

Game changing approach

Gardner: Something occurred to me that seems to be a game changing approach or aspect of this. For so long now, people have looked at the total costs of IT, and they went up and up and up. Even though you had things like Moore's Law, commoditization, and maturity that drove some cost down, the total nut of IT for many companies just kept seeming to grow and grow as a percentage of revenue. This, of course, is not a sustainable trajectory.

It seems to me the cloud and SOA as this dream team, as you point out in your book, perhaps provides this inflection point, where we can start to decrease the total nut of IT, rather than just certain aspects of IT. Does that make sense?

Linthicum: It makes perfect sense, and I promote that in the book. One of the things I talk about in Chapter 1 is how things got so bad. The fact of the matter is that we have very ineffective states within the IT realm.

People look at IT and at the movement that's occurred over the last 20 years in the progression of the technology, but the reality is that we've gotten a lot less effective in providing benefit to the bottom line of the companies, the missions of the government organizations, and those sorts of things. We need to do better at that.

Ultimately, it's about reinventing the way in which we do IT. In other words, quit thinking about buying the latest and greatest solution and dragging it into the enterprise and having another 20 racks of servers in the data center to support those things that almost never go away. You're getting to a much more complex inflexible state that's not able to change itself or adapt itself to changes in missions or changes in the business. That's just not sustainable in the long-term.

In fact, one of the things I urge IT people to do is to go to a CIO or a COO conference and start talking to them about their IT infrastructure, especially at the cocktail hour. You'll find that IT is not a very popular group within most companies. In many instances, it seems to be the single most limiting factor in growing the business, because of the latency that's in IT.

We've got to stop the insanity. We've got to control IT spending. We've got to be much more effective and efficient with the way in which we spend and leverage IT resources. Cloud computing is only a mechanism; it's not a savior for doing that. We need to start marching in new directions and being aggressively innovative around the efficiency, the expandability, and ultimately the agility of IT.

Where the cloud fits

Gardner: Now, looking over your book, Dave, I was impressed by the logic, the layout, and the order of things. You've got a certain level of background and primer information in a couple of these chapters on SOA that we could just as well have been reading in 2005, but the way it fits together is quite interesting. On page 33, you get into when the cloud fits.

That's very much the topic of the day. I speak to a lot of people. Everyone has grokked this general notion of cloud. They understand the private, the public, and "everything as a service," but everybody says, "Yeah, but no one is doing it yet."

What is the right timing for this, and what is the right timing in terms of SOA activities and cloud activities, so they go hand in hand? Are they linear and consecutive? What's the relationship?

Linthicum: They are systemic, one to another. When you're doing SOA and considering SOA within your enterprise or agency, you should always consider cloud as an architectural option. In other words, there are servers we're looking to deploy, middleware we're looking to leverage, and databases we're looking to leverage in terms of SOA. It's governance systems, security systems, and identity management.

Cloud computing is really another set of things that you need to consider in the context of SOA, and you need to start playing around with the stuff now, because it's so cheap. There's no reason that anybody who's working on an SOA shouldn't be playing around with cloud, given the amount of investment that's needed. It's almost nothing, especially with some of the initial forays, some of the prototypes, and some of the pilot projects that need to be done around cloud.

One really is a matter of doing the other. I've found that for people who are deploying SOA, their initial successes tend to be cloud-based rather than pure SOA plays. We're doing lots of things in pilot projects that are cloud-oriented and then figuring out how to do that at the enterprise level. By understanding how cloud computing fits in as a strategic option, or as another tool in the tool shed, they're able to leverage it to drive their architectures.

Cloud computing is a fit in many instances. In some instances it's not, and it's a matter of trying to figure out what the limitations and the opportunities are within the cloud, before you can figure out what's right to outsource from your own organization.

Gardner: Getting back to where SOA fits in, in Chapter 3, you have a litany of things as a service -- storage, database, information, process, application, platforms, integrations, security, management, governance, testing, and infrastructure. Is there an order? Is there a proper progression? Is there a rationale as to how you should go about all these as services?

The macro domain

Linthicum: You should concentrate on the big macro domains. One would be software as a service (SaaS), because SaaS is probably the easiest way to get into the cloud. It also has the most potential to save you the greatest amount of money. Instead of buying a million-dollar, or a two-million-dollar, customer relationship management (CRM) system, you can leverage Salesforce.com for $50-60 a month.
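To make that SaaS arithmetic concrete, here is a back-of-the-envelope sketch using the figures cited above. The headcount is an assumed, purely illustrative number, and the sketch ignores on-premise maintenance and staffing costs, which would tilt things further toward SaaS:

```python
# Figures from the discussion: a $2M on-premise CRM purchase vs.
# a SaaS CRM at roughly $60 per user per month.
on_premise_cost = 2_000_000      # one-time license figure cited above
saas_per_user_month = 60         # high end of the cited range
users = 500                      # assumed headcount, illustrative only

saas_annual = saas_per_user_month * users * 12
years_to_break_even = on_premise_cost / saas_annual

print(f"SaaS annual cost for {users} users: ${saas_annual:,}")
print(f"Years of SaaS subscription before matching the license outlay: "
      f"{years_to_break_even:.1f}")
```

Under these assumptions the subscription takes several years to add up to the one-time license figure, which is the cost argument being made for starting in the SaaS macro domain.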

After that, I would progress into infrastructure as a service (IaaS), and that's basically data center on demand. So, it's databases, application servers, WebSphere, and all those sorts of things that you're able to leverage from the data center, but, instead of a data center, you leverage it from the cloud.

Guys like Amazon obviously are in that game. Microsoft, with the Azure platform, is in that game. Any number of players out there are going to be able to provide you with core infrastructure or primitive infrastructure. In other words, it's just available to you over the 'Net with some kind of a metering system. I would start playing around with that technology after you get through with SaaS.
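The "metering system" idea can be sketched as a toy per-instance-hour meter. The rate below is an invented figure for illustration, not any provider's actual pricing:

```python
from collections import defaultdict

# Toy usage meter: charge per instance-hour, as IaaS providers do.
RATE_PER_INSTANCE_HOUR = 0.10  # assumed rate, illustrative only

usage_hours = defaultdict(float)

def record_usage(account: str, hours: float) -> None:
    usage_hours[account] += hours

def invoice(account: str) -> float:
    return round(usage_hours[account] * RATE_PER_INSTANCE_HOUR, 2)

record_usage("dev-team", 24)    # one instance for a day
record_usage("dev-team", 720)   # another instance for a month
```

The point of the model is that cost tracks consumption: an account that spins nothing up owes nothing, which is what makes the low-commitment pilots described later in the discussion possible.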

Then, I would take a look at the platform-as-a-service (PaaS) technology, if you are doing any kind of application development. That's very cool stuff. Those are guys like Force, Google App Engine, and Bungee Labs. They provide you with a complete application development and deployment platform as a service. Then, I would progress into the more detailed stuff -- database, storage, and some of the other more sophisticated services on top of the primitive services that we just mentioned.

Gardner: For those enterprises that do have a sizeable app, Dave, organizations doing a lot of custom development, is that a good place to go for these test, pilot, and experimental activities? I'm going to hazard a guess that this might be the wellspring where cloud has already gotten some traction, whether organizations recognize it or not.

Linthicum: PaaS with that Google App Engine is driving a lot of innovation right now. People are building applications out there, because they don't have to bother existing IT to get servers and databases brought online, and that will spur innovation.

So, today, we could figure out we want to go off and build this great application and do this great thing to automate a business and, instead of having to buy infrastructure and buy a server and set it up and use it, we could go get Google App Engine accounts or Azure accounts.

Huge potential

Then, we can start building, deploying, defining the database, do the testing, get it up and running, and have it immediately. It's web based and accessible to millions of users who are able to leverage the application in a scalable way. It's an amazing kind of infrastructure when you think about it. The potential is there to build huge, innovative things with very few resources.

Gardner: I'm thinking about the SOA progression over the past five or seven years. One of the cultural organizational obstacles has been getting the development people, the production people, the operation, and the administrator folks to get in some of sort of ongoing feedback loop relationship.

Does cloud PaaS perhaps give a stepping-stone approach to start to do that, to think about the totality of an application, the cradle-to-grave iteration, such as the SaaS model, where you've got the opportunity to have a single instance of one code base that you can then work on, rather than having to think about your upgrade cycle?

Linthicum: Yeah, because it's immediately there. That's one thing. There is the instantaneous feedback directly from the users. We can monitor the use. We can monitor the behavior and how people are leveraging the system. We can adjust the system accordingly. The great thing with the SaaS and PaaS models is that we're not doing waves and waves of upgrades that have to be downloaded and then installed, and, in some cases, broken.

Everybody is using a centralized platform that's tested as a centralized platform, leveraging the multi-tenant application. We don't have to localize it for Linux, for Windows NT, and for Apple. We just use the platform as web-based, which is perfectly viable these days, when you consider the rich Internet applications (RIAs) out there and the dynamic nature of the interface.

If you're building an SOA and you're building an application instance within the SOA, the opportunities are there to create something that's viable for a long period of time. That's going to be much more sustainable, much easier to monitor, and much easier to manage, but the core advantages are that it's much more expandable and also much more cost-effective.

We're not having to keep staffs of people around to maintain server hardware and software. We're able to leverage that out in the cloud with a minimal amount of resource consumption. We're also leveling the playing field between small businesses and large businesses.

Ten years ago, it was very difficult to do a startup. You'd need a million dollars in investment funds just to get your infrastructure up and running. Now, startups can basically operate with a minimal amount of resources, typically a laptop, pointing at any number of cloud resources.

A great time

They can build their applications out there. They can build their intellectual capital. They can build their software. They can deploy it. They can test it. Then, they can provision the customers out there and meter their customers. So, it's a great time to be in this business.

Gardner: It cuts across and affects so many aspects, as you say -- the metering, the control of provisions that are more agile, rather than as long upgrades cycles that we traditionally get from commercial software vendors.

I sort of munged two questions together there last time. I want to get back to that culture and organizational issue. This has been a challenge with SOA, and it's going to be a challenge with cloud as well.

Are there organizational stepping-stones or initial preparations that you can do? I'm thinking about IT shared services, perhaps embracing some vital tenets, ways that you can, in a sense, recast your organization to be in a better position to exploit SOA, and therefore cloud.

Linthicum: I think the cultural changes are starting now as far as what cloud computing is going to bring. It's kind of polarizing.

There are two types of people that I run into. Number one, the people who think the cloud can do everything and really want to move into the cloud -- which is scary. Then there are the people who look at the cloud as evil. They always put in front of me all the Gmail outages as proof that the cloud is evil and is going to destroy their business -- which is also scary.

There needs to be a lot of education about the opportunities and the advantages of using cloud computing, as well as what the limitations are and what things we have to watch out for. Not all applications and all pieces of data are going to be right for the cloud. However, we need to educate people in terms of what the opportunities are.

The fact of the matter is that it's not going to be a dysfunctional and risky thing to move pieces of our architecture out into cloud computing. Get them to run a pilot. Get them to go out there and try it. Get them to experiment with the technology. Figure out what the capabilities are, and that will ultimately change the culture.

You need to go back to the early '90s. I remember when the Web first came around. I was working for a large corporation, and we weren't allowed to use the Web. If we had to use it, we had to go to the AOL terminal in the library and use it that way.

An understandable asset

Of course, the Web just became bigger and bigger and more of an understandable IT asset that could be used enterprise-wide. We got web browsers and we're leveraging the Web. It's the same with cloud computing. It's going to take a cultural shift. Many large corporations have embraced the fact that they're going to put processes and data out on platforms where they don't know the host.

Gardner: Dave, in Chapter 5, you gave a lot of attention to data. I know there are some people working on that. Tell me about this special relationship between data and SOA, how they come together, and then where cloud fits in?

Linthicum: Understanding data is really the genesis of SOA. A lot of people like to work from the services to the data. I think that the data should be defined and understood in terms of what it is as an as-is state and what it needs to be as a to-be state, where you can build any kind of SOA, using the cloud or not.

Typically, if you're going to leverage the cloud as an infrastructure, it's going to be as a data repository, as well as for the expandability and the shareability aspects of it, and those sorts of things. However, before you do that, you need to break the data down into a primitive state, understanding what the assets are, what the metadata is, and what governance system is around using it, and just do the traditional architectural stuff.

What I define in the book is definitely cloud-related, with lots of different examples and different ways to leverage it in the context of SOA. But, it's about understanding information the way we've been doing it over the last 20 years, and then coming up with models, physical and logical, trying to figure out what should be where and when we should do that.

It's fairly obvious what pieces and components of the information model you can host in the cloud and which ones need to be on-premise. By the way, it's perfectly acceptable from a performance standpoint to put pieces of physical databases out in the cloud and physical databases on-premise and then leverage those databases simultaneously within the context of applications. You're not going to find tremendous performance differences, and the reliability should be relatively the same.
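That simultaneous use of on-premise and cloud-hosted databases within one application can be sketched with two SQLite databases standing in for the two locations. The schema and data are invented for the sketch; a real deployment would use network connection strings rather than in-memory databases:

```python
import sqlite3

# Two SQLite databases stand in for an on-premise store and a
# cloud-hosted store; the application queries both in one operation.
on_prem = sqlite3.connect(":memory:")  # stand-in: on-premise database
cloud = sqlite3.connect(":memory:")    # stand-in: cloud-hosted database

on_prem.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
on_prem.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")

cloud.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
cloud.executemany("INSERT INTO orders VALUES (?, ?)",
                  [(1, 250.0), (1, 99.5)])

# The application-level "join" spans both physical locations; the
# caller never sees where each piece of data actually resides.
name = on_prem.execute(
    "SELECT name FROM customers WHERE id = 1").fetchone()[0]
total = cloud.execute(
    "SELECT SUM(total) FROM orders WHERE customer_id = 1").fetchone()[0]
```

The design choice illustrated is the one Linthicum argues for: placement is decided per table by governance and economics, while the application composes the results as if the data were in one place.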

It's a matter of looking at your information as really a foundation of your architecture, building up on top of that to your services, building up on top of that to your processes, and then really understanding how data exists in the holistic notion of your architecture, in this case, your architecture leveraging cloud computing.

What makes sense

Gardner: Dave, this whole notion of being able to slice and dice data, put it in different places based on what makes sense for the data, the process, and the applications, rather than simply as a function of the database's needs or the central and core data set needs, strikes a very interesting chord. It allows us to do a lot more interesting things.

In fact, Zimory, another startup, has come out with some interesting announcements, about slicing and dicing caches and then placing them in a variety of ways in different places that can augment and support applications and processes. Are we really going to get to the point soon where we can do things we just never could do before?

Linthicum: We're going to get to a point where the data is going to be a ubiquitous thing. It doesn't really matter where it resides and where we can access it, as long as we access it from a particular model. It's not going to make any difference to the users either. I just blogged about that in InfoWorld.

In fact, we're getting into this notion of what I call the "invisible cloud." In other words, we're not doing application as a service or SaaS, where people get new interfaces that are web-driven. We're putting pieces of the back-end architectural components -- processes, services, and, in this case, data -- out on the platform of the cloud. It really doesn't matter to them where that data resides, as long as they can get at it when they need it.

The other aspect of it is because information on a cloud is typically easier to share with other organizations, this has the ability to make the data more valuable by sharing it. That core component becomes a key driver for leveraging the cloud. I don't see a point where we're going to get hindered by where the data resides. We always have to consider governance and security issues and all these things. Every piece of information isn't right for the cloud.

But, for most of the transactional data out there that has semi-private information, which is low-risk -- and that's most of the data from most of the enterprises -- placing pieces of it in the cloud makes sense to better support your architecture and your business. It's perfectly viable.
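The triage described here, cloud for low-risk, semi-private transactional data and on-premise for the rest, can be sketched as a toy rule. The sensitivity categories and asset names are invented for illustration; a real policy would be far richer:

```python
# Toy placement rule: regulated or highly sensitive data stays
# on-premise; everything else is a cloud candidate.
def cloud_candidate(asset: dict) -> bool:
    blocked = {"regulated", "highly_sensitive"}
    return asset["sensitivity"] not in blocked

assets = [
    {"name": "order_history", "sensitivity": "semi_private"},
    {"name": "patient_records", "sensitivity": "regulated"},
    {"name": "product_catalog", "sensitivity": "public"},
]

candidates = [a["name"] for a in assets if cloud_candidate(a)]
```

Running a rule like this over an information model is one way to operationalize the claim that "every piece of information isn't right for the cloud" while still moving the bulk of low-risk data out.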

I don't think people using these information systems are going to have any clue where the information actually resides. IT folks are going to have a tremendous amount of power and numerous options to place in the cloud information that is going to make it much more cost-effective, much more shareable, and therefore much more valuable.

Gardner: Perhaps the takeaway here is that the liberation of data will enliven people in some ways in cloud computing innovation. That really is about business process innovation management. Perhaps that's where we should look to next, and coincidently, that's what your Chapter 7 looks at. Where does business process management fit into cloud, and can that give us something we couldn't do before?

Shared processes

Linthicum: Yeah, it does. We've had the notion of shared processes. In fact, there was a company called Extricity. Back in the old EAI days, it came up with this notion of private versus public processes. Cloud computing provides us with a platform to finally do that. So, not only are we able to drive processes within the enterprise, those processes are going to exist either on-premise or in the cloud, depending on where it's best economically and where it's the right architectural fit.

The more important strategic benefit of doing that is that ultimately we're able to put processes on centralized cloud-delivered systems that are shared across multiple enterprises, or multiple divisions in the same enterprise.

This provides us with information-sharing mechanisms and also process-sharing mechanisms, which drive together all of this information in the context of a business process. It allows us to do things like real-time supply-chain automation, real-time, event-driven sales-force management, and a lot of real-time processes around any business event that spans multiple enterprises. We've been trying to do this for years.

Back in the day, business-to-business (B2B) was the big buzzword. We had technology like Extricity and other process-management technologies to provide us with the capabilities to make this happen. But, it really hasn't been widespread, because there was no agreed-on platform for creating and leveraging processes that are shared across multiple enterprises.

Cloud computing provides us with that capability. So, we have innovators like Appian On Demand and a few other folks out there who are building processes that are sharable on the cloud. We're able to link those to our existing services and data, and have our existing systems and IT assets, such as data and services, participate in these larger processes that may span multiple enterprises.

It gets to the point where I can walk into a car dealer and they can tell me exactly when the car I'm ordering is going to show up, not "8-12 weeks." They know who's going to build it, where the supply is going to come from, where it's going to be put together, and how it's going to be shipped. All of these things are automated between these very disparate organizations to support the customer better. That's how you're going to win this game. That's really the true value of cloud.
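The cross-enterprise, event-driven process in the car-dealer example can be sketched as a minimal publish/subscribe loop. The parties, topic names, and event fields are invented for illustration; a shared cloud platform would replace the in-process dictionary with a hosted message bus:

```python
# Minimal pub/sub sketch: each disparate party reacts to the same
# business event, as in the car-dealer supply-chain example.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, event):
    for handler in subscribers.get(topic, []):
        handler(event)

log = []
subscribe("order.placed",
          lambda e: log.append(f"factory schedules build of {e['car']}"))
subscribe("order.placed",
          lambda e: log.append(f"supplier ships parts for {e['car']}"))
subscribe("order.placed",
          lambda e: log.append(f"dealer quotes delivery date for {e['car']}"))

publish("order.placed", {"car": "sedan-42"})
```

The key property is that the factory, supplier, and dealer never call each other directly; they only agree on the shared event, which is what an agreed-on cloud process platform supplies across enterprise boundaries.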

Gardner: I agree. We're getting toward extreme visibility all across the exchange -- buyers, sellers, participants, suppliers, and value-added participants. That visibility, of course, gets to more intelligent decision making, less waste, and much higher productivity. Productivity is the key here. If you're in an economy, like we are now, where we've got to grow our way out of this thing, you can't do it by cutting costs forever. The up side is going to come from productivity.

This whole discussion about business process is the cloud discussion that we should be taking to the board of directors level, to the COO, and the CFO. They probably don't care too much about the cloud, but they'll probably like the fact that the cost of IT can go down. Help me out if you agree, or feel free to flesh this out: isn't this the thing that's going to get the business people jazzed?

Bottom-line questions

Linthicum: That's great thinking, Dana. Ultimately, people don't care about whatever hype-driven technology paradigm is coming down the line. Cloud computing can be inclusive of that. How can you save me a buck? How can you get my business out of the doldrums? Can you do that through innovation, and can that innovation cost me less at the end of the day? Those are the questions being asked.

We're not getting, "How can I spend more to get more?" They're saying, "How can we be more effective and efficient with the organization and what innovative changes can make me more effective and efficient?"

Cloud computing is an example of technology that has the potential of doing that. A lot of CIOs and CEOs that I talk to are going to say, "Cloud-Schmoud. I couldn't care less if you do it with pixie dust or cloud computing. I just want it to happen."

Those in IT need to understand that this, ultimately, is the motivation. At the end of the day, they need to put together a plan of attack for how to get to that more effective and efficient state.

IT shops, in the next five years, are going to look very different than they do today. Typically, they're going to be much smaller. They're going to have a lot less hardware and software around, even though those will never be eliminated entirely. They're going to be evaluated on their effectiveness and efficiency toward the bottom line of the business.

In the past, we've been exempt from that -- for what reason, I don't know -- but IT has been given carte blanche to spend a lot of money. The results come in, but they're not measured as carefully as those in sales and marketing. I think those days are over.

So, we need to buckle down, be more innovative, figure out what our options are, and figure out a way to move our existing infrastructure in more productive directions. Or else, your competitor is going to figure it out before you and they're going to put you out of business.

Gardner: There is a ton of information in this book, but it's still tight and concise. It doesn't go on and on. So, I commend you for that. We've got a whole chapter on governance. We've got a whole chapter on testing. But, the one that really jumped out at me was Chapter 10, "Defining the Candidate Data, Services and Processes for the Cloud."

To me, this really gets at the heart of the issue that IT folks are going to be grappling with. How do you get started? What's the right approach for my organization, given our culture, skills, capabilities, and budgets? How do you tailor this? Maybe you can just dig in and give us a little preview of Chapter 10.

Following the checklist

Linthicum: Chapter 10 is really about what you need to do once you've gone through the steps of understanding your data, services, and processes, creating a governance model, and understanding the security issues -- in other words, identifying which of those things are good candidates to move onto the cloud.

Once you have this understanding of how to select the services, processes, and pieces of data that should be moved out there, it's a matter of going through those checklists to see whether the processes, applications, and data are independent or loosely coupled.

If they're independent, then chances are they're going to be easy to move out to the cloud. If they're loosely coupled, they're also relatively easy to move out to the cloud. If they're interdependent -- meaning they're bound to other things -- it's very difficult to decouple them and move them out to the cloud.

You need to figure out the points of integration. Ultimately, if we move something out to the cloud, can we link that information back to the enterprise, can we do that in an efficient and effective way, and will that lower costs for us?
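As a rough illustration of the checklist Linthicum describes, one could imagine scoring candidates on coupling and integration overhead. The categories, weights, and thresholds below are hypothetical illustrations, not from the book:

```python
# Hypothetical sketch of the candidate-selection checklist described above.
# Coupling categories and scoring weights are illustrative assumptions.

COUPLING_SCORES = {
    "independent": 2,      # easiest to move to the cloud
    "loosely_coupled": 1,  # still a reasonable candidate
    "interdependent": 0,   # hard to decouple; poor candidate
}

def cloud_candidate_score(coupling, integration_points,
                          est_integration_cost, est_onprem_cost):
    """Return a crude go/no-go score for moving a workload to the cloud."""
    score = COUPLING_SCORES[coupling]
    # Many integration points back to the enterprise erode the benefit.
    score -= min(integration_points, 2) * 0.5
    # If integration alone costs as much as staying on-premise, penalize it.
    if est_integration_cost >= est_onprem_cost:
        score -= 1
    return score

# A loosely coupled service with two integration points and modest
# integration cost relative to the on-premise alternative:
print(cloud_candidate_score("loosely_coupled", 2, 10_000, 50_000))  # prints 0.0
```

The point of a sketch like this isn't the numbers; it's that coupling and integration cost get weighed together, not separately.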

In many instances, we can put systems out in the cloud and say it's more cost-effective to have them out there. But, when you factor in the integration cost, it's much less cost-effective and much less efficient for the enterprise. You find that with a lot of the salesforce.com installations. Integration wasn't really factored in, and it ended up being a huge issue.

You need to consider your security. You need to consider the core internal enterprise architecture and make sure that it's healthy. You are not going to be able to put cloud computing on top of an existing dysfunctional architecture and expect miracles to occur. As part of this process, as you mentioned earlier, Dana, you need to understand that cloud computing needs to be leveraged in the context of SOA, which spans on-premise and off-premise.

This is about getting your existing architecture healthy and leveraging cloud computing as an option. It's not really bolting cloud computing onto existing bad architecture and hoping for changes that are never going to occur.

Looking at the cost models

Ultimately, it's about looking at the cost models and trying to figure out which are the right candidates to move out to the cloud in terms of efficiency and effectiveness, while looking at the strategy of the company.

I was helping a disaster company a while back. It had to go from 10 users to 10,000 users in a week. Cloud computing is a great candidate for those types of processes. Instead of having a data center that sits dark and that you fire up whenever you need the capacity, you can just go ahead and call Amazon or Google and turn on the capacity to make that happen.

Those are good candidates for cloud computing. But, you need to consider governance, security, how tightly or loosely coupled those processes are within the system, cost-effectiveness, integration with other assets and data, the larger strategy of the company, and the direction of the IT architecture and where it's looking to go.

All those things are fundamental considerations in whether or not something you've identified and understood as a core component should be outsourced to the cloud.
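The elasticity argument above -- a burst from 10 to 10,000 users for a week -- can be made concrete with a back-of-the-envelope comparison. All figures here (instance counts, rates, the annual fixed cost) are made-up assumptions for illustration:

```python
# Illustrative comparison of a standby ("dark") data center versus
# on-demand cloud capacity for a one-week burst. Every figure is an
# assumption, not real pricing.

def standby_dc_cost(annual_fixed_cost):
    """A dark data center is paid for all year, whether it's used or not."""
    return annual_fixed_cost

def on_demand_cost(instances, hourly_rate, hours):
    """On-demand capacity is paid for only while it runs."""
    return instances * hourly_rate * hours

# Burst: 200 hypothetical instances for one week (168 hours) at $0.50/hour.
burst = on_demand_cost(instances=200, hourly_rate=0.50, hours=168)
dark = standby_dc_cost(annual_fixed_cost=500_000)

print(f"on-demand burst: ${burst:,.0f}")  # on-demand burst: $16,800
print(f"standby center: ${dark:,.0f}")    # standby center: $500,000
```

Under these assumed numbers the on-demand burst is a rounding error next to keeping idle capacity on standby -- which is exactly why rare, spiky workloads are the textbook cloud candidates.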

Gardner: Then, you come to Chapter 11. It offers practical and pragmatic tips on analyzing candidate platforms and on picking private or public approaches. One of the things that occurred to me in looking it over is that perhaps now is the time for companies to be thinking along these lines as well -- how to protect themselves against lock-in and against making choices they might regret later. This is around the whole neutrality and portability issue.

While we're experimenting, Dave, and while we're getting our feet wet with cloud, this is also a good time to start putting pressure on all the parties involved for as much neutrality in standards and portability as possible.

Linthicum: It is, and that's generally not there yet in the cloud community. Security is still lacking a bit, and so is portability. We have to figure out better mechanisms for both.

So, you have to factor that into the cost and the risk. Right now, if you're moving into the cloud, and you're going to localize a system for a cloud provider, it's going to be very difficult in the future to take that code and data off of that system and put it on another cloud provider, or, in some instances, bring it on premise.

Looking at standards

As you look at the cloud providers, one of the factors in selecting them is, number one, do they have a vision for interoperability standards? When will that vision be laid out? What standards organizations are they bound to currently, and how are those standards organizations progressing? And what does your application do that's going to cause portability issues?

If they're trying to sell you their cloud, have them look at your application and tell you how easy it will be for that application, whether new or ported to the system, to move off that system at some point in the future.

Typically, the pat answer is that it's easy to port your system off, because they're using some standard language and a standard database. But, you'll find that many proprietary application programming interfaces (APIs) are in there, and they're going to make portability very difficult. All the different cloud providers have built their infrastructure and their products in their own little proprietary ways, because they haven't closely coordinated with one another.
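One common defensive tactic against the lock-in described here is to keep provider-specific APIs behind a thin interface of your own, so switching providers means writing one new adapter rather than rewriting the application. A minimal sketch, where the in-memory implementation is a hypothetical stand-in for a real provider SDK adapter:

```python
# Sketch of isolating provider-specific APIs behind your own interface.
# InMemoryStore stands in for a real provider adapter; no real SDK is used.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The only storage interface application code is allowed to import."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Local implementation; a real adapter would wrap a provider's SDK here."""
    def __init__(self):
        self._data = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

# Application code depends only on BlobStore, never on a provider's API.
store: BlobStore = InMemoryStore()
store.put("report", b"q3 numbers")
print(store.get("report"))  # b'q3 numbers'
```

This doesn't eliminate migration work -- the adapter still has to be written per provider -- but it confines the proprietary surface area to one place.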

So, that's going to be a trade-off going forward, and I would grill your cloud computing provider of choice to make sure that they have some kind of vision for how they're going to provide interoperability. But, I think it's going to be some time before that occurs. I'm a lot more skeptical than some of the other people out there. It's going to take a lot of customers who are actually paying these guys money to demand that portability exists and that they adhere to some sort of standards.

Gardner: It's up to the market to throw its weight around, right?

Linthicum: It's up to the customers to throw their weight around. You have to build it with your dollars.

Gardner: I also suppose that lessons learned in the software realm over the past 10 or 20 years -- having good contracts, having lawyers look things over, writing the proper safeguards in, whether it's indemnification or what have you -- should all be brought right along into the cloud domain. All those lessons learned in software should not in any way be forgotten?

Understanding the contract

Linthicum: That's right. There was a recent, well-publicized problem between an on-demand CRM provider and Pulte Homes, and it came down to the contracts. They didn't understand the contracts, and things went wrong in their implementation. The CRM provider, the SaaS provider, wouldn't let them out of their agreement and made them pay fees for basically no services provided.

I'd argue that there are some customer-service issues on the SaaS provider's side, but the customer ultimately needed to read the contracts to make sure they understood what the issues were and what consequences would come out of them. At the end of the day, we're getting into contractual agreements again. You have to approach them with your eyes open, understanding how the stuff is going to work.

Gardner: Now, closing up a little bit, we've certainly seen a lot of projections from folks like Forrester, Gartner, and IDC. There are a lot of different numbers and lots of throwing darts at the various boards around these organizations. But, all of them seem to be quite bullish on cloud -- that this is something that's here to stay and is going to be high growth.

When I speak to a lot of folks like you, they're very busy. There is a lot of demand for data-center transformation, modernization, and virtualization. These are undergirding movements that will enable or support cloud options. So, how about a forecast, Dave? Even if it's in general terms, this is really quite a growth opportunity.

Linthicum: It is. It is. The funny thing about cloud is that it's this big, amorphous thing, and it's tough to name. In fact, I wrestled with using cloud computing as the title of the book, because we're getting into something that's been around for a long time as an existing concept. But, I think cloud computing is going to grow a lot over time and just become part of the infrastructure.

We've been using aspects of cloud computing for years and years. Application service providers (ASPs) and SaaS were the first forays into cloud. Now, we're using additional infrastructure providers, such as database, middleware, and applications -- all those things we're able to deliver as infrastructure and as a service. Then, we're also getting development platforms that come out of the cloud, and office-automation systems like Google Docs and Office Live.

Things are going to move from our clients and from our data centers out to the cloud providers through economies of scale and efficiency. When it comes right down to it, there are very innovative solutions out there, and coolness is going to drive people to the cloud.

Economies of scale

In other words, you're going to be able to turn off very inefficient and cost-inefficient applications and turn on those that are cloud-delivered. Through the sharing mechanisms, the update mechanisms, economies of scale, the scalability of it, and the amount of money you'll spend on the cloud versus an on-premise system, it's just going to be the way to go.

In the next 10 years, IT, as I mentioned earlier, is going to be a very different place. I'm not one of those guys who thinks everything in the existing IT infrastructure is going to exist in some cloud someplace. But, a good majority of our applications and processes -- things that exist on premise these days -- are going to exist in the cloud. It's just going to be the way in which we do IT.

It's not going to be that different from leveraging the Web presence that we have these days. Cloud computing is about putting additional IT assets out on the platform of the Web. The adoption curve is going to be very much like the Web adoption curve of the '90s. Significant cost savings are going to be made. We're going to be in a much better, more effective place, and it's much more exciting as an IT person. I look forward to working in that environment. I think it's going to be a lot of fun.

Gardner: I agree. It's going to be very exciting. Well, thanks. We've been talking with Dave Linthicum. He has come out with a new book, Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide. It's coming to market from Addison-Wesley, as part of its Information Technology Series.

It's out now on all the usual book-purchasing sites and, as you pointed out, it's also on Kindle. That's very exciting. I want to thank you, Dave. It's been really enjoyable and a great way for us to get into a lot of the interesting aspects of cloud and SOA. I wish you well with your book.

Linthicum: Thank you, Dana.

Gardner: I also want to thank our sponsors for the BriefingsDirect Analyst Insights Edition podcast series. They are Active Endpoints and TIBCO Software.

Thanks again for listening. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 45 with consultant Dave Linthicum on the convergence of cloud computing and SOA. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.