Thursday, October 29, 2009

Separating Core from Context Brings High Returns in Legacy Application Transformation

Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on separating core from context, when it comes to legacy enterprise applications and their modernization processes. As enterprises seek to cut their total IT costs, they need to identify what legacy assets are working for them and carrying their own weight, and which ones are merely hitching a high cost -- but largely unnecessary -- ride.

The widening cost and productivity divide exists between older, hand-coded software assets, supported by aging systems, and replacement technologies on newer, more efficient, standards-based systems. Somewhere in the mix are core legacy assets, distinct from so-called contextual assets. There are peripheral legacy processes and tools that are costly vestiges of bygone architectures. There is legacy wheat and legacy chaff.

Today we need to identify productivity-enhancing resources and learn how to preserve and modernize them -- while also identifying and replacing the baggage or chaff. The goal is to find the most efficient and low-cost means to support them both, through up-to-date data-center architecture and off-the-shelf components and services.

This podcast is the second in a series of three to examine Application Transformation: Getting to the Bottom Line. We will discuss the rationale and likely returns from assessing the true role and character of legacy applications and their actual costs. The podcast, incidentally, runs in conjunction with some Hewlett-Packard (HP) webinars and virtual conferences on the same subject.

Register here to attend the Asia Pacific event on Nov. 3. Register here to attend the EMEA event on Nov. 4. Register here to attend the Americas event on Nov. 5.

With us to delve deeper into the low-cost, high-reward transformation of legacy enterprise applications is Steve Woods, distinguished software engineer at HP. Hello, Steve.

Steve Woods: Hello. How are you doing?

Gardner: Good. We are also joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana. Thank you.

Gardner: In the earlier podcast in our series, a case study, we talked about transformation and why it's important, through the example of a very large education organization in Italy and what they found. We looked at how this can work very strategically and with great economic benefit, but I think now we're trying to get into a bit more of the how.

Tell us a little bit, Paul, about what the stakes are. Why is it so important to do this now?

Evans: In a way, this podcast is about two types of IT assets. You talked before about core and context. That whole approach to classifying business processes and their associated applications was invented by Geoffrey Moore, who wrote Crossing the Chasm, Inside the Tornado, etc.

He came up with this notion of core and context applications. Core being those that provide the true innovation and differentiation for an organization. Those are the ones that keep your customers. Those are the ones that improve the service levels. Those are the ones that generate your money. They are really important, which is why they're called "core."

Lower cost

The "context" applications were not less important, but they are more for productivity. You should be looking to understand how that could be done in terms of lower cost provisioning. When these applications were invented to provide the core capabilities, it was 5, 10, 15, or 20 years ago. What we have to understand is that what was core 10 years ago may not be core anymore. There are ways of effectively doing it at a much different price point.

As Moore points out, organizations should be looking to build "core," because that is the unique intellectual property of the organization, and to then buy "context." They need to understand: how do I get the lowest-cost provision of something that doesn't make a huge difference to my product or service, but that I need anyway?

A human resources system may not be something that you're going to build your business model on, but you need one. You need to be able to service your employees and all the things they need. But you need to do that at the lowest cost of provision. As time has gone on, this demarcation between core and context has gotten really confused.

As you said, we're putting together a series of events, and Moore will be the keynote speaker on these events. So, we will elucidate more around core and context.

The other speaker at the event is also an inventor, this time from inside HP, Steve Woods. Steve has taken this notion of core and context and has teamed it with some extremely exciting technology and very innovative thinking to develop some unique tools that we use inside the services from HP, which allow us then really to dive into this. That's going to be one of the sessions that we're also going to be delivering on this series of events.

Gardner: Okay, Steve Woods, we can use a lot of different terms here, "core and context," "wheat and chaff." I thought another metaphor would be "baby and bathwater." What happens is that it's difficult to separate the good from the potentially wasteful in the legacy inventory.

I think this has caused people to resist modernizing. They have resisted tinkering with legacy installations in the past. Why are they willing to do it now? Why the heightened interest at this time?

Woods: A good deal of it has to do with the pain that they're going through. We have had customers who had assessments with us before, as much as a year ago, and now they're coming back and saying they want to get started and actually do something. So, a good deal of the interest is caused by the need to drive down costs.

Also, there's the realization that a lot of these tools -- extract, transform, and load (ETL) tools, enterprise application integration (EAI) tools, reporting tools, and business process management (BPM) tools -- have proven themselves now. You can no longer say that there is a risk in going to these tools. People realize that the strength of these tools is that they bring a lot of agility, solve skill-set issues, and make you much more responsive to the business needs of the organization.

Gardner: This definition of core, as Paul said, is changing over time and also varies greatly from organization to organization. Is there no one-size-fits-all approach to this?

Context not code

Woods: I don't think there really is a one-size-fits-all approach, but as we use our tools to analyze code, we sometimes find that as much as 65 percent or more of an application is really not core. It could just be context.

As we make these discoveries, we find that in the organization there are political battles to be fought. When you identify these elements that are not core and that could be moved out of handwritten code, you're transferring power from the developers -- say, of COBOL -- to the users of the more modern tools, like the BPM tools.

So there is always an issue. What we try to do, when we present our findings, is to be very objective. You can't argue with the finding that 65 percent of the application is not doing core work. You can then focus the conversation on something more productive: what do we do with this? The worst thing you could possibly do is take a million lines of COBOL that's generating reports and rewrite that as hand-written Java or C# code.

We take the concept of core versus context not just to a possible off-the-shelf application, but down to the architectural component level. In many cases, we find that this helps them identify legacy code that could be moved very incrementally to these new architectures.

Gardner: What's been the holdup? What's difficult? You did mention politics, and we will get into that later, but what's been the roadblock from the perspective of these tools? Why has it been shrinking, in terms of the ability to automate and manage these large projects?

Woods: A typical COBOL application -- this is true of all legacy code, but particularly mainframe legacy code -- can be as much as 5, 10, or 15 million lines of code. The sheer size of the application is an impediment. There is some sort of inertia there. An object at rest tends to stay at rest, and it's been at rest for years, sometimes 30 years.

So, the biggest impediment is the belief that it's just too big and complex to move, and even too big and complex to understand. Our approach is a very lightweight process, where we go in, answer a lot of questions, remove a lot of uncertainty, and give them some very powerful visualizations and an understanding of the source code and what their options are.

Gardner: So, as we've progressed in terms of the tools, the automation, and the ability to handle large sets of code, the inertia also involves the nontechnical aspects. What do we mean by politics? Are there fiefdoms? Are there territories? Is this strictly a traditional kind of human-nature thing? Perhaps you could help us understand that a bit better.

Doing things efficiently

Woods: The organizations we go into have not been living in a vacuum. Many of them have been doing greenfield development, where they start out saying they need a system that does primarily reporting, or a system that does primarily data integration. In most organizations those fiefdoms, if you will, have grown pretty robust, and they will continue to grow. The realization is that they actually can do those things quite efficiently.

When you go to the legacy side of the house, you start finding that 65 percent of this application is just doing ETL. It's just parsing files and putting them into databases. Why don't you replace that with a tool? The big resistance there is that, if we replace it with a tool, then the people who are maintaining the application right now are either going to have to learn that tool or they're not going to have a job.
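
[Editor's note: To make the "context" point concrete, here is a minimal, purely illustrative sketch -- not taken from HP's tooling or any customer's system -- of the kind of hand-maintained parse-and-load logic that typically turns out to be context. The fixed-width record layout, file, and table names are hypothetical; an off-the-shelf ETL tool would replace code like this with declarative mappings.]

```python
# Illustrative only: a hand-written "parse a feed file and load it into a
# database" routine of the sort an ETL tool replaces with configuration.
# The record layout, file, and table names below are hypothetical.
import sqlite3

# (field name, start, end) column positions in a fixed-width feed record
FIELDS = [("customer_id", 0, 10), ("region", 10, 13), ("balance", 13, 25)]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into named fields."""
    record = {name: line[start:end].strip() for name, start, end in FIELDS}
    record["balance"] = float(record["balance"] or 0)
    return record

def load_feed(feed_path: str, db_path: str = "warehouse.db") -> int:
    """Parse every record in the feed and insert it into the target table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS balances "
        "(customer_id TEXT, region TEXT, balance REAL)"
    )
    count = 0
    with open(feed_path) as feed:
        for line in feed:
            r = parse_record(line)
            conn.execute(
                "INSERT INTO balances VALUES (?, ?, ?)",
                (r["customer_id"], r["region"], r["balance"]),
            )
            count += 1
    conn.commit()
    conn.close()
    return count
```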

So, there's a lot of resistance in the sense of, "We don't want to lose any more ground to the target-architecture fiefdom, so we're not going to identify this application as having so many elements of context functionality." Our process, in a very objective way, just says these are the percentages that we're finding. We'll show you the code, you can agree or disagree that's what it's doing, and then let's make decisions based upon those facts.

If we get the facts on the table, particularly visually, then we find that we get a lot of consensus. It may be partial consensus, but it's consensus nonetheless, and we open up the possibilities and different options, rather than just continuing to move through with hand-written code.

Gardner: Paul, you've mentioned in the past that we've moved from the nice-to-have to the must-have, when it comes to legacy applications transformation and modernization. The economy has changed things in many respects, of course, but it seems as if the lean IT goal is no longer something that's a vision. It's really moved up the pecking order or the hierarchy of priorities.

Is this perhaps something that's going to break this political logjam? Are the business and financial-outcome folks in these organizations just going to steamroll these political issues?

Evans: Well, I totally think so, and it's happening already. If you look at this whole core-context thing, at the moment, organizations are still in survival mode. Money is still tight in terms of consumer spending. Money is still tight in terms of company spending. Therefore, you're in this position where keeping your customers or trying to get new customers is absolutely fundamental for staying alive. And, you do that by improving service levels, improving your services, and improving your product.

If you stay still and say, "Well, we'll just glide for the next 6 to 12 months and keep our fingers crossed," you're going to be in deep trouble. A lot of people are trying to understand how to use the newer technologies, whether it's things like Web 2.0 or social networking tools, to maintain that customer outreach.

Those of us who went to business school or marketing school remember: it takes $10 to get a customer into your store, but it only takes $1 to keep them coming back. People are now worrying about those dollars. How much do we have to spend to keep our customer base?

Therefore, the line-of-business people are now pushing on technology and saying, "You can't back off. You can't not give us what we want. We have to have this ability to innovate and differentiate, because that way we will keep our customers and we will keep this organization alive."

Public and private sectors

That applies equally to the public and private sectors. The public sector organizations have this mandate of improving service, whether it's in healthcare, insurance, tax, or whatever. So all of these commitments are being made and people have to deliver on them, albeit that the money, the IT budget behind it, is shrinking or has shrunk.

So, the challenge here is, "Last year I ran my IT department on my theoretical $100. I spent $80 on keeping things going and $20 on improving things." That was never enough for the line-of-business manager. They will say, "I want to make a change. I want it now, or I want it next week. I don't want it in six months' time. So explain to me how you are going to do that."

That was tough a year ago, but the problem now is that your $100 IT budget is now $80. Now, it's a bit of a challenge, because all the money you've got is going to be spent on keeping the old stuff alive. I don't think the line-of-business managers, or whoever they are, are going to sit back and say, "That's okay. That's okay. We don't mind." They're going to come and say that they expect you to innovate more.

This goes back to what Steve was talking about, what we talked about, and what Moore will raise in the event, which is to understand what drives your company. Understand the values, the differentiation, and the innovations that you want and put your money on those and then find a way of dramatically reducing the amount of money you spend on the contextual stuff, which is pure productivity.

Steve's tools are probably the best thing out there today for highlighting to an organization, "You don't need this in handwritten code. You could put this into a low-cost package, running in a low-cost environment, as opposed to running it in COBOL on a mainframe." That's how people save money, and that's how we've seen people get, as we talked about earlier, a return on investment (ROI) of 18 months or less.

So it is possible, it can be done, and it's definitely not as difficult as people think. The point of the tools is that they allow us to see the code. They allow us to understand what's good and bad and to make very clear, rational, and logical decisions.

Gardner: Steve Woods, we spoke earlier about how the core assets are going to be variable from organization to organization, but are there some common themes with the contextual services? We certainly see a lot of very low-cost alternatives now creeping up through software as a service (SaaS), cloud-based, outsourced, mix-sourced, co-located, and lots of different options. Is there some common theme now among what is not core that organizations need to consider?

Woods: Absolutely. One of the things that we do find, when we're brought in to look at legacy applications, is that, by virtue of the fact that they are still around, the applications have resisted all the waves of innovation that preceded them. They tend to be of a very definite nature.

A number of them tend to be big data hubs. One of the first things we ask for is the architectural topology diagram, if they have it, or we just draw it on a whiteboard. They tend to be big spiders. There tends to be a central hub database, and you see them start drawing all these different lines to other systems within the organization.

The things that have been left behind -- this is the good news -- tend to be the very things that are very amenable for moving to modern architecture in a very incremental way. It's not unusual to find 50-65 percent of an application is just doing ETL functionality.

A good thing

The real benefit to that -- and this is particularly true in a tough economy -- is that if I can identify 65 percent of the application that's just doing data integration, and I create or I have already established the data integration center of excellence within the organization, already have those technologies, or implement those technologies, then I can incrementally start moving that functionality over to the new architecture. When I say incrementally, that's a good thing, because that's beneficial in two ways.

It reduces my risk, because I'm doing it a step at a time. It also produces a much better ROI, because the return on the incremental improvement trickles in over time, rather than waiting 18 months or two years for some big-bang type of improvement. Identifying this context code can give you a lot of incremental ROI opportunities, and a much more solid picture for IT investment decisions.

Gardner: So, one of these innovations that's taken place for the past several years is the move towards more distributed data, hosting that data on lower-cost storage architectures, and virtualizing behind the database or the storage itself. That can reduce cost dramatically.

Woods: Absolutely. One of the things that we feel is that decentralizing the architecture improves your efficiency and your redundancy. There is much more opportunity for building a solid, maintainable architecture than there would be if you kept a sort of monolithic approach that's typical on the mainframe.

Gardner: Once we've done this exercise, variable as it may be from organization to organization, separating the core from the non-core, what comes next? What's the next step that typically happens as this transformation and modernization of legacy assets unfolds?

Woods: That's a very good question. It's really important to understand this leap in logic here. If I accept the notion that a majority of the code in a legacy application can be moved to these model driven architectures, such as BPM and ETL tools, the next premise is, "If I go out and buy these tools, a lot of functionality is provided with these tools right out of the box. It's going to give me my monitoring code, my management code, and in many cases, even some of the testing capabilities are sort of baked into the product."

If that's true, then the next leap of logic is that in my 1.5 million lines of COBOL or my five million lines of COBOL there is a lot of code that's irrelevant, because it's performing management, monitoring, logging, tracing, and testing. If that's true, I need to know where it's at.

The way you find where it's at is identifying the duplicate source code, what we call clone code. Because when you find the clone code, in most cases, it's a superset of that code that's no longer relevant, if you are making this transformation from handwritten code to a model-driven architecture.

What I created at HP is a tool, an algorithm, that can go into legacy code in any language and find the duplicate code -- and not only find it, but visualize it in very compelling ways. That helps us drill down to identify what I call the unintended design. When we find these unintended designs, they lead us to ask very critical questions that are paramount to understanding how to design the transformation strategy.

So, if you accept the premise of moving context code to componentized architecture, then the next thing you should be looking for is where is the clone code and how is it arranged?
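
[Editor's note: For readers who want a feel for what clone detection involves, here is a minimal sketch of one common approach -- hashing normalized windows of source lines and flagging fingerprints that recur. It illustrates the general idea only; it is not HP's tool or algorithm, and the file names are hypothetical.]

```python
# A toy duplicate-code (clone) finder: normalize source lines, hash sliding
# windows of them, and report any window fingerprint seen in more than one
# place. Real clone detectors are far more sophisticated; this only shows
# the basic idea of locating repeated code across files of any language.
import hashlib
from collections import defaultdict

def normalize(line: str) -> str:
    """Collapse whitespace and lowercase so formatting differences don't hide clones."""
    return " ".join(line.split()).lower()

def find_clones(sources: dict, window: int = 6) -> dict:
    """Map window fingerprints to (file, window position) locations seen 2+ times."""
    seen = defaultdict(list)
    for name, text in sources.items():
        lines = [normalize(l) for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            digest = hashlib.sha1(chunk.encode("utf-8")).hexdigest()
            # Position is the index within the stripped, normalized line list.
            seen[digest].append((name, i + 1))
    return {d: locs for d, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    # Hypothetical legacy sources; only text is compared, so any language works.
    paths = ["report_a.cbl", "report_b.cbl"]
    sources = {p: open(p).read() for p in paths}
    for digest, locations in find_clones(sources).items():
        print(digest[:8], locations)
```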

Gardner: Do we have any examples of how this has worked in practice? Are there use cases or an actual organization that you are familiar with? What have been some of the results of going through this process? How long did it take? What did they save? What were the business outcomes?

Viewing the application

Woods: We've often worked with financial services companies and insurance companies, and we have just recently worked with one that gave us an application that was around 1.2 or 1.5 million lines of code. They said, "Here is our application," and they gave us the source code. When we looked into the source code, we found that there were actually four applications, if you looked at just the way the code was structured, which was good news, because it gives us a way of breaking down the functionality.

In this one organization, we found that a high percentage of that code was really just taking files, as I said before, unbundling those files, parsing them, and putting them into databases. So they have kind of let that be the tip of the spear. They said, "That's our start point," because they're often asking themselves where to start.

When you take handwritten code and move it to an ETL tool, there's ample industry evidence that a typical ROI over the course of four years can be between 150 percent and 450 percent improvement in efficiencies. That's just the magic of taking all this difficult-to-maintain spaghetti code and moving it to a very visually oriented tool that gives you much more agility and allows you to respond to changes in the business and the business' needs much more quickly and with skill sets that are readily available.

Gardner: You know, Paul, I've heard a little different story from some of the actual suppliers of legacy systems. A lot of times they say that the last thing you want to do is start monkeying around with the code. What you really want to do is pull it off of an old piece of hardware and put it on a new piece of hardware, perhaps with a virtualization layer involved as well. Why is that not the right way to go?

Evans: Now you've put me in an interesting position. I suppose our view is that there are different strategies. We don't profess one strategy to help people transform or modernize their apps. The first thing they have to do is understand them, and that's what Steve's tools do.

It is possible to take an approach that says that all we need to do is provide more horsepower. Somebody comes along and says, "Hey, transaction rates are dropping. Users are getting upset because an ATM transaction is taking a minute, when it should take 15 seconds. Surely all we need to do is just give the thing more horsepower and the problem goes away."

I would say the problem goes away -- for 12 months, maybe, or if you're lucky 18 -- but you haven't actually fixed the problem. You've just treated the symptoms.

At HP, we're not wedded to one style of computer architecture as the hub of what we do. We look at the customer requirement. Do we have systems that are equal in performance, if not greater than, a mainframe? Yeah, you bet we do. Our Superdome systems are like that. Are they the same price? No, they're considerably less. Do we have blades, PCs, and normal distributed servers? Yeah.

The point is that we don't have a preconceived view of what this thing should run on. That's one thing. We're not wedded to one architectural style. We look at the customer's requirements, and then we understand what's necessary in terms of the throughput or TP rates, or whatever it may be.

So, there is obviously an approach that people can say, "Don't jig around." It's very easy to inject fear into this and just say to put more power underneath it, don't touch the code, and life will be wonderful. We're totally against that approach, but it doesn't mean that one of our strategies is not re-hosting. There are organizations whose applications would benefit from that.

We still believe that can be done on relatively inexpensive hardware. We can re-host an application by keeping the business logic the same, keeping the language the same, but moving it from an expensive system to a less expensive system.

Freeing up cash

People use that strategy to free up cash very quickly. It's one of the fastest ROIs we have, and they are beginning to save instantly. They make the decision that says, "We need to put that money back in the bank, because we need to do that to keep our shareholders happy." Or, they can reinvest that into their next modernization project, and then they're on an upward spiral.

There are approaches to everything, which is why we have seven different strategies for modernization to suit the customer's requirement, but I think the view of just putting more horsepower underneath, closing your eyes, and hoping is not the way forward.

Gardner: Steve, do you have anything more to add to that, treating the symptom rather than the real issues?

Woods: As Paul said, if you treat this as a symptom, we refer to that as a short-term strategy, just to save money to reinvest into the business.

The only thing I would really add is that the problem is sometimes not nearly as big as it seems. Look at the clone code that we find, and at all the different areas where we can look at the code and say it may not be as relevant to the transformation process as you think it is.

I do a presentation called "Honey, I Shrunk the Mainframe." If you start looking at these different aspects -- the clone code and what I call the asymmetrical transformation from handwritten code to model-driven architecture -- you start really seeing it.

We see this, when we go in to do the workshops. The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It's not as big as we thought. There are ways to transform it that we didn't realize, and we can do this incrementally. We don't have to do it all at once.

Once we start having those conversations, those who might have been arguing for a re-host suddenly realize that rearchitecting is not as difficult as they think, particularly if you do it asymmetrically. Maybe they should reconsider the re-host, go back to the core-context concept, and start moving the context to these well-proven platforms, such as the ETL tools, the reporting tools, and service-oriented architecture (SOA).

Gardner: Steve, tell us a little bit about how other folks can learn more about this, and then give us a sneak peek or preview into what you are going to be discussing at the upcoming virtual event.

Woods: That's one of the things that we've been talking about -- our tools, called the Visual Intelligence Tools. It's a shame you can't see me, because I'm gesturing with my hands as I talk, and if I had the visuals in front of me, I would be pointing to them. This is something to really appreciate -- the images that we give to our customers when we do the analysis. You really have to see it with your own eyes.

We are going to be doing a virtual event on November 3, 4, and 5, and during this you will hear some of the same things I've been talking about, but you will hear them as I'm actually using the tools and showing you what's going to happen with those tools, what those images look like, and why they are meaningful to designing a transformation strategy.

Gardner: Very good. We've been learning more about Application Transformation: Getting to the Bottom Line, and we've seen how separating core from context is an intriguing strategy for approaching the legacy modernization problem, and how it can deliver much greater economic and business benefits as a result.

Helping us weave through this has been Steve Woods, distinguished software engineer at HP. Thanks for your input, Steve.

Woods: Thank you.

Gardner: We've also been joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Paul, you are becoming a regular on our show.

Evans: Oh, I'm sorry. I hope I am not getting too repetitive.

Gardner: Not at all. Thanks again for your input.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, October 26, 2009

Linthicum's Latest Book: How SOA and Cloud Intersect for Enterprise Productivity Benefits

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 45 with consultant Dave Linthicum on the convergence of cloud computing and SOA.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 45. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events with industry analysts and guests comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, and through the support of TIBCO Software.

Our topic this week on BriefingsDirect Analyst Insights Edition, and it is the week of Oct. 12, 2009, centers on Dave Linthicum's new book, Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide. We're here with Dave, and just Dave this time, to dig into the conflation of SOA and cloud computing. Welcome back to the show Dave.

Dave Linthicum: Thank you very much, Dana, thanks for having me.

Gardner: Congratulations. I know producing books like this is a bit like gestating and giving birth, so it may be as close as we guys can come to that experience.

Linthicum: Yeah. I'm already having postpartum depression.

Gardner: So, you’re out with a new arrival and this is part of the Addison-Wesley Information Technology Series.

Linthicum: That's right. It's my fourth book with those guys, starting with the EAI book back in 1997.

Gardner: But, that's still moving off the shelves, right?

Linthicum: It sure is.

Gardner: When is the latest book available? How can you get it and what is it going to set us back?

Linthicum: Cloud Computing and SOA Convergence in Your Enterprise is available now. You can get it on Amazon, of course, for $29.69, and there is a Kindle edition, which, I'm happy to say, is a few bucks less than that. And, I've even seen it on Buy.com for $26. So, get your best deal out there.

Gardner: For those of our listeners out there who might not be familiar with you -- and I have a hard time believing this -- why don't you tell us a little bit about yourself and your background, before we get into the timely tome that you've now developed?

Where Web meets enterprise

Linthicum: I've been a distributed-computing guy for a number of years. I've been a thought leader in this space, including writing the EAI book, which we talked about, back in 1997. I was CTO of Software AG -- it was called SAGA then -- and also CTO of Mercator, and then CTO of Grand Central.

I was CEO of a company called Bridgeworks and then founded my own consulting company called David S. Linthicum, LLC and ran that for any number of years.

I'm primarily focused on where the Web meets the enterprise, and I've been doing that for the last 10 years. As the Internet appeared on the scene, I realized that it's not only a great asset for information, but a great asset where you can put key enterprise applications and host your enterprise data.

There are lots of reasons -- economies of scale, the ability to get efficiency in reuse, the ability to rapidly provision these systems, and get out of the doldrums of IT, which a lot of companies are in right now.

Cloud computing has the opportunity to make things better. The purpose of this book is to get people to look at it as an architectural option. The step-by-step guide in the book walks you through the steps it takes to understand your own issues, your own information, your own data, and your own processes, and then figure out the right path to the cloud.

Gardner: It seems that cloud has also, just in the nick of time, come along to give service-oriented architecture (SOA) a little bit of a boost and perhaps even more meaning than people could conjure up for it before.

Linthicum: SOA is the way to do cloud. I saw early on that SOA, if you get beyond the hype that's been around for the last two years, is really an architectural pattern that predates the SOA buzzword, or the SOA TLA.

It's really about breaking down your architecture into functional primitives, or a primitive state of several components, including services, data, and processes. Then, it's figuring out how to assemble those in such a way that you can not only solve your existing problems, but use those components to resolve problems as your business changes over time, or as your mission changes or expands.

Cloud computing is a nice enhancement to that. Cloud doesn't replace SOA, as some people say. Cloud computing is basically architectural options or ways in which you can host your services, in this case, in the cloud.

As we go through reinventing your architecture around the concept of SOA, we can figure out which components, services, processes, or data are good candidates for cloud computing, and we can look at the performance, security and governance aspects of it.

Architectural advantages

We find that some of our services can exist out on the platform in the cloud, which provides us with some additional architectural advantages such as self-provisioning, the ability to get on the cloud very quickly in a very short time without buying hardware and software or expanding our data centers, and the ability to rapidly expand as we need to expand basically on demand.

If we need to go from 10 users to 1,000 users, we can do so in a matter of weeks, not having to buy data-center space, waves and waves of servers, software, hardware licenses, and all those sorts of things. Cloud computing provides you with some flexibility, but it doesn't get away from the core needs to architecture. So, really the book is about how to use SOA in the context of cloud computing, and that's the message I'm really trying to get across.

Gardner: For some folks, the SOA adoption curve perhaps didn't grow as fast as many expected, because the economic impetus was a bit disconnected. Perhaps, it was too far in the future to make direct connections between the investments you would make in your SOA activities and the actual bottom line of IT. Then, cloud comes along. One of the rationales for cloud is that there is an economic impetus.

Of course, not everyone agrees with this. Not everyone agrees with anything about cloud, but if you do cloud correctly, you can cut your utilization waste, reduce your footprint and energy costs, offload peak demands on an elasticity basis, perhaps to third parties, and you can outsource certain apps or data to third parties. Is there an economic benefit from cloud that helps support the investments needed for good SOA?

Linthicum: There is, because one of the things people got wrapped around the axle on is having to reinvent their existing systems and go through waves and waves of software and hardware purchases. That became economically nonviable. It was very difficult to figure out how to redo your architecture, when you had $15-20 million of hardware and software in the data center, and personnel costs to deal with, in support of the new architecture, even though the architecture provides more of a strategic benefit.

As we move toward cloud computing, there are more economical and cost-effective architectural options. There is also the ability to play around with SOA in the cloud, which I think is driving a lot of the SOA. In fact, I find that a lot of people build their first initial SOA as cloud-delivered systems, be it Amazon, IBM, Azure from Microsoft, and some of the other platforms that are out there.

Then, once they figure out the benefits of that, they start putting pieces of it on premise, as it makes sense, and put pieces of it on the cloud. It has the tendency to drive prototyping on the cheap and to leverage architecture and play around with different technologies without the investment we had to do in the past.

It was very difficult to get around that when SOA, as many of the analysts were promoting it, was a big-bang concept and a huge systemic change in how you do architecture. Cloud provides a stepwise approach to making that happen. It's much more economical, much more efficient, and it really allows you to build holistic SOA success off of small successes in using the cloud.

Game changing approach

Gardner: Something occurred to me that seems to be a game changing approach or aspect of this. For so long now, people have looked at the total costs of IT, and they went up and up and up. Even though you had things like Moore's Law, commoditization, and maturity that drove some cost down, the total nut of IT for many companies just kept seeming to grow and grow as a percentage of revenue. This, of course, is not a sustainable trajectory.

It seems to me the cloud and SOA as this dream team, as you point out in your book, perhaps provides this inflection point, where we can start to decrease the total nut of IT, rather than just certain aspects of IT. Does that make sense?

Linthicum: It makes perfect sense, and I promote that in the book. One of the things I talk about in Chapter 1 is how things got so bad. The fact of the matter is that we have very ineffective states within the IT realm.

People look at IT and at the movement that's occurred over the last 20 years in the progression of the technology, but the reality is that we've gotten a lot less effective in providing benefit to the bottom line of the companies, the missions of the government organizations, and those sorts of things. We need to do better at that.

Ultimately, it's about reinventing the way in which we do IT. In other words, quit thinking about buying the latest and greatest solution and dragging it into the enterprise and having another 20 racks of servers in the data center to support those things that almost never go away. You're getting to a much more complex inflexible state that's not able to change itself or adapt itself to changes in missions or changes in the business. That's just not sustainable in the long-term.

In fact, one of the things I urge IT people to do is to go to a CIO or a COO conference and start talking to them about their IT infrastructure, especially at the cocktail hour. You'll find that IT is not a very popular group within most companies. In many instances, it seems to be the single most limiting factor for the companies in growing the business, because of the latency that's in IT.

We've got to stop the insanity. We've got to control IT spending. We've got to be much more effective and efficient with the way in which we spend and leverage IT resources. Cloud computing is only a mechanism; it's not a savior for doing that. We need to start marching in new directions and being aggressively innovative around the efficiency, the expandability, and ultimately the agility of IT.

Where the cloud fits

Gardner: Now, looking over your book, Dave, I was impressed by the logic, the layout, and the order of things. You've got a certain level of background and primer information in a couple of these chapters on SOA that we could just as well have been reading in 2005, but the way it fits together is quite interesting. On page 33, you get into when the cloud fits.

That's very much the topic of the day. I speak to a lot of people. Everyone has grokked this general notion of cloud. They understand the private, the public, and "everything as a service," but everybody says, "Yeah, but no one is doing it yet."

What is the right timing for this, and what is the right timing in terms of SOA activities and cloud activities, so they go hand in hand? Are they linear and consecutive? What's the relationship?

Linthicum: They are systemic, one to another. When you're doing SOA and considering SOA within your enterprise or agency, you should always consider cloud as an architectural option. In other words, there are servers we're looking to deploy, middleware we're looking to leverage, and databases we're looking to leverage in terms of SOA, along with governance systems, security systems, and identity management.

Cloud computing is really another set of things that you need to consider in the context of SOA, and you need to start playing around with the stuff now, because it's so cheap. There's no reason that anybody who's working on an SOA shouldn't be playing around with cloud, given the amount of investment that's needed. It's almost nothing, especially with some of the initial forays, some of the prototypes, and some of the pilot projects that need to be done around cloud.

One really is a matter of doing the other. I've found that for people who are deploying SOA, the initial success tends not to be a pure SOA play; it tends to be cloud-based. We're doing lots of things in pilot projects that are cloud-oriented and then figuring out how to do that at the enterprise level. People come to understand how cloud computing fits in as a strategic option, or another tool in the tool shed that they're able to leverage to drive their architectures.

Cloud computing is a fit in many instances. In some instances it's not, and it's a matter of figuring out what the limitations and the opportunities are within the cloud, before you can figure out what's right to outsource from within your own organization.

Gardner: Getting back to where SOA fits in, in Chapter 3, you have a litany of things as a service -- storage, database, information, process, application, platforms, integrations, security, management, governance, testing, and infrastructure. Is there an order? Is there a proper progression? Is there a rationale as to how you should go about all these as services?

The macro domain

Linthicum: You should concentrate on the big macro domains. One would be software as a service (SaaS), because SaaS is probably the easiest way to get into the cloud. It also has the most potential to save you the greatest amount of money. Instead of buying a million-dollar, or a two-million-dollar, customer relationship management (CRM) system, you can leverage Salesforce.com for $50-60 a month.

After that, I would progress into infrastructure as a service (IaaS), and that's basically data center on demand. So, it's databases, application servers, WebSphere, and all those sorts of things that you are able to leverage from the data center, but, instead of a data center, you leverage it from the cloud.

Guys like Amazon obviously are in that game. Microsoft, with the Azure platform, is in that game. Any number of players out there are going to be able to provide you with core infrastructure or primitive infrastructure. In other words, it's just available to you over the 'Net with some kind of metering system. I would start playing around with that technology after you get through with SaaS.

Then, I would take a look at the platform-as-a-service (PaaS) technology, if you're doing any kind of application development. That's very cool stuff. Those are guys like Force.com, Google App Engine, and Bungee Labs. They provide you with a complete application development and deployment platform as a service. Then, I would progress into the more detailed stuff -- database, storage, and some of the other more sophisticated services on top of the primitive services that we just mentioned.

Gardner: For those enterprises that do have sizable apps, Dave -- organizations doing a lot of custom development -- is that a good place to go for these test, pilot, and experimental activities? I'm going to hazard a guess that this might be the wellspring where cloud has already gotten some traction, whether organizations recognize it or not.

Linthicum: PaaS with that Google App Engine is driving a lot of innovation right now. People are building applications out there, because they don't have to bother existing IT to get servers and databases brought online, and that will spur innovation.

So, today, we could figure out we want to go off and build this great application and do this great thing to automate a business and, instead of having to buy infrastructure and buy a server and set it up and use it, we could go get Google App Engine accounts or Azure accounts.

Huge potential

Then, we can start building, deploying, defining the database, do the testing, get it up and running, and have it immediately. It's web based and accessible to millions of users who are able to leverage the application in a scalable way. It's an amazing kind of infrastructure when you think about it. The potential is there to build huge, innovative things with very few resources.

Gardner: I'm thinking about the SOA progression over the past five or seven years. One of the cultural and organizational obstacles has been getting the development people, the production people, and the operations and administration folks into some sort of ongoing feedback-loop relationship.

Does cloud PaaS perhaps provide a stepping-stone approach to start doing that -- to think about the totality of an application, the cradle-to-grave iteration, as in the SaaS model, where you've got the opportunity to have a single instance of one code base that you can then work on, rather than having to think about your upgrade cycle?

Linthicum: Yeah, because it's immediately there. That's one thing. There is instantaneous feedback directly from the users. We can monitor the use. We can monitor the behavior and how people are leveraging the system. We can adjust the system accordingly. The great thing with the SaaS and PaaS models is that we're not doing waves and waves of upgrades that have to be downloaded and then installed, and, in some cases, broken.

Everybody is using a centralized platform that's tested as a centralized platform, leveraging the multi-tenant application. We don't have to localize it for Linux, for Windows NT, and for Apple. We just use the platform as web-based, which is perfectly viable these days, when you consider the rich Internet applications (RIAs) out there and the dynamic nature of the interface.

If you're building a SOA and you're building an application instance within the SOA, the opportunities are there to create something that's viable for a long period of time. That's going to be much more sustainable, much easier to monitor, and much easier to manage, but the core advantages are that, number one, it's much more expandable and also much more cost-effective.

We're not having to keep staffs of people around to maintain server hardware and software. We're able to leverage that out in the cloud with a minimal amount of resource consumption. We're also leveling the playing field between small businesses and large businesses.

Ten years ago, it was very difficult to do a start up. You'd have a million dollars in investment funds just to get your infrastructure up and running. Now, startups can basically operate with a minimal amount of resources, typically a laptop, pointing at any number of cloud resources.

A great time

They can build their applications out there. They can build their intellectual capital. They can build their software. They can deploy it. They can test it. Then, they can provision the customers out there and meter their customers. So, it's a great time to be in this business.

Gardner: It cuts across and affects so many aspects, as you say -- the metering, and the more agile control of provisioning, rather than the long upgrade cycles that we traditionally get from commercial software vendors.

I sort of munged two questions together there, so I want to get back to that culture and organizational issue. This has been a challenge with SOA, and it's going to be a challenge with cloud as well.

Are there organizational stepping-stones or initial preparations that you can make? I'm thinking about IT shared services, perhaps embracing some vital tenets -- ways that you can, in a sense, recast your organization to be in a better position to exploit SOA, and therefore cloud.

Linthicum: I think the cultural changes are starting now as far as what cloud computing is going to bring. It's kind of polarizing.

There are two types of people that I run into. Number one, the people who think the cloud can do everything and who really want to move into the cloud -- which is scary. Then there are the people who look at the cloud as evil. They always put in front of me all the Gmail outages as proof that the cloud is evil and it's going to destroy their business -- which is also scary.

There needs to be a lot of education about the opportunities and the advantages of using cloud computing, as well as what the limitations are and what things we have to watch out for. Not all applications and all pieces of data are going to be right for the cloud. However, we need to educate people in terms of what the opportunities are.

The fact of the matter is that it's not going to be a dysfunctional and risky thing to move pieces of our architecture out into cloud computing. Get them started on a pilot. Get them to go out there and try it. Get them to experiment with the technology. Figure out what the capabilities are, and that will ultimately change the culture.

You need to go back to the early '90s. I remember when the Web first came around. I was working for a large corporation, and we weren't allowed to use the Web. If we had to use it, we had to go to the AOL terminal in the library and use it that way.

An understandable asset

Of course, the Web just became bigger, bigger, and bigger, and more of an understandable IT asset that could be used enterprise-wide. We got web browsers and we're leveraging the Web. The same will happen with cloud computing. It's going to take a cultural shift. Many large corporations have embraced the fact that they are going to put processes and data out on platforms where they don't know the host.

Gardner: Dave, in Chapter 5, you gave a lot of attention to data. I know there are some people working on that. Tell me about this special relationship between data and SOA, how they come together, and then where cloud fits in?

Linthicum: Understanding data is really the genesis of SOA. A lot of people like to work from the services to the data. I think that the data should be defined and understood in terms of what it is as an as-is state and what it needs to be as a to-be state, where you can build any kind of SOA, using the cloud or not.

Typically, if you're going to leverage the cloud as an infrastructure, it's going to be as a data repository, as well as for the expandability and shareability aspects of it, and those sorts of things. However, before you do that, you need to break the data down into a primitive state, understanding what the assets are, what the metadata is, and what governance system is around using it -- and just do the traditional architectural stuff.

What I define in the book is definitely cloud-related, with lots of different examples of leveraging it in the context of SOA. But, it's about understanding information the way we've been doing it over the last 20 years and then coming up with models, physicals, and logicals, trying to figure out what should be where and when we should do that.

It's fairly obvious what pieces and components of the information model you can host in the cloud and which ones need to be on-premise. By the way, it's perfectly acceptable from a performance standpoint to put pieces of physical databases out in the cloud and physical databases on-premise and then leverage those databases simultaneously within the context of applications. You're not going to find tremendous performance differences, and the reliability should be relatively the same.
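
[Editor's note: A minimal sketch of the point being made here -- that applications can address data through a logical model rather than a physical location, so moving a dataset between on-premise and cloud databases becomes a configuration change, not an application change. The dataset names and connection URLs are hypothetical, and this is an illustration under those assumptions, not a recommended implementation.]

```python
# Route queries to whichever platform hosts a logical dataset. The URLs are
# placeholders: one points at an on-premise database, one at a cloud-hosted
# database, and callers never reference either location directly.
from sqlalchemy import create_engine, text

DATASET_LOCATIONS = {
    "orders":    "postgresql://app:secret@onprem-db.internal:5432/orders",      # on-premise
    "analytics": "postgresql://app:secret@db.cloud-provider.example.com/mart",  # cloud-hosted
}

# Engines are created lazily; no connection happens until a query runs.
_engines = {name: create_engine(url) for name, url in DATASET_LOCATIONS.items()}

def query(dataset: str, sql: str, **params):
    """Run a query against whichever database currently hosts the named dataset."""
    with _engines[dataset].connect() as conn:
        return conn.execute(text(sql), params).fetchall()

if __name__ == "__main__":
    # The caller asks for "orders" data without knowing where it physically lives.
    recent_orders = query(
        "orders",
        "SELECT order_id, total FROM orders WHERE created_at > :since",
        since="2009-10-01",
    )
    print(recent_orders)
```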

It's a matter of looking at your information as really a foundation of your architecture, building up on top of that to your services, building up on top of that to your processes, and then really understanding how data exists in the holistic notion of your architecture, in this case, your architecture leveraging cloud computing.

What makes sense

Gardner: Dave, this whole notion of being able to slice and dice data, and put it in different places based on what makes sense for the data, the process, and the applications, rather than simply as a function of the database's needs or the central and core data set's needs, strikes a very interesting chord. It allows us to do a lot more interesting things.

In fact, Zimory, another startup, has come out with some interesting announcements about slicing and dicing caches and then placing them in a variety of ways in different places that can augment and support applications and processes. Are we really going to get to the point soon where we can do things we just never could do before?

Linthicum: We're going to get to a point where the data is going to be a ubiquitous thing. It doesn't really matter where it resides and where we can access it, as long as we access it from a particular model. It's not going to make any difference to the users either. I just blogged about that in InfoWorld.

In fact, we're getting into this notion of what I call the "invisible cloud." In other words, we're not doing application as a service or SaaS, where people get new interfaces that are web-driven. We're putting pieces of the back-end architectural components -- processes, services, and, in this case, data -- out on the platform of the cloud. It really doesn't matter to them where that data resides, as long as they can get at it when they need it.
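
The "invisible cloud" is, in practice, an interface question: consumers code against a model, and whether the implementation behind it runs on-premise or in the cloud is a deployment detail. A minimal sketch, with hypothetical class and method names:

```python
from typing import Protocol

class CustomerRepository(Protocol):
    """The model the application codes against; location is not part of it."""
    def find(self, customer_id: str) -> dict: ...

class OnPremiseRepository:
    def find(self, customer_id: str) -> dict:
        # e.g., read from the local relational database
        return {"id": customer_id, "source": "on-premise"}

class CloudRepository:
    def find(self, customer_id: str) -> dict:
        # e.g., call a cloud-hosted data service over HTTPS
        return {"id": customer_id, "source": "cloud"}

def show_customer(repo: CustomerRepository, customer_id: str) -> None:
    # The caller neither knows nor cares where the data physically resides.
    print(repo.find(customer_id))
```

Which implementation gets passed in is a configuration choice, not something the calling code has to change.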

I don't see a point where we're going to get hindered by where the data resides.



The other aspect of it is that, because information in a cloud is typically easier to share with other organizations, it has the ability to make the data more valuable through sharing. That core component becomes a key driver for leveraging the cloud. I don't see a point where we're going to get hindered by where the data resides. We always have to consider governance and security issues and all those things. Not every piece of information is right for the cloud.

But, for most of the transactional data out there that has semi-private information, which is low-risk -- and that's most of the data from most of the enterprises -- placing pieces of it in the cloud makes sense to better support your architecture and your business. It's perfectly viable.

I don't think people using these information systems are going to have any clue where the information actually resides. IT folks are going to have a tremendous amount of power and numerous options to place in the cloud information that is going to make it much more cost-effective, much more shareable, and therefore much more valuable.

Gardner: Perhaps the takeaway here is that the liberation of data will spur people toward cloud computing innovation. That really is about innovation in business process management. Perhaps that's where we should look next, and coincidentally, that's what your Chapter 7 looks at. Where does business process management fit into cloud, and can it give us something we couldn't do before?

Shared processes

Linthicum: Yeah, it does. We've had the notion of shared processes for a while. In fact, there is a company called Extricity. Back in the old EAI days, it came up with this notion of private versus public processes. Cloud computing provides us with a platform to finally do that. So, not only are we able to drive processes within the enterprise, but those processes are going to exist either on premise or in the cloud, depending on where it's best economically and where the right architectural fit is.

The more important strategic benefit of doing that is that ultimately we're able to put processes on centralized cloud-delivered systems that are shared across multiple enterprises, or multiple divisions in the same enterprise.

This provides us with information-sharing mechanisms and also process-sharing mechanisms, which bring together all of this information in the context of a business process. It allows us to do things like real-time supply-chain automation, real-time, event-driven sales-force direction management, and a lot of real-time processes around any business event that spans multiple enterprises. We've been trying to do this for years.
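
To make the idea of a shared, cloud-hosted process concrete, here is a minimal sketch of a process definition whose steps are carried out by services owned by different enterprises. The step names and endpoints are hypothetical; a real BPM platform would add state, compensation, and monitoring.

```python
# Each step names the enterprise that owns it and the service it calls.
# A cloud-hosted process engine would execute this on behalf of all parties.
ORDER_TO_DELIVERY = [
    {"owner": "retailer",     "step": "capture_order",  "service": "https://retailer.example.com/orders"},
    {"owner": "manufacturer", "step": "schedule_build", "service": "https://maker.example.com/builds"},
    {"owner": "carrier",      "step": "book_shipment",  "service": "https://carrier.example.com/shipments"},
]

def run_process(definition, payload, call_service):
    """Walk the shared definition, handing the evolving payload to each owner's service."""
    for step in definition:
        payload = call_service(step["service"], payload)  # an HTTPS POST in practice
        payload["last_completed"] = step["step"]
    return payload

# Example run with a stand-in for the real service calls.
result = run_process(ORDER_TO_DELIVERY, {"order_id": "A-100"},
                     call_service=lambda url, p: dict(p, handled_by=url))
print(result["last_completed"])  # book_shipment
```

A real implementation would sit on a BPM platform rather than a list and a loop, but the shape is the same: one shared definition, many owners.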

Back in the day, business-to-business (B2B) was the big buzzword. We had vendors like Extricity and other process-management technologies to provide us with the capabilities to make this happen. But it really hasn't been widespread. That's because there was no agreed-on platform on which to leverage and create processes that are shared across multiple enterprises.

All of these things are automated between these very disparate organizations to support the customer better. That's how you're going to win this game.



Cloud computing provides us with that capability. So, we have innovators like Appian On Demand and a few other folks out there who are building processes that are sharable in the cloud. We're able to link those to our existing services and data and have our existing systems and IT assets, such as data and services, participate in these larger processes that may span multiple enterprises.

It gets to the point where I can walk into a car dealer and they can tell me exactly when the car I'm ordering is going to show up -- not "8-12 weeks." They know who's going to build it, where the supply is going to come from, where it's going to be put together, and how it's going to be shipped. All of these things are automated between these very disparate organizations to support the customer better. That's how you're going to win this game. That's really the true value of cloud.

Gardner: I agree. We're getting toward extreme visibility all across the exchange -- buyers, sellers, participants, suppliers, and value-added participants. That visibility, of course, gets to more intelligent decision making, less waste, and much higher productivity. Productivity is the key here. If you're in an economy, like we are now, where we've got to grow our way out of this thing, you can't do it by cutting costs forever. The up side is going to come from productivity.

This whole discussion about business process is the cloud discussion that we should be taking to the board-of-directors level, to the COO, and to the CFO. They probably don't care too much about the cloud, but they will probably like the fact that the cost of IT can go down. Help me out if you agree, or feel free to flesh this out: isn't this the thing that's going to get the business people jazzed?

Bottom-line questions

Linthicum: That's great thinking, Dana. Ultimately, people don't care about whatever hype-driven technology paradigm is coming down the line. Cloud computing can be inclusive of that. How can you save me a buck? How can you get my business out of the doldrums? Can you do that through innovation, and can that innovation cost me less at the end of the day? Those are the questions being asked.

We're not getting, "How can I spend more to get more?" They're saying, "How can we be more effective and efficient with the organization and what innovative changes can make me more effective and efficient?"

Cloud computing is an example of a technology that has the potential to do that. A lot of CIOs and CEOs that I talk to are going to say, "Cloud-schmoud. I couldn't care less if you do it with pixie dust or cloud computing. I just want it to happen."

Those in IT need to understand that this, ultimately, is the motivation. At the end of the day, they need to put together a plan of attack for how to get to that more effective and efficient state.

IT shops, in the next five years, are going to look very different than they do today. Typically, they're going to be much smaller. They're going to have a lot less hardware and software around, even though it's never going to be eliminated entirely. They're going to be evaluated on their effectiveness and efficiency toward the bottom line of the business.

So, we need to buckle down, be more innovative, figure out what our options are, and figure out a way to move our existing infrastructure in more productive directions.



In the past, IT has been exempt from that -- for what reason I don't know -- and has been given carte blanche to spend a lot of money. The results come in, but they're not measured as carefully as those in sales and marketing. I think those days are over.

So, we need to buckle down, be more innovative, figure out what our options are, and figure out a way to move our existing infrastructure in more productive directions. Or else, your competitor is going to figure it out before you and they're going to put you out of business.

Gardner: There is a ton of information in this book, but it's still tight and concise. It doesn't go on and on and on. So, I commend you for that. We've got a whole chapter on governance. We've got a whole chapter on testing. But the one that really jumped out at me was Chapter 10, "Defining the Candidate Data, Services and Processes for the Cloud."

To me, this really gets at the heart of the issue that IT folks are going to be grappling with. How to get started? What's the right approach for me as an organization for our culture, skills, capabilities, and budgets? How do you tailor this? How do you get started? Maybe you can just dig in and give us a little preview on Chapter 10.

Following the checklist

Linthicum: Chapter 10 is really about what you need to do, once you've gone through the steps of understanding your data, services, and processes, creating a governance model, and understanding security issues, to figure out which of those things are good candidates to move onto the cloud.

Once you have this understanding of how to select services, processes, and pieces of data that should be moved out there, it's a matter of going through those checklists to see if the processes, applications, and data are independent or loosely coupled.

If they're independent, then chances are they're going to be easier to move out to the cloud. If they're loosely coupled, they're still fairly easy to move out to the cloud. If they're interdependent, which means they're bound to other things, it's very difficult to decouple them and move them out to the cloud.
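
That triage can be sketched as a simple checklist function. The coupling categories mirror the ones described here; the other factors and the wording of the recommendations are illustrative additions, not a formula from the book.

```python
def cloud_candidacy(coupling: str, sensitive_data: bool, integration_points: int) -> str:
    """Rough triage of a workload. 'coupling' is one of
    'independent', 'loosely coupled', or 'interdependent'."""
    if coupling == "interdependent":
        return "poor candidate: decoupling cost likely outweighs the benefit"
    if sensitive_data:
        return "review governance and security before considering the cloud"
    if coupling == "independent" and integration_points <= 2:
        return "strong candidate: move it and measure the result"
    return "possible candidate: factor in the integration cost first"

print(cloud_candidacy("independent", sensitive_data=False, integration_points=1))
print(cloud_candidacy("interdependent", sensitive_data=False, integration_points=5))
```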

You need to figure out the points of integration. Ultimately, if we move something out to the cloud, can we link that information back to the enterprise, can we do that in an efficient and effective way, and will that lower costs for us?

You are not going to be able to put cloud computing on top of an existing dysfunctional architecture and expect miracles to occur.



In many instances, we can put systems out in the cloud and say it's more cost-effective to have them out there. But when you factor in the integration cost, it turns out to be much less cost-effective and much less efficient for the enterprise. You find that with a lot of the salesforce.com installations. Integration wasn't really factored in, and it ended up being a huge issue.

You need to consider your security. You need to consider the core internal enterprise architecture and make sure that it's healthy. You are not going to be able to put cloud computing on top of an existing dysfunctional architecture and expect miracles to occur. As part of this process, as you mentioned earlier, Dana, you need to understand that cloud computing needs to be leveraged in the context of the SOA, which spans on-premise and off-premise.

This is about getting your existing architecture healthy and leveraging cloud computing as an option. It's not really bolting cloud computing onto existing bad architecture and hoping for changes that are never going to occur.

Looking at the cost models

Ultimately, it's about looking at the cost models and trying to figure out which are the right candidates to move out to the cloud in terms of efficiency and effectiveness, while looking at the strategy of the company.

I was helping a disaster company a while back. It had to go from 10 users to 10,000 users in a week. Cloud computing is a great candidate for those things and those types of processes. Instead of having a data center that's dark and that you turn on and fire up whenever you need the capacity, you can just go ahead and call Amazon or Google and turn on the capacity to make that happen.
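
The economics of that kind of burst are easy to sketch. The figures below are placeholders, not real pricing; the point is the shape of the comparison -- paying for peak capacity year-round versus paying for it only during the week it is needed.

```python
# Hypothetical figures for illustration only.
servers_for_peak      = 200        # capacity needed for the one-week surge
on_prem_cost_per_year = 4_000      # per server: hardware amortization, power, admin
cloud_cost_per_hour   = 0.50       # per server-hour, on demand
surge_hours           = 7 * 24     # one week

own_the_peak  = servers_for_peak * on_prem_cost_per_year
rent_the_peak = servers_for_peak * cloud_cost_per_hour * surge_hours

print(f"Own peak capacity all year: ${own_the_peak:,.0f}")
print(f"Rent it for the surge week: ${rent_the_peak:,.0f}")
```

With numbers anywhere in that neighborhood, renting the surge is the obvious choice, which is exactly the pattern described here.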

Those are good candidates for cloud computing. But you need to consider governance, security, how tightly or loosely coupled those processes are within the system, cost-effectiveness, integration, other data assets, the larger strategy of the company, and the direction of the IT architecture and where it's looking to go.

All those things are fundamental considerations in whether or not something you've identified and understood as a core component gets outsourced to the cloud.

Instead of having a data center that's dark and that you turn on and fire up whenever you need the capacity, you can just go ahead and call Amazon or Google and turn on the capacity to make that happen.



Gardner: Then you come to Chapter 11. It offers practical and pragmatic tips on analyzing candidate platforms and on picking private or public approaches. One of the things that occurred to me in looking it over is that perhaps now is the time for companies to be thinking along these lines as well: how to protect themselves against lock-in and against making choices they might regret later. This is around the whole neutrality and portability issue.

While we're experimenting, Dave, and while we're getting our feet wet with cloud, this is also a good time to start putting pressure on all the parties involved for as much neutrality in standards and portability as possible.

Linthicum: It is, and that's not there yet, generally speaking, in the cloud community. Security is still lacking a bit, and portability is still lacking as well. We have to figure out better mechanisms for both.

So, you have to factor that into the cost and the risk. Right now, if you're moving into the cloud, and you're going to localize a system for a cloud provider, it's going to be very difficult in the future to take that code and data off of that system and put it on another cloud provider, or, in some instances, bring it on premise.

Looking at standards

As you look at the cloud providers, one of the factors in selecting them is, number one, do they have a vision for interoperability standards? When will that vision be laid out? What standards organizations are they bound to currently, and how are those standards organizations progressing? What does your application do that's going to cause portability issues?

If they're trying to sell you their cloud, have them look at your application and tell you how easy it will be for that application, whether new or ported to their system, to move off that system at some point in the future.

Typically, the pat answer is that it's easy to port your system off because they're using some standard language and a standard database. But you'll find that there are many proprietary application programming interfaces (APIs) and interfaces in there, and they're going to make portability very difficult. All the different cloud providers have built their infrastructure and their products in their own little proprietary ways, because they haven't coordinated closely with one another.
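
One defensive measure while those standards mature is to keep provider-specific calls behind a thin, neutral interface in your own code. A minimal sketch, assuming boto3 for the Amazon S3 case; the class names are hypothetical, and the point is that the rest of the application only ever sees the BlobStore interface.

```python
from abc import ABC, abstractmethod
import pathlib

class BlobStore(ABC):
    """The only storage interface the rest of the application is allowed to see."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """On-premise implementation backed by the local filesystem."""
    def __init__(self, root: str):
        self.root = pathlib.Path(root)
    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)
    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3BlobStore(BlobStore):
    """Cloud implementation; the provider SDK stays inside this class."""
    def __init__(self, bucket: str):
        import boto3
        self.s3, self.bucket = boto3.client("s3"), bucket
    def put(self, key: str, data: bytes) -> None:
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```

Swapping providers then means writing one new subclass rather than touching every call site.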

It's up to the customers to throw their weight around. You have to build it with your dollars.



So that's going to be a trade-off, and I would grill your cloud computing provider of choice to make sure that they have some kind of vision for how they're going to provide interoperability going forward. But I think it's going to be some time before it occurs. I'm a lot more skeptical than some of the other people out there, and it's going to take a lot of customers who are actually paying these guys money to demand that portability exists and that providers adhere to some sort of standards.

Gardner: It's up to the market to throw its weight around, right?

Linthicum: It's up to the customers to throw their weight around. You have to build it with your dollars.

Gardner: I also suppose that lessons learned in the software realm over the past 10 or 20 years -- having good contracts, having lawyers look things over, writing the proper safeguards in, whether it's indemnification or what have you -- should all be brought right along into the cloud domain. All those lessons learned in software should not in any way be forgotten?

Understanding the contract

Linthicum: That's right. We had a recent issue -- a well-publicized problem between an on-demand CRM provider and Pulte Homes, where they had a problem with the contracts. They didn't understand the contracts, and things went wrong in the implementation. The CRM provider, the SaaS provider, wouldn't let them out of their agreement and made them pay fees for basically no services provided.

I'd argue that there are some customer-service issues on the SaaS provider's side, but the customer ultimately needed to read the contracts to make sure they understood what the issues were and what consequences could come out of them. At the end of the day, we're getting into contractual agreements again. You have to approach them with your eyes open and understand how the stuff is going to work.

Gardner: Now, closing up a little bit, we've certainly seen a lot of projections from folks like Forrester, Gartner, and IDC. There are a lot of different numbers and lots of darts being thrown at various boards around these organizations. But all of them seem to be quite bullish on cloud -- that this is something that's here to stay and is going to see high growth.

I think cloud computing is going to grow a lot over time and just become part of the infrastructure.



When I speak to a lot of folks like you, they are very busy. There is a lot of demand for data-center transformation, modernization, and virtualization. These are undergirding movements that will enable or support cloud options. So, how about a forecast, Dave? Even if it's in general terms, this is really quite a growth opportunity.

Linthicum: It is. It is. The funny thing about cloud is that it's this big amorphous thing and it's tough to name. In fact, I wrestled with using cloud computing in the title of the book, because we're getting into something that's been around for a long period of time as an existing concept. But I think cloud computing is going to grow a lot over time and just become part of the infrastructure.

We've been using aspects of cloud computing for years and years. Application service providers (ASPs) and SaaS were the first forays into cloud. Now, we're using additional infrastructure providers, such as database, middleware, and applications -- all the things that we're able to deliver as infrastructure and as a service. Then, we're also getting development platforms that come out of the cloud and office-automation systems like Google Docs and Office Live.

Things are going to move from our clients and from our data centers out to the cloud providers through economies of scale and efficiency. When it comes right down to it, there are very innovative solutions out there, and coolness is going to drive people to the cloud.

Economies of scale

In other words, you're going to be able to turn off very inefficient and cost-inefficient applications and turn on ones that are cloud-delivered. Through the sharing mechanisms, the update mechanisms, the economies of scale, the scalability, and the amount of money you're going to have to spend on the cloud versus an on-premise system, it's just going to be the way to go.

In the next 10 years, IT, as I mentioned earlier, is going to be a very different place. I'm not one of those guys who thinks everything in the existing IT infrastructure is going to exist in some cloud someplace. But a good majority of our applications and our processes -- things that exist on premise these days -- are going to exist in the cloud. It's just going to be the way we do IT.

I look forward to working in that environment. I think it's going to be a lot of fun.



It's not going to be that different from the way we leverage the Web today. Cloud computing is about putting additional IT assets out on the platform of the Web. The adoption curve is going to be very much like the Web adoption curve was in the '90s. Significant cost savings are going to be made. We're going to be in a much better, more effective place, and it's going to be much more exciting as an IT person. I look forward to working in that environment. I think it's going to be a lot of fun.

Gardner: I agree. It's going to be very exciting. Well, thanks. We have been talking with Dave Linthicum. He has come out with a new book, Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide, published in the Addison-Wesley Information Technology Series.

It's out now on all the usual book purchasing sites and, as you pointed out, it's also on Kindle. That's very exciting. I want to thank you, Dave. It's been really enjoyable and a great way for us to get into a lot of the interesting aspects of cloud and SOA. So I wish you well on your book.

Linthicum: Thank you, Dana.

Gardner: I also want to thank our sponsors for the BriefingsDirect Analyst Insights Edition podcast series. They are Active Endpoints and TIBCO Software.

Thanks again for listening. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download a transcript. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Take the BriefingsDirect middleware/ESB survey now.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 45 with consultant Dave Linthicum on the convergence of cloud computing and SOA. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.