Sunday, April 27, 2008

HP Creates Security Reference Model to Better Manage Enterprise Information Risk

Transcript of BriefingsDirect podcast on best practices for integrated management of security, risk and compliance approaches.

Listen to the podcast here. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about risk, security, and management in the world’s largest organizations. We're going to talk about the need for verifiable best practices, common practices, and common controls at a high level.

The idea is for management of processes, and the ability to prevent unknown and undesirable outcomes -- not at the silo level, or the instance-level of security breaches that we hear about in the news. We will focus instead on what security requires at the high level of business process.

These processes have been newly managed through Information Security Service Management (ISSM) approaches, and there is a reference model (ISSM RM) that goes along with it.

To help us learn more about ISSM, we are joined by two Hewlett-Packard (HP) executives. We are going to be talking with Tari Schreider, the chief security architect in the Americas Security Practice within HP’s Consulting & Integration (C&I) unit.

Also joining us to help us understand ISSM is John Carchide, the worldwide governance solutions manager in the Security and Risk Management Practice within HP C&I. Welcome to you both.

Tari Schreider: Thank you.

John Carchide: Thank you, Dana.

Gardner: John, we have a lot of compliance and regulations to be concerned about. We are in an age where there is so much exposure to networks and the World Wide Web. When something goes wrong, and the word gets out -- it gets out in a big way.

Help us to understand the problem. Then perhaps we'll begin to get closer to the solutions for mitigating risk at the conceptual and practical levels.

Carchide: Part of the problem, Dana, is that we've had several highly publicized incidents where certain things have happened that have prompted regulatory actions by local, state, and foreign governments. They are developing standards, defining best practices, and defining what they call control objectives and detailed controls for one to comply with, prior to being a viable entity within an industry.

These regulatory requirements are coming at us from all directions. Senior management is struggling, because they now face personal liability and fines as each event occurs, like the TJX breach. The industry is being inundated with compliance and regulatory requirements.

On the other side of this, there are some industry driving forces, like Visa, which has established standards and requirements: if you want to do business with Visa, you need to be Payment Card Industry Data Security Standard (PCI DSS) compliant.

All these requirements are hitting senior-level managers within organizations, and they're looking at their IT environment and asking their management teams to address compliance. “Are we compliant?” The answers they're getting are usually vague, and that’s largely because the standards themselves are fragmented and open to interpretation.

What Tari Schreider has done is establish a process of defining requirements, based on open standards, and mapping them to risk levels and maturity levels. This provides customers with a clear, succinct, and articulated picture. This tells them what their current state is, what they are doing well, what they are not doing well, where they're in compliance, where they're not in compliance. And it helps them to build the controls in a very logical and systematic way to bring them into compliance.

In my 32 years of security experience, Tari is one of the most forward-thinking individuals I've met. It gives me nothing but great pleasure to bring Tari to a much larger audience so he can share his vision.

Information Security Service Management is his vision, his brainchild. We've invested heavily, and will continue to, in the development and maturity of this process. It incorporates all of HP’s services from the C&I organizations and others. It takes HP’s best practices, methodologies, and proven processes, and incorporates them into a solution for a customer.

So, I would like to introduce everyone to the ISSM godfather, Tari Schreider -- probably one of the most innovative individuals you will ever have the privilege of meeting.

Gardner: Thank you, John. Tari, that’s a lot to live up to. Tell us a little bit about how you actually got started in this? How did you end up being the “godfather” of ISSM?

Schreider: Well, let me compose myself from that introduction. When I joined the Security Practice, we would make sales calls to some of HP’s largest customers. Although we were always viewed as great technologists and operationally competent providers of products and services, we weren’t really viewed -- or weren’t on the radar screen -- as a security service provider, or even a security consulting organization.

Through close alignment with the financial services vertical -- because they had basically heard the same message -- we came up with a strategy where we would go out to the top 30 or so financial services clients and talk with them.

"What is it that you're looking for? Where would you like to see us provide leadership? Where do you see us as a component provider of security services? What level do you view us playing at?"

We took that information, went throughout HP, and invited individuals that we felt were thought leaders within the organization. We invited people from the CTO’s office, from HP Labs, from financial services, worldwide security, as well as representation from a number of senior solution architects.

We got together in Chicago for what we look back on and refer to as the "Chicago Sessions." We hammered out a framework based upon some early work that was done principally in control assessments, building on top of that, and leveraging experiences with delivery in terms of what worked and what didn’t.

We started off with what was referred to then as the "building of the house" and the "blueprint." Then, over the last couple of years, as we have delivered and worked with various parts of the organization, as well as clients, we realized that one of the success factors that we would have to quickly align ourselves with was the momentum that we had with HP’s ITSM, now called Service Management Framework. We had to articulate security as a security service management function within that stack. It really came together when we started viewing security as an end-to-end operational process.

Gardner: What happened that required this to become more of a top-down approach? In John’s introduction, it sounded as if there was a lot of history, where a CIO or an executive would just ask for reports, and the information would flow from the bottom on up.

It sounds like something happened at some point where that was no longer tenable, that the complexity and the issues had outgrown that type of an approach. What happened to make compliance require a top-down, systemic approach?

Schreider: One problem that we were constantly faced with was that clients were asking us, "Where is your thought leadership on security? We know we bring you in here when we have to fix security vulnerabilities on the server, and we get that. We know that you know what you are doing and you're competent there. But frankly, we don’t know what it is that you do. We don’t know the value that you can bring to the table. When we invite you in, you come in with a slide deck full of products. Pretty much, you are like everybody else. So where is your thought leadership?"

Nobody will ever argue against the fact that HP is an operations- and process-oriented company, so we wanted to leverage that. What we wanted to do was end the assessment-and-reporting bureaucracy that CIOs, CSOs, and CFOs were mired in because of Sarbanes-Oxley and so forth, and provide real meat for their information security programs.

The problem was, we had some very large customers that we were losing to competition, because we basically ran out of things to sell them -- only because we didn’t know we had anything to sell them. We had all of this knowledge. We had all of this legacy of doing security in technology for 20 or 30 years, and we didn’t know how to articulate it.

So we formulated this into a reference model, the Information Security Service Management Reference Model, where it would basically serve as an umbrella, by which all of the pillars of security for trusted infrastructure and proactive security management -- and identity and access management, and governance and so forth -- would be showcased under this thought leadership umbrella.

It got us invited into the door, with things like, "You guys are a breath of fresh air. We have all of these Big Four accounting firm-type organizations. They are burying us in reports. And at the end of the day we still fail audits and nothing gets done."

Gardner: I know this is a large and complex topic, on common security and risk management controls, but in a nutshell, or as simply as we can for those folks who might be coming to this from a different perspective, what is ISSM, and what does it mean conceptually?

Schreider: Well, if you look at ISSM, it’s very specifically referred to as the Information Security Service Management Reference Model. It is several things: a framework, an architecture, a model, and a methodology. It's a manner in which you can take an information-security program and turn it into a process-driven system within your organization.

That provides you with a better level of security alignment with the business objectives of your organization. It positions security as a driver for IT business-process improvement. It reduces the amount of operational risk, which ensures a higher degree of continuity of business operations. It’s instrumental in uncovering inadequate or failing internal processes before they lead to security breaches, and it also turns security into a highly leveraged, high-value process within your organization.

Gardner: This becomes, in effect, a core competency with a command and control structure, rather than something that’s done ad hoc?

Schreider: Absolutely. The other aspect is that through the definition of linked attributes, which we can talk about later, it allows you to actually make security sticky to other business processes.

If you're a financial institution, and you are going to have Web-based banking, it gives you the ability to have sticky security controls, rather than “stovepipes.”

If you're in the utility industry, and you have to comply with North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) regulations, it gives you the ability to have sticky security controls around all of your critical cyber assets. Today, they’re simply security controls that are buried in some spreadsheet or Word document, and there is really no way to manage the behavior of those controls.

Gardner: Why don’t we then just name somebody the “Chief Risk Officer” and tell them to pull this all together and organize it in such a way that this is no longer just piecemeal? Is that enough or does something bigger or more methodological have to take place as well?

Schreider: What’s important to understand is that all of our clients represent fairly large global concerns with thousands of employees and billions of dollars in revenue, and with many demands on their day-to-day operations. A lot of them have done some things for security over time.

Pulling the risk manager aside and sort of leaving him with the impression that everything they are doing, they are doing wrong is probably not the best course. We've recognized that through trial and error.

We want to work with that individual and position the ISSM Reference Model as the middle layer, which is typically missing, to pull together all the pieces of their disparate security programs, tools, policies, and processes in an end-to-end system.

Gardner: It sounds as if we really need to look at security and risk in a whole new way.

Schreider: I believe we do. And this is key because what differentiates us from our contemporaries is that we are now “operationalizing” security as a process or a workflow.

Many times, when we pull up The Wall Street Journal or Information Week, and we read about a breach of security -- the proverbial tape rolling off the back of the truck with all of the Social Security numbers -- we find that, when you look at the morphology of that security breach, it’s not necessarily that a product failed. It’s not necessarily that an individual failed. It’s that the process failed. There was no end-to-end workflow and nobody understood where the break points were in the process.

Our unique methodology, which includes a number of frameworks and models, has a component called the P5 Model, where every control has five basic properties:
  • Property 1 -- People: people have to be applied to the control.
  • Property 2 -- Policies: controls need clear and unambiguous governance in order to work.
  • Property 3 -- Processes: an end-to-end workflow, where everyone understands where the touch points are.
  • Property 4 -- Products: in many cases, technology has to be applied to these controls to bring them to life and keep them functioning appropriately.
  • Property 5 -- Proof: there have to be proof points to demonstrate that all of this is actually working as prescribed by a standard, a regulation, or a best practice.
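The P5 idea -- every control carrying all five properties, so a gap in any one of them is visible -- can be sketched as a simple data structure. This is a hypothetical illustration only, not HP's actual tooling; the field names and example control are assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """A security control with the five P5 properties attached."""
    name: str
    people: list = field(default_factory=list)      # owners and operators
    policies: list = field(default_factory=list)    # governing policy references
    processes: list = field(default_factory=list)   # end-to-end workflow steps
    products: list = field(default_factory=list)    # supporting technology
    proof: list = field(default_factory=list)       # evidence the control works

    def missing_properties(self):
        """Return the P5 properties that have not been populated."""
        props = {"people": self.people, "policies": self.policies,
                 "processes": self.processes, "products": self.products,
                 "proof": self.proof}
        return [name for name, value in props.items() if not value]

# A hypothetical server-hardening control with no proof points yet.
ctrl = Control(name="Server hardening",
               people=["server admin team"],
               policies=["hardening standard"],
               processes=["build checklist", "quarterly review"],
               products=["configuration scanner"])
print(ctrl.missing_properties())  # → ['proof']
```

A control with people, policy, process, and product but no proof is exactly the kind of break point the transcript describes: it may be operating, but nobody can demonstrate it.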
Gardner: It seems that you are weaving this together so that you get a number of checks and balances, backstops and redundancies -- so that there aren’t unforeseen holes through which these risky practices might fall.

Schreider: I couldn’t say it any better than that.

Gardner: How do I know that I am a company that needs this? Maybe I am of the impression that, "Well, I've done a lot. I've complied and studied and I've got my reports."

Are there any telltale signs that an organization needs to shift the way they are thinking about holistic security and compliance?

Schreider: I'm often asked that question. When I sit down with CFOs or CIOs or business-unit stakeholders, I can ask one question that will be a telltale sign of whether they have a well-managed, continuously improving information security program. That question is, "How much did you spend on security last year?" Then I just shut up.

Gardner: And they don’t have an answer for it at all?

Schreider: They don't have any answer. If you don’t know what you are spending on security, then you actually don’t know what you are doing for security. It starts from there.

Gardner: That’s because these measures are scattered around in a variety of budgets. And, as you say, they evolve through a “siloed” approach. It was, "Okay, we've got to put a band-aid here, a band-aid there. We need to react to this." Over time, however, you've just got a hairball, rather than a concerted, organized, principled approach.

Schreider: That’s correct, Dana. As a matter of fact, we have a number of tools in our methodology that expose this fragmented approach to security. Within the Property 4 portion of the P5 Model, we have a tool that allows us to go in and inventory all of the products that an organization has.

Then we map that inventory to things like the Open Systems Interconnection (OSI) Reference Model for security, looking at it from a layered, "defense in depth" perspective, an investment perspective, a risk-and-threat-model perspective, and an ownership perspective.

When they see the results of that, they say, "Wait a second. I thought we only had 10 or 12 security products, and I manage that." We show them that they actually have 40, 50, or 60, because they're spread throughout the organization, and there's a tremendous amount of duplication.

It’s not unusual for us to present back to a client that they have three or four different identity management systems that they never knew about. They might have four or five disparate identity stores spread throughout the organization. If you don’t know it and if you can’t see it, you can’t manage it.
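The duplication-discovery step described above can be sketched as a simple grouping exercise: inventory every security product by the function it serves, and any function with more than one deployment is a candidate for consolidation. The inventory entries and owner names here are hypothetical, invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical inventory: (function, product, owning business unit).
inventory = [
    ("identity management", "Product A", "retail unit"),
    ("identity management", "Product B", "corporate IT"),
    ("identity management", "Product C", "call center"),
    ("firewall", "Product D", "corporate IT"),
]

# Group by security function; more than one product per function
# suggests the duplication the transcript describes.
by_function = defaultdict(list)
for function, product, owner in inventory:
    by_function[function].append((product, owner))

for function, entries in sorted(by_function.items()):
    if len(entries) > 1:
        print(f"{function}: {len(entries)} overlapping deployments")
```

Run against a real inventory, a report like this is what surfaces the "three or four different identity management systems they never knew about."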

Gardner: Now, it sounds as if, from an organizational and a power-structure perspective, this could organize itself in several places. It could be a function within IT, or within a higher accounting or auditing level or capability.

Does it matter, or is there high variability from organization to organization as to where the authority comes for this? Do you have more of a prescriptive approach as to how they should do it?

Schreider: The answer to both of those questions is "yes." We recognize that just because of the dynamics, the culture, and the bureaucracy, in many of our customers' organizations, security is going to live in multiple silos or departments. Through our P5 Model, we have the ability to basically take and share the governance of the control.

So, for example, the office of the Business Information Security Officer (BISO) or the Chief Security Officer (CSO) typically owns policies and proof. For the technology piece -- which has always been a struggle between the office of security and the office of technology over who owns what -- we can define ownership at the level of the control's attributes. The network-operations people can then own the technical controls, because they are not going to give up their firewalls and their intrusion detection systems. They actually view those as an integral component of their overall network plumbing.

The beauty of ISSM is that it's very nimble and very malleable. We can assign responsibilities at an attribute level for control, which allows people to contribute and then it allows them to have a sharing-of-power strategy, if you will, for security.

Gardner: There's an analogy here to Service Oriented Architecture (SOA) from the IT side. In many respects, we want to leave the resources, assets, applications, and data where they are, but elevate them through metadata to a higher abstraction. That allows us then to manage, on a policy basis, for governance, but also to create processes that are across business domains and which can create a higher productivity level.

I'm curious, did this evolve from the way that IT is dealing with its complexity issues? Is there an analogy here?

Schreider: It's very similar to how IT is managed, where basically you want to push the services you provide out to the lowest common denominator, as close as possible to the customer.

Under this whole concept of what we refer to as BISOs, there are large components of security that should actually live in the business unit, but they shouldn’t be off doing their own thing. It shouldn’t be the Wild West. There is a component that needs to be structured for overall corporate governance.

We're certainly not shy about lessons learned and about borrowing from what contemporaries have done in the IT world. We're not looking to buck the trend. That’s why we had to make sure that our reference model supported the general direction of where IT has been moving over the last few years.

Gardner: Conceptually I have certainly bought into this. It makes a great deal of sense. But implementation is an entirely different story. How do you approach this in a large global organization, and actually get started on this? To me, it's not so much daunting conceptually, but how do you get started? How do you implement?

Schreider: One of the reasons people come to HP is that we are a global organization. We have the ability to field 600 security consultants in over 80 countries and deliver with uniformity, regardless of where you are as a customer.

There is still a bit of work that goes in. Although we have the ISSM Reference Model, and we have a tremendous amount of methodology and collateral, we are not positioning ourselves as a cookie-cutter approach. We spend a good bit of time educating ourselves about where the customer is, understanding where their security program currently lies, and -- based on business direction and external drivers, for example, regulatory concerns -- where it needs to go.

We also want to understand where they want to be in terms of maturity range, according to the Capability Maturity Model (CMM). Once we learn all of that, then we come back to them and we create a road map. We say that, "Today, we view that you are probably at a maturity level of ‘One.’ Based upon the risk and threat profile of your organization, it is our recommendation that you be at a maturity level of ‘Three’."

We can put together process improvement plans that show them step-by-step how they move along the maturity continuum to get to a state that’s appropriate for their business model, their level of investment, and appetite for risk.

Gardner: How would one ever know that they are done, that you are in a compliant state, that your risk has been mitigated? Is this a destination, or is it a journey?

Schreider: It's a journey, with stops along the way. If you are in the IT world -- compliance, risk management, continuity of operation -- it will always be a journey. Technology changes. Business models change. There are many aspects to an organization that require that they continually be moving forward in order to stay competitive.

We map out a road map, which is their journey, but we have very defined stops along the way. They may not ever need to go past a level of maturity of “Three,” for example, but there are things that have to occur for them to maintain that level. There's never a time when they can say, "Aha, we have arrived. We are completely safe."

Security is a mathematical model. As long as math exists, and as long as there are infinite numbers, there will be people who will be able to scientifically or mathematically define exploits to systems that are out there. As long as we have an infinite number of numbers we will always have the potential for a breach of security.

Gardner: I also have to imagine that this is a moving target. Seven years ago, we didn’t worry about Sarbanes-Oxley, ISO, and these ongoing types of ill effects in the market. We don’t know what’s going to come down the pike in a few years, perhaps even more in the financial vertical.

Is there something about putting this ISSM model in place that allows you to better absorb those unforeseen issues and/or compliance dictates? And is there a return on investment (ROI) benefit of setting up your model sooner rather than later?

Schreider: Absolutely. Historically, businesses throughout the world have lacked the discipline to self-regulate. So there is no question that the more onerous types of regulations are going to continue. That's what happened in the subprime [mortgage] arena, and the emphasis toward [mitigating] operational risk is going to continue and require organizations to have a greater level of due diligence and control over their businesses.

Businesses are run on technology, and technologies require security and continuity of operations. So, we understand that this is a moving target.

One of the things we have done with the ISSM Reference Model is to recognize that there has to be an internal framework, a control taxonomy, that gives you a base root that never changes. What happens around you will always change, and regulations always change -- but how you manage your security program at its core will stay relatively the same.

Let me provide an example. If you have a process for hardening a server, to make sure that the soft, chewy inside is less likely to be attacked by a hacker or compromised by malware, that process will improve over time as technology changes. But at the end of the day, it is not going to fundamentally change, nor should it change, just because a regulation comes out. How you report on what you are doing, however, is going to change almost on a daily basis.

So we have adopted the open standards of the ISO 27001 and 17799 security-control taxonomy. We have structured the internal framework of ISSM around 1,186 base controls, which we have then mapped to virtually every industry regulation and standard out there.

As long as you are minding the store, if you will, which is the inventory of controls based on ISO, we can report out to any change at any regulatory level without having to reverse engineer or reorganize your security program. That level of flexibility is crucial for organizations. When you don't have to redo how you look at security every time a new regulation comes out, the cost savings are just obvious.
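The architecture described here -- a stable internal control inventory that external regulations merely map onto -- can be sketched as follows. The control identifiers are ISO-style but illustrative, and the regulation mappings are invented for the example; they are not the actual ISSM mappings.

```python
# Stable base control inventory, keyed by an ISO-style identifier
# (IDs and statuses are hypothetical).
base_controls = {
    "A.9.1.1":  {"name": "Access control policy",   "implemented": True},
    "A.12.4.1": {"name": "Event logging",           "implemented": True},
    "A.10.1.1": {"name": "Cryptographic controls",  "implemented": False},
}

# Each external regulation maps onto the same base controls, so a new
# regulation means adding a mapping, not reorganizing the security program.
regulation_map = {
    "PCI DSS": ["A.9.1.1", "A.10.1.1"],
    "SOX":     ["A.9.1.1", "A.12.4.1"],
}

def gaps(regulation):
    """Report required base controls that are not yet implemented."""
    return [cid for cid in regulation_map[regulation]
            if not base_controls[cid]["implemented"]]

print(gaps("PCI DSS"))  # → ['A.10.1.1']
print(gaps("SOX"))      # → []
```

When a new regulation arrives, only `regulation_map` grows; the base inventory, and the program built around it, stays put. That is the cost saving the transcript points to.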

Gardner: I suppose there is another analogy to IT, in that this is like a standardized component object model approach.

Schreider: Absolutely.

Gardner: Okay. How about examples of how well this works? Can you tell us about some of your clients, their experiences, or any metrics of success?

Schreider: Let me share a few different cross-industry examples that come to mind. One of the first early adopters of ISSM was one of the largest banks based in Mumbai, India.

One issue they had was that a great deal of their IT operation was outsourced. They were entering into an area with a significant amount of regulatory oversight for security that had never existed before. They also had an environment where operational efficiencies were not necessarily viewed as positive. The cost of applying human resources to solve a problem or monitor something manually was virtually unlimited, because of the demographics of where their financial institution was located.

However, they needed to structure a program to manage the fact that they had literally hundreds of security professionals working in dozens of different areas of the bank, and they were all basically doing their own things, creating their own best practices, and they lacked sort of that middleware that brought them all together.

ISSM gave them the flexibility of a model that accounted for the fact that they could have a great number of security engineers without worrying so much about the cost aspect. What was important for them was that everyone was basically following the same set of standards and the same control model.

It worked very well in their example, and they were able to pass the audits of all of the new security regulations.

Another thing was that this organization was looking to trade financial instruments with other financial organizations from around the world. They now had an internationally adopted, common control framework, through which they could provide some level of assurance that they were securing their technology in a manner aligned with an internationally vetted and widely accepted standard.

Gardner: That brings to mind another issue. If I am that organization and I have gone through this diligence, and I have a much greater grasp on my risks and security issues, it seems to me I could take that to a potential suitor in a merger and acquisition situation.

I would be a much more attractive mate in terms of what they would need to assume, in terms of what they would be inheriting in regard to risk and security.

Schreider: Sure. When you acquire a company, not only do you acquire their assets, you also acquire their risk. And it’s not unusual for an organization not to pay any attention whatsoever to the threats and vulnerabilities that they are inheriting.

We have numerous stories of manufacturing or financial concerns that open up their networks to a new company. They have never done a security assessment, and now, all of a sudden, they have a lot of barbarians behind the firewall.

Gardner: Interesting. Any other examples of how this works?

Schreider: Actually, there are two others that I would like to talk about quickly. One of the largest public municipalities in the world was in the process of integrating all of their disparate 911 systems into a common framework. What they had, basically, was 700 pages of security controls spread over almost 40 different documents, with a lot of duplication, which they had expected all of their agencies to follow over the years.

What resulted was that there was no commonality of security approach. Every agency was out there negotiating their own deals with security providers, service providers, and product providers. Now that they were consolidating, they basically had a Tower of Babel.

One thing we were able to do with the ISSM Reference Model was to take all of these disparate control constructs, normalize them into our framework, and articulate to them a comprehensive end-to-end security approach that all of the agencies could then follow.

They had uniformity in terms of their security approaches, their people, their roles, responsibilities, policies, and how they would actually have common proof points to ensure that the key performance indicators and the metrics and the service-level agreements (SLAs) were all working in unity for one homogenized system.

Another example, and one that is rapidly exploding within our security practice, is the utility industry. There are the NERC CIP regulations, a whole series of critical infrastructure protection standards and cyber security requirements that have now been passed.

These just passed in January 2008. All U.S.-based utility organizations -- whether a water utility, an electric utility, or anybody providing and using a control system -- have to abide by these new standards. These organizations are very “stove-piped.” They operate in a very tightly controlled manner. Most of them have never had to worry about applying security controls at all.

Because of the malleability of the ISSM Reference Model, we now have an edition called the ISSM Reference Model Energy Edition. We have it preloaded with all of the NERC CIP standards: the very specific types of controls that are built into the system, the policies, procedures, and workflows that are unique to the energy industry, and also partnerships with products like N-Dimension, Symantec, and our own TCS-e product. We build a compliance portfolio to allow them to become NERC CIP-compliant.

Gardner: That brings to mind another ancillary benefit of the ISSM approach, and that is business continuity -- being able to maintain business operations through unforeseen or unfortunate acts of nature or man. What’s the relationship between business continuity goals and what ISSM provides?

Schreider: There are many who will argue that security is just one facet of business continuity. If you look at continuity of operations and where the disrupters are, they could be acts of man, natural disasters, breaches of security, and so forth. That’s why, when you look at our Service Management Framework, the availability, continuity, and security service management functions are all very closely aligned.

It's that cohesion that we bring to the table. How they intersect with one another, and how we develop common workflows for these processes in an organization, gives the client a sense that we are paying attention to the entire continuum of business continuity.

Gardner: So when you look at it through that lens, this also bumps up against business transformation and how you run your overall business across the board?

Schreider: Continuity of business, and security in particular, is an enabler for business transformation. There are organizations out there that could do so much better in their business model if they were able to figure out a way to get a higher degree of intimacy with their customer, but they can’t unless they can guarantee that transaction is secure.

Gardner: Well, great. We've learned a lot today about ISSM as a reference model for bringing risk, security, and management together under a common framework, with best practices and a common-controls approach.

I want to thank our guest, Tari Schreider, the chief security architect in the America’s Security Practice at HP’s Consulting & Integration Unit. We really appreciate your input. Tari, great to have you on the show.

Schreider: Thank you, Dana.

Gardner: I also want to thank our other guest, John Carchide, the worldwide governance solutions manager in the Security & Risk Management Practice, also within HP C&I. Thanks to you, John, as well.

Carchide: Thank you very much, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored podcast discussion. This is the BriefingsDirect Podcast Network. Thank you for joining, and come back next time.

Listen to the podcast here. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast on best practices for integrated security, risk and compliance approaches. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Monday, April 07, 2008

XML-Empowered Documents Extend SOA’s Connection to People and Processes

Transcript of BriefingsDirect podcast on XML structured authoring tools and dynamic documents’ extended role in SOA.

Listen to the podcast here. Sponsor: JustSystems.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about two large growth areas in IT, and these are two areas that are actually going to coalesce and intersect in a relationship that we are still defining. This is very fresh information.

We're going to talk about dynamic documents. That is to say, documents that have form and structure and that are things end-users are very familiar with and have been using for generations, but with a twist. That's the ability to bring content and data, on a dynamic lifecycle basis, in and out of these documents in a managed way. That’s one area.

The second area is service-oriented architecture (SOA), the means to automate and reuse assets across multiple application sets and data sets in a large complex organization.

We're seeing these two areas come together. Structured documents and the lifecycle around structured authoring tools come together to provide an end-point for the assets and resources managed through an SOA, but also providing a two-way street, where the information and data that comes in through end-users can be reused back in the SOA to combine with other assets for business process benefits.

To help us understand this interesting intersection and the somewhat complex relationship between structured documents and SOA, we are joined by Jake Sorofman. He is the senior vice president of marketing and business development, for JustSystems North America. Welcome to the show, Jake.

Jake Sorofman: Thank you, Dana, great to be here.

Gardner: There has been a lot of comment around SOA. It’s been discussed and debated for some time. What I'm seeing in the market is the need for bringing more assets, more information, more data, and more aspects of application activities into SOA to validate the investment and the growth.

Tell us what it is about SOA, in the sense that it is data-obsessed? What is it that we need to bring more of into SOA to make it valued?

Sorofman: We’ve all heard the statistics for ages about 80-plus percent of all the information in the enterprise being unstructured information, and how it’s contained within documents, reports, email, etc., and doesn't fit within the columns and rows of a database.

That’s the statistic we’ve all grown comfortable with. The reality, though, is that the SOA initiative today and the whole SOA conversation has really centered on structured data, transactional data, and hierarchical data, as opposed to unstructured content that’s stored within these documents. The documents, as they are created and managed today, are often monolithic artifacts and all the information within those artifacts is locked up and isolated from the business services that comprise our SOA.

Our premise is that you need to find new and unique ways to author your content as extensible markup language (XML), to make it more richly described and widely accessible in the context of SOAs, because it’s an important target source for a lot of these services that comprise your SOA applications.

Gardner: So, there are a number of tactical benefits to recognizing the dynamic nature of documents. Then, to me, there is also this strategic benefit from XML enabling them to provide a new stream or conduit between the content within the lifecycle of these documents and then what can be used in applications and composite applications that an SOA underpins. Help us understand the tactical, and then perhaps the strategic, when it comes to a lifecycle of document and content.

Sorofman: That’s a really good way to think about it. A lot of companies will take on this notion of XML authoring from a tactical perspective. They are looking for new and improved ways to accelerate the creation, maintenance, quality, and consistency of the content that they produce.

It could be all their branded language, all their lock-down regulated language, various technical publications, etc. They need to streamline and improve that process. So, they embrace XML authoring tools as the basis for creating valid XML, to manage the lifecycle of those documents and deliverables.

What they realize in the process of doing so is that there is a strategic byproduct to creating XML content. Now, it’s more accessible by various line-of-business applications and composite applications that can consume it much more readily.

So, it’s enriching the corpus that various applications can draw from, beyond traditional or relational databases, and allowing this more unstructured content to be more widely accessible.
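The fragment-level reuse Sorofman describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the element names and the toy schema are invented and are not XMetaL's actual formats. The point is that one richly described XML fragment can be assembled into multiple deliverables rather than copied and pasted:

```python
import xml.etree.ElementTree as ET

# A reusable, richly described content fragment (hypothetical schema).
FRAGMENT = """<warning id="hydraulic-check" audience="maintenance">
  <title>Hydraulic Pressure Check</title>
  <body>Verify reservoir pressure before opening the valve.</body>
</warning>"""

def assemble(doc_title, fragment_xml):
    """Build a deliverable document that reuses the shared fragment."""
    doc = ET.Element("document")
    ET.SubElement(doc, "title").text = doc_title
    doc.append(ET.fromstring(fragment_xml))  # reuse, not copy-paste
    return doc

# The same fragment feeds two different deliverables.
manual = assemble("Field Manual", FRAGMENT)
checklist = assemble("Pre-Flight Checklist", FRAGMENT)

for d in (manual, checklist):
    print(d.findtext("title"), "->", d.find("warning/title").text)
```

Because both deliverables point at the same source fragment, a correction made once shows up everywhere the fragment is consumed.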

Gardner: In the past, we’ve seen this document management and content management value through some very large, complex, cumbersome, and frankly expensive, standalone management infrastructure that would, in a sense, find a way of bringing these structured and the unstructured worlds together. It seems to me you’ve found a quicker and more direct way of doing this, or am I overstating it?

Sorofman: I think that’s largely right, to the extent that, at author time, the content is created as XML, particularly when that XML is organized within a taxonomy that makes some sense and makes it discoverable in context. Then, that content can just be reused. It can be reused like any other data asset that’s richly described and that doesn’t require heavyweight infrastructure or sizable strategic investments in content infrastructure.

Gardner: Another thing that fascinates me about this topic is a problem with SOA, and that has been the disconnect between the people and the processes that the IT systems can support. We've heard it referred to as "human-oriented architecture," versus SOA. The people who are in the trenches, who do maintenance types of activities in highly compliance-oriented environments, need to adhere very closely to regulations, and documents become the way that they do that.

It seems to me that if you take the documents that these people thrive on and create en masse, and make those available to the SOA and the composite business processes that that architecture is supporting, then you are able to bridge this gap between the people, the process, and the systems. Help me understand that a little better.

Sorofman: That makes a great deal of sense. Thus far we’ve been talking about the notion of unstructured content as a target source to SOA-based applications, but you can also think about this from the perspective of the end application itself -- the document as the endpoint, providing a framework for bringing together structured data, transactional data, relational data, as well as unstructured content, into a single document that comes to life.

Let me back up and give you a little context on this. You mentioned the various documents that line workers, for example, need to utilize and consume as the basis for their jobs. Documents have unique value. Documents are portable. You can download a document locally, attach it to an email, associate it with a workflow, and share it into a team room. Documents are persistent. They exist over a period of time, and they provide very rich context. They're how you bring together disparate pieces of information into a cohesive context that people can understand.

Documents allow information to stand alone. They're how knowledge is transferred, and how information is shared between people. Those are all the good things about documents. But, historically, documents have been a snapshot in time. So, even when you have embraced an XML publishing process, the document is published as a static artifact. It’s a snapshot in time. As the information feeding these documents changes, what you see within the document as a published artifact is effectively out of date.

Gardner: I suppose one way that people have gotten around that is to create portals and Web applications, where there is a central way of controlling the data that gets distributed through many views and can be updated. I suppose there must be some drawbacks to the portal perspective. What do we do here? Do we take the best of a Web and portal application and the best of a document and try to bring them together?

Sorofman: Bingo! It’s really about blurring the lines between documents and data, or documents and applications, while keeping the portability, the persistence, and the rich context of a document. Documents matter, and sometimes an on-the-glass, portal-style application experience is just not a substitute for what you need out of a document.

But providing a container for much more dynamic and interactive information, and ensuring that what you find in that document is always authoritative, makes the document a direct reflection of the sources of truth in the enterprise. All this information is introduced as a set of persistent links back to the sources of record. What you are looking at isn’t an embedded snapshot. You are looking at a reflection of these various systems of record.
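The "persistent links, not embedded snapshots" idea can be sketched roughly as follows. This is a hypothetical Python illustration, not xfy's actual mechanism; the link identifiers and the in-memory "systems of record" are invented. The document template stores references, and every rendering resolves them against the live source:

```python
# Hypothetical stand-in for live enterprise systems of record.
SYSTEMS_OF_RECORD = {
    "erp:inventory/widget-a": 412,
    "crm:account/acme/status": "active",
}

# The document holds persistent links, not copied values.
document_template = [
    ("text", "Current widget-A stock: "),
    ("link", "erp:inventory/widget-a"),
    ("text", "; Acme account is "),
    ("link", "crm:account/acme/status"),
]

def render(template, sources):
    """Resolve every persistent link against the source of record."""
    return "".join(
        str(sources[ref]) if kind == "link" else ref
        for kind, ref in template
    )

print(render(document_template, SYSTEMS_OF_RECORD))

# Update the source: the next rendering reflects it, with no republishing.
SYSTEMS_OF_RECORD["erp:inventory/widget-a"] = 398
print(render(document_template, SYSTEMS_OF_RECORD))
```

The second rendering picks up the changed inventory figure automatically, which is the contrast with a published snapshot that goes stale the moment it is pulled.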

Gardner: I was reminded of the importance of the format of a document, just recently when I was doing some tax forms. It’s fine for me to have all this information on my computer about the numbers and the figures, but I have to then present that back to the IRS through this very refined and mandatory format. I need to bring these two together, and, once I have done that, I can see that the IRS is benefiting from the standardization that the format and document brings, and I am of course benefiting from the fact that I can bring fresh data into that.

But, we are now proposing instead these documents that hold value based on their format, their taxonomy, their relevance to a specific regulatory impetus or a vertical industry imperative. What we get beyond that is not just bringing that data from a Web application out, but from perhaps myriad applications and/or this entire SOA, and using the policy-driven benefits of an enterprise service bus (ESB) and governance to help direct the right data to the right document.

Sorofman: Absolutely. The other thing that I mentioned is making these documents semantically aware. The document actually becomes intelligent about its environment. It knows who you are as a user, what your role is, what your permission profile is.

Gardner: And that’s because of the XML that they can make that leap to intelligence?

Sorofman: Well, it’s actually because of the various dynamic document formats that are emerging today, including xfy from JustSystems. We provide the ability to embed this application logic within the document format. The document becomes very attuned to its environment, so it can render information dynamically, based on who you are, what your role is, and where the document is within a process. It can even interact with its environment. The example I would like to use is interactive electronic technical manuals (IETM) for aerospace and defense. These are all the methods and procedures for maintaining the aircraft, often very, very complex documents.

Gardner: We're talking about large tomes, not just a document, but really a publication.

Sorofman: Exactly, and there are really a couple of different issues at work here. The first is that the complexity of these documents makes them very difficult to keep up to date. They're drawing from many different sources of record, both structured and unstructured, and the problem is that when one of the data elements changes, the whole document needs to be republished. You simply can’t keep it up to date.

This notion of dynamic documents ensures that what you’re presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out-of-date, or stale information to field-based personnel.

The second issue is pinpointing the information that someone needs in the context of the task they are performing, so, targeting the information appropriately. You can lose valuable minutes and hours by thumbing through manuals and trying to find the appropriate protocols for addressing a hydraulic fluid leak, for example.

The environment can actually ping the document. For example, a fault is detected in-flight, and the fault detection that happens in real time can interact with the document itself, ping it, and serve up the set of methods and procedures that represent the fix that needs to be made when the plane reaches its destination. The maintenance crew can start picking the parts and preparing to make the fix before the plane lands.
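A minimal sketch of that fault-to-procedure targeting, assuming an invented fault-code naming scheme and made-up maintenance steps (no real IETM format is implied):

```python
# Hypothetical mapping from detected fault codes to the discrete
# procedures the crew needs, instead of an 800-page manual.
PROCEDURES = {
    "HYD-LEAK-01": ["Isolate hydraulic circuit B",
                    "Replace seal P/N 4417",
                    "Run pressure test to 3000 psi"],
    "ELEC-GEN-02": ["Inspect generator brushes",
                    "Check voltage regulator output"],
}

def procedures_for(fault_code):
    """Return the maintenance steps for a detected fault code."""
    return PROCEDURES.get(fault_code, ["No procedure on file; escalate."])

# A fault detected in flight pings the document; the crew gets only
# the relevant steps and can stage parts before the plane lands.
for step in procedures_for("HYD-LEAK-01"):
    print("-", step)
```

The payoff is the targeting: the event supplies the context, and the document serves up just the fragment that matters for it.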

Gardner: It almost sounds like we are bringing some of the benefits that people associate with search into the realm of documents, because they are now structured, XML-published and authored documents. There’s XML integration among and between them and their sources. You could do a search and come up not just with an 800-page document, but with the discrete aspects of that document you need.

Sorofman: That’s exactly right. You start seeing some blurring between all these categories of technology around information search and retrieval, semantics, document management, and data integration. It’s all resulting in a much richer way of working with and utilizing information.

Gardner: So, we are bringing together what had been document management, content management, data integration, data mashups, compound documents, forms, and requirements for regulatory compliance. That’s why I think it relates to SOA so well.

We're finding a commonality between these, rather than having them be completely separate things that only people physically shuffling complex documents around their desktops could manage. We're starting to automate and bring in the IT infrastructure to help in this mixing and matching between these formerly siloed activities.

Sorofman: Yes, pretty much so.

Gardner: Alright. One of the things that is a little bit complex for me is understanding the way that the content, the XML, and the data flows among and between documents, and then also how it could flow within the SOA. I think this is still a work in progress. We are really on the cutting edge of how these two different areas come together.

Maybe we could go a little bit into the blue-sky realm for a moment. How do you think the SOA architects should start thinking about dynamic documents, and, then perhaps conversely, how should those that are into structured document authoring start thinking of how that might benefit a larger SOA type of activity?

Sorofman: Great questions. To start with, I don’t think that SOA architects have given a great deal of thought to date to unstructured content and how it plays into SOA architectures. So, there certainly needs to be consideration paid to how you get the information in, in a way that makes it richly described and reusable, more akin to relational data than to documents themselves.

Structured authoring needs to be part of the thinking around any company’s knowledge management (KM) strategy in general and with a specific importance around how it feeds into the overall SOA strategy. Today, I don’t think that there has really been an intersection between KM and SOA in this respect.

Structured authoring professionals need to start looking beyond their traditional domain of technical publications and into other areas where XML authoring is relevant and appropriate in the enterprise. That’s becoming much more broadly deployed and considered outside of traditional domains of tech-docs.

There’s also a convergence happening between structured documents, structured authoring, and application development, particularly as it relates to this notion of dynamic documents we are talking about. The creation of business-critical documents becomes much more akin to an application development process, where you are essentially assembling various reusable fragments and components from across the enterprise into a document that’s treated more like an application than a monolithic artifact: an application that has its own lifecycle and needs to be managed and governed in a more application-centric way. So, it’s starting to really affect people’s roles and thinking, both on the architect side and on the traditional structured-authoring side.

Gardner: Sure, it’s really about people, process, and policy coming together, not just inside the domain of IT, but in the domain of where people actually do their work and where they have traditionally done work for generations.

Sorofman: Very true.

Gardner: Okay, I think I get it now. But to understand this better, it's not enough just to "tell." A lot of times it helps to "show." Can you give me some examples in the real world, where people are starting to move toward these values, where there are some use-case scenarios around dynamic documents extending beyond the document function and getting into application development, too?

Sorofman: Absolutely. There are three usage patterns I like to speak about that are illustrative of dynamic documents and how they are being applied today. The first I call "information sharing" sort of broadly. It’s the idea of one-to-many dissemination of information in the form of a document, to various distributed field-based personnel.

A good example of that is the IETM, any kind of business-critical technical manual or a publication that needs to be shared with a variety of different people and where there is a very high cost of that information being either poorly targeted or easily out of date.

This is the idea of bringing together all these different information sources mashed up into the single dynamic document that comes to life. So, as the source information changes, what you see in that document changes and it also has the ability to be semantically intelligent about its environments, about the person who is accessing it, so it can render a view of information that’s appropriate to the context of its usage.

The second example is really taking the same concept of dynamic documents and applying it to collaborative processes, where you need to bring together various stakeholders internally and externally toward the goal of getting some sort of team based process executed or completed.

Think about something like sales and operations planning (S&OP), where you have various stakeholders come together cross-functionally and periodically, maybe monthly or quarterly, to make trade-off decisions, horse-trading decisions, about which projects to invest in and which ones to disinvest in, and how to optimally align supply and demand.

That’s typically the sales and marketing group; the manufacturing group, with a view of capacity and a view of inventory; and then the finance team, with a view of return on investment, return on assets, and internal rate of return. These teams come together to work on making these decisions, and they often do this by sharing documents. They pull reports from all their various systems of record: manufacturing execution systems, inventory control systems, ERP systems, supply chain, CRM.

Even though these systems have fairly authoritative trustworthy information within them, as soon as you pull a report, it’s frozen in time. So, these teams tend to wrestle with validating and reconciling all this disconnected and static information, before they can make decisions. The dynamic document allows all this information to come together as an authoritative reflection of all these different source systems, but still allows these teams to work in the format they are most comfortable with, which is to say, documents.

Gardner: Because there is a semantic and intelligent aspect of this, this content has been shared collaboratively and would present itself to each of these individuals through a different document format, based on what it is that they are doing within their traditional role.

Sorofman: That’s exactly right. It will serve itself up dynamically, based on what’s appropriate for stakeholders to see, based on their permission profile or on their role. It could be a different level of abstraction or a different level of detail. It can actually change the information that’s being displayed, based on where it is in a workflow process. The document can become aware of its workflow lifecycle state and render different information based on where it’s been, where it’s going, and where it is in the process.

Gardner: This is strikingly different than what's done by many organizations that I am aware of. They have one big spreadsheet that everyone shares, which really is sort of one-size-fits-all, which isn’t the way people really work.

Sorofman: Everyone has had some experience with spreadsheets gone wrong and the high cost and perverse consequences of trying to force-fit spreadsheets into critical planning process. So, I think most people can empathize with this specific challenge.

Gardner: Alright. Let’s talk about the business case for this. Now, it sounds good theoretically. We’ve certainly got a technology that can help this productivity improvement by extending data in the formats that people are familiar with. There is compliance, and regulatory and risk reduction as a result.

And, of course, as we mentioned earlier, there is the sharing and repurposing and reusing of this across the SOA value stream in the business. But, dollars and cents, how do people go and say, “Wow, this sounds like a good idea. I want to convince somebody to invest in it, but I need to talk to them about return on investment.”

Sorofman: You can make a business case for this sort of approach at anywhere from a very basic to a much more sophisticated level. At the most basic level, the ROI around XML authoring is pretty straightforward. Rather than authoring documents as monolithic artifacts, creating them as reusable components helps to accelerate and reduce the cost of creating new documents and deliverables, and it makes information much more reusable. That has a cost implication and a time-to-market implication.

If, for example, you are launching a product that’s highly dependent on documentation (and documentation is typically one of the things done at the end of the product-launch cycle), that becomes a bottleneck that can have implications for foregone revenue, excessive cost, missed deadlines, and so on.

There is also an issue around localization, multi-format output, and multi-channel output of this various content, taking the content, translating it into different languages and into different output formats.

Gardner: Localization. So, you have the same document format, but the input and output can be in a variety of different languages.

Sorofman: That’s exactly right.

Gardner: That would save a lot of time and money. Instead of the full soup-to-nuts translation, you only have to translate exactly the metadata that’s required.

Sorofman: That’s exactly right, and that’s a tremendous ROI. There are many companies that look at the ROI of XML authoring exclusively from the perspective of localization, and it’s often said to have between a 40 and 60 percent cost impact on localization itself.

Gardner: In fact, you are automating a large portion of the translation process.

Sorofman: Yes. Also, think about the change time implications of what we are talking about. In the traditional monolithic model, when you need to make changes to documentation, you are making changes across all the various documents that consume information fragments in all the various formats, in all the various localized versions, and all the derivations and permutations of an information source. That becomes extremely complex, extremely costly, and error prone.

In the XML authoring world, you are authoring once, publishing many times, and maintaining a single native format. So, you are maintaining that one reusable component and allowing those changes to be propagated across all the various consuming documents and deliverables.
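The author-once, propagate-everywhere flow above can be sketched at the fragment level. This is a hypothetical Python illustration (the fragment IDs and the content-hash bookkeeping are invented for the example): by tracking a digest per reusable fragment, only the fragments whose content actually changed get flagged for retranslation or republishing, which is where the localization savings come from.

```python
import hashlib

def digest(text):
    """Content fingerprint for one reusable fragment."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Version 1 of the single-source fragments, already translated.
fragments_v1 = {"intro": "Welcome to the product.",
                "safety": "Do not open the housing."}
translated_digests = {fid: digest(txt) for fid, txt in fragments_v1.items()}

# Version 2: one fragment edited, one untouched.
fragments_v2 = {"intro": "Welcome to the product.",               # unchanged
                "safety": "Never open the housing while powered."}  # changed

needs_translation = [fid for fid, txt in fragments_v2.items()
                     if digest(txt) != translated_digests.get(fid)]
print(needs_translation)  # only the changed fragment goes to translators
```

Every consuming deliverable then picks up the updated "safety" fragment on its next publish, while the untouched "intro" translation is reused as-is.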

Gardner: And, because we are doing this separation, it also strikes me that there is a security benefit here. One of the things that troubles a lot of IT folks and keeps them up at night is the idea that there are different versions and copies of full-blown documents and, in some cases, databases on each and every PC and laptop, some of which may disappear in an airport. It strikes me that by separating this out, what might end up in nefarious hands at the airport would be only the form, but not the data.

Sorofman: It’s a great point.

Gardner: So, there’s a security benefit here as well, when you are able to control things, and not have all the dynamic data distributed at the end point, but, in a sense, communicated to that end point when it’s the right time.

Sorofman: Absolutely. I guess the benefits we are looking at are really these sorts of operational benefits of the XML authoring and how that impacts the bottom line and time to market etc. There are also bigger benefits that come from the actual consumption of dynamic documents, and how you ensure that you are only putting information in the hands of the people that need it, that it's always up-to-date.

That clearly has an implication for risk and compliance in many different application areas, and accelerating, improving, and optimizing business processes by eliminating the error introduction that comes from the re-keying of information between disconnected process steps, where documents are involved.

Gardner: So the human error factor goes down as well?

Sorofman: Dramatically.

Gardner: How does that work exactly?

Sorofman: Let me give you a quick example of one of the other usage patterns that’s worth speaking about. It's what I like to call "document process transformation." If you think about any business process flow, there are typically silos of automation, and these are the flows within the process that are highly tuned, very transactional, with virtually no human intervention.

They are highly automated, because they can be. Everything can be reduced down to a transaction and thus handled by machines, but then there are manual gaps between these silos of operations or automation that often eliminate, or at least erode, some of the benefits of automation.

These are typically highly human-centric phases of a process, often very document-centric. It’s where people need to get involved. For example, if you think of a loan application, at the front end of the application there is a form. It’s very form-based, and it’s about capturing information about the applicant.

Some of the information can be handled transactionally, so the form is able to send the information to a back-end system where it’s processed transactionally, but some of the information needs to be viewed and analyzed by human beings, who actually have to look at it in context and make a judgment about the applicant.

At the front end, the form becomes a transaction, and then it needs to be served up as a set of document renditions, based on the various personnel roles within the process that need to view it to make a judgment about the loan.

The document can actually morph as it moves through the process, based on what that person needs to see or what’s appropriate for them to see. At the end of the process, a judgment is made about the loan. It’s either approved or it’s rejected and it becomes a transaction again.

The information can be extracted from the document set itself automatically and pulled into a back-end process, like the account-opening procedure. Then, information can be extracted from the document set to serve a traditional publishing pipeline, to send a custom acknowledgment letter back to the applicant, welcoming them to the bank and letting them know that the loan has been approved.

So, you've gone from silos of automation separated by manual gaps to a much more streamlined, straight-through process, where you have transactions driving document renditions and document renditions driving transactions.
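The loan-application flow just described can be sketched as a toy pipeline. This is a hypothetical Python illustration (the field names, role names, and approval rule are all invented): captured form data becomes a transaction, is served up as role-specific document renditions for human judgment, and the judgment becomes a transaction again.

```python
# Captured at the form front end; this is the transactional record.
application = {"applicant": "J. Smith", "amount": 250000, "income": 95000}

def rendition(app, role):
    """Serve up only the fields appropriate to the reviewer's role."""
    views = {
        "underwriter": ["applicant", "amount", "income"],
        "branch_clerk": ["applicant", "amount"],
    }
    return {k: app[k] for k in views[role]}

def decide(app):
    """Stand-in for the human judgment step (invented rule)."""
    return "approved" if app["amount"] <= 3 * app["income"] else "rejected"

# The document morphs per role as it moves through the process...
clerk_view = rendition(application, "branch_clerk")
# ...and the judgment flows back out as a transaction.
decision = decide(rendition(application, "underwriter"))
print(clerk_view, decision)
```

The clerk's rendition omits the income field entirely, while the underwriter's rendition carries everything needed for the decision; no information is re-keyed between the steps.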

Gardner: This is a great example of why this is relevant for SOA. First off, you're talking about how the human input of the data needs to be improved -- and that’s the garbage-in, garbage-out value. If you are going to be reusing this data across multiple applications, you want to make sure that‘s good data to begin with. So, that’s one value.

The other is this controlled workflow, an event-driven workflow, which again is part of what people are trying to build that SOAs will support, these composite workflow process oriented types of activities that are very much core to any business.

Then, the last fascinating aspect is the notion that we are combining what needs to be a human judgment with what is going to be a computer-driven process. These dynamic documents, in a sense, put up little stop signs that say, “Stop, wait, let the human activity take place.” The human can relate back to the document, the document relates back to the business process, and the business process is managed and directed through the SOA.

Sorofman: That’s exactly right. As long as people are involved, there will be documents, but traditionally documents have been fairly unintelligent and inefficient in how they have been authored, organized, managed, and used as a basis for consuming information. This is just what documents have always wanted to be.

Gardner: I dare say that documents have been under-appreciated in the context of SOA.

Sorofman: I couldn’t agree more.

Gardner: Well, great! Thanks for shedding some more light on these issues. Tell us a little bit about how JustSystems works its value in regard to the dynamic documents that are now holding much more relevance in a larger SOA.

Sorofman: JustSystems has two product lines that are very relevant to this discussion. The first is a product called XMetaL, one of the leading structured authoring and publishing solutions, which provides the basis for creating valid XML content as part of the authoring process. I mentioned this idea of being able to create valid XML, as opposed to monolithic document artifacts, at author time. This provides a basis not only for technical authors but also for business authors: the occasional contributor, the subject-matter expert, the accidental author. They can create valid XML without ever seeing an angle bracket, in a very intuitive WYSIWYG environment, as a byproduct of a very intuitive authoring process.

That’s how you feed the beast, how you get the XML into the system to make it much more richly described and more reusable as part of downstream processes.

On the other side of the equation, we have a product line called xfy, which is a document-centric composite application framework that allows you to bring together all these various information sources, structured and unstructured, and mash them up within a single dynamic document application.

It’s blurring the lines between documents and applications, providing the user experience that people appreciate from a document, but with the authoritative, dynamic, and interactive information that has been most closely associated with traditional business applications. The document becomes the application.
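The dynamic-document idea -- a document whose values resolve against live, structured data at read time rather than being frozen at publish time -- can be sketched like this. The template, the data source, and the placeholder convention are all invented for illustration; this is not xfy's actual API:

```python
import xml.etree.ElementTree as ET

# A document template with placeholder elements, and a structured XML data
# source. The "dynamic document" is produced by resolving each placeholder
# against the current data whenever the document is rendered.
TEMPLATE = """<report>
  <title>Quarterly summary</title>
  <para>Total revenue: <value field="revenue"/></para>
  <para>Open orders: <value field="open_orders"/></para>
</report>"""

DATA = "<metrics><revenue>1.2M</revenue><open_orders>37</open_orders></metrics>"

def render(template_xml: str, data_xml: str) -> str:
    """Fill each <value field="..."/> placeholder from the data source."""
    doc = ET.fromstring(template_xml)
    data = ET.fromstring(data_xml)
    for placeholder in doc.iter("value"):
        field = data.find(placeholder.get("field"))
        placeholder.text = field.text if field is not None else "?"
    return ET.tostring(doc, encoding="unicode")

rendered = render(TEMPLATE, DATA)
```

Re-running `render` after the metrics change produces an updated document, which is the sense in which the document stays "live" rather than static.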

Gardner: Of course, we are using XML, which is a standardized markup language. We are also going to be using vertical industry taxonomies and schemas that are shared, and, therefore, this is a fairly open opportunity to share and communicate and collaborate.

Sorofman: That’s right.

Gardner: Well, great! Thanks again. We’ve been talking about XML empowerment of documents and how to extend service-oriented architecture’s connection to people and process through these types of documents and structured authoring tools. To help us understand this, we have been talking with Jake Sorofman. He is the senior vice president of marketing and business development at JustSystems North America. Thanks for joining us, Jake.

Sorofman: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast here. Sponsor: JustSystems.

Transcript of BriefingsDirect podcast on XML structured authoring tools and dynamic documents’ role in SOA. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Sunday, April 06, 2008

Platform as a Service Enables Cloud-Based Development While Accelerating Role of SaaS

Transcript of BriefingsDirect podcast on platform as a service and on-demand applications deployment trends.

Listen to the podcast here. Sponsor: Bungee Labs.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, a sponsored podcast discussion about platform-as-a-service (PaaS). We're looking at an entire lifecycle approach to services on the Web for development, integration, and deployment. We are going to be talking about Bungee Labs, the sponsor of our podcast, and taking a deep look at how they approach PaaS.

I think it’s important for us to get into this topic to understand its wide-reaching implications. PaaS is really taking the best of some of the old and the new that’s now available to developers and architects -- and it really puts a new spin on Web-oriented architecture (WOA).

For example, many developers can use tools that are fleet and easy as they approach rapid application development (RAD) and Agile development. But when it comes to the flip side -- the deployment -- there are still gaps, and in many cases a hand-off. There is also an opportunity, now that users are increasingly on the Web, to use mashups and open APIs.

That means taking advantage of data independence -- data that can be acquired, used, and mashed up from a variety of sources -- including services-orientation from inside the firewall and from within legacy applications. What's nice about the PaaS approach is that the deployment can scale up very rapidly and scale down efficiently in terms of cost.

In many respects, we are now leveraging the incredible value that comes from not having to pay upfront in capital expenses for applications and infrastructure, but instead can take advantage of the pay-as-you-go, metered approach.

To help us better understand PaaS, we’re joined by Phil Wainewright. He's an independent analyst, the director of Procullux Ventures, and a ZDNet software-as-a-service (SaaS) blogger. Welcome to the show, Phil.

Phil Wainewright: Good to be here again, Dana.

Gardner: We are also joined by Alex Barnett, the vice president of community at Bungee Labs. Welcome to the show, Alex.

Alex Barnett: Many thanks, Dana, for having me on.

Gardner: Now, as I mentioned, this on-demand platform value brings together a lot of the old and the new. But in some regards, the current state of development for applications and Web services is still a tricky business. We're still in a period where there is friction between the developer and what the operational environment is expected to be.

There are often integration issues. We are also having to deal, of course, with on-premises software acquisition and downloads, licenses, upgrade paths -- just maintaining the software on premises for development and for testing.

Phil, could you help us understand what is it about the current state of development that will benefit from moving all of this into the cloud?

Wainewright: One of the factors is that IT is getting more and more complex, and obviously there is a lot of emphasis on governance and oversight and so on. So, provisioning a new system for a development project can be a very long-winded process.

On top of that, the world seems to be moving faster ... because of the Web, because it’s connecting us in much more immediate ways. And therefore, business people want much more rapid intervention, and they need automation of new processes a lot faster. You’ve got these two processes: On one hand, it’s getting more and more complex, difficult, and time-consuming to bring new projects online. At the same time, there is this pressure to deliver those new projects faster and faster, and to have an adaptable and agile development environment. It’s a collision course, really.

Gardner: And I suppose, with the way teams are dispersed now, and with people leveraging globalization and offshore development, that you want to bring groups of people together without also having to maintain and oversee, at each location, the on-premises software that supports new development, along with application lifecycle management (ALM)?

Wainewright: Yes, that’s absolutely true. The web enables this collaboration, whether you are developing on on-premises platforms or in the cloud. You want to do it via a distributed team, because that’s the way that you get to this scale, and you can get the best minds on a project. And so you are going to be doing that collaboration in a kind of Web-connected environment anyway.

Gardner: I suppose it’s also important when we look at the speed of development to appreciate that three or four folks in a garage in Northern California can come up with a mashed-up service and go out and create a social networking company, for example, quick and easy -- and at low expense. And then, here you are in an enterprise, taking six months to work through a requirements process. It seems as if something has got to change.

Wainewright: This is a rewind back to the late '90s when people looked at the Web and they thought, "Well, how is it that my daughter or son can look up the information of some arcane homework project in a couple of seconds, while it takes me three weeks to find out what's happening in my own business?"

Now what we are seeing is my daughter is able to hook out to these applications from some guys working off in a garage for Facebook, and do it all in a couple of days. And, again, I am kind of trying to get similar functionality on my corporate network -- but it could be 18 months before I see it.

People are asking, "Well, why, why is that? Why are we so far behind what these people in garages, these teenagers, are able to do so much out there on the Web?"

Gardner: That’s a reason why we are seeing very rapid uptake in what's known as Enterprise 2.0, WOA, mashups, and rich Internet applications (RIAs) -- and it's not just startups; it really starts to move across the board. It’s just a faster, better way of getting a fairly large class of applications going -- and also managing the integration of data on the back-end.

Wainewright: Well, let’s be cautious, Dana, because I think that a lot of people are experimenting with it for the moment. I wouldn’t say enterprises are necessarily doing things that are business-critical or mission-critical right now. They are still working out what the utility is, and what the risks are from this model.

Gardner: Let’s take that over to Alex. Alex, tell us a little bit about how you perceive the rates of adoption around SaaS and WOA. Do you see this as something that is just building? And what about those issues of risk, security, control, and management?

Barnett: The context from the big picture point of view is that everything we know is moving to the Web. And what that means in tangible terms is that businesses and service providers and software companies are providing layers of functionality and data that are native to the Web or are Web-oriented.

Things like Web services -- and even less-sophisticated, increasingly popular interfaces using REST and XML -- offer ways into that functionality and data. This ongoing trend is being driven by frustration with what we’ve inherited from the previous generation of computing, where everything was essentially behind a proprietary protocol or a proprietary set of APIs. This required you to buy a platform set, or toolset, across the board, just so you were able to get at the data that you owned as a company.

If you think about customer relationship management (CRM) systems or a certain kind of database -- they are in silos, they are disconnected, or they are very expensive and require lots of proprietary knowledge in order to be able to access the data, and therefore the value.

What we are seeing with things like WOA is a materialization of how we get out of that frustration as an industry -- how we get out of that frustration from business. We want the data and the business intelligence from it, and to be able to get at that from a business perspective.

The IT managers feel the pressure to develop applications more rapidly and to access the data that is being made available through Web APIs. And developers are then able to connect, develop, and build out new applications based on distributed data and distributed functionality, and react to the business needs.

If you look at ProgrammableWeb.com, for example, at the end of last year you would have seen something like 500 Web APIs catalogued -- a combination of commercial and public APIs. And now, here we are in April, and we’ve got 650 APIs from the public and commercial space. Exactly the same trend is occurring behind the firewall, within organizations, as they create these layers of connectivity and programmable end-points to open up functionality and data.
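Consuming one of those catalogued APIs usually comes down to fetching a structured payload and filtering it. A minimal sketch of working with a REST-style directory response -- the payload shape and field names here are invented for illustration, not ProgrammableWeb's actual format:

```python
import json

# A sample payload shaped like a typical REST directory response. In practice
# this JSON would come back over HTTP from the provider's endpoint.
RESPONSE = json.dumps({
    "apis": [
        {"name": "maps", "protocol": "REST", "category": "geo"},
        {"name": "quotes", "protocol": "SOAP", "category": "finance"},
        {"name": "photos", "protocol": "REST", "category": "media"},
    ]
})

def rest_apis(raw: str) -> list[str]:
    """Filter a directory payload down to the REST-style entries."""
    entries = json.loads(raw)["apis"]
    return [entry["name"] for entry in entries if entry["protocol"] == "REST"]

print(rest_apis(RESPONSE))  # ['maps', 'photos']
```

The point is how little ceremony is involved compared with a proprietary toolset: the interface is just structured text over HTTP, parseable with a standard library.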

Wainewright: It is important to say that the interfaces being published on ProgrammableWeb.com are not just some stupid little stock-market data feeds or what's-the-weather-in-my-area-tomorrow services, the sort of thing we saw in the early days of Web services. These are actual, complete, and functional API sets of the kind that an enterprise uses to call applications like Salesforce.com, and many other resources.

Gardner: So we have some significant new opportunities for taking advantage of the Web with these mashups and the data portability. And, again, we want to take advantage of the old as well as the new, with the old being characterized by lower-risk control and management.

So what does PaaS do conceptually, Alex, and how does that bring together the best of the old and the best of the new?

Barnett: If we define "old" as meaning that you are locked into a stack -- that you aren’t able to get out through open standard protocols such as SOAP, or even just standard XML interfaces -- then that "old" is hard to get around. That’s what we’re seeing in terms of investments to upgrade systems to unlock that value.

Gardner: I guess by "old" I mean enterprise-ready and mission-critical.

Barnett: Right. And once they’ve got to the point where they have allowed that services layer to occur, the question is, "How do we get to the next step? How do we quickly derive value from the systems we have?" What we hear businesses crying out for is getting at the data, getting at the applications that are open.

Now, there are some companies that are moving ahead very quickly with this, being able to take the leap of faith of providing CRM-type data on a hosted service. Or, they may want to maintain it behind the firewall, behind their existing systems, and then just be able to provide limited, yet secure, access to that data through applications that are inherently secure in terms of their architecture.

Wainewright: What we're seeing develop at the moment is kind of a two-tier information technology. On the one hand, you have all of the existing on-premise, legacy applications and all that data that Alex was describing. It's locked away, and IT managers are really puzzling over and grappling with the issue of how to unlock that data. How do they make it more accessible? How do they build more agility into applications infrastructure?

They are looking at things like service-oriented architecture (SOA) and other ways of connecting and integrating the data, while automating business processes within the organization. So that’s one tier.

The other tier that’s developing is this Web-oriented tier, all of these APIs and in-the-cloud resources and applications that are out there on the Web. To take advantage of those connections, you need to build a completely new infrastructure, which is different from the existing infrastructure within the firewall. It has to cope with connecting to external resources, and it has to have different kinds of security, different kinds of identity management.

Building a robust infrastructure to do this is very, very hard. That’s one of the reasons why a lot of enterprises are holding back, and are therefore missing agile opportunities. That’s one of the roles that the PaaS can provide.
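One concrete example of the "different kind of security" Wainewright mentions: calls across the Web tier are typically authenticated by signing each request, rather than by trusting a network perimeter. A minimal HMAC-signing sketch, with the shared secret and request fields invented for illustration:

```python
import hashlib
import hmac

# Hypothetical shared secret issued by the API provider. Signing a canonical
# form of the request lets the provider verify both who is calling and that
# the request was not tampered with in transit.
SECRET = b"demo-shared-secret"

def sign_request(method: str, path: str, timestamp: str) -> str:
    canonical = "\n".join([method.upper(), path, timestamp])
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()

def verify(method: str, path: str, timestamp: str, signature: str) -> bool:
    expected = sign_request(method, path, timestamp)
    return hmac.compare_digest(expected, signature)  # constant-time compare

sig = sign_request("GET", "/v1/customers", "2008-04-06T12:00:00Z")
print(verify("GET", "/v1/customers", "2008-04-06T12:00:00Z", sig))  # True
print(verify("GET", "/v1/orders", "2008-04-06T12:00:00Z", sig))     # False
```

Getting details like canonicalization, timestamping, and constant-time comparison right everywhere is exactly the kind of infrastructure work a platform provider can absorb on behalf of its tenants.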

Barnett: What they have tried is failing, or they haven’t got the right level of investments, or they have other priorities.

Wainewright: Yes. And, we see some tremendous disasters happening where people in e-commerce are, in a very limited way, opening up to customers to transact with them. And they have to do that. They are losing customer data, because they're not getting the security right. So, yes, people have to be wary of that whole area, that Web tier -- because it’s full of pitfalls and traps.

Gardner: Okay, we’ve established that the tide is turning to the Internet, that there are some great Web-based services available, that technologies are now bubbling up to allow for better and easier connectivity. And yet, there is still a need for the right platform and the right infrastructure to make this all mission-critical and enterprise-ready.

So let’s get into PaaS as a possible stepping stone that, in a sense, bridges the best of the Web-oriented architecture and the available SaaS and the APIs-world with what developers inside organizations -- be they ISVs, service providers, or enterprises -- need to make these approaches acceptable and within the acceptable risk parameters.

I noticed that Bungee Labs does not call this "Development-as-a-Service" or "Deployment-as-a-Service" or "Integration-as-a-Service" -- but "Platform" as a service. Alex, give us the primer. What does "Platform-as-a-Service" really mean?

Barnett: That’s what we are trying to define at Bungee Labs. PaaS is one of those terms that we’re going to be hearing more and more. And there are going to be varying levels of definition and interpretation of what it means.

But what we’ve done is put a stake in the ground in this respect, saying that in order to really be a PaaS -- and not just any one of those single pieces that you’ve mentioned, plus more individual pieces -- you need to be able to provide the end-to-end services to really call it a "platform."

From the developer’s standpoint, which is the development cycle, this means the tools that they need to develop applications, to be able to then test those applications, to be able to connect to Web services and to combine them, and to have all those kinds of capabilities -- and to then deploy and to make those applications instantly available to the business users.

Literally, we mean a URL that is the end-point for the end-user. From that, they can start consuming the application.

So, PaaS means having an environment in which what you deploy inherently has built-in scalability, reliability, and security. Once you’ve deployed your application, you know that you don't have to take care of all the infrastructure in the datacenter, the capital investments, and the bodies that are required to make it scale when a new application increases in use.

There is also the ability to connect to the various distributed data sources or functionality that the application needs to consume. You get that inside the platform -- the ability to do that in a Web-native way, and so take advantage of the architectures we described earlier, such as SOA.

There is also the ability -- and we touched on it earlier -- for developers to be able to collaborate on projects that are built-out in the cloud. They can share code, check in code, do all the standard revisions and collaborative-type functionality that developers need when they’re working on projects with teams distributed across the world or across your offices. And they can do this without having that entire infrastructure on-premise.

And then, the last, but critical, piece is having deep instrumentation and an analytics ability around the use of the application -- of how it’s being used, of where the connections are -- right across the board from the "glass of the window," the browser, for example, and right on through to the Web services in the CPU, or the rest of it.

As a result, you are able to understand performance. You are able to understand your billing, if billing is part of your proposition. And all of what I described is comprised within six pillars [of Bungee's offerings]. All of it is delivered and available purely as a service, so there are no on-premises requirements for any of those components across development and deployment. It's a utility model -- you pay only for what you use -- all in the cloud. No bit needs to be installed on any machine at the enterprise in order to take advantage of all those Web services and functionality.
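The utility-pricing model Barnett describes reduces to metering: multiply each metered resource by its rate, with no upfront capital component. A toy sketch, with an entirely invented rate card (Bungee's actual pricing is not described here):

```python
# Hypothetical rate card: price per unit for each metered resource.
RATES = {"cpu_hours": 0.10, "bandwidth_gb": 0.15, "storage_gb_month": 0.05}

def monthly_bill(usage: dict[str, float]) -> float:
    """Bill only for metered usage -- unused capacity costs nothing."""
    return round(sum(RATES[meter] * qty for meter, qty in usage.items()), 2)

# An application that consumed 120 CPU-hours and 40 GB of bandwidth:
print(monthly_bill({"cpu_hours": 120, "bandwidth_gb": 40}))  # 18.0
```

The contrast with the traditional model is that the bill tracks actual use, so scaling down is as cheap as scaling up is fast.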

Gardner: For our listeners who are just getting used to this concept of PaaS, let’s just get right in quickly and describe what Bungee Labs is. It’s a young, innovative company. And you’ve come out with a service called Bungee Connect. This is essentially one place online where you can go to develop, mash up, and access data, to put together Web-based applications and services, and then instantly -- with a click of a button, and perhaps I am oversimplifying -- develop and deploy in basically an integrated continuum. Is that correct?

Barnett: Yes, and provide very rich user experiences as part of that, with highly interactive application functionality. We’ve built out essentially that stack that I’ve described earlier. We've made that available for organizations to take advantage of. We're specifically targeted at developers who really want to be able to build very sophisticated Web applications that leverage orchestration workflow around connecting to Web services.

We are not in the business of being able to provide non-programmers with the ability to do these nice simple mashups.

Gardner: Well, if you can do that, let me know, because that would be a very good trick. I am sure the world would love to have development by anybody!

Barnett: Yeah, and that’s a great dream to have, but inherent in it is inflexibility, because you are simplifying everything for the end-user. What we really offer is for the developers who are tasked with building sophisticated Web applications to do just that, deploy it, and then deliver very rich user experiences out on the Web.

Gardner: And to be clear, this is not just open source. This is commercial code, if they wish. The people who develop on this system, that code is their intellectual property. Is that right?

Barnett: The intellectual property of the code that is developed by the developers is absolutely their own intellectual property and remains so. We do have a community side of things that allows developers -- just as in the open source world -- to be able to share code and even entire applications as open source running on our grid.

But in terms of a company, it’s entirely their intellectual property that they developed, and they are able to literally export the code. And if they then want to re-factor it for a different kind of grid or runtime, it’s their property.

Gardner: Phil, how do you see the relationship between PaaS and what Bungee Connect is doing, and then the larger SaaS trend? Do you see a relationship of one aiding and abetting the other? Or are they in separate orbits? How does that work out?

Wainewright: I think they are very much in a similar orbit. And to an extent, I don't think of PaaS as being part of SaaS or vice versa. It’s just everything moving to the cloud. These are two examples of that happening.

One of the things I want to highlight, as Alex was saying, is the user experience. When people start developing for the Web, for the cloud, it’s not just building the infrastructure -- it’s also learning what is involved in writing applications for that environment.

There is much more emphasis on the user experience. There is much more emphasis on reusing what other people have done, whether it’s by mash-ups or by reusing other people’s code, as opposed to reinventing the wheel every time. There is much more emphasis on developing applications and programs that can adapt and change to future opportunities and business conditions.

All of those things also have to be learned, at the same time as building the infrastructure. Using PaaS enables you to tap into that shared expertise in a way that you can’t do, if you try all by yourself.

The other thing that’s happening here is that we’re connecting into the resources of the Web, and getting onto the Web, so that we can interact with partners and customers and connect into those other Web resources. This is what we're really expected to do as businesses today, in order to stay competitive. So, there’s a tremendous pressure building to be able to do this kind of thing.

Now, there are three ways you can get onto the cloud. First, you can go to a cloud-computing provider and basically build your stuff in that cloud, which gets you some of the infrastructure. But there's still the issue of how you write applications in that environment and connect to other cloud resources.

Second, you can go to pure SaaS, whereby you get a ready-made application and can do some customization. But there are going to be quite a few gaps between what that provides and what you actually want to do -- quite big gaps in terms of integrating it with your existing on-premises applications and with the other cloud applications that you use.

Third, where PaaS comes in, it allows for the ability:

A) To get much faster to the custom applications that you need to build for that environment

B) To do the integrations to fill in the gaps and to access other SaaS applications and services, and to patch and connect back to the existing on-premises applications.

Gardner: Well, great. Now, to the point a little earlier: if you read a lot of the blogs that are out there, you might think that this is all widespread. But PaaS is just in its early stages. Yet one place where SaaS has really become quite popular, and is in full enterprise usage, is the CRM space.

First, why is it, do you think, that SaaS has taken off with CRM? And secondarily, what is it about the nature of the data in CRM that the PaaS approach might be well suited for?

Barnett: In the early days, CRM took off because SaaS-based sales-force automation allowed a sales manager to get to the functionality he wanted. In the eyes of the IT manager, he said, "Well, okay, this is just a standalone application inside of Salesforce.com. The sales department is not going to hurt anyone else, so go ahead with it." And the sales manager just signed up on the credit card, perhaps didn’t even tell IT about it, and got on with it.

So it was very easy to establish on-demand CRM. Now, what has happened more recently is that enterprises have realized that they have a lot of on-demand CRM in use. And perhaps they’ve decided that they want to harness it, because it enables them to give functionality to the sales team faster than they could if they built it themselves.

But they would have to do that in the context of their IT infrastructure, and they are looking at things like integration -- making sure a customer record in the on-demand CRM system is the same as the customer record in the ERP system. If a salesman closes a prospect, how do you translate that closing from the CRM system into an order on the ERP system of record, so it gets invoiced and the salesman is properly compensated? How do you then take the data and functions back into the SaaS system?
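The CRM-to-ERP hand-off Barnett describes is, at bottom, a record translation. A toy sketch of that mapping -- the field names on both sides are invented, since real CRM and ERP schemas differ:

```python
def crm_to_erp_order(opportunity: dict) -> dict:
    """Translate a closed-won CRM opportunity into an ERP order record."""
    if opportunity["stage"] != "closed_won":
        raise ValueError("only closed-won opportunities become orders")
    return {
        "customer_id": opportunity["account_id"],  # must match the ERP's
        "amount": opportunity["amount"],           #   customer master record
        "sales_rep": opportunity["owner"],         # drives compensation
        "source": "crm",
    }

won = {"stage": "closed_won", "account_id": "ACME-001",
       "amount": 25000, "owner": "jdoe"}
print(crm_to_erp_order(won)["customer_id"])  # ACME-001
```

The hard part in practice is not the field mapping itself but keeping the two systems' customer identifiers reconciled, which is exactly the integration work the speakers say a platform layer should absorb.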

Gardner: So, we have a lot of fast-changing data that is actually essential to a business. This is what makes their sales happen and how they then fulfill those sales and get them into the processes that their back-end systems will perhaps manage and drive to create the actual products and services. This is certainly mission-critical. It’s distributed across a number of people, with many of them scattered and mobile, and it’s all quite dynamic.

Barnett: Right, and they need to do even more functionally. The on-demand CRM providers like NetSuite, Salesforce.com, Oracle, and now we’re hearing Microsoft Dynamics, are providing Web services layers over the functionality that they provide to the end users.

We’ve always been able to do a certain level of functional customization around those applications. But when you have Web services that provide access to the data -- at the programmatic level -- you gain a whole new opportunity in terms of levels of customization against an existing CRM application or ERP system.

This pushes out the extensibility and increases the functionality of the investments that they’re making. It allows those services to expand further in a cloud-oriented, Web-oriented way.

Gardner: Now, back to Phil. What is it about the CRM and PaaS from your perspective that demonstrates a larger opportunity in the market?

That is to say, if we can take advantage of the mashups and services and the high level of on-demand CRM, we can bring PaaS in to help integrate data, or to take that data and some of the interfaces and views from a CRM activity and relate them to a channel, a supply chain, and/or back-end systems.

Does that mean that CRM now highlights what will be repeated across other types of business applications?

Wainewright: Yes. I think it’s great for the on-demand CRM vendors, because it really starts to hammer home the benefits of being an on-demand application. Now you’ve got this Web context that you can take advantage of.

When you look at the huge advantages of being able to consolidate community insights into a better application, and of being able to connect into API resources and aggregate data for composite processes -- these are mashups, data mashups and user-interface mashups. It’s all the same kind of thing.

So you gain this ability, and you really destroy the misconception that if you go to SaaS you can’t do customization and you can’t do integration. It means that we’re actually doing better customization and better integration than you are capable of doing with many on-premises systems, because it’s actually a more flexible customization. It’s more cost-effective integration because it’s a shared service.

What previously seemed like disadvantages of the SaaS model can be turned on their head and turned into advantages.

It shows the way to using these applications in other areas. We talked about CRM -- because that’s a very big area that gets a lot of media attention -- but there are other areas that are equally successful with on-demand, such as people management, human capital management, the whole e-commerce area, and, of course, content management. There are lots of opportunities to take this model further.

Gardner: It’s been interesting for me as I look into Bungee Labs and Bungee Connect because it appears that the PaaS value forms a stepping stone to allow more use and exploration of the SaaS applications and services that are available. And the more you use those services -- in a virtuous adoption cycle -- the more you want to customize and integrate. And so you might then look to PaaS as a means for doing that. I think they play off of each other quite well.

Wainewright: They do. And I think probably the biggest takeaway for anyone listening to this is that as a business, you need to know how to work in this Web environment. Your customers expect you to be connected to allow them to participate. Your partners expect it too. You’ve got to be open to the Web. You need to play in this environment.

And secondly, if you are a developer, you need to learn how to do all this stuff, because this is going to be a big, expanding field where the skills are going to be in demand.

A lot of the focus at the moment in IT is on getting all of the internal systems operating together. I think there’s very little budget available for doing a lot of new stuff. So, you can’t afford to build the infrastructure for all of this Web stuff yourself. You really need to go to outside providers to be able to start playing with it quickly. But if you don’t start playing with it now, you are going to be left behind.

Gardner: Great points. Also, if you build services and applications on a PaaS approach, they are going to be public facing. Putting it onto a grid, or utility/cloud-type of deployment allows you to then scale up very rapidly without you having to worry about that forklift upgrade of your blade servers or your datacenter. And, of course, you can also scale down if needed if the services impact just a handful of people in a supply chain environment, for example. You can build a service that might be for a small group of people, but at a price-point that makes that worthwhile.

Barnett: I’d say that’s the beauty of what the whole SaaS trend has allowed us to be able to do.

Going back to the CRM example, the sales managers have been able to just start up a test account using their credit card. And, all of a sudden they can start seeing the value instantly of having this CRM on-demand, through a browser. And, then they can just slowly, slowly increase the activity that’s going on there because they see the value and it’s becoming cost effective.

But if you take the same kind of concept and now bring it back to the IT department, they can just try a simple application that has business value, as long as they’re happy on the security side of things. Then maybe they try a composite of two Web services, and provide instant value back to the business. There it is. It’s a URL. It’s secure. It’s locked down, and it’s all the rest of it. And nobody can get access to it except for the end users that you've decided on.

It just slowly builds up. There's no long-term cycle around massive evaluations, cost and ROI studies, TCO analyses, and infrastructure investments and projections. You can just try it out slowly, and if you like what you see and get that kind of positive feedback -- as a developer, an IT manager, or an end user -- you just slowly build it up and only pay for what you use. And you have the instant ability to scale if it really takes off.

That’s a great thing for anybody in business: being able to try before you buy, without any kind of contractual or cost commitment upfront.

Gardner: I am afraid we’re going to have to leave it there, we’re out of time.

We’ve been discussing PaaS and how Bungee Labs has been bringing that concept to market with a service called Bungee Connect, which fulfills much of the underlying functionality for PaaS.

To help us better understand the market opportunity and the trends that support this, we’ve been talking with Phil Wainewright, an independent analyst, director of Procullux Ventures, and a ZDNet SaaS blogger. Good to have you on the show again, Phil.

Wainewright: Pleasure to be here.

Gardner: We’ve also been joined by Alex Barnett, the vice president of community at Bungee Labs. Thank you, Alex.

Barnett: Yes, many thanks for having me on, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks, and come back and join us next time.

Listen to the podcast here. Sponsor: Bungee Labs.

Transcript of BriefingsDirect podcast on platform as a service and on-demand applications deployment trends. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.