
Monday, April 22, 2013

Service Virtualization Brings Speed Benefit and Lower Costs to TTNET Applications Testing Unit

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and transformation.

Once again we're focusing on how software improvements and advanced HP Service Virtualization (SV) solutions are enabling IT leaders to deliver better experiences and payoffs for businesses and end-users alike.

Today we're going to learn how TTNET, the largest internet service provider in Turkey, with six million subscribers, has significantly improved applications deployment, while cutting costs and time to delivery.

With that, let's join our guest, Hasan Yükselten, Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom, and they're based in Istanbul. Welcome to the show, Hasan.

Hasan Yükselten: Thank you.

Gardner: Before we get into this discussion of how you’ve used SV in your testing, what was the situation there before you became more automated and before you started to use more software tools? What was the process before that?

Yükselten: Before SV, we had to use the other parties' test infrastructures in our test cases. We're the leading ISP in Turkey. We deploy more than 200 applications per year, and we have to provide better and faster services to our customers every week and every month.

We mostly had problems with accessibility, authorization, downtime, and private data when reaching the third parties' infrastructures. So, we needed virtualization in our test systems, and we needed automation to get fast deployment and make release times shorter. And of course, we needed to reduce our costs. So, we decided to solve the company's problems by implementing SV.

Gardner: What did you do to begin this process of getting closer to a faster and automated approach? Did you do away with scripts? Did you replace them? How did you move from where you were to where you wanted to be?

Yükselten: Before SV, we couldn’t do automation, since the other parties are in discrete locations and it was difficult to reach the other systems. We could automate functional test cases, but for end-to-end test cases, it was impossible to do automation.

First, we implemented SV to virtualize the other systems, and we put SV between our infrastructure and the third-party infrastructure. We learned the requests and responses, and then we could use SV instead of the other party's infrastructure.
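
To make that record-and-replay idea concrete, here is a rough sketch, in Python, of how captured request/response pairs from a partner system might be served back by a stand-in virtual service. This is only an illustration of the concept, not HP SV itself; the endpoint paths and payloads are invented for the example.

```python
# Minimal "virtual service" sketch: replay recorded request/response pairs so
# tests no longer depend on the real third-party system being reachable.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Responses "learned" earlier by recording real traffic (hypothetical data).
RECORDED = {
    ("POST", "/identity/verify"): {"status": "OK", "citizenId": "12345678901"},
    ("GET", "/pstn/line/5551234"): {"lineStatus": "ACTIVE", "ported": False},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def _reply(self):
        key = (self.command, self.path)
        found = key in RECORDED
        body = RECORDED.get(key, {"error": "no recorded response for this request"})
        payload = json.dumps(body).encode()
        self.send_response(200 if found else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def do_GET(self):
        self._reply()

    def do_POST(self):
        # A real tool would also match on the request body; this sketch ignores it.
        self.rfile.read(int(self.headers.get("Content-Length", 0) or 0))
        self._reply()

if __name__ == "__main__":
    # Point the system under test at http://localhost:8080 instead of the partner system.
    HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()
```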

Automation tools

After this, we could also use automation tools. We managed to use automation tools by integrating Unified Functional Testing (UFT) with SV, and now we can run automated test cases and end-to-end test cases on SV.
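
As a companion sketch, here is what an automated end-to-end check might look like when it runs against the virtualized endpoint rather than the partner's live system. The URL, path, and fields are assumptions carried over from the previous example, not TTNET's real services, and the test style (a plain pytest-discoverable function) is just one possibility.

```python
# Hedged example: an automated test exercising an end-to-end step against the
# virtual service instead of the real third-party system.
import json
import urllib.request

VIRTUAL_BASE = "http://localhost:8080"  # the virtualized third party, not production

def verify_identity(citizen_id: str) -> dict:
    req = urllib.request.Request(
        f"{VIRTUAL_BASE}/identity/verify",
        data=json.dumps({"citizenId": citizen_id}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def test_new_subscriber_identity_check():
    # This step used to be blocked whenever the partner system was unavailable.
    result = verify_identity("12345678901")
    assert result["status"] == "OK"
```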

Gardner: Was there anything about this that allowed you to have better collaboration between the developers and the testers? I know that in many companies this is a linear progression, where they develop and then test, and there's often not a lot of communication between the two. Was there anything about what you've done that's improved how developers and testers have been able to coordinate and collaborate?

Yükselten: We started to use SV in our test systems first. When we saw the success, we decided to implement SV for the development systems also. But we've just implemented SV on the development side, so I can't give results yet. We have to wait and see, for maybe one month, before I can answer this question.

Gardner: Tell me about the types of applications that you’re using here as a large internet service provider. Are these internal apps for your organization? Are they facing out to the customers for billing, service procurement, and provisioning? Give me a sense of the type of applications we’re talking about?

Yükselten: We are mostly working on customer relationship management (CRM) applications. We deploy more than 200 applications per year and we have more than six million customers. We have to offer new campaigns and make some transformations for new customers, etc.

We have to save all that information, and while saving it, we also interact with the other systems, for example the National Identity System, through telecom systems such as public switched telephone network (PSTN) systems.

We have to request information and make some requests to the other systems. So, we need to use all the other systems within our CRM systems. We also have internet protocol television (IPTV) products, value-added services products, and other company products. But basically, we're using CRM systems for our development and for our systems.

Gardner: So clearly, these are mission-critical applications essential to your business, your growth, and your ability to compete in your market.

Yükselten: If there is a mistake, a big error in our system, the next day, we cannot sell anything. We cannot do anything all over Turkey.

Gardner: Let's talk a bit about the adoption of your SV. Tell me about some of the products you’re using and some of the technologies, and then we’ll get into what this has done for you. But, let's talk about what you actually have in place so far.

Yükselten: Actually, it was very easy to adopt these products into our system, because, including the proof of concept (PoC), we could use this tool within six weeks. We spent the first two weeks on the PoC, and after four more weeks, we managed to use the tool.

Easy to implement

Within the first six weeks, we could use SV for 45 percent of our end-to-end test cases. In 10 weeks, 95 percent of our test cases could be run on SV. It was very easy to implement. After that, we also implemented two other SV instances in our other systems. So, we're now using three SV systems: one for development, one just for the campaigns, and one for the end-to-end tests.

Gardner: Tell me how your relationship with HP Software has been. How has it been working with HP Software to attain this so rapidly?

Yükselten: HP Software helped us very much, especially R&D. HP Turkey helped us, because we were also using application lifecycle management (ALM) tools before SV. We were using QTP, LoadRunner, Quality Center (QC), and so on, so we had a good relationship with HP Software.

Since SV is a new tool, we needed a lot of customization for our needs, and HP Software was always with us. They were very quick to answer our questions and to respond to our development needs. We managed to use the tool in six weeks because of HP's rapid solutions.

Gardner: Let’s talk a little bit about the scale here. My understanding is that you have something on the order of 150 services. You use 50 regularly, but you're able to then spin up and use others on a more ad-hoc basis. Why is it important for you to have that kind of flexibility and agility?

Yükselten: As you say, we virtualized more than 150 services, but we use 48 of them actively. We use just that portion of the services because we virtualized our third-party infrastructures according to our needs. For example, we virtualized all the other CRM systems, but we don't need all of them. In gateway remote mode, you can simulate all the other web services completely. So, we virtualized all the web services, but we use just what we need in our test cases.

Gardner: And this must be a major basis for your savings when you only use what you need. The utilization rate goes up, but your costs can go down. Tell us a little bit about how this has been an investment that’s paid back for you.

Yükselten: In three months we got the investment back; actually, maybe in less than three months. It could have been two and a half months. For example, for the campaign test cases, we gained 100 percent efficiency. Before HP, we could run just seven campaigns in a month, but after HP, we managed to run 14 campaigns in a month.

We gained 100 percent efficiency and three man-months this way, because three test engineers were working on campaigns like this. For another example, last month we got the metrics and saw that we had a total blockage for seven days out of the 21 working days in March. We saved 33 percent of our manpower with SV, and there are 20 test engineers working on it, so we gained 140 man-days last month.

For our basic test scenarios, we used to run all test cases in 112 hours. After SV, we managed to run them in 54 hours. So we gained 100 percent efficiency in that area and also managed to automate the campaign test cases. We managed to automate 52 percent of our campaign test cases, and this meant a very big efficiency gain for us. In total, we saved more than $50,000 per month.

Broader applications

Gardner: That’s very impressive and that was in a relatively short period of time. Do you expect now to be able to take this to a larger set of applications, maybe beyond your organization, more generally across Türk Telekom?

Yükselten: Yes. Türk Telekom licensed these tools and started to use them in its test services to get this efficiency for its systems. We have a sister company called AVEA, and they also want to use this tool. After we got this efficiency, many companies wanted to use this virtualization. Eight companies visited us in Turkey to learn from our experiences with this tool. Many companies want this and want to use this tool in their test systems.

Gardner: Do you have any advice for other organizations like those you've been describing, now that you have done this? Any recommendations on what you would advise others that might help them improve on how they do it?

Yükselten: Companies must know their needs first. For example, in our company, we have three third-party systems that cause blockages, and those systems don't change every day. So it was easy to implement SV in our systems and virtualize the other systems. We don't need to redo the virtualization day by day, because the other systems don't change every day.

Once a month, we consult and change our systems, update our web services on SV, and this is enough for us. But if the other party's systems change day by day, or frequently, it may be difficult to do virtualization every day.

This is an important point. Companies should think about automation alongside virtualization. Automation is also a very effective lever, so it must be considered while doing virtualization.

Gardner: As to where you go next, do you have any thoughts about moving towards UFT, using cloud deployment models more? Where can you go more to attain more benefits and efficiencies?

Yükselten: We started to use UFT with integrating SV. As I told you, we managed to automate 52 percent of our campaign test cases so far. So we would like to go on and try to automate more test cases, our end-to-end test cases, the basic scenarios, and other systems.

Our first goal is doing more automation with SV and UFT, and the other is using SV on the development side. We plan to find defects earlier in development and to get higher-quality products into test.

Rapid deployment

Of course, in this way, we get rapid deployment and shorter release times, because the product will have higher quality. Using performance tests with SV also helps us on performance. We use HP LoadRunner for our performance test cases. We have three goals now, and the last one is using SV integrated with LoadRunner.

Gardner: Well, it's really impressive. It sounds as if you put in place the technologies that will allow you to move very rapidly, to even a larger payback. So congratulations on that.

Well, Hasan, I'm afraid we'll have to leave it there; we've run out of time. We've learned how TTNET, the largest internet service provider in Turkey, has significantly improved mission-critical application deployment, while also cutting costs and reducing that important time to delivery.

I'd like to first thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue in the Discover Performance Group on LinkedIn. Of course, I'd like to extend a huge thank you to our special guest, Hasan Yükselten. He is the Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom in Istanbul. Thanks so much, Hasan.

Yükselten: You're welcome, and thank you for your time too.

Gardner: And you can gain more insights and information on the best of IT Performance Management at www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover performance podcast series on iTunes under BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I've been your host and moderator for this discussion, part of our ongoing series on IT innovation. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Wednesday, September 17, 2008

iTKO's John Michelsen Explains Roles and Methods for SOA Validation Across Complex Integration Lifecycles

Transcript of BriefingsDirect podcast with iTKO's John Michelsen on SOA testing and virtualization market trends.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about integration, validation, and testing for services-oriented architecture (SOA) and middleware -- particularly for business process management and to extend business processes more efficiently.

We’re going to be looking at how integration nowadays is really across multiple dimensions. We are talking about integrating technology, about various formats, and for extending frameworks, vendors, application sets, and specific application suites. There are also now enterprise service buses (ESBs) that are creating multiple types of integration across services -- from different hosting locations and from different technologies.

Not only that, we’re also dealing with traditional enterprise application integration (EAI) issues and middleware. And, of course, there’s more talk about cloud computing and software as a service (SaaS).

The whole notion of integration in the enterprise has exploded in terms of complexity -- but that puts more onus and importance on validation, testing and understanding what’s actually going on within these integration activities.

To help us understand more about integration, middleware and SOA validation and testing, we’re joined by John Michelsen, chief architect and founder of iTKO. Welcome to the show, John.

John Michelsen: Thanks, Dana, good to be here.

Gardner: We’ve talked several times in the past about the integration in SOA, and what’s been going on. How do you look at integration now among business applications and middleware? Is it, in fact, more onerous and complex than ever, and how would you characterize the current state of the market?

Michelsen: It really is, and it’s for a number of reasons. Most of us can surmise that, as soon as we look at it. We tend not to turn anything off. Existing systems don’t go away, and yet we bring in additional [IT] systems and new things all the time. We’re changing technologies, because we're always looking for the faster, cheaper, more effective way. That's great, and yet today, IT becomes legacy faster than before. In fact, you and I had a conversation a few weeks ago about that.

So, it gets more complex over time. And yet, to get real value out of IT, you've got to think not from the perspective of these systems, but from the perspective of the business's processes, as they need to function. We have to do what we can, despite unreasonable gyrations in the systems, in order to make IT reflect the way the business operates.

So there is a real mismatch here, and in order for us to accomplish value for the business, we’ve got to solve for it.

Gardner: Of course, at the same time, IT organizations are under pressure to reduce their complexity, reduce their maintenance and total cost of ownership (TCO). They’re dealing with long-term activities such as datacenter consolidation and application modernization. What is it that brings testing and validation into this mixture, in terms of end-to-end visibility?

Michelsen: Let’s say three or four systems are already interoperating in some way, and now you’ve become a part of a larger organization. You’ve merged into a large organization, or you’ve taken into your organization something you've acquired. You add another three or four end points, and now you’ve got this explosion of additional permutations. The interactions are so many that without good testing and validation, there’s just almost no hope of getting real visibility, and predictability out of these systems.

When things do fail, which unfortunately happens, you’ll have an extremely long recovery time without this test and validation capability, because knowing that something broke somewhere is the best you can do.

Gardner: I suppose we’re also looking now more at the lifecycle of these applications based on what’s going on at design time. Folks who are using agile development principles and faster iterations of development are throwing services up fairly quickly -- and then changing them on a fairly regular basis. That also throws a monkey wrench into how that impacts the rest of the services that are being integrated.

Michelsen: That’s right, and we’re doing that on purpose. We like the fact that we’re changing systems more frequently. We’re not doing that because we want chaos. We’re doing it because it’s helping the businesses get to market faster, achieving regulatory compliance faster, and all of those good things. We like the fact that we’re changing, and that we have more tightly componentized the architecture. We’re not changing huge applications, but we’re just changing pieces of applications -- all good things.

Yet, if my application is dependent upon your application, Dana, and you change it out from under me, your lifecycle impacts mine, and we have a “testable event,” even though I’m not in a test mode at the moment. What are we going to do about this? We've got to rethink the way that we do services lifecycles, we've got to rethink the way we do integration and deployment.

Gardner: There is, of course, a very high penalty if you don’t do this properly. If you don’t have that visibility, you lose agility, and the business outcomes suffer.

Michelsen: That’s right. And too often, we see customers where they’re in this dynamic of these highly interconnected systems. That frequency of change and the amount of failure that’s occurring because of those changes are actually having such a negative effect that they’re artificially reducing their pace of change -- which is, of course, not the goal for the business -- in order to try to accomplish some level of stability.

This means that we’ve gone through all this effort to provide this highly adaptable and agile platform and we’re doing all this work to get agile and integrated, but we have to then undo the benefit in order to accomplish stability.

Gardner: One of the basic principles of SOA is that you get benefit as a result of the “whole being greater than the sum of the parts,” but many of the parts come from specific vendors and/or open-source projects. They have management capabilities and insights built into them specifically. Yet when you rise up a bit more holistically, that’s where the issue comes in of how to get visibility across multiple systems.

Explain to us how you got started on this journey, and where your background and history comes in terms of addressing that higher abstraction of visibility.

Michelsen: Right, that’s a good point, because if the world were as simple as we wanted it to be, we could have one vendor produce that system that is completely self-contained, self-managed, very visible or very "monitorable," if you will. That’s great, but that becomes one box of the dozens on the white board. The challenge is that not every box comes from that same vendor.

So we end up in this challenge where we’ve got to get that same kind of visibility and monitoring management across all of the boxes. Yet that’s not something that you just buy and that you get out of the box.

This is exactly what pushed me into this space throughout the 1990s. I had a company, prior to founding this one, that built mission-critical applications for lots of large companies, including some airlines and financial services companies, logistics firms, even database engines, and things like this.

The great thing was that I was able to put my little team together to build really cool stuff and deploy it really fast into an organization. They loved it. The challenge was that I was doing this in a very disruptive way to the rest of the IT organization. I'd come, bring in this new capability, and integrate it into the rest of the applications.

Well, in doing so, I’m actually causing this very same dynamic that we’re talking about now -- where all of a sudden my new thing, my new technology, integrated into a bunch of legacy, is causing disruption across all kinds of systems. We just didn’t have a sense for how to do this.

So I had to learn how to do this, how to transform these organizations into integration-based thinking, and put in test-and-validation best practices. That’s what caused us to end up building what we now call LISA.

Gardner: Unfortunately, a lot of organizations, when they face that disruption, their first instinct is probably just to put up a wall and say, “Okay, let’s sequester or isolate this set of issues.” But that, of course, aborts this business process level of innovation and value.

Michelsen: Exactly, and here's a classic example. A number of the types of systems that we built in the late 1990s were the e-commerce applications that were customer facing. The companies said, “I just don’t want to hear that this system can’t talk to that system. I want a Web-based presence that’s brain-dead simple, and that does things the way a customer wants to be able to do them. You’re going to interconnect all those back ends in order to get that to work. … You just do it for me. And if you won’t do it, I’m going to go find a vendor outside that will.”

The challenge is, no matter how it ends up there, now we've got to reckon with it. Frankly, even though those are sometimes difficult conversations the business is having with IT, the business needs those things, because the company that does it gains market share and increases the scope of their growth cycle. That obviously is something that every IT organization wants, because that leads to a bigger budget and a better company, and the success that we want to see.

Gardner: Now, we've certainly established that there is a problem, and that’s been evident for some time. We’ve underscored the fact that we want to get visibility, and offer new elements into an integrated environment, to take advantage of the technologies that are coming online, but not be in disruptive mode, or we certainly want to reduce the risk.

So we know there’s a problem, we know what we want to do. Now, how do you approach this technically, when you’re dealing with so many different vendors, so many variables?

Michelsen: Well, I’m the founder of a product company, and yet you don’t start by going and buying some software, installing it, and thinking you’re done. Let’s start with thinking around a new set of best practices for what this needs to look like. We frequently leverage a framework we call "the 3Cs" in order to accomplish this -- Complete, Collaborative and Continuous.

In a nutshell, we’ve got to be able to touch, from the testing point of view, all these different technologies. We have to be able to create some collaboration across all these teams, and then we have to do continuous validation of these business processes over time, even when we are not in lifecycles.

It's a very high-level, broad-strokes description of our solutions, but essentially, drilling down into the details with the customer, we can show them how these 3 Cs establish that predictable, highly efficient, high-visibility way to do these kinds of applications.

Gardner: There must be a secret sauce? There must be technology in addition to the vision and methodological approach?

Michelsen: Right. Getting that testability across all these technologies, collaboration among all the teams, and, of course, continuous validation takes tooling and technology. Of course, we provide that, which is great. From a professional point of view, I like the fact that the way we message to the market is: "These are the ways you've got to go about doing it." Once you see that that is an appropriate approach for you, then you become a great candidate for using our products.

But let’s talk about making sure that this is right for you. Then we’ll talk about our product being useful, because that really is the way the things should work. I can’t tell you how many times I’ve seen a customer who has said, “Well, we've run out and bought this ESB and now we’re trying to figure out how to use it.” I've said, “Whoa! You first should have figured out you needed it, and in what ways you would use it that would cause you to then buy it.”

It’s the other way around sometimes. That’s why we’ll start with the best practices, even though we’re not a large services firm. Then, we’ll come in with product, as we see the approach get defined.

Gardner: Are there any specific types of enterprise companies -- whether in a particular maturity around IT or suffering from certain ills or ailments -- that pique your interest to say, “Well, this is a perfect candidate for our solution and product set?” What are some of the indicators that a company is ready for this level of validation and testing?

Michelsen: There are a couple. First, the large-scale, top-down SOA initiatives clearly need this, because this is the perfect example of … interconnecting things, wrapping legacy systems in modernization, creating business-process modeling environments, increasing the pace of change, and distributed development across many different teams. SOA does all of those things for you, and certainly scratches every one of those itches that we’ve been talking about.

The other is when you go into a large integration initiative. There are a lot of partner solutions -- from companies like TIBCO, WebMethods, Oracle Fusion and SAP NetWeaver, and forgive me for not naming all of our friends. When you’re going down this kind of path, you’re going down a path to interconnect your systems in this same kind of ways. Call it service orientation or call it a large integration effort, either way, the outcome from a system’s point of view is the same.

Then, traditionally, by the time a business has been large for many years, it just has an enormous amount of technology. A classic example is a large financial institution that does fixed-asset trades. In order for one trade to be placed, it takes Web services and EJBs, from a Java Swing-based application into CORBA, into messaging, into C code, into two different databases, and out the other end of a Web application.

All of that technology, integrated together, is what the business thinks of the app. Of course, that takes hundreds of people across many different teams – U.S., Europe and Asia -- from an IT point of view. But, all of that technology together is the app. So that’s your reality. That’s where we really can sit and where these best practices really get to work.

Gardner: So when you go into these organizations where there's a pretty powerful need, what is it that they're getting in terms of value and impact? How do they use these tools? Then, we'll ask a little bit about examples of what the outcomes have been.

Michelsen: What they’re doing is adopting these best practices on a team level so that each of these individual components is getting their own tests and validation. That helps them establish some visibility and predictability. It’s just good, old-fashioned automated test coverage at the component level.

As these components start to orchestrate with each other in order to accomplish this higher-level objective -- where this component becomes a part of a larger solution -- then there’s a validation aspect to it. The application that is causing this component-to-component orchestration has a validation challenge to make sure that things continue to work over time, even in the face of change.

As these components come together, there’s a validation layer that’s put in place. At iTKO, we even have a virtualization capability that allows you to do these kinds of things in a very agile way and without some of the constraints that you typically have. At the very end of the process, we are near the glass, if you will, of the user screen. Then you’ve got business-process level validation or testing across the whole thing. So think of it as, “Here’s a business process model that I’ve modeled in a business process modeling (BPM) tool of choice."

The complement of that is one or more tests or validations of that particular business process, where I invoke the process and verify my technical outcomes. So if placing an order means doing this, this, and this in these systems, you do that with a BPM tool. To validate that the business process functions as expected, you invoke that business process with our product LISA and then make sure all of those expected outcomes occurred.

For example, the customer database is going to have an update in it, and the order management system is going to create a new order. The account activity system, which might be completely independent, the inventory system, and the shipping system: all of these are going to have to have their expected outcomes verified in order for us to know that the system works as expected.
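
A rough sketch of what such an outcome check could look like in code follows. The client objects and field names are hypothetical stand-ins for the back-end systems just described, not LISA's actual API; the idea is simply to invoke the process once and then assert the expected side effect in each downstream system.

```python
# Illustrative business-process validation: place an order, then verify the
# expected outcome in every system the process is supposed to touch.
# The *_client arguments are hypothetical adapters to each back-end system.

def validate_place_order(process_client, crm_client, order_client,
                         inventory_client, shipping_client):
    before_stock = inventory_client.stock_level("SKU-42")

    # Invoke the business process end to end (what the BPM tool orchestrates).
    result = process_client.place_order(customer_id="C-1001", sku="SKU-42", qty=1)

    failures = []
    if not crm_client.customer_updated("C-1001"):
        failures.append("CRM customer record was not updated")
    if order_client.find_order(result.order_id) is None:
        failures.append("order management system has no new order")
    if inventory_client.stock_level("SKU-42") != before_stock - 1:
        failures.append("inventory was not decremented")
    if not shipping_client.shipment_exists(result.order_id):
        failures.append("no shipment was created")

    assert not failures, "; ".join(failures)
```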

Gardner: This really sounds like a metaview of the integration, paths, occurrences, and requirements. It almost sounds as if you’re moving to what we used to refer to, and still do, as application lifecycle management (ALM). But, it sounds like you’re really approaching this additionally as “integration lifecycle management.”

Michelsen: That’s a great point. In fact, we’ve heard people say, “Wow, it sounds a little bit like also business activity monitoring (BAM), where you’re basically chasing all these transactions through the production system and making sure they are doing their thing.” Certainly, it's a valid point. But let’s be really clear. We must be capable of doing this as a part of our development cycles.

We can’t build stuff, throw it over the wall into the production system to see if it works, and then have a BAM-type tool tell us -- once it gets into the statistics -- "By the way, they’re not actually catching orders. You’re not actually updating inventory or your account. Your customer accounts aren’t actually seeing an increase in their credit balance when orders are being placed."

That's not when you want to find out it doesn't work, right? And the challenge is, that's what we do today. We largely complete these applications. We go into some user-acceptance test mode, where we have people see if they can find any problems with this enormous amount of software, millions of lines of code. We give them a few weeks to see if they can find any bugs, and then we go to production.

We really can't let that happen anymore. These apps are too big, their connections are too many, and the number of possible testable items is way too great. And, of course, tomorrow we invalidate all the work we just did with that human labor, when something changes somewhere.

So this is why, as a part of our lifecycles, we have to do this kind of activity. In doing so, we drive value; we get something for having done our work.

Gardner: Clearly, from my observations, there's a struggle now under way in the market to find better ways of relating, of finding the relationships and dependencies between the design-time activities and the run-time activities, and then creating more of a virtuous set of feedback loops that allow this to continue without that handing off, or waiting for the red light/green light verdict. Tell me how you think LISA provides a bridge, or maybe a catalyst, to increased feedback between design time and run time, particularly in an SOA environment.

Michelsen: Great question, and I'm glad that you're seeing that as well, Dana, because we think it's an indication that things are maturing. When we see our customers asking us, "How do I essentially do that second C of yours, collaboration? How do I better collaborate?" we know that they're finally feeling the pain of a siloed lifecycle, where testing and operations are disjointed activities. Development and test don't talk to each other, or with project management. And the business analysts don't really even know each other.

We know that when we’re hearing questions around collaboration, people are becoming aware that they really needed to accomplish it. This is great. Some specifics of how our products can help is by first being a test capability that every one of the teams I just mentioned can use to do their own part of the testing effort. Developers have a test responsibility. Certainly, quality assurance (QA) has one. Operations even has one, from a functional monitoring point of view.

The business analysts have this whole "validate the business process" activity they need to accomplish. Everyone has their part to play, and if we can provide a tool that helps all of them do their part with the same product, there’s an enormous amount of efficiency. More important, there’s a much more highly automated back channel through this lifecycle.

If a business process is not functioning as expected, that failing test case is consumable all the way back to that individual developer who can see the context in which my component is being exercised. [And that comes from seeing] the input and output, seeing the expected outcome, and seeing the unexpected actual outcome. Then I get a really good awareness of what my component is supposed to do in the context of the business process.

When we have this common tooling across the board -- instead of one way of doing it for development, one way of doing it for QA, one way for the business analyst and for operations and everything -- we get much greater collaboration.

One other important point here is that we also have an opportunity to introduce this continuous validation framework, where once we start these integration labs, those components are being delivered into that integration lab, and then into pre-production, performance labs and production. We need an infrastructure for all of this continuous validation that properly notifies whoever should be notified when failures occur.
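
Here is a minimal sketch of what that kind of continuous-validation loop might look like, assuming a list of named, zero-argument checks and a simple webhook notifier. The interval, alert endpoint, and check shape are all invented for the illustration; a real deployment would plug into whatever scheduling and alerting infrastructure the team already runs.

```python
# Sketch of a continuous-validation runner: execute each check on a schedule
# and notify whoever should be notified when failures occur.
import json
import time
import traceback
import urllib.request

CHECK_INTERVAL_SECONDS = 15 * 60                          # run every 15 minutes
ALERT_WEBHOOK = "http://alerts.example.internal/notify"   # hypothetical endpoint

def notify(check_name: str, detail: str) -> None:
    # Post a failure notice to the team's alerting endpoint.
    body = json.dumps({"check": check_name, "detail": detail}).encode()
    req = urllib.request.Request(ALERT_WEBHOOK, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run_forever(checks):
    """checks: iterable of (name, zero-argument callable that raises on failure)."""
    while True:
        for name, check in checks:
            try:
                check()
            except Exception:
                notify(name, traceback.format_exc())
        time.sleep(CHECK_INTERVAL_SECONDS)
```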

So our application has lots of good technology for being able to do this as well.

Gardner: Well, of course, the proof of the pudding is in the eating. Can you give us some examples of organizations that have employed these methods, and then some of these tools? Start to think in terms of the 3 Cs that you’ve outlined. What sorts of results or paybacks are there in terms of return on investment (ROI) and TCO? What validates this?

Michelsen: A great example of this would be Lenovo, the ThinkPad guys, who went through a major next-generation overhaul of all of their customer- and partner-facing order management systems. This is www.lenovo.com, and a number of the systems behind it. They went with a new vendor to bring in a new application and interconnected it with all the existing back-end and legacy systems. It's a classic example, as I said a few minutes ago, of when this kind of activity becomes important.

Lenovo realized from their past experiences that they wanted to get better at this kind of activity, because they didn't want what had happened to them in the past, where application failures underneath the screens would degrade the customer experience, but you couldn't even tell at the website.

They were not capturing the order, even though an order number was showing up on the Web page, and things like this. They realized this challenge was too great for them, and they brought our solution in, in order to validate all these individual components and then validate at the user’s business-process level.

They wanted to validate what it means to configure ThinkPads, to price them, to do all of the bundling, to make sure that I can place orders, check orders, verify shipping, and do all these different things. That takes a pretty significant amount of visibility. Of course, our product has some capability to give you that visibility, because you’re going to need it.

So you have this kind of capability, and Lenovo was able to move away from, "I hope this thing continues to run." What was very possible in the past was that the customer update occurred, but the order placement didn’t -- a partial commit.

Instead of that reality, they now validate on a literally continuous basis. From seven different places all over the world, we're continuously validating the performance and functional integrity of the entire system, both at the component level and at what I call the orchestration level.

In doing so, they have a whole lot more confidence that the thing actually performs the way they expect it to.

Gardner: There’s no question, John, that the organizations that are advancing, that are deeply into integration issues, are looking for this business process management value, at the orchestration level.

They've moved up an abstraction level in terms of the approaches and the accomplishments of what their IT departments and systems can deliver. But, of course, any time we move up an abstraction level technologically in the functions of IT, that requires the company to go up a level in validation, testing, and quality.

It makes sense now that you’re going to see a growing market. Is there any sense that you can give us from your business as to how these things are growing now? Are people really getting to that level where they want to bring together a lifecycle approach?

Michelsen: Well, hopefully the Lenovo example means yes. By the way, a partner company of ours named i2 sees this as well. We all know there's an amazing amount of effort in doing a large-scale implementation of either a packaged application or a large-scale custom application. I think we've done this long enough to realize that this has to be part of the way we do it.

I'm seeing that more and more. As a consequence, we are able to provide value to many customers. It's just been thrilling. We brought our product to market in early 2003 with a single customer or two. If our growth rate is an indication, the market, as an IT discipline, has finally realized that we have to get this right, which is terrific. If you think about it, the evangelist in all of us wants to get this right, wants to do the right thing. I'm seeing it more and more, and that's certainly terrific.

Gardner: Great. Well, we've been discussing the issues around integration, middleware, and SOA, as well as the need to abstract value up to the integrations and into the business processes. We have talked about how these elements relate to one another, and, of course, explained the need for greater visibility, validation, and testing in these environments.

We’ve been talking about LISA and iTKO with John Michelsen, the chief architect and founder of iTKO. I appreciate your input, and we look forward to learning more about how this market evolves. It is an exciting time.

Michelsen: Thanks a lot, Dana. I appreciate the time.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: iTKO.

Transcript of BriefingsDirect podcast with iTKO's John Michelsen on validation and testing in application integrations. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.