Friday, October 24, 2014

Big Data Analysis Provides New Degree of Real-Time Financial Position Insights to Large Russian Bank

Transcript of a BriefingsDirect podcast on how a major Russian bank is using HP Vertica data analytics tools to provide up-to-the-minute information for top executives to make better business decisions.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

This time, we're coming to you from the recent HP Big Data 2014 Conference in Boston. We're here to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

Our next innovation case study interview highlights how Otkritie Bank in Moscow has deployed HP Vertica and business intelligence (BI) for business activity monitoring.
To learn more about their drive for improved analytics, we're joined by Alexei Blagirev, Chief Data Officer at Otkritie Bank (formerly OpenBank). Tell us about your organization.

Alexei Blagirev: Otkritie Bank is a member of the Open Financial Corporation (now Otkritie Financial Corporation Bank), which is one of the largest private financial services groups in Russia.

Gardner: Tell us about your choice for BI platforms.

Blagirev: The reason we selected HP Vertica was that we tried to establish a data warehouse that could provide operational data storage and could also be an analytical OLAP solution.

Blagirev
It was a very hard decision. We drew on past experience from our team, from my side, and so on. Everyone had some negative experience with different solutions like Oracle, because there was a big constraint.

We couldn't integrate operational data storage and OLAP in a single solution. Why? Because the data warehouse (DWH) has to take high volumes of transactional data, and that was usually the biggest constraint in building high-transactional data storage.

Vertica was a very good solution that removed this constraint. While selecting Vertica, we were also evaluating different solutions like IBM. We identified advantages of Vertica against IBM from two different perspectives.

One was performance. The second was that Vertica is cost-efficient. Since we were comparing against Netezza (now part of IBM), we were comparing not only software, but software plus hardware. You can't build a Netezza cluster at a custom size. You can only build it in fixed sizes, 32 terabytes and so on.

Very efficient

We were also limited by the logistics of these building blocks, the so-called big green box of Netezza. With Vertica, it's really efficient, because we can use any hardware.

So we calculated our total cost of ownership (TCO) over a five-year horizon, and it was lower than if we had built the data warehouse with other solutions. This was the reason we selected Vertica.

From the technical perspective and from the cost-efficient perspective, there was a big difference in the business case. Our bank is not a classical bank in the Russian market, because in our bank the technology team leads the innovation, and the technology team is actually the influence-maker inside the business.

So the business was with us when we proposed the new data warehouse. We proposed to build the new solution to collect all data from the whole of Russia and to organize a so-called continuous load. This means that within the day, we can show all the data on what's going on with business operations, from all lines of business across all of Russia. It sounds great.

When we were selecting HP Vertica, we selected not only Vertica, but a technical bundle. We also needed a replicator, and we chose Oracle GoldenGate.

We selected the appropriate ETL tool and the BI front end. So, all together, it was a technical bundle, with Vertica as the middleware technical solution. So far, we have built a near-real-time DWH, but we don't call it near-real-time; we call it "just-in-time," because we want to be congruent with the decision-making process. We want to influence the business to let them think more about their decisions and about their business processes.

As of now, I can show all the data collected and loaded into the DWH within 15 minutes, and show the first general process in the bank, the loan application process. I can show the number of created applications, plus online scoring, and show how many customers we have at that moment in each region, the amounts, the average check, the approval rate, and the booking rate. I can show it to the management the same day, which is absolutely amazing.
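To make that just-in-time idea concrete, here is a minimal sketch of the kind of intra-day query such a dashboard could run against Vertica. It assumes the open-source vertica_python driver and a hypothetical loan_applications table; the schema, connection details, and metric definitions are illustrative, not Otkritie's actual implementation.

```python
# Minimal sketch: pull today's loan-application KPIs per region from Vertica.
# Assumes the open-source vertica_python driver (DB-API 2.0 style) and a
# hypothetical loan_applications table; all names and credentials are illustrative.
import vertica_python

conn_info = {
    "host": "vertica.example.local",
    "port": 5433,
    "user": "dashboard",
    "password": "********",
    "database": "dwh",
}

INTRADAY_KPI_SQL = """
    SELECT region,
           COUNT(*)                                             AS applications,
           AVG(requested_amount)                                AS average_check,
           AVG(CASE WHEN status = 'APPROVED' THEN 1 ELSE 0 END) AS approval_rate,
           AVG(CASE WHEN status = 'BOOKED'   THEN 1 ELSE 0 END) AS booking_rate
    FROM loan_applications
    WHERE created_at >= CURRENT_DATE          -- only today's applications
    GROUP BY region
    ORDER BY applications DESC
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(INTRADAY_KPI_SQL)
    for region, apps, avg_check, approval, booking in cur.fetchall():
        print(f"{region}: {apps} applications, average check {avg_check:.0f}, "
              f"approval {approval:.1%}, booking {booking:.1%}")
finally:
    conn.close()
```

In a setup like the one described, the continuous load through the replicator and ETL layer is what keeps such a table current within minutes, which is what makes an intra-day view like this meaningful.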

The tricky part is what the business will do with this data. It's tricky, because the business was not ready for this. The business was actually expecting that they could run a script, go to the kitchen, make a coffee, and then come back.

But, boom, everything appears really quickly, and it's actually influencing the business to make decisions, to think more, and to think fast. This, I believe, is the biggest challenge, to grow business analytics inside the business for those who will be able to use this data.

As of now, we are in the pilot phase of what we call business activity monitoring (BAM). This is actually a funny story, because in Russia the same abbreviation refers to the Baikal-Amur Mainline (BAM), a huge railroad across the whole country that connects all the cities. It's kind of our story, too; we connect all departments and show the data in near real-time.

Next phase

In this case, we're actually working on the next phase of BAM, and we're trying to synchronize the methodology across all products, across all departments, which is very hard. For example, approval rates could be calculated differently for the credit cards or for the cash loans because of the process.

Since we're trying to establish a BI function almost from ground zero, HP Vertica is only the technical side. We need to think more about the educational side, and we need to think about the framework side. The general framework that we're trying to follow, since we're trying to build a BI function, is a United Business Glossary (or accepted services directory), first of all.

It's obvious to use Business Glossary and to use a single term to refer to the same entity everywhere. But it is not happening as of now, because the business unit is still trying to use different definitions. I think it's a common problem everywhere in the business.

The second is to explain that there are two different types of BI tools. One is BI for the data mart, the so-called regular report. The other is a data discovery tool, the tool for the data lab (i.e., a data-mining tool).
So we differentiate data lab from data mart. Why? Because we're trying to build a service-oriented model, which in the end produces analytical services, based on the functional map.

The tricky part is that when you're trying to answer a question using analytics, it is actually a regular question. All the questions that are raised by the business, by any business analyst, are regular questions; they are fundamental.

The correct way to develop an analytical service is to collect all these questions into a kind of question library. You can call it a functional map or something similar, but these questions define the analytical service for those functions.

For example, if you're trying to produce cost control, what kind of business questions do you want to answer? What kind of business analytics or metrics do you want to bring to the end users? Are these really mapped to the questions raised, or are you trying to present different analytics? As of now, we find it difficult to get this approach across. And this is the first part.

The second part is a data lab for ad hoc data discovery. When, for example, you're trying to produce a marketing campaign for the customers, trying to produce customer segments, trying to analyze some great scoring methodology, or trying to validate scientific expectations, you need to produce some research.

It's not a regular activity. It's more ad hoc analysis, and it will use different tools for BI. You can’t combine all the tools and call it a universal BI tool, because it doesn't work this way. You need to have a different tool for this.

Creating a constraint

This will create a constraint for the business users, because they need some education. In the end, they need to know many different BI tools.

This is a key constraint that we have now, because end users are more comfortable working with Excel, which is great. I think it's the most popular BI data discovery tool in the world, but it has its own constraints.

I love Microsoft. Everyone loves Microsoft, but there are other beautiful tools, like TIBCO Spotfire, for example, which integrates with MATLAB, R, and so on. You can bring in SAS models, and you can also write scripts inside it. This is a brilliant data discovery tool.

But try to teach this tool to your business analysts. In the beginning, it's hard, because it's like a J curve. They will work through the valley of despair, criticizing it: "Oh my God, what are you trying to create? This is a mess from my perspective." And I agree with them in the beginning, but they need to go through this valley of despair, because in the end, there will be really good stuff. This is a matter of cultural influence.

Gardner: Tell me, Alexei, what sort of benefits have you been able to demonstrate to your banking officials, since you've been able to get this near real-time, or just-in-time analytics -- other than the fact that you're giving them reports? Are there other paybacks in terms of business metrics of success?

Blagirev: First of all, we differentiate our stakeholders. We have top management stakeholders, which is the board. There are the middle-level stakeholders, which are our regional directors.

I'll start from the bottom, with the regional directors. They just open the dashboard. They don't click anything or refresh. They just see that they have data and analytics on what's going on in their region.

They don’t care about the methodology, because there is BAM, and they just use figures for decision making. You don’t think about how it got there, but you think about what to do with these figures. You focus more on your decision, which is good.

They start to think more about their decisions and more about the process side. We may show, for example, that at 12 o'clock our stream of cash loan applications went down. Why? I have no idea. Maybe they all went out for dinner. I don't know.

But nobody says that. They say, "Alexei, something is happening." They see true figures and they know they are true figures. They have instruments to exercise operational excellence. This is the first benefit.

Top management

The second is top management. We had a management board where everyone came and showed different figures. We'd spend 30 minutes, or maybe an hour, just debating which figures were true. I think this is a common situation in Russian banks, and maybe not only banks.

Now, we can just open the report, and I say, "This is the single report, because it shows intra-day figures, and these metrics were calculated according to the methodology." We actually link in the time of calculation, which shows that this KPI, for example, was calculated at 12 o'clock. You can take the figures as of 12 o'clock, and if you don't believe them, you can ask the auditors to repeat the calculation, and it will come out the same way.
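That reproducibility hinges on making every KPI deterministic for a stated cut-off time. Here is a small illustrative sketch of the idea; the record structure, field names, and methodology label are hypothetical, not the bank's actual data model.

```python
# Sketch: a KPI snapshot that records the cut-off time it was computed for, so
# anyone re-running the same calculation over the same cut-off gets the same
# figure. Record and field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class KpiSnapshot:
    name: str
    value: float
    as_of: datetime        # the cut-off time the KPI was calculated for
    methodology: str       # reference to the agreed calculation rule


def approval_rate(applications: list[dict], as_of: datetime) -> KpiSnapshot:
    """Approval rate over applications created up to the cut-off time."""
    in_scope = [a for a in applications if a["created_at"] <= as_of]
    approved = sum(1 for a in in_scope if a["status"] == "APPROVED")
    rate = approved / len(in_scope) if in_scope else 0.0
    return KpiSnapshot("approval_rate", rate, as_of, "BAM methodology v1")
```

Because the cut-off is part of the snapshot, an auditor re-running the same calculation over the same records and the same as_of timestamp will reproduce the published figure.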

Nobody debates how the figures are calculated anymore. So they started to think about what methodology to apply to the business process. This actually turns the focus back onto what's going on with our business process. This is the second benefit.

Gardner: Any other advice that you would give to organizations who are beginning a process toward BI?

Blagirev: First of all, don’t be afraid to make mistakes. It's a big thing, and we all forget that, but don’t be afraid. Second, try to create your own vision of strategy for at least one year.

Third, try to disclose your whole company and software vision, because HP Vertica or other BI tools are only a part. Try to see all the company's lines, all information, because this is important. You need to understand where the value is, where shareholder value is being lost, or whether you are creating value for the shareholder. If the answer is yes, don't be afraid to defend your decision and your strategy, because otherwise, in the end, there will be problems. Believe me.

As Gandhi mentioned, in the beginning everyone laughs, then they begin hating you, and in the end, you win. 

Gardner: With your business activity monitoring, you've been able to change business processes, influence the operations, and maybe even the culture of the organization, focusing on the now and then the next set of processes. Doesn’t this give you a competitive advantage over organizations that don’t do this?

Blagirev: For sure. Actually, this gives a competitive advantage, but this competitive advantage depends on the decision that you're making. This actually depends on everyone in the organization.

Understanding this brings new value to the business, but it depends on the final decisions of the people who sit in those positions. Now, those people understand. They're actually handling the business, and they see how they're handling the business.
I can compare this solution to other banks. I have worked for Société Générale and for Alfa-Bank, the largest private bank in Russia. I've been an auditor of financial services at PwC. I saw different reporting and different processes, and I can say that this solution is actually unique in the market.

Why? It shows congruent information in near real-time, inside the day, for all the data, for the whole of Russia. Of course, it brings benefit, but you need to understand how to use it. If you don’t understand how to use this benefit, it's going to be just a technical thing.

Gardner: Very good. I'm afraid we will have to leave it there. We've been hearing about how Otkritie Bank in Moscow has increased and improved its business-activity monitoring and we've heard how that’s helped them improve their business and become more competitive.

I'd like to thank our guest, Alexei Blagirev, Chief Data Officer at Otkritie Bank. Thank you.

Blagirev: Thank you, everyone.

Gardner: And a big thank you to our audience for joining us for the special new style of IT discussion, coming to you directly from the HP Big Data 2014 Conference in Boston.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a major Russian bank is using HP Vertica data analytics tools to provide up-to-the-minute information for top executives to make major business decisions. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Wednesday, October 22, 2014

A Practical Guide to Rapid IT Service Management as a Foundation for Overall Business Agility

Transcript of a Briefings Direct podcast on how enterprises can benefit from the newest IT service management methods and procedures.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast panel discussion on how rapidly advancing IT service management (ITSM) capabilities form an IT imperative, and therefore a bedrock business necessity.

Gardner
Businesses of all stripes rate the need to move faster as a top priority, and many times, that translates into the need for better and faster IT projects. But traditional IT processes and disjointed project management don't easily afford rapid, agile, and adaptive IT innovation.

The good news is that a new wave of ITSM technologies and methods allow for a more rapid ITSM adoption -- and that means better rapid support of agile business processes.

To help us explore a practical guide to fast ITSM adoption as a foundation for overall business agility, please join me in welcoming our panel, John Stagaman, Principal Consultant at Advanced MarketPlace based in Tampa, Florida. Welcome, John.

John Stagaman: Hello.

Gardner: We're also here with Philipp Koch, Managing Director of InovaPrime, Denmark. Welcome, Philipp.

Philipp Koch: Thanks.

Gardner: And lastly, we are here with Erik Engstrom, the CEO of Effectual Systems in Berkeley, California. Welcome, Erik.

Erik Engstrom: Good morning, Dana. Glad to be here.

Gardner: John Stagaman, let me start with you. We hear a lot, of course, about the faster pace of business, and cloud and software as a service (SaaS) are part of that. What, in your mind, are the underlying trend or trends that are forcing IT's hand to think differently, behave differently, and to be more responsive?

Stagaman: If we think back to the typical IT management project historically, what happened was that, very often, you would buy a product. You would have your requirements and you would spend a year or more tailoring and customizing that product to meet your internal vision of how it should work. At the end of that, it may not have resembled the product you bought. It may not have worked that well, but it met all the stakeholders’ requirements and roles, and it took a long time to deploy.

Stagaman
That level of customization and tailoring resulted in a system that was hard to maintain, hard to support, and especially hard to upgrade, if you had to move to a new version of that product down the line. So when you came to a point where you had to upgrade, because your current version was being retired or for some other reason, the cost of maintenance and upgrade was also huge.

It was a lesson learned by IT organizations. Today, saying that an upgrade will take a year, or even six months, really gets a response: why should it? There's been a change in the approach of most of the customers we go on-site to now. Customers say they want to use out of box. It used to be that they would say, "We want to use out of box," and then add, "and here are all the things we want that are not out of box." Sometimes that still happens.

But they've gotten much better at saying they want to start from out of box, leverage that, and then fill in the gaps, so that they can deploy more quickly. They're not opening the box, throwing it away, and building something new. By working on that application foundation and extending where necessary, it makes support easier and it makes the upgrade path to future versions easier.

Moving faster

Gardner: It sounds like moving toward things like commodity hardware and open-source projects and using what you can get as is, is part of this ability to move faster. But is it the need to move faster that’s driving this or the ability to reduce customization? Is it a chicken and egg? How does that shape up?

Engstrom: I think that the old use case of "design, customize, and implement" is being forced out as an acceptable approach, because SaaS, platform as a service (PaaS), and the cloud are raising stakeholders' expectations. Stakeholders are retiring, and fresher sets of technologies and experiences are coming in. These two- and three-year standup projects are not acceptable.

Engstrom
If you're not able to do fast time-to-value, you're not going to get funding. Funding isn't in the $8 million and $10 million tranches anymore; it's in the $200,000 and $300,000 tranches. This is having a direct effect on on-premise tools, the way customers are planning, and OPEX versus CAPEX.

Gardner: Philipp, how do you come down on this? Is this about doing less customization or doing customization later in the process and, therefore, more quickly?

Koch: I don't think it's about the customization element in itself. It's more that, in the past, customers reacted. They said they wanted to tailor the tool, but then they kept asking for this and that, and they took the software off the shelf and started to rebuild it.

Now, with SaaS tool offerings coming into play, you can't do that anymore. You can't build your ITSM solution from scratch. You need to be able to take it, apply it to the use case, and adjust it with customization or configuration. You can no longer tailor it from the ground up.

Koch
But customization happens while you deploy the project and that has to happen in a faster way. I can only concur with all the other things that have already been said. We don't have huge budgets anymore. IT, as such, never had huge budgets, but, in the past, it was accepted that a project like this took a long time to do. Nowadays, we want to have implementations of weeks. We don’t want to have implementations of months anymore.

Gardner: Let’s just unpack a little bit the relationship between ITSM and IT agility. Obviously, we want things to move quickly and be more predictable, but what is it about moving to ITSM rapidly that benefits? And I know this is rather basic, but I think we need to do it just for all the types of listeners we have.

Back to you, John. Explain and unpack what we mean by rapid ITSM as a means to better IT performance and rapid management of projects.

Best practices

Stagaman: For an organization that is new to ITSM processes, starting with a foundational approach and moving in with an out-of-box build helps them align with best practice and can be a lot faster than if they try to develop from scratch. SaaS is a model for that, because with SaaS you're essentially saying you're going to use this standard package.

The standard package is strong, and there's more leverage to use that. We had a federal customer that, based on best practice, reorganized how they did all their service levels. Those service levels were aligned with services that allowed them, for the first time, to report to their consuming bureaus the service levels per application that those bureaus subscribed to. They were able to provide much more meaningful reporting.

They wouldn’t have done that necessarily if the model didn't point in that direction. Previously, they hadn't organized their infrastructure along the lines to say, "We provide these application services to our customer."

Gardner: Erik, how do you see the relationship between rapid and better ITSM and better overall IT performance? Do many people struggle with this relationship?

Engstrom: Our approach at Effectual, what we focus on, is the accountability of data and the ability for an organization to reduce waste by using good data. We're not service [process] management experts in the sense that we're going to define a best practice; we focus strictly on "here is the best piece of data everyone on your team is working [with] across all tools." In that way, what our customers get is transparency. Data from one system is available in another system.

What that means is that you see far fewer cases of servers being taken offline when they're the wrong server. We had a customer bring down their [whole] retail zone of systems, which the same team had just stood up the week before. Because the data was good, and because they were using out-of-the-box features, they were able to reduce mistakes and business impact they otherwise would not have seen.

Had they stayed with one tool or one silo of data, it’s only one source of opinion. Those kinds of mistakes are reduced when you share across tools. So that’s our focus and that’s where we're seeing benefit.

Gardner: Philipp, can you tell us why rapid ITSM has a powerful effect here in the market? But, before we get into that and how to do it, why is rapid ITSM so important now?

Koch: What we're seeing in our market is that customers are demanding the kind of service they get at home. This sounds a little cliché, but they would like to order something on the Internet, have it delivered 10 minutes later, and have it working half an hour later.

If we're talking about doing a classical waterfall approach to projects as was done 5 or 10  years ago, we're talking about months, and that’s not what the customer wants.

IT isn't delivering that. In a lot of organizations, IT is still fairly slow in delivering bigger projects, and ITSM is considered a bigger project. We're seeing a lot of shadow IT appearing, where business units that demand that agility are not getting it from IT, so they're doing it themselves, and then we have a big problem.

Counter the trend

With rapid ITSM, we can actually counter that trend. We can go in and give our customers what's needed to be able to please the business demand of getting something fast. By fast, we're talking about weeks now. We're of course not talking 10 minutes in project sizes of an ITSM implementation, but we can do something where we're deploying a SaaS solution.

We can have it ready for production after a week or two and get it into use. Before, when we did on-premise or when we did tailoring from scratch, we were talking months. That’s a huge business advantage or business benefit of being able to deliver what the business units are asking for.

Gardner: John Stagaman, what holds back successful rapid ITSM approach? What hinders speed, why has it been months rather than days typically?

Stagaman: Erik referenced one thing already. It has to do with the quality of source data when you go to build a system. One thing that I've run into numerous times is the assumption that the canonical sources of data for the general information you need to drive your IT system are already available and easy to populate. By that I mean things like: what are our locations, what are our departments, who are our people?

I'm not even getting to the point of asking what are our configuration items and how are they related? A lot of times, the company doesn't have a good way to even identify who a person is uniquely over time, because they use something with their name. They get married, it changes, and all of a sudden that’s not a persistent ID.

One thing we address early is making sure that we identify those gold sources of data for who and what, for all the factual data that has to be loaded to support the process.

The other major thing that I run into that introduces risks into a project is when requirements aren't really requirements. A lot of times, when we get requirements, it’s a bunch of design statements. Those design statements are about how they want to do this in the tool. Very often, it’s based on how the tool we're replacing worked.

If you don't go through those and say, "this is a statement of design, not a statement of functional requirement," and ask what it is they actually need to do, it becomes very hard to look at the new tool you're deploying and show how it accomplishes that. It can lead to excess customization, because you're trying to meet a goal that isn't consistent with how your new product works.

Those are two things we usually do very early on, where we have to quality-check the requirements, but they are also the two things that most often cause a project to extend or derail.

Gardner: Philipp, any thoughts on problems and hurdles, such as poor data quality or incomplete configuration management data? What is it, from your perspective, that holds things back?

Old approach

Koch: I agree with what John says. That’s definitely something that we see when we meet customers.

Other areas that I see are more towards the execution of the projects itself. Quite often, customers know what agile is, but they don’t understand it. They say they're doing something in an agile way. Then, they show us a drawing that has a circle on it and then they think they are agile.

When you start to actually work with them, they're still in the old waterfall approach of stage gates, and milestones.

So, you're trying to do a rapid ITSM implementation that follows agile principles, but you're getting stuck by internal unawareness or misunderstanding of what this really means. Therefore, you're struggling to do an agile implementation, and it becomes non-agile. That, of course, delays projects.

Quite often, we see that. So in the beginning of the projects, we try to have a workshop or try to get the people to understand what it really means to do an agile project implementation for an ITSM project. That’s one angle.

The other angle, which I also see quite often, goes into the area of requirements, the way John described them. Quite often, those requirements are really features, as in hidden features that the customer wants, turned into some sort of requirement to achieve that feature. But very seldom do we see something that actually addresses the business problem.

They should not really care whether you can right-click in the background and add a new field to this format. That's not what they should be asking for. They should be asking whether it's easy to tailor the solution. It doesn't really matter how. So quite often you spend a lot of time reading those requirements and then readjusting them to match what you really should be talking about. That, of course, delays projects.

In a nutshell, we technology guys, who work with this on a daily basis, could actually deliver projects faster if we could get customers to accept the speed at which we deliver. I see that as a problem.

Gardner: So being real about agile, having better data, knowing more about what your services are and responding to them are all part of overcoming the inertia and the old traditional approaches. Let’s look more deeply into what makes a big difference as a solution in practice.

Erik Engstrom, what helps get agile into practice? How are we able to overcome the drawbacks of over-customization and the more linear approach? Do you have any thoughts about moving towards a solution?

Maturity and integration

Engstrom: Our approach is to provide as much maturity, and as complete an integration as possible, on day one. We've developed a huge amount of libraries of different packages that do things such as to advance the tuning of a part of a tool, or to advance the integration between tools. Those represent thousands of hours that can be saved for the customer. So we start a project with capabilities that most projects would arrive at.

This allows the customer to be agile from day one. But it requires the mentality that both Philipp and John were speaking about. If there's a holdout in the room who says, "this is the way I want things," you can't really work with the tools the way that they [actually] do work. These tools have a lot of money and history behind them, but one person's vision of how the tools should work can derail everything.

We ask customers to take a look at an interoperable functioning matured system once we have turned the lights on, and have the data moving through the system. Then they can start to see what they can really do.

It's a shift in thinking that we have covered well over the last few minutes, so I won't go into it. But it's really a position of strength for them to say, "We've implemented, we've integrated. Now, where do we really want to go with this amazing solution?"

Gardner: What is it about the new toolset that’s allowing this improvement, the pre-customization approach? How does the technology come to bear on what’s really a very process-centric endeavor?

Engstrom: There are certain implementation steps that every customer, every project, must undergo. It’s that repetition that we're trying to remove from the picture. It’s the struggle of how to help an organization start to understand what the tools can do. What does it really look like when people, party, location, and configuration information is on hand? Customers can’t visualize it.

So the faster we can help customers start to see a working system with their data, the easier it is to start to move and maintain an agile approach. You start to say, "Let’s keep this down to a couple of weeks of work. Let us show it to you. Let’s visit it."

If we're faster as consultancies, if we're not taking six months, if we're not taking two months and we can solve these things, they'll start to follow our lead. That’s essential. That momentum has to be maintained through the whole project to really deliver fast.

Gardner: John Stagaman, thoughts about moving fast, first as consultants, but then also leveraging the toolsets? What’s better about the technology now that, in a sense, changes this game too?

Very different

Stagaman: In the ITSM space, the maturity of the product out of box, versus 10 years ago, is very different.  Ten or 15 years ago, the expectation was that you were going to customize the whole thing.

There would be all these options that were there so you could demo them, but they weren’t necessarily built in a cohesive way. Today, the tools are built in different ways so that it's much closer to usable and deployable right out of the box.

The newest versions of those tools very often have done a much better job of creating broadly applicable process flow, so that you can use that same out of the box workflow if you're a retailer, a utility, or want to do some things for the HR call center without significant change to the core workflow. You might need to have the specific data fields related to your organization.

And there's more. We can start from this ITSM framework that's embedded and extend it where we need to.

Gardner: Philipp, thoughts about what’s new and interesting about tools, and even the SaaS approach to ITSM, that drives, from the technology perspective, better results in ITSM?

Koch: I'll concur with John and Erik that the tools have changed drastically. When I started in this business 10 or 15 years ago, it was almost like working with the green screens of old computers when you looked at an ITSM solution.

If you’re looking at ITSM solutions today, they're web based. They're Web 2.0 technology, HTML5, and responsive UIs. It doesn’t really matter which device you use anymore, mobile phone, tablet, desktop, or laptop. You have one solution that looks the same across all devices. A few years ago, you had to install a new server to be able to run a mobile client, if it even existed.

So, the demand has been huge for vendors to deliver upon what the need is today. That has drastically changed in regards to technology, because technology nowadays allows us, and allows the vendors, to deliver up on how it should be.

We want Facebook. We want to Tweet. We want an Amazon- or a Google-like behavior, because that’s what we get everywhere else. We want that in our IT tools as well, and we're starting to see that coming into our IT tools.

In the past, we had rule sets, objects, and conditions on objects, but it wasn't really a workflow engine. Nowadays, SaaS solutions, as well as on-premise solutions, have workflow engines that can be adjusted and tailored according to business needs.

No difference

You're relying on a best practice. An incident management process flow is an incident management process flow. There really is no difference no matter which vendor you go to; they all look the same, because they should. There is a best practice, or a good practice, out there, so they should look the same.

The only adjustment that customers have to make is to add the 10 to 20 percent that is customer-specific, with a new field or a specific approval that needs to be put in between. That can be done with minimal effort when you have a workflow engine.

Looking at this from a SaaS perspective, you want this off the shelf. You want to be able to subscribe to it on the Internet and adjust it in the evening, so that when you come back to work the next day, it's already embedded in the production environment. That's what customers want.

Gardner: Now that we have a better UI and more ubiquity in who can access ITSM and how, maybe we've also muddied the waters about that data being in a single place or easily consolidated. Let's go back to Erik, given your emphasis on the data.
When we look at a new-generation ITSM solution and practice, how do we ensure that data integrity remains strong and that we don't lose control, given that we're going across a range of devices and across cloud and SaaS implementations? How do we keep that data whole and central, and then leverage it for better outcomes?

Engstrom: The concept of services and the way that service management is done is really around services. If we think about ITIL and the structure of ITIL [without getting into too many acronyms], the point is the ability to take Services, Assets, and Configuration Management information [and to have] all of it be consistent; it needs to be the same.

A platform that doesn't have really good bidirectional, working data integrations with things like your asset tool, your DCIM tool, your UCMDB tool, or wherever your data is coming from, falls short; the data needs to be a primary focus for the future.

Because we're talking about a system [UCMDB] that can not only discover things and manage computers, but what about the Internet of Things? What about cloud scenarios, where things are moving so quickly that traditional methods of managing information whether it would be a spreadsheet or even a daily automated discovery, will not support the service-management mission?

It's very important, first of all, that all of the data be represented. Historically, we’ve not been able to do that because of performance. We've not been able to do that because of complexities. So that’s the implementation gap that we focus on, dropping in and making all of the stuff work seamlessly.

Same information

The benefit to that is that you’re operating as an organization on the same piece of information, no matter how it’s consumed or where it’s consumed. Your asset management folks would open their HP IT Asset Manager, see the same information that is shown downstream at Service Manager. When you model an application or service, it’s the same information, the same CI managed with UCMDB, that keeps the entire organization accountable. You can see the entire workflow through it.

If you have the ability to bridge data, if you have multiple tools taking the best of that information and making it an inherent, automated part of service management, you can do things like Incident and Change, and Service Asset and Configuration Management (SACM), roll up the costs of those tickets, and really get to the core of being efficient in service management.

Gardner: John Stagaman, if we have rapid ITSM and ease of interface across multiple devices, but also this drive toward common data shared across different systems, it seems to me that leads to even greater paybacks. Perhaps it's in the form of security. Perhaps it's in a policy-driven approach to service management and service delivery.

Any thoughts about ancillary or future benefits you get when you do ITSM well and then you have that quality of data in mind that is extended and kept consistent across these different approaches?

Stagaman: Part of it comes to the central role of CMDB and the universality of that data. CMDB drives asset management. It can drive ITSM and the ability to start defining models and standards and compare your live infrastructure to those models for compliance along with discovery.

The ability to know what’s connected to your network can identify failure points and chokepoints or risks of failure in that infrastructure. Rather than being reactive, "Oh, this node went down. We have to address this," you can start anticipating potential failures and build redundancy. Your possibility of outage can be significantly reduced, and you can build that CMDB and build the intelligence in, so that you can simulate what would happen if these nodes or these components went down. What's the impact of that?

You can see that, when you go to make a change, that level of integration with CMDB data lets you ask: if this change takes these servers down, what's the impact on the end user through the cascading effect of those outages across related devices and services? You can really say, "If we bring this down on its own, we're fine. But another change is modifying that system at the same time, and with both down together we may interrupt service to online banking, so we need to schedule them at different times."
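As an illustration of that kind of impact simulation, here is a minimal sketch that walks a CMDB-style dependency graph to find which business services would be affected if a given set of servers went down together. The CI names and relationships are invented for the example; a real implementation would read them from UCMDB or whatever CMDB holds the relationships.

```python
# Sketch: cascade an outage through CI dependencies to find impacted services.
# DEPENDENTS maps each configuration item (CI) to the CIs that depend on it.
# CI names and edges are illustrative, not a real CMDB export.
from collections import deque

DEPENDENTS = {
    "db-node-01": ["banking-app-cluster"],
    "db-node-02": ["banking-app-cluster"],
    "banking-app-cluster": ["online-banking-service"],
    "web-frontend-03": ["online-banking-service"],
}

BUSINESS_SERVICES = {"online-banking-service"}


def impacted_services(down_cis: set[str]) -> set[str]:
    """Breadth-first walk from the failed CIs to every downstream CI."""
    seen, queue = set(down_cis), deque(down_cis)
    while queue:
        ci = queue.popleft()
        for dependent in DEPENDENTS.get(ci, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen & BUSINESS_SERVICES


# Two changes scheduled in the same window take both database nodes offline.
print(impacted_services({"db-node-01", "db-node-02"}))
# -> {'online-banking-service'}
```

The same walk, run against the change calendar, is what lets a tool flag that two otherwise harmless changes overlap on a service that may not be down at that time.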

The latest capability we're seeing is the ability to put really strict controls around the fact that a change will potentially impact a given system or service, based on business rules that say this service can only be down during these windows, or may not be down at that time. We can even identify that time-period conflict in an automated way and require additional process approvals for the change to go forward at that time, or require a reschedule.

Gardner: Philipp, any thoughts on this notion of predictive benefits from a good ITSM and good data, and perhaps even this notion of an algorithmic approach to services, delivery, and management?

Federation approach

Koch: It actually fits nicely with one of our reference installations, where we did the kind of integration Erik talked about: having the data and utilizing it in a kind of on-the-fly federation approach. You can no longer wait for a daily batch job to run. You need to have it at your fingertips. I can take an example from an Active Directory integration, where we utilized the data from Active Directory to allocate roles, rights, and access inside HP Service Manager.

We've done a high-level analysis of how much we actually save by doing this. By doing that integration and utilizing that information, we see an 80 percent reduction in the manual labor done inside Service Manager for user administration.

Instead of having a technician go into Service Manager to allocate a role, or allocate rights, to a new employee who needs access to HP Service Manager, the employee gets them automatically from Active Directory at login. The only thing that has to be done is for HR to record where this user sits, and that happens no matter what.
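The mechanics behind that kind of integration are easy to sketch: translate Active Directory group memberships into tool roles and apply them when the user logs in. This is a hypothetical mapping in Python; the group names, role names, and the apply_roles() call stand in for whatever the actual directory and ITSM tool expose, and are not HP Service Manager's real API.

```python
# Sketch: derive ITSM-tool roles from Active Directory group membership at
# login time, instead of having an operator assign them by hand.
# Group names, role names, and apply_roles() are hypothetical placeholders.
AD_GROUP_TO_ROLE = {
    "SM-Incident-Operators": "incident.operator",
    "SM-Change-Approvers": "change.approver",
    "SM-Self-Service": "ess.user",
}


def roles_for_user(ad_groups: list[str]) -> set[str]:
    """Translate a user's AD groups into tool roles; unknown groups are ignored."""
    return {AD_GROUP_TO_ROLE[g] for g in ad_groups if g in AD_GROUP_TO_ROLE}


def apply_roles(username: str, roles: set[str]) -> None:
    # Placeholder for the call into the ITSM tool's user administration.
    print(f"granting {username}: {sorted(roles)}")


def on_login(username: str, ad_groups: list[str]) -> None:
    roles = roles_for_user(ad_groups) or {"ess.user"}  # default: self-service only
    apply_roles(username, roles)


on_login("new.employee", ["SM-Incident-Operators", "Finance-Users"])
# granting new.employee: ['incident.operator']
```

The design point is that the role assignment follows the directory automatically, so the only manual step left is HR placing the user correctly.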

We've drastically reduced the amount of time spent there. There's a tangible angle there, where you can save a lot of time and a lot of money, mainly with regards to human effort.

The second angle that you touched on is smart analytics, as we might call it, in the new solutions that we now have. It's cool to see, and we now need to see where it goes in the future and how much further we can take it. We can do smart analytics utilizing all the data in the solutions. So yes, we're using the buzzword big data.

If we go in and analyze everything that's happening in the change-management area, we now have KPIs that can tell me -- this is an old KPI as such -- that 48 percent of your change records have an element of automation inside the change execution. You have a KPI for how much you're automating in change management.

With smart analytics on top of that, you can get feedback on your KPI dashboard that says you have 48 percent. That's nice, but below that you see that if you enhance those two change models as well and automate them, you'll get an additional 10 percent of automation on your KPI.

With big-data analytics, you'll be able to see that a manual change model is used often and could easily be automated. That is the area that is so underutilized: using such analytics to focus on the areas that really make a difference, and being able to see that on a dashboard for a change manager or whoever is responsible for the process.
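A rough sketch of that kind of analysis: compute the share of change records executed through automated models, then rank the manual models by how often they are used, so a change manager can see where a small automation effort pays off most. The change-record structure here is hypothetical, not the actual data model of any ITSM tool.

```python
# Sketch: automation KPI for change management, plus a ranked list of the
# manual change models that would pay off most if automated.
# The change-record structure is hypothetical.
from collections import Counter

changes = [
    {"model": "patch-linux-server", "automated": True},
    {"model": "add-vlan", "automated": False},
    {"model": "add-vlan", "automated": False},
    {"model": "reset-password", "automated": False},
    # ... in practice, thousands of records pulled from the ITSM tool
]

automated = sum(1 for c in changes if c["automated"])
print(f"automation rate: {automated / len(changes):.0%}")

manual_usage = Counter(c["model"] for c in changes if not c["automated"])
print("best candidates to automate next:")
for model, count in manual_usage.most_common(3):
    print(f"  {model}: executed manually {count} times")
```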

That really jumps out at the change manager: "Well, if I spend half an hour here making this change model better, I'm going to save a lot more time, because I'm automating 10 percent more." That is extremely powerful. Now extrapolate that to the rest of the processes; that's the future.

Gardner: Well Erik, we've heard both John and Philipp describe intelligent ITSM. Do you have any examples where some of your customers are also exploring this new level of benefit?

Success story

Engstrom: Absolutely. Health Shared Services British Columbia (HSSBC) will be releasing a success story through HP shortly, probably in the next few weeks. In that case, it was a five-week implementation where we dropped in our packages for Asset Management (ITAM), Service Management (ITSM), and Executive Scorecard, which are all HP products.

We even used Business Service Management (BSM), but the thinking behind this was that this is a service-management project. It’s all about uniting different health agencies in British Columbia under one shared service.

The configuration information is there. The asset information is there, right down to purchase orders, maintenance contracts, all of the parties, all of the organizations. The customer was able to identify all of their business services. This was all built in, normalized in CMDB, and then pushed into ITSM.

With this capability, they're able to see, across the various organizations that roll up into the shared service, who the parties are, because the people opening tickets don't work with those folks. They're in different organizations. Otherwise, they don't have relevant information about what services are impacted, or about the actual cost center or budget, all the kinds of things that become important in a shared service.

This customer, from week six to their go-live day, had the ability to see what is allocated in assets, what is allocated in terms of maintenance and support, and which service a given ticket, incident, or change is being created against.

They understood the impact for the organization as a result of having what we call a Configuration Management System (CMS), having all of these things working together. So it is possible. It gives you very high-level control, particularly when you put it into something like Executive Scorecard, to see where things are taking longer, how they're taking longer, and what's costing more.

More importantly, in a highly virtual environment, they can see whether they're oversubscribed, whether they have their budgeted amount of ESX servers, or whether they have the right number of assets that are playing a part in service delivery. They can see the cost of every task, because it's tied to a person, a business service, and an organization.

They started with a capability to do SACM, and this is what this case is really about. It plays into everything that we've talked about in this call. It's agile and it is out-of-the-box. They're using features from all of these tools that are out-of-the-box, and they're using a solution to help them implement faster.

They can see what we call “total efficiency of cost.” What am I spending, but really how is it being spent and how efficient is it? They can see across the whole lifecycle of service management. It’s beautiful.

Future trends

Gardner: It's impressive. What is it about the future trends that we can now see, or have a good sense of how they will unfold, that makes rapid ITSM adoption, this common data, and this intelligent ITSM approach all so important?

I'm thinking perhaps the addition of mobile tier and extensibility out through new networks. I'm thinking about DevOps and trying to coordinate a rapid-development approach with operations and making that seamless.

We're hearing a lot about containers these days as well. I'm also thinking about hybrid cloud, where there's a mixture of services, a mixture of hosting options, and not just static but dynamic, moving across these boundaries.

So, let's go down the list, as this would be our last question for today. John Stagaman, what is it about some of these future trends that will make ITSM even more impactful, even more important?

Stagaman: One of the big shifts that we're starting to see in self-service is the idea that you want to enable a customer to resolve their own issue in as many cases as possible. What you can see in the newest release of that product is the ability for them to search for a solution and start a chat.

When they ask a question, it can check your entire knowledge base and history to show the proposed solutions. If that doesn't resolve it, it can ask for additional information and then initiate a chat with the service desk, if needed.

Very often, if they say they're unable to open this file or their headset is broken, someone can immediately tell them how to procure a replacement headset. It allows that person to complete that activity or resolve their issue in a guided way. It doesn't require them to walk through a level of menus to find what they need. And it makes it much more approachable than finding a headset on the procurement system.

The other thing that we're seeing is the ability to bridge between on-premises systems and SaaS solutions. We have some customers for whom certain data is required to be onsite for compliance or policy reasons. They need an on-premise system, but they may have some business units that want to use a SaaS solution.

Then, when they have a system supported by central IT, the SaaS system can exchange a case with the primary system and keep bidirectional updates. So we're getting the ability to link the SaaS world and the on-premises world more effectively.

Gardner: Philipp, thoughts from you on future trends that are driving the need for ITSM that will make it even more valuable, make it more important.

Connected intelligence

Koch: Definitely. Just to add on to what John said, it goes in the direction of connected intelligence, utilizing the big-data example that we have just gone through. It all points toward a solution that is connected across the board and that brings intelligence back to the end user, just as much as to the operator who has that integration.

Another angle, more from the technology side, is that now, with the SaaS offerings that we have today, the new way of going forward as I see it happening -- and the way I think HP has made a good decision with HP Service Anywhere -- is the continuous delivery. You're losing the aspects of having version numbers for software. You no longer need to do big upgrades to move from version 9 to a version 10, because you are doing continuous delivery.

Every time new code is ready to be deployed, it is actually deployed. You do not wait and bundle it up in a yearly cycle into a huge package that means months of upgrading. You're doing this on the fly. Service Anywhere and Agile Manager are good examples of where HP is applying that. That is the future, because the customer doesn't want to do upgrade projects anymore. Upgrades are of the past, if we really want to believe that. We hope we can actually get there.

You touched on mobile. Mobile and bring your own device were buzzwords -- now it's already here. We don’t really need to talk about it anymore, because it already exists. That’s now the standard. You have to do this, otherwise you're not really a player in the market.

To close off with a paradigm statement: future solutions need to be implemented -- and we consultants need to deliver solutions -- that solve end-user problems, compared to what we did in the past, where we deployed solutions to manage tickets.

We're no longer in the business of helping them with features to more easily manage tickets and save money on quicker resolution. That is of the past. What we need to do today is make it possible for organizations to empower end users to solve their problems themselves and become a ticket-less IT -- this is the ideal world, of course -- where we reduce the cost of the IT organization by giving as much as possible back to the end user to enable self-service.

Gardner: Last word to you, Erik. Any thoughts about future trends to drive ITSM and why it will be even more important to do it fast and do it well?

Engstrom: Absolutely. And in my worldview it's SACM. It's essentially using vendor strengths, the portfolio, the entire portfolio, such as HP’s Service and Portfolio Management (SPM), where you have all of these combined silos that normally operate completely independently of each other.

There are a couple of truths in IT: data is expensive to re-create, and there is knowledge and value in a tool. The next step in the new style of IT is going to require that these tools work together as one suite, one offering, so that your best data is coming from the best source and being used to make the best decisions.

Actionable information

It's about making big data a reality. But in the use of UCMDB and the HP portfolio, data is very small, it's actionable information, because it's a set of tools. This whole portfolio helps customers save money, be more efficient with where they spend, and do more with “yes.”

So the idea that you have all of this data out there, what can it mean? It can mean, for example, that you can look and see that a business service is spending 90 percent more on licensing or ESX servers or hardware, anything that it might need. You have transparency across the board.

Smarter service management means doing more with the information you already have and making informed decision that really help you drive efficiencies. It's doing more with “yes,” and being efficient. To me, that’s SACM. The requirement for a portfolio, it doesn’t matter how small or how large it is, is [that] it must provide the ways for which this data can be shared, so that information becomes intelligence.

Organizations that have these tools will beat the competition at the SG&A (selling, general and administrative) level. They will wipe them out, because they're so efficient and so informed. Waste is reduced. Cycles are faster. Good decisions are made ahead of time. You have the data and you can act appropriately. That's the future. That's why we support HP software -- because of the strength of the portfolio.

Gardner: Well, great. I'm afraid we'll have to leave it there. We've been listening to a sponsored BriefingsDirect podcast panel discussion on how rapidly advancing ITSM capability forms an IT imperative and, therefore, a bedrock business necessity. We've seen how a new wave of ITSM technologies and methods allows for rapid ITSM adoption, and that means better, more rapid support of the agile business.
Unleash the power of your user base
with a free white paper 
With that, a big thanks to our guests: John Stagaman, Principal Consultant at Advanced MarketPlace; Philipp Koch, Managing Director at InovaPrime, Denmark; and Erik Engstrom, CEO of Effectual Systems.

Gardner: This is Dana Gardner. I'd like to thank our audience as well for joining, and don’t forget to come back next time to BriefingsDirect.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how enterprises can benefit from the newest IT service management methods and procedures. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Wednesday, October 15, 2014

Journey to SAP Quality — Home Trust Builds Center of Excellence with HP Tools

Transcript of a BriefingsDirect podcast on the steps to build a successful SAP test environment with HP quality assurance tools.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

This time, we're coming to you from the HP Discover 2014 Conference in Las Vegas to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

Our next innovation case study interview highlights how Home Trust Company in Toronto has created a center of excellence to improve quality assurance and the ongoing performance of their SAP applications. To learn how they do it, we're delighted to be joined by Cindy Shen, SAP QA Manager at Home Trust. Welcome.

Cindy Shen: Thank you.

Gardner: First, tell us a little bit about Home Trust. You're a large financial services organization.

Shen: We're one of the leading trust companies in Toronto, Canada. There are two main businesses we deal with. The first bucket is mortgages. We deal with a lot of residential mortgages.

Shen
The other bucket is that we're a deposit-taking institution. People deposit their money with us, and they can invest in a registered retirement savings plan (RRSP), along with other options for their investment, which is the equivalent of the US 401(k) plan.

We're also Canada Deposit Insurance Corporation (CDIC)-compliant. If a customer has money with us and if anything happens with the company, the customer can get back up to a certain amount of money.

We're regulated under the Office of the Superintendent of Financial Institutions (OSFI), which regulates banks and trust companies, including us.

Some of the hurdles

Gardner: So obviously it's important for you to have your applications running properly. There's a lot of auditing and a lot of oversight. Tell us what some of the hurdles were, some of the challenges you had as you began to improve your quality-assurance efforts.

Shen: We're primarily an SAP shop. I was an SAP consultant for a couple of years. I've worked in North America, Europe, and Asia, across many industries, not just the financial industry. I've touched on consumer packaged goods, retail, manufacturing, and banking SAP projects. I usually deal with global projects -- 100 million-plus, with 100 to 300 people.

What I noticed is that, regardless of the industry or the functional solution a project has, there's always a common set of QA challenges when it comes to SAP testing, and it's very complicated. It took me a couple of years to figure out the tools, where each tool fits into the whole picture, and how the pieces fit together.

For example, one of the common challenges I'm going to talk about in my session (here at HP Discover) is, first of all, which tools you should be using. The HP ALM test management tool is, in my opinion, the market leader. That's what pretty much all the Fortune 500 companies, and even smaller companies, use as their primary test management tool. But testing SAP is unique.
Reduce post-production issues by 80% by building better apps.  
Learn Seven Best Practices for Business-Ready Applications
with a free white paper.
What are the additional tools on the SAP side that you need in order to integrate back to the ALM test suite and have that system record of development plus the system record of testing all integrated together, flowing in a way that makes sense for SAP applications? That's unique.

One is the toolset and the other is methodology. If you parachute me into any project, however large or small, complex or simple, local or global, I can guarantee you that the standards are not clear, or there is no standard in place.

For example, how do you properly write a test case to test SAP? You have to go into granular detail, down to the action words you use for different application areas, so that automation is easy to enable later. How do you parameterize?

What's the appropriate level of parameterization to enable that flexibility for automation? What's the naming convention for your input and output parameters, so that data flows through from the very first test case all the way to the end, when you test the application end to end?
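To make the action-word and parameter-naming idea concrete, here is a minimal sketch of a keyword-driven, parameterized test step. The action words, parameter names, and T-codes are invented for illustration -- they're not Home Trust's or HP's actual conventions -- and a real driver would hand the resolved parameters to the SAP GUI or UFT layer rather than fabricate results.

```python
# Hypothetical sketch of an action-word-driven, parameterized SAP test step.
# Action words, IN_/OUT_ parameter names, and the T-codes are invented examples.

from dataclasses import dataclass, field


@dataclass
class TestStep:
    action_word: str                               # e.g. "CREATE_SALES_ORDER"
    tcode: str                                     # SAP transaction code exercised
    inputs: dict = field(default_factory=dict)     # IN_* parameters (may reference earlier OUT_*)
    outputs: dict = field(default_factory=dict)    # OUT_* parameters filled at run time


def run_step(step: TestStep, context: dict) -> dict:
    """Resolve IN_* values from earlier OUT_* values so data flows end to end."""
    resolved = {name: context.get(ref, ref) for name, ref in step.inputs.items()}
    # A real driver would call the SAP GUI / UFT layer here with `resolved`;
    # this sketch just fabricates output values to show the chaining.
    produced = {out: f"{step.tcode}-{out}-value" for out in step.outputs}
    context.update(produced)
    return {"inputs": resolved, "outputs": produced}


# End-to-end chaining: the order number produced by one step feeds the next.
context: dict = {}
steps = [
    TestStep("CREATE_SALES_ORDER", "VA01",
             inputs={"IN_CUSTOMER": "100001"}, outputs={"OUT_ORDER_NO": ""}),
    TestStep("CHECK_ORDER_STATUS", "VA03",
             inputs={"IN_ORDER_NO": "OUT_ORDER_NO"}, outputs={"OUT_STATUS": ""}),
]
for s in steps:
    print(s.action_word, run_step(s, context))
```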

Most errors and defects happen in the integration area. So, how do you make sure your test coverage covers all your key integration points? SAP is very complex. If you change one thing, I can guarantee you that something else in some other area of the application, or in an interface, is going to change without your knowing it, and that's going to cause problems for you sooner or later.

So, how do you have those standards and methodology consistently enforced by every person who's writing test cases or executing testing, at the same quality and in the same format, so that you can generate the same reports across all different projects, have executive oversight, and minimize the duplicate work you have to do on the manual test cases in order to automate in the future?

Testing assets

The other big part is how to maintain those testing assets so they're repeatable, reusable, and flexible -- so that you can shorten your project delivery time in the future through automation and consistently written manual test cases, accelerate new projects as they come up, and also improve quality in post-production support so you can catch critical errors fast.

Those are all very common SAP testing QA themes, challenges, or problems that practitioners like me see in any SAP environment.

Gardner: So when you arrived at Home Trust, and you understood this unique situation, and how important SAP applications are, what did you do to create a center of excellence and an ability to solve these issues?

Shen: I was fortunate to have been the lead in the SAP area for a lot of global projects. I've seen the worst of it. I've also seen the fraction of clients that actually do it much better than other companies. So I'm fortunate to know the best practices I want to implement: what will work and what won't, which critical things you have to get in place at the beginning, and which pieces can wait until down the road.

Coming from an SAP background, I'm fortunate to have that knowledge. So, from the start, I had a very clear vision as to how I wanted to drive this. First, you need to conduct an analysis of the current state, and what I saw was very common in the industry as well.

When I started, there were only two people in the QA space. It was a brand-new group. There was an overall software development lifecycle (SDLC) methodology in the company, but the company had just gone live with its SAP application. So it was basically a great opportunity to set up a methodology, because it was a green field. That was very exciting.

One of the things you have to have is an overarching methodology. Are you using Business Process Testing (BPT), or are you using some other methodology? We also had to comply with, or fit in with, SAP's own methodology, ASAP, which is the industry standard in the SAP space. So we had to assess the current status and come up with a methodology that made sense for Home Trust Company.

Two, you have to get all the right tools in place. Home Trust is very good at getting industry-leading toolsets. When I joined, they already had HP QC -- at that time it was called QC; now it's ALM. Solution Manager was part of the SAP purchase, so it was free. We just had to configure and implement it.

We also had QTP, which now is called UFT, and we also had LoadRunner. All the right toolsets were already in place. So I didn't have to go through the hassle of procuring all those tools.

Assessing the landscape

When we assessed the landscape of tools, we realized that, like any other company, they were not maximizing the return on investment (ROI) on the toolsets. The toolsets were not leveraged as much, because in a typical SAP environment, the demand of time to market is very high for project delivery and new product introduction.

When you have a new product, you have to configure the system fast, so it’s not too late to bring the product to the market. You have a lot of time pressure. You also have resource constraints, just like any other company. We started with two people, and we didn’t have a dedicated testing team. That was also something we felt we had to resolve.

We had to tackle it from a methodology and a toolset perspective, and from a personnel perspective -- how to properly structure the team and ramp resources up. We had to tackle it from those three perspectives. Then, after all the strategic things are in place, you figure out your execution pieces.

From a methodology perspective, what are the authoring standards, what are the action words, and what are the naming conventions? I can't emphasize this enough, because I see it done so differently on each project. People don't know the implications down the road.

How do you properly structure your testing assets in QC in a way that makes sense for SAP? That is a key area. You can't structure them at too high a level -- that means you have a mega scenario with everything in one test case, or just a few test cases. And something will change in the application -- I can guarantee you that -- because you'll have to redevelop it or modify it for another feature.

If you structure your testing assets at too high a level, you have to rewrite every single asset, and you don't know where a change affects something somewhere else, because you've probably hard-coded everything.

If you go to too granular a level, maintenance becomes a nightmare. It really has to be at the right level to enable flexibility and get ready for automation. It also has to be easy to maintain, because maintenance usually costs more than the initial creation. Those are all the standards we set up.
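As a rough illustration of that granularity point, here is a sketch contrasting one "mega" end-to-end asset with right-sized, parameterized components. The component names, T-codes, and parameters are invented for the example and are not Home Trust's actual assets.

```python
# Hypothetical contrast between a "mega" test asset and right-sized components.

# Too coarse: one end-to-end script with hard-coded data -- any change to the
# order-to-cash flow forces a rewrite of the whole asset.
MEGA_SCENARIO = [
    "VA01 create order for customer 100001",
    "VL01N ship order 4711",
    "VF01 bill order 4711",
]

# Right-sized: small, parameterized components that can be recombined,
# maintained independently, and automated one by one.
COMPONENTS = {
    "CreateOrder":  {"tcode": "VA01",  "in": ["CUSTOMER", "MATERIAL"], "out": ["ORDER_NO"]},
    "ShipOrder":    {"tcode": "VL01N", "in": ["ORDER_NO"],             "out": ["DELIVERY_NO"]},
    "BillDelivery": {"tcode": "VF01",  "in": ["DELIVERY_NO"],          "out": ["INVOICE_NO"]},
}

# A change to billing only touches BillDelivery; the other components are reused as-is.
ORDER_TO_CASH = ["CreateOrder", "ShipOrder", "BillDelivery"]
print([COMPONENTS[name]["tcode"] for name in ORDER_TO_CASH])
```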

What's your proper defect flow? It's different from company to company. You have to figure out the minimum effort required, but one that makes sense. You also have to have the right controls in place for your company. You have to figure out naming conventions, the relevant test cases, and all of that. That's the methodology part of it.

The toolset is a lot more technical. If you're talking about the HP ALM suite, what's the standard configuration you need to enable for all your projects? I can guarantee you that every company has concurrent projects going on in post-production.

Even when they're implementing their initial SAP system, there are many concurrent streams going on at the same time. How do you make sure the configuration accommodates all the different types of projects -- with the same set of configuration? This is a key point: you cannot, let me repeat, you cannot have very different HP ALM configurations across different projects.

Sharing assets

That will prevent you from sharing test assets across projects, prevent you from automating them in the same manner or in the near future, and prevent you from delivering projects consistently, with consistent quality and a consistent reporting format across the company. It creates nightmares for maintenance and for keeping standards in place. That's key. I can't emphasize that enough.

So from the toolset side, how do you design one configuration that fits all? That's the mandate. The rule of thumb is: do not customize. Use out-of-the-box functionality. Do not code. If you really have to write a query, minimize it.

The good thing about HP ALM is that it's flexible enough to accommodate all the critical requests. If you find that you have to write something for it, or you need a custom field or custom label, you should probably consider changing your process first, because ALM is a pretty mature toolset.
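One way to picture the "one configuration fits all" rule is a single shared project template that every new project starts from. This is only a sketch of the idea; the field names, statuses, and naming pattern below are invented and do not correspond to actual HP ALM settings or to Home Trust's configuration.

```python
# Sketch of a single, shared test-management configuration applied to every project.
from typing import Optional

STANDARD_PROJECT_TEMPLATE = {
    "defect_statuses": ["New", "Open", "Fixed", "Retest", "Closed"],  # one defect flow
    "test_phases": ["Unit", "Integration", "Regression", "UAT"],
    "naming_convention": "{module}_{tcode}_{scenario}",               # e.g. SD_VA01_StandardOrder
    "custom_fields": [],   # rule of thumb: out-of-box only, no per-project customization
}


def create_project(name: str, overrides: Optional[dict] = None) -> dict:
    """Every new project starts from the same template; per-project overrides are rejected."""
    if overrides:
        raise ValueError(f"{name}: per-project customization breaks cross-project reporting")
    return {"name": name, **STANDARD_PROJECT_TEMPLATE}


print(create_project("SAP_Release_2015_Q1"))
```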
Reduce post-production issues by 80% by building better apps.  
Learn Seven Best Practices for Business-Ready Applications
with a free white paper.
I've been on very complex global projects in different countries. HP ALM is able to accommodate all the key metrics, all the key deliverables you're looking to deliver. It has the capacity.
When I see other companies do a lot of customization, it's because their process isn't correct. They're bending the tool to accommodate processes that don't make sense. People really have to keep an open mind and seek out the best practices and expertise in the industry to understand which out-of-the-box functionality to configure in HP ALM to manage their SAP projects, instead of weakening the tool to fit how they do SAP projects.

Sometimes, it involves a lot of change management, and for any company, that’s hard. You really have to keep that open mind, stick with the best practice, and think hard about whether your process makes sense or whether you really need to tweak the tool.

Gardner: It's fascinating. If you do due diligence on process, methodology, and leveraging the tools, and recognize the unique characteristics of this particular application set, and you do that correctly, you're going to improve the quality of that particular rollout or application delivery into production, and of whatever modifications you need to make over time.

It's also going to set you up to be in a much better position to modernize and be aggressive with those applications, whether it's delivering them out to a mobile tier, for example, or whether there’s different integrations with different data. So when you do this well, there are multiple levels of payback. Right?

Shen: I love this question, because this is really the million-dollar view, or the million-dollar understanding, that anybody can take away from this podcast or my session (at HP Discover). This is the million-dollar vision that you should seriously consider and understand.

From an SAP and HP ALM perspective and the Center for Excellence, the vision is this (I'm going to go slowly, so you get all the components and all the pieces):

Work closely

SAP and HP work very closely together, so your account rep will help you greatly with the toolsets in that area. It starts with Solution Manager from SAP, which should be your system record of development. The best part is that when you implement SAP, you use Solution Manager to input all your Business Process Hierarchy (BPH). The BPH is the key ingredient in Solution Manager that lays out all the processes in your environment.

Tied to it, you should input all the transaction codes (T-codes). T-codes are the DNA of SAP. If you go anywhere in SAP, you most likely have to enter a T-code, and that brings you to the right area. When we scope out an SAP project, the key starts with the list of T-codes. The key is to build out that BPH and associate all the T-codes with the different areas.

With each T-code, you have all the documentation -- functional specifications, technical specifications -- and the mapping associated at each level of your BPH along with the T-code. Not only that, you should have all your security IDs and metrics associated with each level of the BPH and its T-codes, all the flows and requirements tied together, and of course the development -- the code.

So your Solution Manager should be the system record of development. The best practice is to always do your initial SAP implementation with Solution Manager, so that by the time you go live, you've already done all of that. That's the first bucket.
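To picture what "everything hangs off the BPH" means, here is a small, hypothetical data-structure sketch of a process hierarchy node that ties T-codes, specs, and security roles together. The node names, spec IDs, and role names are invented; this is not how Solution Manager stores the BPH internally, just an illustration of the relationships described above.

```python
# Hypothetical sketch of a Business Process Hierarchy (BPH) node.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BPHNode:
    name: str
    tcodes: List[str] = field(default_factory=list)
    functional_spec: str = ""
    technical_spec: str = ""
    security_roles: List[str] = field(default_factory=list)
    children: List["BPHNode"] = field(default_factory=list)


order_to_cash = BPHNode(
    name="Order to Cash",
    children=[
        BPHNode("Create Sales Order", tcodes=["VA01"],
                functional_spec="FS_SD_001", security_roles=["Z_SD_CLERK"]),
        BPHNode("Billing", tcodes=["VF01"],
                functional_spec="FS_SD_014", security_roles=["Z_SD_BILLING"]),
    ],
)


def all_tcodes(node: BPHNode) -> List[str]:
    """Flatten the hierarchy into the list of T-codes that defines test scope."""
    return node.tcodes + [t for child in node.children for t in all_tcodes(child)]


print(all_tcodes(order_to_cash))   # ['VA01', 'VF01']
```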

The second bucket is the HP tool suite. We'll start with the HP ALM test management tool. It allows you to input your testing requirements, and they flow through from requirement to test. If you're using Business Process Testing (BPT), they should flow through to the components in BPT and through the test case module. Then you flow through to the test plan and test lab, and through to defects. Everything is well integrated and connected.

And then there is something we call an adapter -- a Solution Manager and HP ALM adapter. It enables Solution Manager and HP ALM to talk. You have to configure that adapter between Solution Manager and ALM. It brings your hierarchy -- your BPH in Solution Manager -- and all the related assets, including the T-codes, over to the Requirement module in HP ALM.

So if you have your Solution Manager straightened out, whatever you bring over to ALM is already your scope. It tells you which T-codes are in scope to test. By the way, in SAP it's often a headache that each T-code can do many, many things, especially if you're heavily customized.

So a T-code is not enough. You have to go down to a granular level and get the variants. What are the typical scenarios or typical testing variants it has? Then you create those variants in the BPH in Solution Manager. They flow through to the Requirement module in HP ALM and list out all your T-codes' possible variants.

Then, based on that, you start scoping out your testing assets -- the components, test cases, or whatever you have to write. You put them in BPT or in your test case module. Then you link the requirement over, so you already have your test coverage. Then you flow through a test case, through your execution in the test lab, through to defects, and it all ties back together.
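Here is a minimal sketch of that requirement-to-test traceability once the T-code variants have been brought across: each test case links back to the requirement it covers, so uncovered requirements fall out as remaining scope. The requirement IDs, test names, and defect numbers are invented for illustration; a real tool tracks this linkage for you.

```python
# Sketch of requirement-to-test traceability for T-code variants (invented data).

# Requirements imported from the process hierarchy: one per T-code variant.
requirements = {
    "REQ-VA01-StandardOrder":   {"tcode": "VA01", "variant": "Standard order"},
    "REQ-VA01-RushOrder":       {"tcode": "VA01", "variant": "Rush order"},
    "REQ-VF01-StandardInvoice": {"tcode": "VF01", "variant": "Standard invoice"},
}

# Test cases, each linked back to the requirement it covers.
tests = {
    "TC_SD_VA01_StandardOrder":   {"covers": "REQ-VA01-StandardOrder",   "defects": []},
    "TC_SD_VF01_StandardInvoice": {"covers": "REQ-VF01-StandardInvoice", "defects": ["DEF-101"]},
}

covered = {t["covers"] for t in tests.values()}
uncovered = [r for r in requirements if r not in covered]
print("Uncovered requirements (remaining test scope):", uncovered)
# -> ['REQ-VA01-RushOrder']
```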

And where does automation come into play? That's the bucket after HP ALM. UFT today is still the primary tool people use to automate. In the SAP space, SAP actually has its own tool, called Test Acceleration and Optimization (TAO), which also leverages UFT. That's the foundation for creating SAP-specific automation, but either is fine. If you already have UFT, you really could start today.

Back and forth

So, the automation comes into place. This is very interesting -- this is how it goes back and forth. For example, you've already transported something to production and you want to check whether anything slipped through the cracks. Is all the testing coverage there?

There's something called the Solution Documentation Assistant. From the Solution Manager side, you can read from EarlyWatch reports to see which T-codes are actually being used in your production system today. After something is transported over into production, you can re-run it to see what the net-new T-codes in the production system are. Then you can compare them. So there's a process.

Then you can see what the net-new ones are from the BPH, flow them through to your HP QC or HP ALM, and see whether you have coverage for them. If not, there's your scope for net-new manual and automated testing.
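The comparison itself is just set arithmetic, as in this rough sketch. The T-codes and the three input sets are invented; in practice the "used in production" sets would come from usage reports and the coverage set from the regression library.

```python
# Rough sketch of the gap analysis described above (invented data).

tcodes_used_before = {"VA01", "VA03", "VF01"}          # usage snapshot before the transport
tcodes_used_after = {"VA01", "VA03", "VF01", "VL01N"}  # snapshot after the change goes live
regression_coverage = {"VA01", "VF01"}                 # T-codes the current library exercises

net_new_tcodes = tcodes_used_after - tcodes_used_before
coverage_gap = tcodes_used_after - regression_coverage

print("Net-new T-codes introduced by the change:", sorted(net_new_tcodes))
print("T-codes in production with no regression coverage:", sorted(coverage_gap))
```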

Then you keep building that regression suite, and eventually you get a library. That's how you flow back and forth. There is also something called the Business Process Change Analyzer (BPCA). That comes free with Solution Manager; you just have to configure it.

It allows you to load whatever you want to change in production into a buffer. So, before you actually transport the code into production, you'll know which areas it impacts. It goes down to the code level, so it allows you to do targeted regression as well. We've talked about Solution Manager, ALM, and UFT. Then there is LoadRunner and Performance Center -- load testing, performance testing, stress testing, and so on -- and this all goes into the same picture.
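The idea behind that targeted regression can be sketched as follows: given the objects a planned change touches, select only the tests whose T-codes depend on them. This is a hypothetical illustration of the selection logic, not how BPCA is implemented; the object names, dependency map, and test names are invented.

```python
# Hypothetical sketch of change-impact-based (targeted) regression selection.

# Which development objects each T-code depends on (normally derived by the tool).
tcode_dependencies = {
    "VA01": {"Z_PRICING_ROUTINE", "SAPMV45A"},
    "VF01": {"Z_BILLING_USEREXIT", "SAPLV60A"},
    "MM01": {"SAPLMGMM"},
}

tests_by_tcode = {
    "VA01": ["TC_SD_VA01_StandardOrder", "TC_SD_VA01_RushOrder"],
    "VF01": ["TC_SD_VF01_StandardInvoice"],
    "MM01": ["TC_MM_MM01_CreateMaterial"],
}

transport_objects = {"Z_PRICING_ROUTINE"}   # contents of the planned change

impacted_tcodes = [t for t, deps in tcode_dependencies.items() if deps & transport_objects]
targeted_suite = [tc for t in impacted_tcodes for tc in tests_by_tcode[t]]
print("Targeted regression suite:", targeted_suite)
# -> only the VA01 tests need to run for this change
```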

The ideal solution is that you can flow your content from Solution Manager into HP ALM and enable automation for all the tests together -- functional, performance, stress, and so on -- in one end-to-end flow. You're able to build that regression library, build that technical testing library, and build and maintain the library in Solution Manager at the same time.

Gardner: So the technology is really powerful, but it's incumbent on the users to go through those steps of configuring, integrating, and diligently building the libraries, and then building on that.

I'd like to go up to the business-level discussion. When you go to your boss's boss, can you explain to them what value they're going to get from having gone through this? It's one thing to do it because it's the right thing to do and it has super-efficient benefits, but that needs to translate into dollars and cents and business metrics. So what do you tell them they get at the business level when they do this properly?

Business takes notice

Shen: Very good question, because this exercise we did can be applied to any other company. It's at a level where the business really takes notice. One common challenge is that when you onboard somebody, do they have the proper documentation to ramp up?

I have yet to see a company that's very good with documentation, especially with SAP. Where is the list of all the T-codes we use in production today? What are the functional specs? What are the technical specs? Where is the field mapping? Where are the flows? You have to have that documentation in order to ramp somebody up. Otherwise, what typically ends up happening is that you hire somebody and you have to pull other team members away for a few weeks to ramp the person up.

Instead of putting them on the project to deliver right away -- start writing code, start configuring SAP, or whatever -- they can't start until a few months later. How do you accelerate that process? You build everything up in Solution Manager, you build everything up in HP ALM, you build everything up in your QTP and UFT, and so on.

That way, the person can come in, go to Solution Manager, and look at all the T-codes and scope, the updated business areas, and the updated functional specs, and understand what the company's application does, what the logic is, and what the configuration is. Then the person can easily go to HP ALM and figure out the testing scenarios, how people test, how they use the application, and what the expected behavior of the application should be.

Point one is that you can really speed up the hiring process and the knowledge transfer for new personnel. A more important application of this is on projects. Whether SAP or not, companies usually run very high-end projects, because you have to constantly roll out new applications, new releases, and new features based on market conditions and business needs.

When a project starts, a very common challenge is documentation of the existing functionality. How can you identify what to build? If you have nothing, I can guarantee you that the entire project team will spend a few weeks trying to figure out the current status.

Again, with the library in Solution Manager, the regression testing suite, the automated suite in HP ALM and UFT, and all of that, you get that on day one. It's going to shorten the project time. It's going to accelerate delivery with good quality.

The other thing is that a project is so important that everything in it is necessary. Once you've actually figured out your status quo, you start building.

Testing is the most labor-intensive and painstaking process, and probably one of the most expensive areas in any project delivery. How do you accelerate that? Without an existing regression library, documented test scenarios, or even automated existing regression libraries, you have to invent everything from scratch.

By the way, that involves figuring out the testing scope, writing the test cases from scratch, building all the parameters, and building all the data. That takes a lot of time. If you already have an existing library, that's going to shorten your lifecycle a lot.

So all this translates into dollar savings, plus better coverage and faster delivery, which is key for the business. By the way, when you have all this in place, you're able to catch a lot more defects before they go to production. I saw a study that said it's about 10 times more expensive to catch a defect in production. So the earlier you catch it, the better.
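As a back-of-the-envelope illustration of that economics, using the roughly 10x production-cost figure mentioned above: the dollar amounts and defect counts here are invented purely for illustration.

```python
# Illustrative "catch it earlier" math (all figures assumed, not from the interview).

cost_in_test = 500           # assumed cost to fix a defect caught during testing
cost_in_production = 10 * cost_in_test
defects_caught_earlier = 40  # defects the regression library catches before go-live

savings = defects_caught_earlier * (cost_in_production - cost_in_test)
print(f"Estimated avoided cost: ${savings:,}")   # $180,000 with these assumptions
```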

Security confidence

Gardner: Right, of course. It also strikes me that doing this will give you better security confidence; governance, risk, and compliance benefits; and auditability when that kicks in. In a banking environment, of course, that's really important.

Shen: Absolutely. The HP ALM tool provides a complete audit trail for the testing aspect of it. Not at this current company, but on other projects, usually an auditor comes in and asks for access to HP QC. They look at HP ALM -- the test cases, who executed them, the recorded results, and the defects. That's what auditors look for.
Reduce post-production issues by 80% by building better apps.  
Learn Seven Best Practices for Business-Ready Applications
with a free white paper.
Gardner: Cindy, what is it that's of interest to you here at HP Discover in terms of what comes next in HP's tools, seeing as they're quite important to you? Also, are you looking for anything in the HP-SAP relationship moving forward?

Shen: I love that question. Sometimes, I feel very lonely in this niche field. SAP is a big beast. HP-SAP integration is part of what they do, but it's not what they market. The good thing is that most SAP clients have HP ALM. It's a very necessary toolset for both HP and SAP to continue to evolve and support.

It's a niche market. There are only a handful of people in the world who can do this end to end properly. HP has many other products, so you're looking at a small circle of SAP end clients who use HP toolsets and need to know how to configure and run this efficiently and properly. Sometimes I feel very lonely, sitting in the overlap of the HP and SAP circles.

That's why Discover is very important to me. It feels like a homecoming, because here I actually speak to the project managers and experts on HP ALM, Sprinter, the integration, and the HP adapter. So I know what the future releases are, I know what's coming down the line, and I know what configuration I might have to change in the future.

The other really good part, which I'm passionate about after having done enough projects, is that I've helped clients, and there's always this common set of questions and challenges. It took me a couple of years to figure these out. There are many, many people out there in the same boat I was in years back, and I love to share my experience, expertise, and knowledge with the end clients.

They're the ones managing and creating their end-to-end testing. They're the ones facing all these challenges. I love to share with them what the best practices are, how to structure things correctly, so that you don’t have to suffer down the road. It really takes expertise to make it right. That’s what I love to share.

As far as the ecosystem of HP and SAP goes, I'd like to see them integrate more tightly. I'd like to see them engage more with the end-user community, so that we can share the lessons and the experience with end users more.

Also, I know all the vendors in this space. The vendors in the space are very niche, and most of them come from SAP and HP backgrounds. So I keep running into people I know, and my vendors keep running into people they know. It's that community that's critical to enabling success for the end user and for the business.

Gardner: This has been very interesting, and I appreciate your candor and depth of understanding. We've been learning about how Home Trust Company in Toronto has created a center of excellence and improved its application lifecycle management across SAP implementations, and how HP tools and SAP, integrated together with proper methodologies, can deliver very substantial paybacks -- technically, in security and compliance, and in business and productivity terms.

So a huge thank you to our guest, Cindy Shen, SAP QA Manager at Home Trust Company. Thanks so much.

Shen: Thank you very much. My pleasure.

Gardner: And I'd also like to thank our audience for joining us for this special new style of IT discussion coming to you directly from the HP Discover 2014 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions.  Thanks again for listening, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Transcript of a BriefingsDirect podcast on the steps to build a successful SAP test environment with HP quality assurance tools. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
