Wednesday, October 15, 2014

Journey to SAP Quality — Home Trust Builds Center of Excellence with HP Tools

Transcript of a BriefingsDirect podcast on the steps to build a successful SAP test environment with HP quality assurance tools.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

This time, we're coming to you from the HP Discover 2014 Conference in Las Vegas to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

Our next innovation case study interview highlights how Home Trust Company in Toronto has created a center of excellence to improve quality assurance and the ongoing performance of its SAP applications. To learn how they do it, we're delighted to be joined by Cindy Shen, SAP QA Manager at Home Trust. Welcome.

Cindy Shen: Thank you.

Gardner: First, tell us a little bit about Home Trust. You're a large financial services organization.

Shen: We're one of the leading trust companies in Toronto, Canada. There are two main businesses we deal with. The first bucket is mortgages. We deal with a lot of residential mortgages.

The other bucket is deposits: we're a deposit-taking institution. People deposit their money with us, and they can invest in a registered retirement savings plan (RRSP), along with other investment options, which is the equivalent of the US 401(k) plan.

We're also Canada Deposit Insurance Corporation (CDIC)-compliant. If a customer has money with us and anything happens to the company, the customer can get back up to a certain amount of money.

We're regulated under the Office of the Superintendent of Financial Institutions (OSFI), which regulates banks and trust companies, including us.

Some of the hurdles

Gardner: So obviously it's important for you to have your applications running properly. There's a lot of auditing and a lot of oversight. Tell us what some of the hurdles were, some of the challenges you had as you began to improve your quality-assurance efforts.

Shen: We're primarily an SAP shop. I was an SAP consultant for a couple of years. I've worked in North America, Europe, and Asia, and I've been through many industries, not just financial. I've touched on consumer packaged goods, retail, manufacturing, and banking SAP projects. I usually deal with global projects: 100 million-plus, with 100 to 300 people.

What I noticed is that, regardless of the industry or the functional solution a project has, there's always a common set of QA challenges when it comes to SAP testing, and it's very complicated. It took me a couple of years to figure out the tools, where each tool fits into the whole picture, and how the pieces fit together.

For example, one of the common challenges I'm going to talk about in my session (here at HP Discover) is, first of all, what tools you should be using. HP ALM, the test management tool, is, in my opinion, the market leader. That's what pretty much all the Fortune 500 companies, and even smaller companies, use as their primary test management tool. But testing SAP is unique.
What are the additional tools on the SAP side that you need in order to integrate back to the ALM test suite, so that the system of record for development and the system of record for testing are all integrated together, and the flow makes sense for SAP applications? That's unique.

One is the toolset, and the other is methodology. If you parachute me into any project, however large or small, complex or simple, local or global, I can guarantee you that the standards are not clear, or there is no standard in place.

For example, how do you properly write a test case to test SAP? You have to go into granular detail, spelling out the action words you use for different application areas so that automation can be enabled easily in the future. How do you parameterize?

What's the appropriate level of parameterization to enable that flexibility for automation? What's the naming convention for your input and output parameters, so that data flows from the very first test case all the way to the end when you test the application end to end?
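To make the parameterization and naming-convention point concrete, here is a minimal sketch in Python. The action words, IN_/OUT_ prefixes, and T-codes are hypothetical examples of an authoring standard, not Home Trust's actual conventions; the idea is simply that a consistent naming scheme lets one step's output feed the next step's input across an end-to-end flow.

```python
# Illustrative sketch only: the action words, parameter names, and T-codes
# below are invented examples of an authoring standard, not Home Trust's.
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str                                    # standardized action word
    tcode: str                                     # SAP transaction code exercised
    inputs: dict = field(default_factory=dict)     # name -> source of the value
    outputs: list = field(default_factory=list)    # names produced for later steps

# Consistent IN_/OUT_ prefixes let a later step consume an earlier step's output.
steps = [
    TestStep("CREATE_MORTGAGE_APP", "ZHT01",
             inputs={"IN_CUSTOMER_ID": "<data-driven>"},
             outputs=["OUT_APPLICATION_NO"]),
    TestStep("APPROVE_MORTGAGE", "ZHT02",
             inputs={"IN_APPLICATION_NO": "OUT_APPLICATION_NO"},
             outputs=["OUT_CONTRACT_NO"]),
]

# Simple check: every input is either data-driven or produced by a prior step.
produced = set()
for step in steps:
    for name, source in step.inputs.items():
        assert source == "<data-driven>" or source in produced, \
            f"{step.action}: {name} has no upstream producer"
    produced.update(step.outputs)
print("parameter flow is consistent end to end")
```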

Most errors and defects happen in the integration area. So, how do you make sure your test coverage covers all your key integration points? SAP is very complex. If you change one thing, I can guarantee you that there's something else in some other areas of the application or in the interface that’s going to change without your knowing it, and that’s going to cause problems for you sooner or later.

So, how do you have those standards and that methodology consistently enforced for every person who writes test cases or executes testing, at the same quality and in the same format, so that you can generate the same reports across all the different projects, give executives oversight, and minimize the duplicate work you have to do on the manual test cases in order to automate in the future?

Testing assets

The other big part is how to maintain those testing assets so they're repeatable, reusable, and flexible -- so that you can shorten project delivery time in the future through automation and consistently written manual test cases, accelerate new projects as they come up, and also improve quality in post-production support so you can catch critical errors fast.

Those are all very common SAP testing QA themes, challenges, or problems that practitioners like me see in any SAP environment.

Gardner: So when you arrived at Home Trust, and you understood this unique situation, and how important SAP applications are, what did you do to create a center of excellence and an ability to solve these issues?

Shen: I was fortunate to have been the lead in the SAP area for a lot of global projects. I've seen the worst of it. I've also seen the fraction of clients that actually do it much better than other companies. So I'm fortunate to know the best practices I want to implement, what will and won't work, which critical things you have to get in place at the beginning, and which pieces can wait until down the road.

Coming from an SAP background, I'm fortunate to have that knowledge. So, from the start, I had a very clear vision as to how I wanted to drive this. First, you need to conduct an analysis of the current state, and what I saw was very common in the industry as well.

When I started, there were only two people in the QA space. It was a brand new group. And there was an overall software development lifecycle (SDLC) methodology in the company. But the company had just gone live with its SAP application. So it was basically a great opportunity to set up a methodology, because it was a green field. That was very exciting.

One of the things you have to have is an overarching methodology. Are you using Business Process Testing (BPT), or some other methodology? We also had to comply with, or fit in with, SAP's ASAP methodology, which is essentially the industry standard in the SAP space as well. So, we had to assess the current state and come up with a methodology that made sense for Home Trust Company.

Two, you have to get all the right tools in place. Home Trust is very good at getting industry-leading toolsets. When I joined, they already had HP QC; at that time it was called Quality Center (QC), and now it's ALM. Solution Manager was part of the SAP purchase, so it was free. We just had to configure and implement it.

We also had QTP, which is now called UFT, as well as LoadRunner. All the right toolsets were already in place, so I didn't have to go through the hassle of procuring them.

Assessing the landscape

When we assessed the landscape of tools, we realized that, like many other companies, they were not maximizing the return on investment (ROI) on the toolsets. The toolsets were not being leveraged as much as they could be, because in a typical SAP environment the time-to-market pressure is very high for project delivery and new product introduction.

When you have a new product, you have to configure the system fast, so it’s not too late to bring the product to the market. You have a lot of time pressure. You also have resource constraints, just like any other company. We started with two people, and we didn’t have a dedicated testing team. That was also something we felt we had to resolve.

We had to tackle it from a methodology perspective, a toolset perspective, and a personnel perspective: how to properly structure the team and ramp resources up. Then, after all the strategic things are in place, you figure out your execution pieces.

From a methodology perspective, what are the authoring standards, what are the action words, and what are the naming conventions? I can't emphasize this enough, because I see it done so differently on each project. People don't know the implications down the road.

How do you properly structure your testing assets in QC in a way that makes sense for SAP? That is a key area. You can't structure them at too high a level; that would mean you have a mega scenario with everything in one test case, or in just a few test cases. And something will change in the application, I can guarantee you, because you'll have to redevelop or modify it for another feature.

If you structure your testing assets at such a high level, you have to rewrite every single asset when that happens. You don't know where a change affects something somewhere else, because you've probably hard-coded everything.

If you make it too granular, maintenance becomes a nightmare. It really has to be at the right level to enable flexibility and get ready for automation. It also has to be easy to maintain, because maintenance usually costs more than the initial creation. Those are all the standards we set up.

What's your proper defect flow? It's different from company to company. You have to figure out the minimum effort required, and what makes sense. You also have to have the right controls in place for the company. You have to figure out naming conventions, the relevant test cases, and all that. That's the methodology part of it.

The toolset part is a lot more technical. If you're talking about the HP ALM suite, what's the standard configuration you need to enable for all your projects? I can guarantee you that every company has concurrent projects going on in post-production.

Even when they're implementing their initial SAP, there are many concurrent streams going on at the same time. How do you make sure the configuration accommodates all the different types of projects with the same set of configuration? This is a key point: you cannot, let me repeat, you cannot, have very different configurations of HP ALM across different projects.

Sharing assets

That would prevent you from sharing test assets across projects, from automating them in the same manner in the near future, and from delivering projects with consistent quality and a consistent reporting format across the company. It would also create nightmares for maintenance and for keeping standards in place. That's key. I can't emphasize it enough.
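As a rough illustration of what "one configuration for all projects" means in practice, the sketch below compares field definitions that have already been exported from two ALM projects and flags drift. How you export the customization (via ALM's REST interface, the OTA client, or a manual dump) depends on your version and is deliberately not shown; the dictionaries and field names are invented stand-ins.

```python
# Hypothetical sketch: compare defect-field customization exported from two
# ALM projects and report drift. Nothing here calls a real ALM API; the
# dictionaries stand in for whatever your export step produces.

project_a = {  # field name -> (label, required, has_lookup_list)
    "severity":    ("Severity", True, True),
    "detected_by": ("Detected By", True, False),
}
project_b = {
    "severity":    ("Severity", False, True),          # drifted: no longer required
    "detected_by": ("Detected By", True, False),
    "custom_01":   ("Business Unit", False, True),     # project-local custom field
}

def config_drift(a, b):
    drift = []
    for name in sorted(set(a) | set(b)):
        if name not in a or name not in b:
            drift.append(f"{name}: present in only one project")
        elif a[name] != b[name]:
            drift.append(f"{name}: definitions differ {a[name]} vs {b[name]}")
    return drift

for issue in config_drift(project_a, project_b):
    print("DRIFT:", issue)
```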

So from the toolset perspective, how do you design a configuration that fits all? That's the mandate. The rule of thumb is: do not customize. Use out-of-the-box functionality. Do not code. If you really have to write a query, minimize it.

The good thing about HP ALM is that it's flexible enough to accommodate all the critical requests. If you find you have to write something for it or you have to have a custom field or custom label, you probably should consider changing your process first, because ALM is a pretty mature toolset.
I've been on very complex global projects in different countries. HP ALM is able to accommodate all the key metrics, all the key deliverables you're looking to deliver. It has the capacity.
When I see other companies that do a lot of customization, it's because their process isn't correct. They're bending the tool to accommodate processes that don't make sense. People really have to keep an open mind and seek out the best practices and expertise in the industry to understand what out-of-the-box functionality to configure in HP ALM to manage their SAP projects, instead of weakening the tool to fit how they do SAP projects.

Sometimes, it involves a lot of change management, and for any company, that’s hard. You really have to keep that open mind, stick with the best practice, and think hard about whether your process makes sense or whether you really need to tweak the tool.

Gardner: It's fascinating that by doing due diligence on process and methodology, leveraging the tools, and recognizing the unique characteristics of this particular application set, you're going to improve the quality of that particular rollout or application delivery into production, and of whatever modifications you need to make over time.

It's also going to set you up to be in a much better position to modernize and be aggressive with those applications, whether it's delivering them out to a mobile tier, for example, or whether there’s different integrations with different data. So when you do this well, there are multiple levels of payback. Right?

Shen: I love this question, because this is really the million-dollar view, or the million dollar understanding, that anybody can take away from this podcast or my session (at HP Discover). This is the million dollar vision that you should seriously consider and understand.

From an SAP and HP ALM perspective and the Center for Excellence, the vision is this (I'm going to go slowly, so you get all the components and all the pieces):

Work closely

SAP and HP work very closely together, so your account rep will help you greatly with the toolsets in that area. It starts with Solution Manager from SAP, which should be your system of record for development. The best part is that when you implement SAP, you use Solution Manager to input your entire Business Process Hierarchy (BPH). The BPH is the key ingredient in Solution Manager that lays out all the processes in your environment.

Tied to it, you should input all the transaction codes (T-codes). T-codes are the DNA of SAP: to go anywhere in SAP, you most likely have to enter a T-code, and that brings you to the right area. When we scope out an SAP project, it starts with the list of T-codes. The key is to build out that BPH in SAP and associate all the T-codes with the different areas.

With each T-code, you have all the documentation -- functional specifications, technical specifications, and mappings -- associated at each level of your BPH. Not only that, you should have your security IDs and metrics associated with each level of the BPH and the T-codes, and all the flows and requirements tied together, and of course the development, the code.
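A minimal sketch of the idea follows: a hierarchy node that carries T-codes and attached documents at each level. The structure, process names, T-codes, and file names are all illustrative stand-ins, not Solution Manager's actual data model.

```python
# Illustrative only: a simplified stand-in for a Business Process Hierarchy,
# not Solution Manager's internal model. All names are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BPHNode:
    name: str
    tcodes: List[str] = field(default_factory=list)
    documents: List[str] = field(default_factory=list)   # func/tech specs, flows
    children: List["BPHNode"] = field(default_factory=list)

bph = BPHNode("Deposits", children=[
    BPHNode("Open RRSP Account",
            tcodes=["ZDEP01"],
            documents=["FS_OpenRRSP_v3.docx", "TS_OpenRRSP_v2.docx"]),
    BPHNode("Post Interest",
            tcodes=["ZDEP10"],
            documents=["FS_PostInterest_v1.docx"]),
])

def all_tcodes(node):
    """Flatten the hierarchy into the list of T-codes in scope."""
    found = list(node.tcodes)
    for child in node.children:
        found.extend(all_tcodes(child))
    return found

print(all_tcodes(bph))   # -> ['ZDEP01', 'ZDEP10']
```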

So, Solution Manager should be your system of record for development. The best practice is to always do your initial SAP implementation with Solution Manager, so that by the time you go live, you've already done all that. That's the first bucket.

The second bucket is the HP tool suite. We'll start with the HP ALM test management tool. It allows you to input your testing requirements, and they flow from the requirement to a test. If you're using Business Process Testing (BPT), they flow through to the components in BPT and then to the test case module. From there, you flow through to the test plan, the test lab, and the defects. Everything is well integrated and connected.

And then there is something we call an adapter. It's a Solution Manager and HP ALM adapter, and it enables Solution Manager and HP ALM to talk. You have to configure that adapter between Solution Manager and ALM. It is able to bring your hierarchy, your BPH in Solution Manager, and all the related assets, including the T-codes, over to the Requirements module in HP ALM.

So if you have your Solution Manager straightened out, whatever you bring over to ALM is already your scope. It tells you which T-codes are in scope to test. By the way, in SAP it's often a headache that each T-code can do many, many things, especially if you're heavily customized.

So a T-code is not enough. You have to go down to a granular level and get the variants: what are the typical scenarios or testing variants it has? Then, you can create those variants in the BPH in Solution Manager, and they flow through to the Requirements module in HP ALM, listing out all of each T-code's possible variants.

Then, based on that, you start scoping out your testing assets: the components, test cases, or whatever you have to write. You put them in BPT or in your test case module, and then you link the requirements over, so you already have your test coverage. From there, you flow through the test cases, through execution in the test lab, through to defects, and it all ties back together.
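The flow Shen describes boils down to a coverage question: every requirement imported from the BPH should have at least one linked test. A toy sketch of that check follows; the requirement names, T-code variants, and test names are invented, and in ALM the linkage itself lives in the Requirements and Test Plan modules rather than in code like this.

```python
# Toy coverage check with invented requirement and test names. It only shows
# the idea of verifying that every imported T-code variant has a linked test.

requirements = {          # requirements (T-code + variant) imported from the BPH
    "ZDEP01 / open RRSP - new customer",
    "ZDEP01 / open RRSP - existing customer",
    "ZDEP10 / post interest - month end",
}
test_coverage = {         # test case -> requirements it is linked to
    "TC_RRSP_New": {"ZDEP01 / open RRSP - new customer"},
    "TC_Interest_MonthEnd": {"ZDEP10 / post interest - month end"},
}

covered = set().union(*test_coverage.values())
for gap in sorted(requirements - covered):
    print("NO COVERAGE:", gap)
# -> NO COVERAGE: ZDEP01 / open RRSP - existing customer
```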

And where does automation come into play? That's the bucket after HP ALM. UFT today is still the primary tool people use to automate. In the SAP space, SAP actually has its own, called Test Acceleration and Optimization (TAO), which also leverages UFT as the foundation for SAP-specific automation. But either is fine; if you already have UFT, you really could start today.

Back and forth

So, the automation falls into place. This is very interesting; this is how it goes back and forth. For example, say you've already transported something to production and you want to check whether anything slipped through the cracks. Is all the testing coverage there?

There's something called the Solution Documentation Assistant. From the Solution Manager side, you can read from EarlyWatch reports to see which T-codes are actually being used in your production system today. After something is transported into production, you can run it again to see which T-codes are net new in the production system, and then compare. So there's a process.

Then you can see which ones are net new in the BPH, flow that through to your HP QC or HP ALM, and see whether you have coverage for them. If not, that's your scope for net-new manual and automated testing.
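Conceptually, the scoping step Shen describes is a set difference: T-codes now observed in production versus T-codes already in the BPH and regression scope. A rough sketch follows; the T-code lists are invented, and in practice the "used in production" list would come from EarlyWatch usage data via the Solution Documentation Assistant rather than being hard-coded.

```python
# Rough sketch of the post-transport scoping step with invented T-code lists.
used_in_production = {"ZDEP01", "ZDEP10", "ZMTG05", "ZMTG07"}   # observed after transport
already_in_scope   = {"ZDEP01", "ZDEP10", "ZMTG05"}             # current BPH / regression scope

net_new = used_in_production - already_in_scope
print("Net-new T-codes needing manual/automated coverage:", sorted(net_new))
# -> ['ZMTG07']
```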

Then, you keep building that regression suite and eventually you get a library. That's how it flows back and forth. There's also something called the Business Process Change Analyzer (BPCA), which comes free with Solution Manager. You just have to configure it.

It allows you to load whatever you want to change in production into a buffer. So, before you actually transport the code into production, you know which areas it impacts; it goes down to the core level. That lets you do targeted regression as well. We've talked about Solution Manager, ALM, and UFT. Then there's LoadRunner and Performance Center -- load testing, performance testing, stress testing, and so on -- and it all goes into the same picture.
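Going back to BPCA for a moment: its output is essentially an impact list, and targeted regression then means running only the tests linked to the impacted objects. A simplified sketch follows; the impacted T-codes and the test-to-T-code links are invented, and a real impact list would come from BPCA's analysis of the transport sitting in the buffer.

```python
# Simplified stand-in for targeted regression selection; all data is invented.
impacted_tcodes = {"ZMTG05", "ZDEP10"}          # from the change impact analysis
regression_suite = {
    "TC_Mortgage_Payout":   {"ZMTG05"},
    "TC_RRSP_New":          {"ZDEP01"},
    "TC_Interest_MonthEnd": {"ZDEP10"},
}

targeted = [name for name, tcodes in regression_suite.items()
            if tcodes & impacted_tcodes]        # keep tests touching impacted T-codes
print("Run only:", targeted)   # -> ['TC_Mortgage_Payout', 'TC_Interest_MonthEnd']
```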

The ideal solution is that you can flow your content from Solution Manager to HP ALM, enable automation for all the tests together -- functional, performance, stress, whatever -- in one end-to-end flow, and build that regression library. You're able to build that technical testing library, build the library in Solution Manager, and maintain them at the same time.

Gardner: So the technology is really powerful, but it's incumbent on the users to go through those steps of configuring, integrating, creating the diligence of the libraries and then building on that.

I'd like to go up to the business-level discussion. When you go to your boss's boss, can you explain to them what they're going to get as a value for having gone through this? It's one thing to do it because it's the right thing to do and it's got super efficient benefits, but that needs to translate into dollars and cents and business metrics. So what do you tell them you get at that business level when they do this properly?

Business takes notice

Shen: Very good question, because this exercise we did can be applied to any other company. It's at the level where the business really takes notice. One common challenge is that when you onboard somebody, do they have the proper documentation to ramp up?

I have yet to see a company that's very good with documentation, especially with SAP. Where is the list of all the T-codes we use in production today? What are the functional specs? What are the technical specs? Where is the field mapping? Where are the flows? You have to have that documentation in order to ramp somebody up. What typically ends up happening instead is that you hire somebody and have to pull other team members away for a few weeks to ramp the person up.

Instead of putting them on the project to deliver right away -- writing code, configuring SAP, or whatever -- they can't start until a few months later. How do you accelerate that process? You build everything up in Solution Manager, in HP ALM, and in QTP and UFT.

This way, the person comes in, goes to Solution Manager, and looks at all the T-codes and the scope, the updated business areas, and the updated functional specs, and understands what the company's application does, what the logic is, and what the configuration is. Then, the person can easily go to HP ALM and figure out the testing scenarios, how people test, how they use the application, and what the expected behavior of the application should be.

Point one is that you can really speed up the hiring and knowledge-transfer process for new personnel. A more important application of this is on projects. Whether SAP or not, companies are under constant pressure, because you have to continually roll out new applications, new releases, and new features based on market conditions and business needs.

When a project starts, a very common challenge is documentation of the existing functionality. How can you identify what to build? If you have nothing, I can guarantee you that the entire project team will spend a few weeks trying to figure out the current status.

Again, with the library in Solution Manager, the regression testing suite, the automated suite in HP ALM and UFT, and all of that, you can get that on day one. It shortens the project time and accelerates delivery with good quality.

The other thing is that a project is so time-sensitive that everything in it is necessary. Once you actually figure out your status quo, you start building.

Testing is the most labor-intensive and painstaking process and probably one of the most expensive areas in any project delivery. How do you accelerate that? Without an existing regression library, documented test scenarios, and automated regression suites, you have to invent everything from scratch.

By the way, that involves figuring out the testing scope, writing the test cases from scratch, building all the parameters, and building all the data. That takes a lot of time. If you already have an existing library, that shortens your lifecycle a lot.

So all of this translates into dollar savings, plus better coverage and faster delivery, which is key for the business. By the way, when you have all of this in place, you're able to catch a lot more defects before they go to production. I saw a study that said it's about 10 times more expensive to fix a defect caught in production. So the earlier you catch it, the better.
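To put a rough, hedged number on that rule of thumb: the dollar figures below are purely illustrative assumptions, not Home Trust's data, but they show how the cited 10x multiplier turns early defect detection into an avoided-cost estimate.

```python
# Purely illustrative arithmetic around the "10x in production" rule of thumb.
cost_in_test = 500            # assumed cost to fix a defect caught in testing ($)
multiplier   = 10             # the rough factor the cited study suggests
defects_caught_early = 40     # assumed defects intercepted before go-live

savings = defects_caught_early * cost_in_test * (multiplier - 1)
print(f"Estimated avoided cost: ${savings:,}")   # -> Estimated avoided cost: $180,000
```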

Security confidence

Gardner: Right, of course. It also strikes me that doing this will allow you to have better security confidence, governance, risk, and compliance benefits, and auditability when that kicks in. In a banking environment, of course, that's really important.

Shen: Absolutely. The HP ALM tool provides a complete audit trail for the testing aspect of it. Not at my current company, but on other projects, an auditor usually comes in and asks for access to HP QC. They look at HP ALM -- the test cases, who executed them, the recorded results, and the defects. That's what auditors look for.
Gardner: Cindy, what is it that's of interest to you here at HP Discover in terms of what comes next in HP's tools, seeing as they're quite important to you? Also, are you looking for anything in the HP-SAP relationship moving forward?

Shen: I love that question. Sometimes, I feel very lonely in this niche field. SAP is a big beast. HP-SAP integration is part of what they do, but it's not what they market. The good thing is that most SAP clients have HP ALM. It's a very necessary toolset for both HP and SAP to continue to evolve and support.

It's a niche market. There are only a handful of people in the world who can do this end to end properly. HP has many other products. So you're looking at a small circle of SAP end clients who are using HP toolsets and who need to know how to configure and run this properly and efficiently. Sometimes I feel very lonely in the overlap of the HP and SAP circles.

That's why Discover is very important to me. It feels like a homecoming, because here I actually get to speak to the project managers and experts on HP ALM, Sprinter, the integration, and the HP adapter. So I know what the future releases are, what's coming down the line, and what configuration I might have to change in the future.

The other really good part, which I'm passionate about after having done enough projects, is that I've helped clients, and there's always this common set of questions and challenges. It took me a couple of years to figure these out. There are many, many people out there in the same boat I was in years back, and I love to share my experience, expertise, and knowledge with the end clients.

They're the ones managing and creating their end-to-end testing. They're the ones facing all these challenges. I love to share with them what the best practices are, how to structure things correctly, so that you don’t have to suffer down the road. It really takes expertise to make it right. That’s what I love to share.

As far as the ecosystem of HP and SAP goes, I'd like to see them integrate more tightly and engage more with the end-user community, so that we can share the lessons and the experience with end users more.

Also, I know all the vendors in the space. Basically, the vendors in this space are very niche, and most of them come from SAP and HP backgrounds. So I keep running into people I know, my vendors keep running into people they know, and it's that community that's critical to enabling success for the end user and for the business.

Gardner: This has been very interesting, and I appreciate your candor and depth of understanding. We've been learning how Home Trust Company in Toronto has been creating a center of excellence and improving its application lifecycle management across SAP implementations, and how HP tools and SAP, integrated together with proper methodologies, can have very substantial paybacks -- technically, in security and compliance terms, and in business and productivity terms.

So a huge thank you to our guest, Cindy Shen, SAP QA Manager at Home Trust Company. Thanks so much.

Shen: Thank you very much. My pleasure.

Gardner: And I'd also like to thank our audience for joining us for this special new style of IT discussion coming to you directly from the HP Discover 2014 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions.  Thanks again for listening, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Transcript of a BriefingsDirect podcast on the steps to build a successful SAP test environment with HP quality assurance tools. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Thursday, October 09, 2014

ITSM Adoption Forces a Streamlined Operations Culture at Desjardins that Paves the Way to Better Cloud Use

Transcript of a BriefingsDirect podcast on how a Canadian cooperative banking organization is using ITSM to put best practices in place and deliver higher IT value to the company.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

This time, we're coming to you from the HP Discover 2014 Conference in Las Vegas. We're here to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

Our next innovation case study interview highlights how Desjardins Group in Montréal is improving its IT operations through an advanced IT service management (ITSM) approach.

To learn more, we are joined by Trung Quach, ITSM Manager at Desjardins in Québec. Welcome.

Trung Quach: Thank you very much, Dana, for having us.

Gardner: First tell us a little bit about your organization, for those who aren’t familiar. You're in Canada and you have a large network of credit unions.

Quach: It's more of a cooperative bank. We are around 50,000 people across Québec, and we've started moving into the rest of Canada and the US.
Gardner: Tell us a little bit about your IT organization, the size, how many people, how many datacenters? What sort of IT organization do you have?

Quach: We're around 2,500 and counting. We're mainly based in Montréal and Lévis, which is near Québec City. Most of them are in Montréal, but some technical people are in Lévis. 

Gardner: Tell us about your role. What are you doing there as ITSM manager?

The ITIL process

Quach: I joined Desjardins last year in the ITSM leader position. The role is mostly about process -- the ITIL process and everything that's involved with the tool -- as well as supporting those overall processes.

Gardner: Tell us why ITSM has become important to you. What were some of the challenges, some of the requirements? What was the environment you were in that required you to adopt better ITSM principles?

Quach: A couple of years ago, when they merged 10-plus IT silos into one big group, Desjardins needed to centralize the process and put best practices in place to be more efficient and competitive -- and to deliver higher value to the business.

Gardner: What, in particular, were issues that cropped up as a result of that decentralization? Was this poor performance, too much cost, too many manual processes, all of the above?

Quach: We had a lot of manual processes, and a lot of tools. To be able to measure the performance of a team, you need to use the same process and the same tools, and then measure yourself on it. You need to optimize the way you do it, so that you can provide better IT services.

Gardner: What have been some of the results of your movement toward ITSM? What sort of benefits have you realized as a result?

Quach: We had many. Some were financial, but the most important, I think, are service quality and the availability of those services. One indicator is a 30 percent reduction in major incidents over the last two years.
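Measuring that kind of improvement presupposes the standardized tickets and tools Quach mentions. As a rough illustration only, the ticket records below are invented and the real figure would come out of the ITSM tool's own reporting, but a year-over-year major-incident comparison might be computed along these lines.

```python
# Invented ticket export; real numbers would come from the ITSM tool's reports.
tickets = [
    {"id": 1, "priority": "major", "year": 2012},
    {"id": 2, "priority": "major", "year": 2012},
    {"id": 3, "priority": "minor", "year": 2013},
    {"id": 4, "priority": "major", "year": 2013},
]

def major_incidents(records, year):
    """Count major-priority tickets opened in a given year."""
    return sum(1 for t in records if t["priority"] == "major" and t["year"] == year)

before, after = major_incidents(tickets, 2012), major_incidents(tickets, 2013)
print(f"Change in major incidents: {100 * (after - before) / before:+.0f}%")
# -> -50% on this toy data
```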

Gardner: What is it about your use of ITSM that has led to that significant reduction in incidents? How does that translate?

Quach: We put our new problem-management approach to work along with the problem processes. When we open tickets, we can take care of incidents in a coordinated way at an enterprise level, so the impact is felt everywhere. We can now advise the lines of business, follow up on an incident, and close it rapidly. We follow up on any problems and then fix the real issues so that they don't come back.

Gardner: Has this translated back to any application development or custom development in your organization, or is it strictly on the operations side?

Better support

Quach: We started all of this on the operations side, but last year we started on the development side, too. They're getting involved in our process slowly, and that's going to get better soon, so we can support the full IT lifecycle better.

Gardner: Tell us about HP Discover. What's of interest to you? Have you been looking at what HP has been doing with their tools? What's of most importance to you in terms of what they do with their technology?

Quach: I can tell you how important it is for us. Last year we didn't go to HP Discover. This year, around eight people from my team and the architecture team are here. That shows you how important it is.

Now we spread out. A lot of my team members went to explore tools and everything else HP has to offer -- and HP has a lot to offer. We went to learn about the cloud, as well as big data. It all works together. That's why it was important for us to come here. ITSM is the main reason we're here, but I want to make sure everything works together, because the IT processes touch everything.
Gardner: I've talked to a number of organizations, Trung, and they've mentioned that before they feel comfortable moving into more cloud activities, and before they feel comfortable adopting big data, analytics platforms, they want to make sure they have everything else in order. So ITSM is an important step for them to then go to larger, more complex undertakings. Is that your philosophy as well?

Quach: Yes. There are two ways to do this. You use that technology to force yourself to be disciplined, or you discipline yourself. ITSM is one way to do it. You force yourself to work in a certain manner, a streamlined manner, and then you can go to the cloud. It's easier that way.

Gardner: Then, of course, you also have standardization in culture, in organization, not just technology, but the people and the process, and that can be very powerful.

Quach: If you asked me about cloud -- and I have done this at another company -- in a 30-minute interview about cloud, I would spend 29 minutes talking not about technology but about people and processes.

Gardner: How about the future of IT? Any thoughts about or the big picture of where technology is going? Even as we face larger data volumes, perhaps more complexity, and mobile applications, what are your thoughts about how we solve some of those issues in the big picture?

Time to market

Quach: More and more, IT is going to be challenged to meet the speed demanded for faster time to market. To do that, you need processes, technology, and, of course, people. The client, the business, is going to ask us to be faster. That's why we'll need to go to the cloud. But to go to the cloud, we first need to master our IT services. Otherwise, we'd have neither the cloud nor that agility, and we would not be competitive.

Gardner: Looking back, now that you have gone through an ITSM advancement, for those who are just beginning, what are some thoughts that you could share with them?

Quach: In an ITSM project, it's very hard to manage change. I'm talking about the people change, not the change-management technology process. Most of the time, you put that in place and say that everybody has to work with it. If I would redo it, I would bring more people to understand the latest ITSM science and processes, and explain why in five or 10 years, it's going to really help us.

After that, we'd put in the project, but we'd follow up with people and train them every year. ITSM is a never-ending story. You always have to be close to your clients. Even if they are IT, they are your clients or partners. You need to coach them to make sure they understand why they're doing this. Sometimes it takes a bit longer to get it right at the beginning, but it's all worth it in the end.

Gardner: Thanks so much. We'll have to leave it there. We've been talking about how Desjardins Group in Montréal has been embarking on an ITSM journey, and we've been learning how that's helped them improve quality and reduce the incidence of IT problems. So thank you to our guest, Trung Quach, ITSM Manager at Desjardins Group.
Quach: Thank you very much.

Gardner: And thank you, too, to our audience for joining this special new style of IT discussion coming to you directly from the HP Discover 2014 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a Canadian cooperative banking organization is using ITSM to put best practices in place and deliver higher IT value to the company. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Tuesday, October 07, 2014

MIT Media Lab Computing Director Details the Virtues of Cloud Computing for Agility and DR

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you directly from the VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT strategy discussions.

We’re here in San Francisco the week of August 25 to explore the latest developments in hybrid cloud computing, user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the MIT Media Lab in Cambridge, Massachusetts, and how it's exploring the use of cloud and hybrid cloud and enjoying such benefits as speed, agility, and disaster recovery (DR).

To learn more about how the MIT Media Lab is using cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. Welcome.

Michail Bletsas: Thank you. 

Gardner: Tell us about the MIT Media Lab. How big is the organization? What’s your charter?

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab and, unlike a normal IT department, we're kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We're much heavier than other departments in how many devices you're going to see. We run a pretty complex network and a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, you still build exotic stuff, but you're using commodity components. VMware, for us, is a major piece of this strategy because it allows more efficient utilization of our resources and lets us rein in a bit of the server proliferation that we, like everybody, have experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between virtual machines (VMs), physical computers, and devices, but there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to easily deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the size of the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.

Gardner: So your requirements are to support those 350 people with dynamic workloads, many devices. What is it that you needed to do in your data center to accommodate that? How have you created a data center that’s responsive, but also protects your property, and that allows you to reduce your security risk?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that includes our alumni and collaborators. If you look at our user database right now, it's on the order of 3,500 people, as opposed to 350. It's very dynamic in that it changes month to month. An important attribute of an environment like this is that we can't have too many restrictions. We don't have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn't impose too many restrictions on what operating systems you use or how the VMs are configured. That's why we find that solutions like the general public cloud are only applicable to a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: Of course, virtualization sounds like a great fit when you have such dynamic, different, and varied workloads. But what about taking advantage of public cloud and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover? What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use a hybrid cloud right now that's three-tiered. MIT has a very large campus with extensive digital infrastructure running our operations across the board. We also have facilities either all the way across campus or across the river in a large co-location facility in downtown Boston, and we take advantage of that for first-level DR.

A solution like vCloud Air allows us to plan for a real disaster scenario, where something really catastrophic happens on campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It’s a second level for us. We have our own VMware infrastructure and then we can migrate loads to our central organization. They're a much larger organization that takes care of all the administrative computing and general infrastructure at MIT at their own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].
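Bletsas's three tiers can be thought of as an ordered failover list. The sketch below is hypothetical: the tier names are placeholders, the health check is a stub, and actual workload migration would go through vSphere and vCloud tooling, which is not shown here.

```python
# Hypothetical failover-order sketch; tier names are placeholders and the
# health check is a stub. Real migration would use vSphere/vCloud tooling.
dr_tiers = [
    {"name": "Media Lab local vSphere cluster", "location": "on campus"},
    {"name": "MIT central data center",         "location": "across campus/river"},
    {"name": "vCloud Air",                      "location": "out of state"},
]

def is_available(tier):
    """Stub health check; a real one would probe the management endpoint."""
    return tier["name"] != "Media Lab local vSphere cluster"   # simulate a local outage

target = next((t for t in dr_tiers if is_available(t)), None)
print("Recover critical VMs to:", target["name"] if target else "nowhere available")
```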

So it's a very seamless transition using the same tools. The important point here is that with an operation this small -- 10 people dealing with such a complex set of resources -- you can't do that unless you have a consistent user interface that lets you migrate those workloads using tools you already know and are familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: I'd like to explore a little bit more this three-tiered cloud using common management, common APIs. It sounds like you're essentially taking metadata and configuration data, the things that will be important to spin back up an operation should there be some unfortunate occurrence, and putting that into that public cloud, the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that tend to drive a lot of our websites. A lot of our internal systems drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

It's lucky for us, because we're not such a big operation. We're relatively small, so you can include everything, including all the methods and programs you need to access and manipulate that data, within a small set of VMs. You don't normally run them out of those VMs, but you can keep them packaged in a way that, in a DR scenario, you can easily get access to them.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the older days, we didn’t have that multi-tiered cloud in place. All we had was backups in remote data centers. If something happened, you had to go in there and find out some unused hardware that was similar to what you had, restore your backup, etc.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, "Look, maybe you should replicate this." It doesn't cost much to replicate it across campus, and then maybe we should look into pushing it even further out.
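The decision he describes, "this is used more than we thought, so replicate it further out," is essentially a threshold on observed usage. A toy sketch follows; the database names, access counts, and thresholds are all invented, and real figures would come from vSphere's monitoring dashboards rather than a hard-coded dictionary.

```python
# Toy replication-tiering decision. Access counts are invented stand-ins for
# whatever monitoring reports; the thresholds are arbitrary illustrations.
db_access_per_day = {"crm_db": 12000, "events_db": 900, "archive_db": 15}

def replication_tier(accesses):
    if accesses > 10000:
        return "replicate across campus and to vCloud Air"
    if accesses > 500:
        return "replicate across campus"
    return "nightly backup only"

for name, hits in db_access_per_day.items():
    print(f"{name}: {replication_tier(hits)}")
```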

It's a combination of having visibility and nice dashboards that reveal patterns of activity you might not be aware of, even in an environment that's not as large as ours.

Gardner: We're here at VMworld 2014. There's been quite a bit of news, particularly in the vCloud Air arena. We've talked and heard about betas for ObjectStore and for virtual private cloud. Are these of interest to you now that you've done a hybrid cloud using DR-as-a-service? Does anything else intrigue you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don't run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don't have many things to hide. We operate in a different way than most corporations, and that also shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and which we fully control, is very important for us.

It allows for much faster DR. We can do DR using the same IPs across town right now because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and things like that. That is important.

The other important trend is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it lets you match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to put all of its disk storage on very high-end systems, having a little bit of expensive flash storage and then a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

Gardner: The benefit of having a highly virtualized environment -- including the data center and the end-user computing endpoints -- is that it gives you the flexibility to take workloads and apps from development to test to deployment. So there's a common infrastructure approach there, but also a common infrastructure across cloud, hybrid cloud, and DR.

So it’s sort of a snowball effect. The more virtualization you're adapting, the more dynamic and agile you can be across many more aspects of IT.

Bletsas: For us, experimentation is the most important thing. Spinning up a large number of VMs for a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. What we get out of that is flexibility, agility, and speed of operations.

In the older days, you had to go and procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in our data centers. We go there from time to time but not as often as we used to do, and that’s very liberating. It’s also very liberating for people like me because it allows me to do my work anywhere.

Gardner: Very good. I'm afraid we’ll have to leave it there. We’ve been discussing the virtues of cloud computing and hybrid cloud computing with the MIT Media Lab. I’d like to thank our guest, Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab in Cambridge, Mass. Thanks so much.

Bletsas: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the 2014 VMworld Conference in San Francisco.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
