
Wednesday, November 18, 2015

Big Data Enables Top User Experiences and Extreme Personalization for Intuit TurboTax

Transcript of a BriefingsDirect discussion on how TurboTax uses big data analytics to improve performance despite high data volumes during peak usage.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big data innovation case study highlights how Intuit uses deep-data analytics to gain a 360-degree view of its TurboTax application's users’ behavior and preferences. Such visibility allows for rapid applications improvements and enables the TurboTax user experience to be tailored to a highly detailed degree.

Here to share how analytics paves the way to better understanding of end-user needs and wants, we're joined by Joel Minton, Director of Data Science and Engineering for TurboTax at Intuit in San Diego. Welcome to Briefings Direct, Joel.

Joel Minton: Thanks, Dana, it’s great to be here.
Gardner: Let’s start at a high-level, Joel, and understand what’s driving the need for greater analytics, greater understanding of your end-users. What is the big deal about big-data capabilities for your TurboTax applications?

Minton: There were several things, Dana. We were looking to see a full end-to-end view of our customers. We wanted to see what our customers were doing across our application and across all the various touch points that they have with us to make sure that we could fully understand where they were and how we can make their lives better.

We also wanted to be able to take that data and then give more personalized experiences, so we could understand where they were, how they were leveraging our offerings, but then also give them a much more personalized application that would allow them to get through the application even faster than they already could with TurboTax.

And last but not least, there was the explosion of available technologies to ingest, store, and gain insights that was not even possible two or three years ago. All of those things have made leaps and bounds over the last several years. We’ve been able to put all of these technologies together to garner those business benefits that I spoke about earlier.

Gardner: So many of our listeners might be aware of TurboTax, but it’s a very complex tax return preparation application that has a great deal of variability across regions, states, localities. That must be quite a daunting task to be able to make it granular and address all the variables in such a complex application.

Minton: Our goal is to remove all of that complexity for our users and for us to do all of that hard work behind the scenes. Data is absolutely central to our understanding that full end-to-end process, and leveraging our great knowledge of the tax code and other financial situations to make all of those hard things easier for our customers, and to do all of those things for our customers behind the scenes, so our customers do not have to worry about it.

Gardner: In the process of tax preparation, how do you actually get context within the process?

Always looking

Minton: We're always looking at all of those customer touch points, as I mentioned earlier. Those things all feed into where our customer is and what their state of mind might be as they are going through the application.

To give you an example, as a customer goes through our application, they may ask us a question about a certain tax situation.

When they ask that question, we know a lot more later on down the line about whether that specific issue is causing them grief. If we can bring all of those data sets together, so that we know they asked the question three screens back and they're now spending more time on a later screen, we can try to make that experience better, especially in the context of those specific questions that they have.
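As an illustrative sketch only, not Intuit's actual pipeline, here is what correlating an earlier help question with later screen dwell time can look like in Python with pandas; the events, screens, and column names are hypothetical:

```python
# Illustrative sketch only; not Intuit's pipeline. Column names are hypothetical.
import pandas as pd

events = pd.DataFrame(
    [
        ("u1", "w2_income",  "asked_help", 40),
        ("u1", "deductions", "viewed",     310),
        ("u2", "w2_income",  "viewed",     35),
        ("u2", "deductions", "viewed",     60),
    ],
    columns=["user_id", "screen", "event", "seconds_on_screen"],
)

# Users who asked a question earlier in the session
askers = set(events.loc[events["event"] == "asked_help", "user_id"])

# Compare dwell time on a later screen for users who asked vs. those who did not
later = events[events["screen"] == "deductions"]
dwell = later.groupby(later["user_id"].isin(askers))["seconds_on_screen"].mean()
print(dwell)  # index True = asked a question earlier; a large gap flags friction
```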

As I said earlier, it's all about bringing all the data together and making sure that we leverage that when we're making the application as easy as we can.

Gardner: And that's what you mean by a 360-degree view of the user: where they are in time, where they are in a process, where they are in their particular individual tax requirements?

Minton: And all the touch points that they have, not only with things on our website, but also things across the Internet, with our customer-care employees, and all the other touch points that we use to try to solve our customers' needs.

Gardner: This might be a difficult question, but how much data are we talking about? Obviously you're in sort of a peak-use scenario where many people are in a tax-preparation mode in the weeks and months leading up to April 15 in the United States. How much data and how rapidly is that coming into you?

Minton: We have a tremendous amount of data. I'm not going to go into the specifics of the complete size of our database because it is proprietary, but during our peak times of the year during tax season, we have billions and billions of transactions.

We have all of those touch points being logged in real-time, and we basically have all of that data flowing through to our applications that we then use to get insights and to be able to help our customers even more than we could before. So we're talking about billions of events over a small number of days.

Gardner: So clearly for those of us that define big data by velocity, by volume, and by variety, you certainly meet the criteria and then some.

Unique challenges

Minton: The challenges are unique for TurboTax because we're such a peaky business. We have two peaks that drive a majority of our experiences: the first peak when people get their W-2s and they're looking to get their refunds, and then tax day on April 15th. At both of those times, we're ingesting a tremendous amount of data and trying to get insights as quickly as we can so we can help our customers as quickly as we can.

Gardner: Let’s go back to this concept of user experience improvement process. It's not just something for tax preparation applications but really in retail, healthcare, and many other aspects where the user expectations are getting higher and higher. People expect more. They expect anticipation of their needs and then delivery of that.

This is probably only going to increase over time, Joel. Tell me a little bit about how you're solving this issue of getting to know your user and then being able to be responsive to an entire user experience and perception.

Minton: Every customer is unique, Dana. We have millions of customers who have slightly different needs based on their unique situations. What we do is try to give them a unique experience that closely matches their background and preferences, and we try to use all of that information that we have to create a streamlined interaction where they can feel like the experience itself is tailored for them.

It’s very easy to say, “We can’t personalize the product because there are so many touch points and there are so many different variables.” But we can, in fact, make the product much more simplified and easy to use for each one of those customers. Data is a huge part of that.

Specifically, our customers, at times, may be having problems in the product, finding the right place to enter a certain tax situation. They get stuck and don't know what to enter. When they get in those situations, they will frequently ask us for help and they will ask how they do a certain task. We can then build code and algorithms to handle all those situations proactively and be able to solve that for our customers in the future as well.

So the most important thing is taking all of that data and then providing super-personalized experience based on the experience we see for that user and for other users like them.

Gardner: In a sense, you're a poster child for many of these elements, but at a scale well above the norm: the peaky nature of tax preparation, the desire to be highly personalized down to the granular level for each user, and the vast amount and velocity of data.

What were some of your chief requirements at the architecture level to be able to accommodate some of this? Tell us a little bit, Joel, about the journey you've been on to improve that architecture over the past couple of years.

Lot of detail

Minton: There's a lot of detail behind the scenes here, and I'll start by saying it's not an easy journey. It’s a journey that you have to be on for a long time and you really have to understand where you want to place your investment to make sure that you can do this well.

One area where we've invested heavily is our big-data infrastructure, being able to ingest all of the data in order to be able to track it all. We've also invested a lot in being able to get insights out of the data, using Hewlett Packard Enterprise (HPE) Vertica as our big data platform and being able to query that data in as close to real time as possible to actually get those insights. I see those as the meat and potatoes that you have to have in order to be successful in this area.
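As a minimal sketch of what querying Vertica in near real time can look like, the example below uses the open-source vertica-python driver; the host, credentials, and clickstream table are hypothetical assumptions, not Intuit's environment:

```python
# Minimal sketch, assuming the open-source vertica-python driver.
# Host, credentials, and the clickstream table are hypothetical.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "analytics",
}

query = """
    SELECT screen_name, COUNT(*) AS events_last_5_min
    FROM clickstream_events
    WHERE event_time > NOW() - INTERVAL '5 minutes'
    GROUP BY screen_name
    ORDER BY events_last_5_min DESC
    LIMIT 10
"""

conn = vertica_python.connect(**conn_info)
try:
    cur = conn.cursor()
    cur.execute(query)
    for screen, count in cur.fetchall():
        print(screen, count)
finally:
    conn.close()
```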

On top of that, you then need to have an infrastructure that allows you to build personalization on the fly. You need to be able to make decisions in real time for the customers and you need to be able to do that in a very streamlined way where you can continuously improve.

We use a lot of tactics, using machine learning and other predictive models, to build that personalization on the fly as people are going through the application. That is some of our secret sauce, and I won't go into more detail, but that's what we're doing at a high level.
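Since the specifics are proprietary, the following is only a generic illustration of the general idea of scoring a live session with a predictive model and branching the experience; the features, labels, and threshold are invented for this sketch:

```python
# Generic illustration only; the actual models are proprietary.
# Features, labels, and the 0.5 threshold are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [screens_visited, help_questions_asked, minutes_in_session]
X_train = np.array([[12, 0, 18], [30, 3, 55], [8, 0, 10], [25, 2, 40]])
y_train = np.array([0, 1, 0, 1])  # 1 = the user later abandoned the session

model = LogisticRegression().fit(X_train, y_train)

# Score a live session and branch the experience if the risk looks high
live_session = np.array([[28, 2, 50]])
risk = model.predict_proba(live_session)[0, 1]
if risk > 0.5:
    print("offer contextual help or a simplified flow for this user")
```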

Gardner: It might be off the track of our discussion a bit, but being able to glean information through analytics and then create a feedback loop into development can be very challenging for a lot of organizations. Is DevOps a cultural parallel path along with your data-science architecture?
I don’t want to go down the development path too much, but it sounds like you're already there in terms of understanding the importance of applying big-data analytics to the compression of the cycle between development and production.

Minton: There are two different aspects there, Dana. Number one is making sure that we understand the traffic patterns of our customer and making sure that, from an operations perspective, we have the understanding of how our users are traversing our application to make sure that we are able to serve them and that their performance is just amazing every single time they come to our website. That’s number one.

Number two, and I believe more important, is the need to actually put the data in the hands of all of our employees across the board. We need to be able to tell our employees the areas where users are getting stuck in our application. This is high-level information. This isn't anybody's financial information at all, but just a high-level, quick stream of data saying that these people went through this application and got stuck on this specific area of the product.

We want to be able to put that type of information in our developers' hands, so that as a developer is building a part of the product, she can say, "I'm seeing that these types of users get stuck at this part of the product. How can I improve the experience as I'm developing it to take all of that data into account?"

We have an analyst team that does great work around the analytics, but in addition to that, we want to be able to give that data to the product managers and to the developers as well, so they can improve the application as they're building it. To me, a 360-degree view of the customer is number one. Number two is getting that data out to as broad an audience as possible to make sure they can act on it and help our customers.
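As a small, hypothetical example of the kind of high-level "where do users get stuck" view a product manager or developer might be handed (screen-level aggregates only, no financial information):

```python
# Hypothetical, high-level aggregate (no financial data): which screens stall users.
import pandas as pd

sessions = pd.DataFrame(
    [
        ("s1", "w2_income", True),  ("s1", "deductions", False),
        ("s2", "w2_income", True),  ("s2", "deductions", True),
        ("s3", "w2_income", True),  ("s3", "deductions", False),
    ],
    columns=["session_id", "screen", "completed"],
)

# Share of sessions that stall on each screen
stuck_rate = (1 - sessions.groupby("screen")["completed"].mean()) * 100
print(stuck_rate.sort_values(ascending=False).round(1))
# deductions    66.7   <- candidate for a design or content fix
# w2_income      0.0
```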

Major areas

Gardner: Joel, I speak with HPE Vertica users quite often, and there are two major areas where I hear them speak highly of the product. The first has to do with the ability to assimilate data, dealing with the variety issue by bringing data into an environment where it can be used for analytics. The second is performance on queries amid great complexity, with many parameters, at speed and scale.

Your applications for TurboTax run across a variety of platforms. There's a shrink-wrap product from the legacy perspective, and then mobile, as well as web and SaaS. So is Vertica something that you're using to help bring the data from a variety of different application environments together and/or across different networks or environments?

Minton: I don't see different devices that someone might use as a different solution in the customer journey. To me, every device that somebody uses is a touch point into Intuit and into TurboTax. We need to make sure that all of those touch points have the same level of understanding, the same level of tracking, and the same ability to help our customers.

Whether somebody is using TurboTax on their computer or they're using TurboTax on their mobile device, we need to be able to track all of those things as first-class citizens in the ecosystem. We have a fully-functional mobile application that’s just amazing on the phone, if you haven’t used it. It's just a great experience for our customers.

From all those devices, we bring all of that data back to our big data platform. All of that data can then be queried, because you want to answer many questions, such as: When do users flow across different devices, and what experience are they getting on each device? When are they able to just snap a picture of their W-2, import it really quickly on their phone, and then jump right back to their computer and finish their taxes with great ease?

We need to be able to have that level of tracking across all of those devices. The key there, from a technology perspective, is creating APIs that are generic across all of those devices, and then allowing those APIs to feed all of that data back to our massive infrastructure in the back-end so we can get those insights through reporting and other methods as well.
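A minimal sketch of what such a device-generic tracking call could look like follows; the endpoint, field names, and event values are assumptions made for illustration, not Intuit's actual API:

```python
# Hypothetical device-agnostic tracking call; the endpoint and field names
# are assumptions for illustration, not Intuit's actual API.
import json
import time
import urllib.request

COLLECTOR = "https://events.example.com/v1/track"  # hypothetical endpoint

def track_event(user_id, device, screen, action):
    payload = {
        "user_id": user_id,   # same identity across web, desktop, and mobile
        "device": device,     # e.g. "web", "ios", "android", "desktop"
        "screen": screen,
        "action": action,
        "ts": int(time.time() * 1000),
    }
    req = urllib.request.Request(
        COLLECTOR,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req, timeout=2)

# The same call shape works whether a W-2 photo is snapped on a phone
# or the return is finished later on a computer.
track_event("u42", "ios", "w2_capture", "photo_imported")
```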

Gardner: We've talked quite a bit about what's working for you: a database column store, the ability to get a volume variety and velocity managed in your massive data environment. But what didn't work? Where were you before and what needed to change in order for you to accommodate your ongoing requirements in your architecture?

Minton: Previously we were using a different data platform, and it was good for getting insights for a small number of users. We had an analyst team of 8 to 10 people, and they were able to do reports and get insights as a small group.

But when you talk about moving to what we just discussed, a huge view of the customer end-to-end, hundreds of users accessing the data, you need to be able to have a system that can handle that concurrency and can handle the performance that’s going to be required by that many more people doing queries against the system.

Concurrency problems

So we moved away from our previous vendor that had some concurrency problems and we moved to HPE Vertica, because it does handle concurrency much better, handles workload management much better, and it allows us to pull all this data.

The other thing that we've done is that we have expanded our use of Tableau, which is a great platform for pulling data out of Vertica and then being able to use those extracts in multiple front-end reports that can serve our business needs as well.

So in terms of using technology to be able to get data into the hands of hundreds of users, we use a multi-pronged approach that allows us to disseminate that information to all of these employees as quickly as possible and to do it at scale, which we were not able to do before.

Gardner: Of course, getting all your performance requirements met is super important, but also in any business environment, we need to be concerned about costs.

Is there anything about the way that you were able to deploy Vertica, perhaps using commodity hardware, perhaps a different approach to storage, that allowed you to both accomplish your requirements, goals in performance, and capabilities, but also at a price point that may have been even better than your previous approach?

Minton: From a price perspective, we've been able to really make the numbers work and get great insights for the level of investment that we've made.

How do we handle just the massive cost of the data? That's a huge challenge that every company is going to have in this space, because there's always going to be more data that you want to track than you have hardware or software licenses to support.

So we've been very aggressive in looking at each and every piece of data that we want to ingest. We want to make sure that we ingest it at the right granularity.

Vertica is a high-performance system, but you don't need absolutely every detail that you've ever logged for every customer in that platform. We do keep a lot of detailed information in Vertica, but we're also really smart about what we move in there from a storage perspective and what we keep outside in our Hadoop cluster.

Hadoop cluster

We have a Hadoop cluster that stores all of our data, and we consider that our data lake; it basically holds all of our customer interactions, top to bottom, at the granular detail level.

We then take data out of there and move things over to Vertica, in both an aggregate as well as a detail form, where it makes sense. We've been able to spend the right amount of money for each of our solutions to be able to get the insights we need, but to not overwhelm both the licensing cost and the hardware cost on our Vertica cluster.
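To make that lake-versus-warehouse split concrete, here is a hypothetical sketch of aggregating raw clickstream in Hadoop with PySpark and staging only the aggregate for loading into Vertica; the paths, table names, and COPY statement are illustrative assumptions, not Intuit's actual jobs:

```python
# Hypothetical sketch of the lake-to-warehouse split: raw detail stays in Hadoop,
# only aggregates (plus selected detail) move to Vertica. Paths and names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake_to_vertica").getOrCreate()

# Raw, granular clickstream lives in the data lake
raw = spark.read.parquet("hdfs:///data_lake/clickstream/tax_season/")

# Aggregate down to the granularity the warehouse actually needs
daily = (
    raw.groupBy("event_date", "screen_name")
       .agg(F.countDistinct("user_id").alias("users"),
            F.count("*").alias("events"))
)

# Stage the aggregate; a Vertica COPY (or the Vertica Spark connector) would then
# load the staged files, e.g.:
#   COPY analytics.daily_screen_stats FROM '/staging/daily_screen_stats/*' PARQUET;
daily.write.mode("overwrite").parquet("hdfs:///staging/daily_screen_stats/")
```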

The combination of those things has really allowed us to be successful to match the business benefit with the investment level for both Hadoop and with Vertica.

Gardner: Measuring success quantitatively at the platform level, as you have been, is important, but there's also a qualitative benefit that needs to be examined and even measured when you're talking about things like process improvements, eliminating bottlenecks in the user experience, or eliminating anomalies in personalized activities for certain types of individuals.

Do you have any insight, either anecdotal or examples, where being able to apply this data analytics architecture and capability has delivered some positive benefits, some value to your business?

Minton: We basically use data to try to measure ourselves as much as possible. So we do have qualitative, but we also have quantitative.

Just to give you a few examples, our total aggregate number of insights that we've been able to garner from the new system versus the old system is a 271 percent increase. We're able to run a lot more queries and get a lot more insights out of the platform now than we ever could on the old system. We have also had a 41 percent decrease in query time. So employees who were previously pulling data and waiting twice as long had a really frustrating experience.

Now, we're actually performing much better and we're able to delight our internal customers to make sure that they're getting the answers they need as quickly as possible.

We've also increased the size of our data mart in general by 400 percent. We've massively grown the platform while decreasing query times. So all of those quantitative numbers tell a great story about the success that we have had.

From a qualitative perspective, I've talked to a lot of our analysts and a lot of our employees, and they've all said that the solution we have now is head and shoulders above what we had previously. Mostly that's because during those peak times, when we're running a lot of traffic through our systems, all the users tend to hit the platform at the same time; previously, that meant nobody got any work done because of the concurrency issues.

Better tracking

Because we have much better workload management now with Vertica and our new platform, we're able to handle that concurrency, get the highest-priority workloads through quickly, and then follow along with the lower-priority workloads, running them all in parallel.
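Vertica's resource pools are one standard way to express that kind of workload prioritization. The sketch below shows hypothetical pool definitions issued through the vertica-python driver; the pool names, sizes, and users are assumptions, not Intuit's configuration:

```python
# Hypothetical pool definitions for peak-season prioritization, issued through
# the vertica-python driver. Names, sizes, and users are assumptions.
import vertica_python

statements = [
    # High-priority pool for operational dashboards
    "CREATE RESOURCE POOL dashboard_pool MEMORYSIZE '8G' "
    "PRIORITY 80 PLANNEDCONCURRENCY 16",
    # Lower-priority pool for ad hoc analyst queries, capped at 10 minutes
    "CREATE RESOURCE POOL adhoc_pool MEMORYSIZE '4G' "
    "PRIORITY 20 PLANNEDCONCURRENCY 8 RUNTIMECAP '10 minutes'",
    # Route users to the appropriate pool
    "ALTER USER dashboard_svc RESOURCE POOL dashboard_pool",
    "ALTER USER analyst RESOURCE POOL adhoc_pool",
]

conn = vertica_python.connect(host="vertica.example.internal", port=5433,
                              user="dbadmin", password="********",
                              database="analytics")
try:
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
finally:
    conn.close()
```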

The key is being able to run, especially at those peak loads, and be able to get a lot more insights than we were ever able to get last year.
Gardner: And that peak load issue is so prominent for you. Another quick aside, are you using cloud or hybrid cloud to support any of these workloads, given the peak nature of this, rather than keep all that infrastructure running 365, 24×7? Is that something that you've been doing, or is that something you're considering?

Minton: Sure. For a lot of our data warehousing solutions, we do use cloud at certain points in our systems. A lot of our large-scale serving activities, as well as our large-scale ingestion, do leverage cloud technologies.

We don't have it for our core data warehouse. We want to make sure that we have all of that data in-house in our own data centers, but we do ingest a lot of the data just as pass-throughs in the cloud, just to allow us to have more of that peak scalability that we wouldn't have otherwise.

Gardner: We're coming up toward the end of our discussion time. Let’s look at what comes next, Joel, in terms of where you can take this. You mentioned some really impressive qualitative and quantitative returns and improvements. We can always expect more data, more need for feedback loops, and a higher level of user expectation and experience. Where would you like to go next? How do you go to an extreme focus even more on this issue of personalization?

Minton: There are a few things that we're doing. We built the infrastructure that we need to really be able to knock it out of the park over the next couple of years. Some of the things that are just the next level of innovation for us are going to be, number one, increasing our use of personalization and making it much easier for our customers to get what they need when they need it.

So doubling down on that and increasing the number of use cases where our data scientists are actually building models that serve our customers throughout the entire experience is going to be one huge area of focus.

Another big area of focus is getting the data even more real time. As I discussed earlier, Dana, we're a very peaky business, and the faster we can get data into our systems, the faster we're going to be able to report on that data and get insights that are going to help our customers.

Our goal is to have even more real-time streams of that data and be able to get that data in so we can get insights from it and act on it as quickly as possible.

The other side is just continuing to invest in our multi-platform approach to allow the customer to do their taxes and to manage their finances on whatever platform they are on, so that it continues to be mobile, web, TVs, or whatever device they might use. We need to make sure that we can serve those data needs and give the users the ability to get great personalized experiences no matter what platform they are on. Those are some of the big areas where we're going to be focused over the coming years.

Recommendations

Gardner: Now you've had some 20/20 hindsight into moving from one data environment to another, which I suppose is the equivalent of keeping the airplane flying and changing the wings at the same time. Do you have any words of wisdom for those who might be having concurrency issues or scale, velocity, or variety issues with their big data when it comes to moving from one architecture platform to another? Any recommendations you can make to help them, perhaps in ways that you didn't necessarily get the benefit of?

Minton: To start, focus on the real business needs and competitive advantage that your business is trying to build and invest in data to enable those things. It’s very easy to say you're going to replace your entire data platform and build everything soup to nuts all in one year, but I have seen those types of projects be tried and fail over and over again. I find that you put the platform in place at a high-level and you look for a few key business-use cases where you can actually leverage that platform to gain real business benefit.

When you're able to do that two, three, or four times on a smaller scale, then it makes it a lot easier to make that bigger investment to revamp the whole platform top to bottom. My number one suggestion is start small and focus on the business capabilities.

Number two, be really smart about where your biggest pain points are. Don't try to solve world hunger when it comes to data. If you're having a concurrency issue, look at the platform you're using. Is there a way, in your current platform, to solve it without going big?

Frequently, what I find in data is that it’s not always the platform's fault that things are not performing. It could be the way that things are implemented and so it could be a software problem as opposed to a hardware or a platform problem.
So again, I would have folks focus on the real problem and the different methods that you could use to actually solve those problems. It’s kind of making sure that you're solving the right problem with the right technology and not just assuming that your platform is the problem. That’s on the hardware front.

As I mentioned earlier, looking at the business use cases and making sure that you're solving those first is the other big area of focus I would have.

Gardner: I'm afraid we will have to leave it there. We've been learning about how Intuit uses deep-data analytics to gain a 360-degree view of its TurboTax application users' behavior and preferences. And we have heard how such visibility allows for rapid application improvements, providing an extreme level of personalization and enabling TurboTax users to experience a higher degree of customization, something tailored directly to their situation.

So join me in thanking Joel Minton, Director of Data Science and Engineering for TurboTax at Intuit in San Diego. Thanks so much, Joel.

Minton: Thank you, Dana. I really enjoyed it.

Gardner: And I'd also like to thank our audience for joining this big-data innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how TurboTax uses big data analytics to improve performance despite high data volumes during peak usage. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Monday, October 05, 2015

How Analytics as a Service Changes the Game and Expands the Market for Big Data Value

Transcript of a BriefingsDirect discussion on how cloud models propel big data as a service benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big-data thought leadership discussion highlights how big-data analytics as a service expands the market for advanced analytics and insights. We'll see how bringing analytics to a cloud services model allows smaller and less data-architecture-experienced firms to benefit from the latest in big-data capabilities. And we'll learn how Dasher Technologies is helping usher in this democratization of big data.

Here to share how big data as a service has evolved, we're joined by Justin Harrigan, Data Architecture Strategist at Dasher Technologies in Campbell, California. Welcome, Justin.

Justin Harrigan: Hi, Dana. Thanks for having me.

Gardner: We're glad you could join us. We are also here with Chris Saso, Senior Vice President of Technology at Dasher Technologies. Welcome, Chris.
Chris Saso: Hi, Dana. Looking forward to our talk.

Gardner: Justin, how have big-data practices changed over the past five years to set the stage for multiple models when it comes to leveraging big-data?

Harrigan: Back in 2010, we saw big data become mainstream. Hadoop became a household name in the IT industry for scale-out architectures, and Linux-based databases were becoming common practice. Moving away from traditional legacy, smaller, slower databases opened this whole new world of analytics to previously untapped resources within companies. So data that people had just been sitting on could now be used for actionable insights.

Fast forward to 2015, and we've seen big data become more approachable. Five years ago, only the largest organizations or companies that were specifically designed to leverage big-data architectures could do so. The smaller guys had maybe a couple of hundred or even tens of terabytes, and it required too much expertise or too much time and investment to get a big-data infrastructure up and running.

Today, we have approachable analytics, analytics as a service, hardened architectures that are almost turnkey with back-end hardware, database support, and applications -- all integrating seamlessly. As a result, the user on the front end, who is actually interacting with the data and making insights, is able to do so with very little overhead, very little upkeep, and is able to turn that data into business-impact data, where they can make decisions for the company.

Gardner: Justin, how big an impact has this had? How many more types of companies or verticals have been enabled to start exploring advanced, cutting-edge, big-data capabilities? Is this a 20 percent increase, or can almost any organization that wants to now start doing this?

Tipping point

Harrigan: The tipping point is when you outgrow your current solutions for data analytics. Data analytics is nothing new. We've been doing it for more than 50 years with databases. It’s just a matter of how big you can get, how much data you can put in one spot, and then run some sort of query against it and get a timely report that doesn’t take a week to come back or that doesn't time out on a traditional database.

Almost every company nowadays is growing rapidly in the kinds of data it has. It doesn't matter if you're an architecture firm, a marketing company, or a large enterprise getting information from all your smaller remote sites; everyone is compiling data to create better business decisions or to create a system that makes their products run faster.

For people dipping their toes in the water for their first larger dataset analytics, there's a whole host of avenues available to them. They can go to some online providers, scale up a database in a couple of minutes, and be running.

They can download free trials. HP Vertica has a community edition, for example, and they can load it on a single server, up to terabytes, and start running there. And it’s significantly faster than traditional SQL.

It’s much more approachable. There are many different flavors and formats to start with, and people are realizing that. I wouldn’t even use the term big data anymore; big data is almost the norm.

Gardner: I suppose maybe the better term is any data, anytime.

Harrigan: Any data, anytime, anywhere, for anybody.

Gardner: I suppose another change over the past several years has been an emphasis away from batch processing, where you might do things on an infrequent or occasional basis, to this concept that's more applicable to a cloud or an as-a-service model, where it's streaming, continuous, and then you start reducing the latency down to getting close to real time.

Are we starting to see more and more companies being able to compress their feedback, and start to use data more rapidly as a result of this shift over the past five years or so?

Harrigan: It's important to address the term big data. It's almost an umbrella, almost like the way people use cloud. With big data, you think of large datasets, but you mentioned speed and agility. The ability to have real-time analytics is becoming more prevalent: not just running a batch process for 18 hours on petabytes of data, but having a chart, a graph, or some sort of report in real time. Interacting with it and making decisions on the spot is becoming mainstream.

We did a blog post on this not long ago, talking about how instead of big data, we should talk about the data pipe. That’s data ingest or fast data, typically OLTP data, that needs to run in memory or on hardware that's extremely fast to create a data stream that can ingest all the different points, sensors, or machine data that’s coming in.

Smarter analysis

Then we talked about smarter analytic data, which requires number-crunching on data that is relevant: not real-time, but still fairly new, call it seven days old and up to a year. And then there's the data lake, which essentially is your data repository for historical data crunching.

Those are three areas you need to address when you talk about big data. The ability to consume that data as a service is now being made available by a whole host of companies in very different niches.

It doesn’t matter if it’s log data or sensor data, there's probably a service you can enable to start having data come in, ingest it, and make real-time decisions without having to stand up your own infrastructure.

Gardner: Of course, when organizations try to do more of these advanced things that can be so beneficial to their business, they have to take into consideration the technology, their skills, their culture -- people, process and technology, right?

Chris, tell us a bit about Dasher Technologies and how you're helping organizations do more with big-data capabilities, how you address this holistically, and this whole approach of people, process and technology.

Saso: Dasher was founded in 1999 by Laurie Dasher. To give you an idea of who we are, we're a little over 65 employees now, and the size of our business is somewhere around $100 million.

We started by specializing in solving major data-center infrastructure challenges that folks had by actually applying the people, process and technology mantra. We started in the data center, addressing people’s scale out, server, storage, and networking types of problems. Over the past five or six years, we've been spending our energy, strategy, and time on the big areas around mobility, security, and of course, big data.

As a matter of fact, Justin and I were recently working on a project with a client around combining both mobility information and big data. It’s a retail client. They want to be able to send information to a customer that might be walking through a store, maybe send a coupon or things like that. So, as Justin was just talking about, you need fast information and making actionable things happen with that data quickly. You're combining something around mobility with big data.

Dasher has built up our team to be able to have a set of solutions that can help people solve these kinds of problems.

Gardner: Justin, let’s flesh that out a little bit around mobility. When people are using a mobile device, they're creating data that, through apps, can be shared back to a carrier, as well as application hosts and the application writers. So we have streams of data now about user experience and activities.

We also can deliver data and insights out to people in the other direction in that real-time of fashion, a closed loop, regardless of where they are. They don’t have to be at their desk, they don’t have to be looking at a specific business-intelligence (BI) application for example. So how has mobility changed the game in the past five years?

Capturing data

Harrigan: Dana, it’s funny you brought up the two different ways to capture data. Devices can be both used as a sensor point or as a way to interact with data. I remember seeing a podcast you did with HP Vertica and GUESS regarding how they interacted with their database on iPads.

In regard to interacting with data, it has become useful not only to data analysts or data scientists; we can also push it down in a format that less technical folks can use. With a fancy application in front of them, they can use the data as well to make decisions that actually benefit the company.

You give that data to someone in a store, at GUESS for example, who can benefit by understanding where in the store to put jeans to impact sales. That’s huge. Rather than giving them a quarterly report and stuff that's outdated for the season, they can do it that same day and see what other sites are doing.

On the flip side, mobile devices are now sensors. A mobile device is constantly pinging access points over wi-fi. We can capture that data and, using a MAC address as a unique identifier, follow someone as they move through a store or throughout a city. Then, when they return, that person's data is captured into a database and it becomes historical. The retailer can track them through their device.
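As a small, hypothetical illustration of that sensor pattern, the sketch below hashes MAC addresses (so the raw identifier is not stored) and counts repeat visits; the data is made up:

```python
# Made-up data illustrating the pattern: hash the MAC address (so the raw
# identifier is not stored) and count distinct visit days per device.
import hashlib
from datetime import date

pings = [
    ("AA:BB:CC:11:22:33", date(2015, 9, 1)),
    ("AA:BB:CC:11:22:33", date(2015, 9, 8)),   # same device, second visit
    ("DD:EE:FF:44:55:66", date(2015, 9, 8)),
]

def device_key(mac):
    # One-way hash so the stored key is not the raw MAC address
    return hashlib.sha256(mac.encode("utf-8")).hexdigest()[:16]

days_seen = {}
for mac, day in pings:
    days_seen.setdefault(device_key(mac), set()).add(day)

repeat_visitors = sum(1 for days in days_seen.values() if len(days) > 1)
print(f"unique devices: {len(days_seen)}, repeat visitors: {repeat_visitors}")
```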
It opens a whole new world of opportunities in terms of how retailers decide where they place merchandise, how they staff stores to make sure they have the right number of people at a given time, and what impact weather has on the store.

Lastly, as Chris mentioned, how do we interact with people on devices by pushing them data that's relevant as they move throughout their day?

The next generation of big data is not just capturing data and using it in reports, but taking that data in real time and possibly pushing it back out to the person who needs it most. In the retail scenario, that's the end users, possibly giving them a coupon as they're standing in front of something on a shelf that is relevant and something they will use.

Gardner: So we're not just talking about democratization of analytics in terms of the types of organizations, but now we're even talking about the types of individuals within those organizations.

Do you have any examples of Dasher clients that have been able to exploit these advances, with mobile and cloud working in tandem, and how that's produced some sort of business benefit?

Business impact

Harrigan: A good example of a client who leveraged a large dataset is One Kings Lane. They were having difficulty updating the website their users were interacting with because it’s a flash shopping website, where the information changes daily, and you have to be able to update it very quickly. Traditional technologies were causing a business impact and slowing things down.

They were able to leverage a really fast columnar database to make these changes and actually grow the inventory, grow the site, and have updates happen in almost real time, so that there was no impact or downtime when they needed to make those changes. That's a real-world example of big data having a direct impact on the business line.

Gardner: Chris, tell us a little bit about how Dasher works with Hewlett Packard Enterprise technologies, and perhaps even some other HP partners like GoodData, when it comes to providing analytics as a service?

Saso: HP has been a longtime partner from the very beginning, actually when we started the company. We were a partner of Vertica before HP purchased them back in 2011.

We started working with Vertica around big data, and Justin was one of our leads in that area at the time. We've grown that business and worked with other business units within HP to combine solutions such as Vertica, big data, and hardware, as Justin was just talking about. You brought up the applications that analyze this big data. So we're partners in the ecosystem that help people analyze the data.

Once HP Vertica, or what have you, has done the analysis, you have to report on that and make it in a nice human-readable form or human-consumable form. We’ve built out our ecosystem at Dasher to have not only the analytics piece, but also the reporting piece.

Gardner: And on the as a service side, do you work with GoodData at all or are you familiar with them?

Saso: Justin, maybe you can talk a little bit about that. You've worked with them more I think on their projects.

Optimizing the environment

Harrigan: GoodData is a large consumer of Vertica and they actually leverage it for their back-end analytics platform for the service that they offer. Dasher has been working with GoodData over the past year to optimize the environment that they run on.

Vertica has different deployment scenarios; you can deploy it in a virtual-machine (VM) environment or on bare metal. And we did an analysis to see if there was a return on investment (ROI) in moving from a virtualized environment running on OpenStack to a bare-metal environment. Through a six-month proof of concept (POC), we leveraged HP Labs in Houston. We had a four-node system set up with multiple terabytes of data.

We saw a 4:1 increase in performance in moving from a VM with the same resources to a bare-metal machine. That's going to have a significant impact on the way they move data in their environment in the future and how they adjust to customers with larger datasets.

Gardner: When we think about optimizing the architecture and environment for big data, are there any other surprises or perhaps counter-intuitive things that have come up, maybe even converged infrastructure for smaller organizations that want to get in fast and don’t want to be too concerned with the architecture underlying the analytics applications?

Harrigan: There's a tendency now with so many free solutions out there to pick a free solution, something that gets the job done now, something that grows the business rapidly, but to forget about what businesses will need three years down the road, if it's going to grow, if it’s going to survive.

There are a lot of startups out there that are able to build a big-data infrastructure, scale it to 5,000 nodes, and then reach a limit. There are network limits on how fast the switch can move data between nodes, constantly pushing the limits of 10-Gigabit, 40-Gigabit, and soon 100-Gigabit networks to keep those infrastructures up.

Depending on what architecture you choose, you may be limited in the number of nodes you can go to. So there are solutions out there that can process a million transactions per second with 100 nodes, and then there are solutions that can process a million transactions per second with 20 nodes, but may cost slightly more.

If you think long term, if you start in the cloud, you want to be able to move out of the cloud. If you start with an open ecosystem, you want to make sure that your hardware refresh is not going to cost so much that the company can't afford it three years down the road. One of the areas we consult on, when picking different architectures, is thinking long term. Don't think six weeks down the road, how are we going to get our service up and running? Think, okay, we have a significant client install base; how are we going to grow the business three to five years and five to 10 years out?

Gardner: Given that you have quite a few different types of clients, and the idea of optimizing architecture for the long-term seems to be important, I know with smaller companies there’s that temptation to just run with whatever you get going quickly.

What other lessons can we learn from that long-term view when it comes to skills, security, something more than the speeds and feeds aspects of thinking long term about big data?

Numerous regulations

Harrigan: Think about where your data is going to reside and the requirements and regulations that you may run into. There are a million different regulations we have to comply with now, from HIPAA and ITAR to rules around money-transaction processing in a company. So if you ever foresee that need, make sure you're in an ecosystem that supports it. The temptation for smaller companies is just to go cloud, but who owns that data if you go under, or who owns that data when you get audited?

Another problem is encryption. If you're going to start gaining larger customers once you have a proven technology or a proven service, they're going to want to make sure that you're compliant for all their regulations, not just your regulations that your company is enforcing.

There's logging that they're required to have, and there is going to be encryption and protocols and the ability to do audits on anyone who is accessing the data.

Gardner: On this topic of optimizing, when you do it right, when you think about the long term, how do you know you have that right? Are there some metrics of success? Are there some key performance indicators (KPIs) or ROIs that one should look to so they know that they're not erring on the side of going too commercial or too open source or thinking short term only? Maybe some examples of what one should be looking for and how to measure that.

Harrigan: That’s going to be largely subjective to each business. Obviously if you're just going to use a rule of thumb, it shouldn't cost you more money than it makes you. If you implement a system and it costs you $10 million to run and your ROI is $5 million, you've made a bad decision.

The first factor is the value to the business. If you're a large enterprise and you implement big data, and it gives you the ability to make decisions and quantify those decisions, then you can put a number to that and see how much value the big-data system is creating: for example, a new marketing campaign or an initiative with your remote sites or retail branches that is quantifiable and has an impact on the business.

The second factor is the impact on the business. For ad-serving companies, the way they make money is ad impressions: the more ad impressions they can serve, for the least cost in their environment, the higher the return they're going to make. The delta is between the infrastructure costs and the top line they get to report to their investors.

If they can do 56 billion ad impressions in a day, and you can double that by switching architectures, that’s probably a good investment. But if you can only improve it by 10 percent by switching architectures, it’s probably too much work for what it’s worth.
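Here is that rule of thumb worked through with made-up numbers, purely to show the arithmetic of weighing an architecture switch against its payoff; every figure below is hypothetical:

```python
# All numbers are hypothetical; the point is the shape of the calculation.
nodes_today = 200
cost_per_node_per_day = 30.0       # assumed hardware plus operations cost
migration_cost = 2_000_000         # assumed one-time cost of switching

for label, speedup in (("double the throughput", 2.0), ("10 percent faster", 1.1)):
    nodes_needed = nodes_today / speedup
    yearly_saving = (nodes_today - nodes_needed) * cost_per_node_per_day * 365
    payback_years = migration_cost / yearly_saving
    print(f"{label}: save ~${yearly_saving:,.0f}/yr, payback ~{payback_years:.1f} years")
# Doubling pays back in under two years; a 10 percent gain takes about a decade.
```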

Gardner: One last area on this optimization idea. We've seen, of course, organizations subjectively make decisions about whether to do this on-premises, maybe either virtualized or on bare metal. They will do their cost-benefit analysis. Others are looking at cloud and as a service model.

Over time, we expect to have a hybrid capability, and as you mentioned, if you think ahead that if you start in the cloud and move private, or if you start private you want to be able to move to the cloud, we're seeing the likelihood of more of that being able to move back and forth.

Thinking about that, do you expect that companies will be able to do that? Where does that make the most sense when it comes to data? Is there a type of analysis that you might want to do in a cloud environment primarily, but other types of things you might do private? How do we start to think about breaking out where on the spectrum of hybrid cloud set of options one should be considering for different types of big-data activity?

Either-or decision

Harrigan: In the large data analytics world, it’s almost an either-or decision at this time. I don’t know what it will look like in the future.

Workloads that lend themselves extremely well to the cloud are inconsistent, maybe seasonal, where 90 percent of your business happens in December. Seasonal workloads like that lend themselves extremely well to the cloud.

Or, if your business is just starting out, you may not know whether you're going to need a full 400-node cluster to run whatever analytics platform you choose, and the hardware might sit idle 50 percent of the time or never reach full utilization. Those companies need a cloud architecture, because they can scale up and scale down based on need.

Companies that benefit from on-premises deployment are ones that see significant savings by not using cloud and not paying someone else to run their environment. Those companies typically pin the CPU usage meter at 100 percent, as much as they can, and then add nodes to add more capacity.

The best advice I can give is, whether you start in the cloud or on bare metal, make sure you have agility and are able to move workloads around. If you choose an architecture that only works in the cloud, and you're scaling up and have to do a rip-and-replace just to get out of the cloud and move on-premises, that's going to have a significant business impact.

One of the reasons I like HP Vertica is that it has a cloud instance that can run on a public cloud. That same instance, that same architecture runs just as well on bare metal, only faster.

Gardner: Chris, last word to you. For those organizations out there struggling with big data, trying to figure out the best path, trying to think long term, and from an architectural and strategic point of view, what should they consider when coming to an organization like Dasher? Where is your sweet spot in terms of working with these organizations? How should they best consider how to take advantage of what you have to offer?

Saso: Every organization is different, and this is one area where that's true. When people are just looking for servers, they're pretty much all the same. But when you're actually trying to figure out your strategy for how you are going to use big-data analytics, every company, big or small, probably does have a slightly different thing they are trying to solve.

That's where we would sit down with that client and really listen and understand: are they trying to solve a speed issue with their data, or are they trying to work through massive amounts of data to find the needle in a haystack, the golden nugget in there? Each of those approaches certainly has a different answer.
So coming in with your business problem and with what you would like to see as a result, such as an x percent increase in our customer-satisfaction score or an x percent increase in revenue, helps us define the metric that we can then design toward.

Gardner: Great, I'm afraid we will have to leave it there. We've been discussing how optimizing for a big-data environment really requires a look across many different variables. And we have seen how organizations were able to spread the benefits of big data more generally now, not only the type of organization that can take advantage of it, but the people within those organizations.

We've heard how Dasher Technologies uses advanced technology from HP, such as HP Vertica, to help organizations bring big-data capabilities to more opportunities for business benefit, across more types of companies and vertical industries.

So a big thank you to our guests, Justin Harrigan, Data Architecture Strategist at Dasher Technologies, and Chris Saso, Senior Vice President of Technology at Dasher Technologies.

And I'd like to thank our audience for joining us as well for this big data thought leadership discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how cloud models propel big data as a service benefits. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Wednesday, September 02, 2015

Focus on Data, Risk, and Predictive Analysis Drives New Era of Cloud-Based IT Service Management, Says Expert Panel

Transcript of a BriefingsDirect panel discussion on how agile ITSM plays an essential role in IT today.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next data center innovation panel discussion focuses on the changing role of IT service management (ITSM) in a hybrid computing world. As IT systems, resources, assets, and information become scattered across more enterprise locations and devices -- as well as across various service environments -- how can IT leaders hope to know where their "stuff" is, who's using it, how to secure it, and how to pay for it accurately?

Well, it turns out that advanced software asset management (SAM) methods can enforce compliance, reduce risk, cut costs, and enhance end-user productivity -- even as the complexity of IT itself increases.
We'll hear from four IT leaders about how they have improved ITSM despite such challenges, and we'll learn how the increased use of big data and analytics, when applied to ITSM, improves inventory control and management. We'll also hear how a service-brokering role can be used to great advantage, thanks to ITSM-generated information.

To learn more about how ITSM solves multiple problems for IT, we're joined by our panel, Charl Joubert, a change and configuration management expert based in Pretoria, South Africa. Welcome, Charl.

Charl Joubert: Thank you, Dana.

Gardner: We're also here with Julien Kuijper, an expert in asset and license management based in Paris. Welcome, Julien.

Julien Kuijper: Thank you. Good afternoon.

Gardner: We're also here with Patrick Bailly, IT Quality and Process Director at Steria, also based in Paris. Welcome, Patrick.

Patrick Bailly: Thank you. Good afternoon.

Gardner: And lastly, Edward Jackson, Operational System Support Manager at Redcentric, based in Harrogate, UK. Welcome, Edward.

Edward Jackson: Thank you. Good afternoon.

Gardner: Let’s talk about modern SAM, software asset management. There seems to be a lot going on with getting more information about software and how it’s distributed and used. Julien, tell us how you're seeing organizations deal with this issue.

Complicated circle

Kuijper: SAM has to square quite a complicated circle. One side is compliance within the company -- compliance with regard to software installation and usage -- while also ensuring that the software entering the company isn't dangerous: not letting a virus come in, or opening up threats or complications. Those are very technical and very factual concerns.

Kuijper
But you also want to please your end users. If you don't please them and you don't give them the ability to work, they're going to be frustrated. They're going to complain about IT. It's already complicated enough.

You have to square that circle by implementing the correct processes first, while giving the correct information around how to behave in the end-to-end software lifecycle.

Gardner: And asset management, when it comes to software, is not a small matter; there are some very big numbers -- and costs -- involved.

Kuijper: It’s actually a very inconvenient truth. An audit from a publisher or a vendor can easily reach 7 or 8 digits, and a typical company has between 10 and 50 publishers. So, at 7 digits per publisher, you can easily do the math. That’s typically the financial risk.

You also have a big reputation risk. If you don’t pay for software and you are caught, you end up being in the press. You don’t want your company, your branding, to be at that level of exposure.

You have to bring this risk to the attention of IT leaders at the CIO level, but they don’t really want to hear that, because it costs a lot. When they hear this risk, they can't avoid investment, and the investment can be quite large as well.
Typically, if this investment is reaching five percent of your overall yearly software spending, you're on the right level. It’s a big number, but still it’s worth investing.

But you have to compare this investment with regard to your overall software spending. Typically, if this investment is reaching five percent of your overall yearly software spending, you're on the right level. It’s a big number, but still it’s worth investing.
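To make Julien's arithmetic concrete, here is a minimal back-of-the-envelope sketch. The ranges -- 10 to 50 publishers, seven-digit exposure per audit, and the roughly five-percent investment benchmark -- come from his remarks; the specific figures plugged in below are illustrative assumptions, not real customer numbers.

```python
# Hypothetical SAM risk and budget estimate, using the ranges Julien cites:
# 10-50 publishers, a 7-digit exposure per publisher audit, and a SAM
# investment benchmark of roughly 5% of yearly software spend.

def audit_exposure(publishers, per_publisher_risk):
    """Worst-case financial exposure if every publisher audited and found gaps."""
    return publishers * per_publisher_risk

def sam_budget(yearly_software_spend, benchmark=0.05):
    """Rule-of-thumb SAM investment: about 5% of yearly software spend."""
    return yearly_software_spend * benchmark

if __name__ == "__main__":
    # Illustrative figures only
    exposure = audit_exposure(publishers=20, per_publisher_risk=2_000_000)
    budget = sam_budget(yearly_software_spend=40_000_000)
    print(f"Worst-case audit exposure: ${exposure:,.0f}")   # $40,000,000
    print(f"Suggested SAM investment:  ${budget:,.0f}")     # $2,000,000
```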

Coming to IT management with this message, getting the ear of a person who is interested in the topic, and then getting the investment authorization -- that's half the journey. Implementation afterward means defining your processes, finding the right tool, implementing it, and running it.

Gardner: When it comes to value for the end user, having an understood, clearly defined process in place allows them to get to the software they want, make sure they can use it, and look for it on a sanctioned list, for example. While some end users might see this as a hurdle, I think it ultimately enables them to get the tools they need when they need them.

Smart communication

Kuijper: Right. At the beginning, every end-user will see all those SAM processes as a burden or a complication. So you have to invest a lot in communication, smart communication, with your company and make people understand that it’s everyone’s responsibility to be [software license] compliant and also that it can help in recovering money.

If you do this in a smart way, and the process has a delivery time no longer than three days, then you're good. You have to ensure, of course, that you have a software catalog that is up-to-date, with easy access to your main titles. All those points in the end-to-end software lifecycle have to be implemented -- from software request, then software delivery, then software re-usage, and also disposal. When all this is lean, then you've made your journey. Then, the software lifecycle process will no longer be seen as a pain; it will be seen as a business enabler.

Gardner: Now, asset management doesn’t just cover the realm of software. It includes hardware, and in a network environment, that can be very large numbers of equipment and devices, endpoints as well as network equipment.

Edward at Redcentric, tell us about how you see the management of assets through the lens of a network.

Jackson: We have more than 10,000 devices under management from a multitude of vendors, and we use asset management in terms of portfolio management -- managing the models, the versions, and the software.

Jackson
We also have a configuration management tool that takes the configurations of these devices and runs them against compliance. We can run them against a gold or a silver build. We can also run them against security flaws. It gives us an end-to-end management.

All of this feeds into our ITSM product and then also it feeds into things like the configuration management data base (CMDB). So we have a complete end-to-end knowledge of the software, the hardware, and the services that we're giving the customer.
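Edward describes pulling device configurations, checking them against gold or silver baselines, and feeding the results into the ITSM tooling and CMDB. The sketch below is a simplified, hypothetical illustration of that compliance step; the device records, baseline rules, and function names are stand-ins, not Redcentric's actual tooling.

```python
# Hypothetical compliance check of device configurations against a "gold"
# baseline. Real tooling would pull live configs and push results into the
# ITSM platform and CMDB rather than printing them.

GOLD_BASELINE = {
    "os_version": "15.2(4)M7",   # minimum approved OS release (illustrative)
    "ssh_version": 2,            # SSHv1 counts as a security flaw
    "telnet_enabled": False,
}

def check_compliance(device):
    """Return a list of baseline violations for one device record."""
    violations = []
    if device["os_version"] != GOLD_BASELINE["os_version"]:
        violations.append(f"OS {device['os_version']} not on gold build")
    if device["ssh_version"] < GOLD_BASELINE["ssh_version"]:
        violations.append("insecure SSH version")
    if device["telnet_enabled"] and not GOLD_BASELINE["telnet_enabled"]:
        violations.append("telnet should be disabled")
    return violations

inventory = [
    {"name": "edge-rtr-01", "os_version": "15.2(4)M7", "ssh_version": 2, "telnet_enabled": False},
    {"name": "edge-rtr-02", "os_version": "12.4(15)T", "ssh_version": 1, "telnet_enabled": True},
]

for device in inventory:
    issues = check_compliance(device)
    status = "compliant" if not issues else "; ".join(issues)
    print(f"{device['name']}: {status}")
```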

Gardner: Knowing yourself and your organization allows for that lifecycle benefit that Julien referred to. Eventually, that gives you the freedom to manage and extend those benefits into things like helpdesk support, even IT operations, where the performance can be maintained better.

Jackson: Yes, that's 360-degree management from hardware being delivered on-site, to being discovered, being automatically populated into the multitude of support and operational systems that we use, and then into the ITSM side.

If you don't get it right from the start -- if you don't have the correct models defined for, say, a Cisco device, or the correct OS version on that device, one that perhaps has security flaws -- then you run the risk of deploying a vulnerable service to the customer.

Thinking about scale

Gardner: Looking at the different types of tools and approaches, this goes beyond thinking about assets alone. We're also thinking about scale. Tell us about your organization, and why the scale and the ability to manage so many devices and so much information are important.

Jackson: Being a managed service provider (MSP), we have about 1,000 external customers, and each one of those has a tailored service, ranging from voice and storage to data and cloud. So we need to be able to manage the services that are contained within the 10,000-plus devices that we have.

We need to understand the service end to end. So there's quite a bit of service-level management in there. It all comes down to having the correct kind of vendor and the correct kind of service mapping, and the information in the configuration items (CIs) needs to be accurate so that support can make use of it.

If we have an incident that is automatically generated on the management platforms, it goes into the ITSM platform. We can create an effective customer list within, say, five minutes of the network outage and then email or SMS the customer pretty much directly.
We need to understand the service end to end. So there's quite a bit of service-level management in there.

There are more ways of doing it, but it all comes down to having tight control of the assets that are out there in the field, having an asset management tool that can actually control them, and being able to understand the topology of the network and where everything lies. This gives us the ability to create relationships between these devices and to have hierarchical logical and physical entities.
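The five-minute outage-to-notification flow Edward outlines depends on walking the relationships between a failed device and the customer services that sit on top of it. The sketch below is a simplified, hypothetical model of that traversal -- the CMDB structures and the notification step are assumptions standing in for whatever carrier-grade tooling actually performs this.

```python
# Hypothetical sketch: derive an "effective customer list" from a device
# outage by traversing CMDB-style relationships (device -> service -> customer),
# then notify the affected customers. All structures and names are illustrative.

from collections import defaultdict

# device -> services depending on it
device_services = {
    "core-sw-07": ["vpn-gold-123", "voice-std-456"],
}
# service -> customers consuming it
service_customers = {
    "vpn-gold-123": ["Acme Ltd"],
    "voice-std-456": ["Acme Ltd", "Globex Plc"],
}

def affected_customers(device):
    """Walk device -> service -> customer relationships for one outage."""
    customers = defaultdict(list)
    for service in device_services.get(device, []):
        for customer in service_customers.get(service, []):
            customers[customer].append(service)
    return customers

def notify(device):
    for customer, services in affected_customers(device).items():
        # In practice this would go out via email/SMS from the ITSM platform.
        print(f"Notify {customer}: outage on {device} affects {', '.join(services)}")

notify("core-sw-07")
```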

Gardner: You have confidence that you work with tools and platforms that can handle that scale?

Jackson: All the tools that we have are pretty much carrier-grade. So we can scale a lot more than the 10,000 devices that we currently have. If you set it up and plan it right, it doesn’t really matter how many devices you have in management. You have to have the right processes and structure to be able to manage them.

Gardner: We've talked about software, hardware, and networks. Nowadays, cloud services, microservices, and APIs are also a big part of the mix. IT consumes them, makes value from them, and extends that value into the organization.

Let’s go to Patrick at Steria. How are you seeing in your organization an evolution of ITSM into a service brokering role? And does the current generation of ITSM tools and platforms give you a road to that service brokering capacity?

Extending services

Bailly: What's needed to become a service broker is the ability to extend the current services that we have to the services that are available today in the cloud.

Bailly
To do that, we need to extend the capability of our framework. Today, our framework has been designed to run operations on behalf of our customers -- either on the customer's side or in our own data centers -- but that is, more or less, traditional IT. The current ITSM framework is able to do that.

What we're facing is that we have customers who want to add short-term [cloud capacity]. We need to offer that capability. What's very important is to offer one interface toward the customers, and to integrate across several service providers at the same time.

Gardner: Tell us a bit about Steria. You're a large organization, 20,000 employees, and in multiple countries.

Bailly: We're an IT service provider, and we manage different kinds of services -- infrastructure management, application management, business process outsourcing, system integration, and so on -- all over Europe. Today, we're also leveraging the capabilities that we have in India and in Poland.

Gardner: Now, we've looked at what ITSM does. We haven't yet dug very much into where it's going next, in terms of what analysis of this data can bring to the table.

Charl, tell us, please, about how you see the use of analytics improving what you've been doing in your setting. How do baseline results from ITSM, the tools we have been talking about, improve when you start to analyze that data, index it, cleanse it, and get at the real underlying information that can then be turned into business benefits?

Joubert: Looking at the inadequacies of your processes is really the start of all of this. The moment you start scratching at the vast amount of information you have, you start seeing the errors of your ways -- and the opportunities to correct them.

Joubert
It's really an exciting time in ITSM. We now have the ability to start mining this magnitude of information that’s being locked inside attachments in all of these ITSM solutions. We can now start indexing all that unstructured data and using it. It’s a fantastic time to be in IT.

Gardner: Give me an example of where you've seen this at work -- maybe a helpdesk environment. How can you immediately get benefits from starting to analyze systems and IT information?

Million interactions

Joubert: In the service desk I'm involved in, we have a total of about a million interactions over the past few years. What we've done with big data is index the categorization of all of these interactions.

With tools from HP -- Smart Analytics and Smart Ticketing -- we're able to predict the categorization of these interactions to an accuracy of about 84 percent at the moment. This helps the service desk agents get the correct information to the correct service teams the first time, with fewer errors in escalation, which in turn leads to greater customer satisfaction.
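Charl credits HP Smart Analytics and Smart Ticketing with that roughly 84 percent prediction accuracy. The snippet below is not that product; it is a generic, assumed illustration of the underlying idea -- training a text classifier on historical ticket descriptions and their categories so new interactions can be routed to the right team -- using scikit-learn.

```python
# Generic ticket-categorization sketch (not HP Smart Analytics itself):
# learn categories from historical interaction text, then predict the
# category of a new ticket so it can be routed to the right team.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real service desk would use the indexed
# history of past interactions (about a million in Charl's case).
texts = [
    "cannot log in to email after password reset",
    "laptop screen flickering and then goes black",
    "request installation of project planning software",
    "VPN drops every few minutes when working from home",
]
labels = ["access", "hardware", "software-request", "network"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

new_ticket = "user locked out of account following password change"
print(model.predict([new_ticket])[0])   # likely: "access"
```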

Gardner: Julien, where does the analysis of what you're doing with software asset management, for example, play a role? Where do you see it going?

Kuijper: SAM is already quite complex on-premise and we all know today that the IT world is moving to the cloud, and this is the next challenge of SAM, because the whole point of the cloud is that you don’t know where your systems are.

However, the licensing models, as they are today, refer to CPUs, to on-premise installations, to physical assets. Understanding how you can adapt your licensing model to this concept of cloud -- not that new anymore -- is something to which even the software publishers and vendors have not really adapted their own models.
This is the next challenge of SAM, because the whole point of the cloud is that you don’t know where your systems are.

You also have to face some vendors or publishers who are not willing to adapt their model, especially when keeping it lets them audit specific customers and get more revenue. So, on one hand, you have to implement the right processes and the right tools, which now have to navigate a very complex environment that is very difficult to scan and very difficult to analyze. At the same time, you have to update all your contracts, and sometimes this will not be possible.

Some vendors will have a very easy licensing model if you are implementing their software in their own cloud environment, but in another cloud environment, in a competitor, they might make this journey quite complicated for you.

So this will be complex and will be resolved by correct data to analyze and also some legal workforce and purchasing workforce to try to adapt the contracts.

Gardner: In many ways right now, we never really own software. We only lease it or borrow it, and we're charged in a variety of ways. But soon we'll be going more to that pay-as-you-use, pay-as-you-consume model. What about the underlying information associated with those services? Would logs go along with your cloud services? Should you be able to access that so that you can analyze it in the context of your other IT infrastructure?

Edward, any thoughts from the perspective of a managed services and network management provider? As you provide more services, do you see yourselves providing insight or ITSM metadata along with those services?

IaaS to SaaS

Jackson: Over the past five or six years, the services that we offered pretty much started as infrastructure as a service (IaaS), but it’s now very much a software-as-a-service (SaaS) offering, managed OS, and everything up the technology stack into managed applications.

It's gotten to the point now that we are taking on the management of bespoke applications that customers want to hand over to Redcentric. So not only do we have to understand the technology and the operating systems that go on these platforms in the cloud, but we also have to understand the bespoke software that's sitting on them and all the necessary dependencies for that.

The more that we invest into cloud technologies, the more complex the service that we offer our customers becomes. We have a multitude of management systems that can monitor all the different elements of this and then piece them together in a service-level model (SLM) perspective. So you get SLM and you get service assurance on top of that.

Gardner: We've recently heard about HP's IDOL OnDemand and Vertica OnDemand, as part of Haven OnDemand. They're bringing these analytics capabilities to cloud services and APIs as well. As I understand it, they're going to be applying them to more IT operations issues. So it's quite possible that we'll start to see a mashup, if you will, between a cloud service and the underlying IT information associated with that service.

Let’s go back to Patrick at Steria. Any thoughts about where this combination of ITSM within a cloud environment develops? How do you see it going?

Bailly: The system today exists for traditional IT, and we also have to have the tooling for designing and consuming cloud services. We are running HP Service Manager for traditional IT, legacy IT, and we are running HP Cloud Service Automation (CSA) for managing and operating in the cloud.

We’d like to have a unique way for reconciling the catalog of services that are in Service Manager with the catalog of services that are in CSA, and we would need to have a single, unique portal for doing that.
HP Service Desk Software
Brings Together ITSM Capabilities

Get Your Free Trial
What we're expecting from HP Propel is the capability to aggregate services coming from various sources and then to offer them onward. Once the service is live, we need to offer some additional features: incident management, access to the knowledge base, collaboration between the service desk and end users, collaboration among end users, and so on.

There's also another important point and that is service integration. As a service provider, we will have to deliver and control the services that are delivered by some partners and by some cloud service providers.

In order to do that, we need to have strong integration, not only partnership, but also strong integration. And that integration should be multiple point, meaning that, as soon as we're able to integrate a service provider with this, that integration will be de facto available for our other customers. We're expecting that from HP Propel.

And it’s not only an integration for provisioning service, but it’s also an integration for running the other processes, collaboration, incident management, etc.

Gardner: Patrick mentioned HP Propel, do any of you also have some experience with that or are looking at it to solve other problems?

Single view

Joubert: We're definitely looking at it to give a single view to all our end users. There are various support partners in the area where I work, and the end user really wants one place to ask for everything -- from fixing a broken light, to fixing a broken PC, to installing software. It's ease of use that they're looking for. So yes, we are definitely looking at Propel.

Gardner: Let’s take another look to the future. We've heard quite a bit about the Internet of Things (IoT) -- more devices, more inputs, and more data. Do you think that’s something that’s going to be an issue for ITSM, or is that something separate? Do you view that the infrastructure that’s being created for ITSM lends itself to something like managing the IoT and more devices on a network?

Kuijper: For me, speaking as asset management and software asset management experts, we have to draw a line somewhere and say, "There is this IoT, and there is some data that we simply don't want to analyze." There are things out there on the Internet, and that's fine, but too much engineering around them might be overkill for the processes.

We also have to be very careful about false good ideas. I personally think that bring your own device (BYOD) is a false good idea. It brings tremendous issues with regard to who takes care of an asset that is personally owned but used in a corporate environment, and who deals with the IT around it.

Today, it’s perfect. I bring the computer that I'm used to in the office. Tomorrow, it’s broken. Who is going to fix it? When I buy software for this machine, who is going to pay for it and who's going to be responsible for non-compliance?
We also have to be very careful about false good ideas. I personally think that bring your own device is a false good idea.

A CIO might think it’s very intelligent and very advanced to allow people to use what they're used to, but the legal issues behind it are quite complicated. I would say this is a false good idea.

Gardner: Edward, you mentioned that at Redcentric, scale doesn’t concern you. You're pretty confident that the systems that you can access can handle almost any scale. How about that IoT? Even if it shouldn’t be in the purview legally or in terms of the role of IT, it does seem like the systems that have been developed for ITSM are applicable to this issue. Any thoughts about more and more devices on a network?

Jackson: In terms of the scale of things, if the elements are in your control and you have some structure and management around them, you don't need to be overly concerned. We certainly don't keep anything in our systems that shouldn't be in there or doesn't need to be.

Going forward, things like big data and smart analytics layered on top would give us a massive benefit in how we could deliver our service, and more importantly, how we can manage the service.

Once you have your processes in place and understand the necessity of those processes, you have the structure, and you have the kind of management platform that you're sure can handle the data, then you can leverage things like big data, smart analytics, and data mining to offer a sophisticated level of support that perhaps your competitors can't.

Esoteric activity

Gardner: It's occurred to me that the data and the management of that ITSM data is central to any of these major challenges, whether it’s big data, cloud service brokering, management of assets for legal or jurisdiction compliance. ITSM has become much more prominent, and is in the position to solve many more problems.

I'd like to end our conversation with your thoughts along those lines. Charl, ITSM, is it more important than ever? How has it become central?

Joubert: Absolutely. With the advent of big data, we suddenly have the tools to start mining this information and using it to our benefit to give better service to our end-users.
With the advent of big data, we suddenly have the tools to start mining this information and using it to our benefit to give better service to our end users.

Kuijper: ITSM is definitely core to any IT environment, because ITSM is the way to put the correct price tag behind a service. We have service charging and service costing. If you don’t do that correctly, then you basically don’t tell the truth to your customer or to your end user.

If you mix this with the IoT and the possibility to have anything with an IP address available on the network, then you enter into more philosophical thoughts. In a corporate environment, let’s assume you have a tag on your car keys that helps you to find them, and that is linked on the Internet. Those gizmos are happening today.

This brings some personal life information into your corporate environment. What does the corporate environment do about this? The brand of your car is on your car tag. They will know that you bought a brand new car. They will know all this information which is personal. So we have to think about ethics as well.

So drawing a line between what the corporate environment will take care of and what is private will be essential with the IoT. When you have your mobile phone, is it personal or is it business? Drawing that line will be very important.

Gardner: But at least we will have the means to draw that line and then enforce the drawing of that line.

Kuijper: Right. Totally correct.

Gardner: Edward, the role of ITSM, bigger than ever or not so much?

Bigger than ever

Jackson: I think it's bigger than ever. It's the front end of your business and the back end of your business; it's what the customers see. It's how you deliver your service, and if you haven't got it right, then you are not going to be able to deliver the service that a customer expects.

You might have the best products in the world, but if your ITSM systems and your ITSM team aren't doing what they're supposed to be doing, then it's not going to be any good, and the customers are going to say so.

Gardner: And lastly to Steria, and Patrick, the role of ITSM, bigger than ever? How do you view it?

Bailly: For me, the role of IT service management won't change. We did ITSM in the past and we will continue to do it in the future. In order to deliver any service, we need to have the detailed configuration of that service, and we will have to run the processes; that will not change. What will change in the future is the diversity of service providers that we use.

As a service provider, we'll have to work with a lot of other service providers. So the SLA will be more complex to manage from a service-management perspective. It will be critical. For the customer, you will have to not only manage -- but also govern -- that service, even if it is provided by a lot of service providers.

Gardner: So the complexity goes up, and therefore the need to manage that complexity also needs to go up.

Bailly: What is also very important in license management in the cloud is that very often the return on investment (ROI) of the cloud adoption has ignored or minimized the impact of software cost. When you tell your customers, internal or external, that this xyz cloud offer will cost them that amount of money, you will most likely have to add up 20-30 percent because of the impact of the software cost afterward.

Gardner: I am afraid we will have to leave it there. We've been talking to a panel of experts about IT service management and its role in a hybrid computing world. We’ve found out how the future of analytics plays into ITSM, big data included, as well as many of the other scaling issues around mobility, IoT, and the licensing and legal issues around all assets in IT.
HP Service Desk Software
Brings Together ITSM Capabilities

Get Your Free Trial
So a big thank you to our panel: Charl Joubert, a change and configuration management expert based in Pretoria, South Africa; Julien Kuijper, an expert in asset and license management based in Paris; Patrick Bailly, IT Quality and Process Director at Steria in Paris; and Edward Jackson, Operational System Support Manager at Redcentric in the UK.

And a big thank you to our audience as well for joining us for this special new style of IT discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining us, and don't forget to come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP Enterprise.

Transcript of a BriefingsDirect panel discussion on how agile ITSM plays an essential role in IT today. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

You may also be interested in: