
Monday, August 24, 2009

IT and Log Search as SaaS Gains Operators Fast, Affordable and Deep Access to System Behaviors

Transcript of a sponsored BriefingsDirect podcast on how IT and log search and analytics are helping companies and MSPs better monitor, troubleshoot, and manage their networks and applications.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Paglo.

Automatically discover your IT data and make it accessible and useful. Get started for free.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on efficient ways to keep IT operations running smoothly at the scale demanded of public-facing applications and services.

We'll explore IT search and systems log management as a service. We'll examine how network management, systems analytics, and log search come together, so that IT operators can gain easy access to identify and fix problems deep inside complex distributed environments.

Here to help us better understand how systems log management and search work, and how to gain such functionality as a service, are Dr. Chris Waters, co-founder and chief technology officer at Paglo. Welcome, Chris.

Chris Waters: Hi, Dana.

Gardner: We're also joined by Jignesh Ruparel, system engineer at Infobond, a value-added reseller (VAR) in Fremont, Calif. Welcome Jignesh.

Jignesh Ruparel: Hi, Dana.

Gardner: Chris, let's start with you. How has life changed for IT operators? We have a recession. We have tight budgets. And yet, we have demands that seem to be increasing, particularly when we consider that more applications are now facing the public and are under large Web-scale demand.

Waters: IT never stands still. There are always new technologies coming on line. Right now, we're seeing a really interesting transition, as more applications become Web-based, both Internet-oriented, Web-based applications and Web-based applications provided out of the cloud.

For an IT professional, being able to monitor the health of applications like this, and their successful delivery across the local area network (LAN) and the wide area network (WAN) to local users, provides an additional dividend that hasn't been there in the past.

Gardner: What sort of requirements have changed in terms of getting insights and access to what’s going on within these systems?

More information

Waters: There are several things. There’s just more information flowing, and more information about the IT environment. You mentioned search in your preamble. Search is a great technology for quickly drilling through a lot of noise to get to the exact piece of data that you want, as more and more data flows at you as an IT professional.

One of the other challenges is the distribution of these applications across increasingly distributed companies and applications that are now running out of remote data centers and out of the cloud as well.

Gardner: As folks start to wrap their minds around cloud computing and the benefits, there's an inherent risk in being responsible for services that you don’t necessarily have authority over. Is this something that you are finding at Paglo? Are customers interested in how to gain insight into systems beyond their immediate control?

Waters: Absolutely. When you're trying to monitor applications out of a data center, you can no longer use software systems that you have installed on your local premises. You have to have something that can reach into that data center. That's where being able to deliver your IT solution as software-as-a-service (SaaS) or a cloud-based application itself is really important.

Gardner: So, as we look to do IT management and network monitoring, and log management at a higher level of abstraction for analytics, the same old tools don't seem to be up to the task.

Waters: That’s exactly right. They're much more oriented towards managing LAN environments than they are to managing the evolving IT landscape.

Gardner: Furthermore, this is not just for remediation and fixing, but audit trails, making sure you know what you have under your purview and what is actually supporting business processes. You can’t fix what you don’t know that you have.

Waters: You can’t do any IT at all, if you don't start with a good inventory. And, inventory here means not just the computers connected to the network, but the structure of the network itself -- the users, the groups that they belong to, and, of course, all of the software and systems that are running on all those machines.
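The inventory Waters describes can be pictured as a small set of linked records. A minimal sketch in Python, where every field name is illustrative rather than Paglo's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    hostname: str
    ip: str
    os: str
    software: list = field(default_factory=list)  # installed packages

@dataclass
class User:
    name: str
    groups: list

# A tiny inventory: the machines on the network plus its users and groups
inventory = {
    "devices": [Device("web01", "10.0.0.5", "Linux", ["nginx", "openssl"])],
    "users": [User("alice", ["admins", "engineering"])],
}

# Answer a basic inventory question: which machines have nginx installed?
with_nginx = [d.hostname for d in inventory["devices"] if "nginx" in d.software]
```

The point is that once devices, users, and software are records in one store, questions like "which machines run this package" become simple queries.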

Gardner: Many vendors that deliver enterprise infrastructure have some very strong tools for themselves, but they don’t necessarily extend that across their competitor's environment. So, we need something that is, in a sense, above the fray.

Bringing solutions together

Waters: You've got this heterogeneity in your IT environments, where you want to bring together solutions from traditional software vendors like Microsoft and cloud providers like Amazon, whose EC2 lets you run things out of the cloud, along with software from open-source providers.

All of the software in these systems and this hardware is generating completely disparate types of information. Being able to pull all that together and use an engine that can suck up all that data in there and help you quickly get to answers is really the only way to be able to have a single system that gives you visibility across every aspect of your IT environment.

Gardner: As we see more interest in SaaS applications, as we see more interest in mixing up sourcing options, in terms of colocation or outsourcing, cloud providers launching their own applications on someone else's infrastructure, these issues are just going to grow more complex, right?

Waters: Nothing in this world ever gets simpler. What you're trying to find are solutions that help you capture all that noise.

Gardner: Tell us a little about Paglo. What makes it different? You deliver search for log systems as a service?

Waters: That’s right. Paglo is different from other IT systems in two significant ways. The first is that at the heart of Paglo is search. Search allows us to take information from every aspect of IT, from the log files that you have mentioned, but also from information about the structure of the network, the operation of the machines on the network, information about all the users, and every aspect of IT.

We put that into a search index, and then use a familiar paradigm, just as you'd search with Google. You can search in Paglo to find information about the particular error messages, or information about particular machines, or find which machines have certain software installed on them. So, search is a way that Paglo is fundamentally different than how IT has been done in the past.
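That search paradigm can be approximated with a tiny inverted index over IT records. This is a sketch only, nothing like Paglo's production engine, but it shows why search cuts through disparate data:

```python
from collections import defaultdict

# A few IT records of the kind that would flow into the index
records = [
    "host=web01 error: disk full on /var",
    "host=db02 installed package openssh-server",
    "host=web01 installed package nginx",
]

# Inverted index: token -> ids of the records containing it
index = defaultdict(set)
for i, rec in enumerate(records):
    for token in rec.lower().split():
        index[token].add(i)

def search(query):
    """Return records matching every token in the query (AND semantics)."""
    hits = set.intersection(*(index.get(t, set()) for t in query.lower().split()))
    return [records[i] for i in sorted(hits)]

# A Google-style query over IT data
results = search("host=web01 installed")
```

Whether the record came from a log file, an inventory crawl, or a user directory, the same query interface finds it.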

SaaS offering

The second thing unique about Paglo is that we deliver the solution as a SaaS offering. This means that you get to take advantage of our expertise in running our software on our service, and you get to leverage the power of our data centers for the storage and constant monitoring of the IT system itself.

This allows people who are responsible for networks, servers, and workstations to focus on their expertise, which is not maintaining the IT management system, but maintaining those networks, servers, and workstations.

Gardner: I suppose that puts the emphasis on the data and the information about the systems, and not necessarily on agents or on-premises appliances or systems.

Waters: Exactly.

Gardner: Tell me a little bit about how that works. It sounds a little counterintuitive when you first think about it: I'm going to manage my system through someone else's software, on their cloud or their infrastructure.

Waters: The way Paglo works is that we have what we call the Paglo Crawler, which is a small piece of software that you download and install onto one server in your network. From that one server, the Paglo Crawler then discovers the structure of the rest of the network and all the other computers connected to that network. It logs onto those computers and gathers rich information about the software and operating environment.

That information is then securely sent to the Paglo data center, where it's indexed and stored on the search index. You can then log in to the Paglo service with your Web browser from anywhere in your office, from your iPhone, or from your home and gain visibility into what's happening in real time in the IT environment.
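In outline, a crawler like that enumerates the local subnet, probes each address, and ships what it finds to the hosted index. A simplified sketch, where the probe function and the upload payload are stand-ins rather than Paglo's real protocol:

```python
import ipaddress
import json

def discover(subnet, probe):
    """Walk every host address in the subnet; probe() stands in for a real
    ping/SNMP/WMI check and returns host details or None."""
    found = []
    for addr in ipaddress.ip_network(subnet).hosts():
        info = probe(str(addr))
        if info:
            found.append({"ip": str(addr), **info})
    return found

# Stand-in probe: in reality the crawler would ping, then log on for details
known = {"192.168.1.2": {"hostname": "web01", "os": "Linux"}}
crawl = discover("192.168.1.0/30", known.get)

# The payload that would be shipped, over TLS, to the hosted search index
payload = json.dumps(crawl)
```

Note the shape of the design: discovery runs inside the network, and only the resulting data leaves it.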

Gardner: What about security? What about risks -- the usual concerns that people have with SaaS and cloud?

Waters: There are a few aspects there. The first thing is that, to do its job, the Crawler needs some access to what's going on in the network, but any credentials that you provide to the Crawler to log in never leave the network itself. That's why we have a piece of software that sits inside the network, so there are no special firewall holes that need to be opened and no compromises to security.

There is another aspect, which is very counterintuitive, and that people don't expect when they think about SaaS. Here at Paglo, we are focused on one thing, which is securely and reliably operating the Paglo service. So, the expertise that we put into those two things is much more focused than you would expect within an IT department, where you are focused on solving many, many different challenges.

Increased reliability

Ultimately, I think what people see when they use SaaS offerings is that the reliability they get out of their software goes up dramatically, because they are no longer responsible for operating it themselves or dealing with software upgrades. There is no such thing as a software upgrade for a SaaS service. It's a transparent operation. The same applies to security as well. We are maniacally focused on making sure that Paglo and our Paglo data center are secure.

Gardner: Is this just for enterprises, small or medium-sized business, or managed service providers (MSPs)? How does this affect the various segments of the IT installed community?

Waters: We see users of Paglo across all aspects of the IT spectrum. We have a lot of users who are from small and medium-sized businesses. We also see departments within some very large enterprises, as well, using Paglo, and often that's for managing not just on-premise equipment, but also managing equipment out of their own data centers.

Paglo is ideal for managing data-center environments, because, in that case, the IT people and the hardware are already remote from each other. So, the benefits of SaaS are double there. We also see a lot of MSPs and IT consultants who use Paglo to deliver their own service to their users.

Gardner: Let's go to Jignesh. Jignesh, you have been very patient, as we learn about IT search as a service, but tell me about Infobond. What sorts of issues were you dealing with, as you started looking for better insights into what your systems are doing?

Ruparel: As for Infobond, we have been in business for 15 years. We have been primarily a break-fix organization, and we're moving into managed services and monitoring services.

The first challenge in going in that direction was that we needed visibility into the networks of the customers we service. For that, we needed a tool compatible with the various protocols out there for managing networks -- namely SNMP, WMI, and syslog. We needed to have all of them feed into one tool and be able to quickly search for various things.

To give you a very small example, recently we had some issues with virtual private network (VPN) clients for a customer who was using Paglo. We took the logs of the firewalls, plugged them into the Paglo system, and very quickly we were able to decipher the issue with the client that wasn't able to connect.
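The kind of triage Ruparel describes, isolating one failing VPN client in a pile of firewall logs, reduces to filtering parsed syslog lines. A sketch with invented log lines (real firewall formats vary by vendor):

```python
import re

# Invented firewall syslog lines of the sort fed into the search index
firewall_logs = [
    "Aug 20 10:02:11 fw1 vpn: user=bob client=10.1.1.7 connected",
    "Aug 20 10:03:40 fw1 vpn: user=carol client=10.1.1.9 auth-failure: bad group password",
    "Aug 20 10:04:02 fw1 dhcp: lease renewed 10.1.1.7",
    "Aug 20 10:05:15 fw1 vpn: user=carol client=10.1.1.9 auth-failure: bad group password",
]

def vpn_failures(lines):
    """Pull out VPN auth errors and the users they affect."""
    pat = re.compile(r"vpn: user=(\w+).*?(auth-failure.*)")
    return [(m.group(1), m.group(2)) for line in lines if (m := pat.search(line))]

failures = vpn_failures(firewall_logs)
# carol's repeated auth-failure points straight at the failing client
```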

That would not have been possible, if there wasn't one place where we could aggregate all this information and quickly extract it, either into a reportable format or a customized format. That was the major challenge that we had.

Advanced technology

We basically looked at various solutions out there -- open source and commercial -- and we found that the technology that Paglo is using is very, very advanced. They aggregate the information and make it very easy for you to search.

You can very quickly create customized dashboards and customized reports based on that data for the end customer, thus providing more of a personal and customized approach to the monitoring for the customers. We deal in the small to mid-sized markets. So, we have varied customers -- biotech, healthcare, manufacturing, and real estate.

Gardner: Now, you've been able to use this search function for this one-off remediation and discovery types of tasks, but have you built templates or recurring searches that you use on a regular basis for ongoing maintenance and support?

Ruparel: Absolutely. Right now, we have created customized dashboards for our customers. Some of them are a common denominator across various sorts of customers. An example would be an Exchange dashboard. Customers would love to have a dashboard on the screen for Exchange that shows the queues building up on the Exchange Server, mailbox sizes, RPC connections, the service latency, and all this kind of stuff. We have an Exchange dashboard for that, which our customers use regularly to get a status update on what's taking place with their Exchange.
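A dashboard panel of that sort boils down to a saved query plus a summary over the matching data points. A sketch with invented metric names, not Paglo's or Exchange's actual counters:

```python
# Invented metric samples; real Exchange counters would have other names
samples = [
    {"metric": "smtp_queue_length", "value": 12},
    {"metric": "smtp_queue_length", "value": 30},
    {"metric": "rpc_latency_ms", "value": 45},
    {"metric": "rpc_latency_ms", "value": 55},
]

def panel(metric, points):
    """Summarize one dashboard panel: latest reading, peak, and average."""
    vals = [p["value"] for p in points if p["metric"] == metric]
    return {"latest": vals[-1], "max": max(vals), "avg": sum(vals) / len(vals)}

queue_panel = panel("smtp_queue_length", samples)
```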

We also have one for VMware. These are common denominators for almost all customers that are keeping pace with technology, implementing things such as VMware, the latest Exchange versions, Linux environments for development, and Windows for their end users.

The number of pieces of software and the number of technologies that IT implements is far more than it used to be, and it’s going to get more and more complex as time progresses. With that, you need something like Paglo, where it pulls all the information in one place, and then you can create customized uses for the end customers.

At the end of the day, I look at it very simply as collecting information in one place, and then being able to extract that easily for various situations and environments.

Gardner: So, from your vantage point, going SaaS for these values is not something to consider a risk factor, but really an enabling technology?

Ruparel: Absolutely. There is always going to be some of that concern. But we are at a stage where SaaS implementations are taking place at a very rapid rate, and that is the wave of the future. Why? Because, if you look at what you would need to set up a monitoring system that supports so many protocols in your own network, it would be very expensive.

Missing pieces

Not only that, but if you were to take some other piece of software out there, install it in your network, and monitor the systems, it will not be an end-all solution. There will always be pieces missing, because each vendor is going to focus on certain aspects of the monitoring and management. What Paglo is doing is bringing all of that together by building a platform where you can do that easily.

Gardner: Let's go back to Chris. Why don’t older approaches work? Why wouldn’t an on-premise server and appliance approach offer the same sorts of benefits?

Waters: There are a couple of issues. Jignesh just touched one of them, which is particularly appropriate to MSPs. Once the people who are managing the data are remote from the systems, then where are you going to put the servers as an MSP, if you want to manage information from multiple clients?

Now you are into the business of having to build out your own data center to aggregate that information, which obviously is costly and involves a lot of effort, not to mention the ongoing effort of keeping all of that stuff alive.

Gardner: Well, we've heard this makes great sense for the MSPs, the small and medium-sized businesses, and the ecology of play there. What about the large enterprise? They almost act like their own MSPs. How does this fit with their legacy approach to the log data from these systems and being able to search and index it?

Waters: If you look at the IT management landscape, especially for enterprises, what you see is a highly fragmented environment, where each different IT problem has a different system or set of tools that are applied against it.

The beauty of taking search and applying that to IT and log management is that it allows us to pull together in one place data that previously would have been considered completely different disciplines. The data was previously being fed into your network monitoring software, into your IT asset inventory, or into your server management software.

Paglo can take data from all of those different disciplines, and, as you try to solve problems or improve the monitoring or service level agreement (SLA) monitoring that you are doing on your network, within Paglo you've got one place to look. You can look across servers through the network to users in a single consolidated view.

Gardner: Jignesh, back to you. You mentioned several important industry segments. A couple of them are under quite a bit of regulation. Is there something about audit trail, search, dashboards, and the SaaS approach that in some way benefits compliance and adherence?

Configuration changes

Ruparel: Absolutely. Keeping track of configuration changes on devices is important. Some of the stuff that I am mentioning may not be available at this instant on Paglo, but knowing the infrastructure as I have learned it over the last four to five months on Paglo, I'd say that building such tools is very easy.

They already have a bunch of reports that give you a lot of support with various compliance measures, such as HIPAA and Sarbanes-Oxley. However, there is still a little bit of work required to get to the stage where the reports are directly targeted at a particular regulation.

As far as the generic reports that regulation would require, say audit trails on who logged in, I mentioned the VPN issue that I faced. I generated reports on all the users who had logged in via VPN over the last month in a snap. It was very, very quick. It took me less than two minutes to get that information. Having that quick a response in getting the information you want is very, very powerful. I think Paglo does that extremely well.
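That month-of-VPN-logins audit report is a filter-and-group-by over parsed log events. A sketch with synthetic events (field names invented for illustration):

```python
from collections import Counter
from datetime import datetime, timedelta

# Synthetic login events parsed out of VPN logs: (user, timestamp)
now = datetime(2009, 8, 24)
events = [
    ("alice", datetime(2009, 8, 1)),
    ("bob",   datetime(2009, 8, 20)),
    ("alice", datetime(2009, 7, 10)),  # older than the window, excluded
]

def vpn_report(events, now, days=30):
    """Count VPN logins per user within the reporting window."""
    cutoff = now - timedelta(days=days)
    return Counter(user for user, ts in events if ts >= cutoff)

report = vpn_report(events, now)
```

Once the events are indexed in one place, producing this in "less than two minutes" is plausible precisely because the hard part, collection and parsing, has already happened.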

There is definitely a huge advantage in having this information collected. As far as generating reports is concerned, not only can Paglo provide support there, but Infobond, if you are interested, can provide support as well.

Gardner: We've talked about some of the qualitative benefits and the benefits of SaaS, but what about cost? It seems to me that the implementation cost would be lower, but how about the general operating cost, Jignesh, now that you've been using this for several months?

Ruparel: Let’s look at it from two different perspectives. If I go and set things up without Paglo, it would require me, as Chris had mentioned, to place a server at the customer site. We would have to worry about not only maintenance of the hardware, but the maintenance of the software at the customer site as well, and we would have to do all of this effort.

We would then have to make sure that our systems, the ones those servers communicate with, are also maintained and steady 24/7. We would need multiple data centers, so that in case one data center dies, another takes over. All of that infrastructure cost would fall on us as an MSP.

Now, if you were to look at it from a customer's perspective, it's the same situation. You have a piece of software that you install on a server. You would probably need a person dedicated for approximately two to three months to get the information into the system and presentable to the point where it's useful. With Paglo, I can do that within four hours.

Gardner: Can you give me any metrics of success in terms of cost? It certainly sounds compelling.

Lowest cost

Ruparel: As far as cost is concerned, right now Paglo charges $1.00 a device. That is unheard of in the industry right now. The cheapest I have gotten from other vendors, where you would install a big piece of hardware and the software that goes along with it, is approximately $4-5 per device, and that doesn't deliver a central source of information accessible from anywhere.

Infrastructure-wise, we save a ton of money. Manpower-wise, in the number of hours I have to have engineers working on it, we save tons of time. And after all of that, what I pay to Paglo is still a lot less than it would otherwise cost me.

Gardner: Chris Waters, tell me a bit about the future. It seems to me that this is a paradigm shift, if I could use sort of a cliché, in terms of cloud resources, and then using the network a bit more strategically. Now that you have taken this first step with Paglo, do you have any sense of where you expect to go next?

Waters: We're riding the wave of adoption of cloud by services, and we use that ourselves. The reason we're able to offer our service so cost effectively is that we leverage a very efficient data center. The Paglo back-end is designed specifically to support many, many tens of thousands of SaaS customers. So, they get to take advantage of the infrastructure that we have been building there, and we pass those cost savings on to them.

We all know that we're riding the benefits of Moore's Law as computational power becomes cheaper and network bandwidth costs less. Those things both allow us to do with Paglo more sophisticated analysis and capture more interesting data about the IT environment.

The most recent data source that we added to Paglo was the ability to capture information about your logs. I would say that the log management space is still barely scratching the surface of what's possible with logs. As companies move towards more Web-based services, there is an interesting characteristic: Web-based services generate more log data than more traditional client-server software. So the possibilities for analyzing that data, drawing conclusions from it, and having it integrated with an overall perspective of the entire IT system are going to be a pretty cool part of Paglo's future.

Gardner: In the past, there were a number of inhibitors to where you could go with this. There was the storage cost. There was the access cost. There were simply network and bandwidth issues, and then the ability to deal with that increasing load of data. Now it sounds as if you are pretty much free to start collecting everything.

Waters: Companies like YouTube taught us, and taught everybody, that if you for a second forget about bandwidth costs and storage costs, because they are rapidly heading toward zero, and let your imagination run wild, you can do some really interesting things. Paglo takes advantage of those insights.

Gardner: For those folks who want to learn more about this, how does one get started? Is this a long process? I think Jignesh mentions only four hours, but tell us a little bit about the ramp-up process and where you might get some more information?

Principle of transparency

Waters: One of our fundamental operating principles here at Paglo is our principle of transparency. We want to make it extremely easy for people to find out information about Paglo, try it, and buy it.

You can go to paglo.com and read a lot more about how Paglo works and see a lot of screen shots of what the Paglo experience is like. You can sign up without a human ever being involved. So from sign up, to being able to search your own IT data is simply a matter of minutes.

Gardner: Very good. We've been learning quite a bit about the opportunity to do IT search and log search and management as a service. This gets to the heart of network management and analytics, and appears to be a value for MSPs, SMBs, and enterprises.

Here helping us discuss how we scale IT operations to the demands of the Web more smoothly, and apparently at quite a bit less cost, we have been joined by Dr. Chris Waters, co-founder and CTO of Paglo. Thanks for joining, Chris.

Waters: Thanks.

Gardner: We have also been joined by Jignesh Ruparel, system engineer at Infobond. Thanks so much, Jignesh.

Ruparel: Thank you Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.


Transcript of a sponsored BriefingsDirect podcast on how IT and log search and analytics are helping companies and MSPs better monitor, troubleshoot, and manage their networks and applications. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, December 15, 2008

IT Systems Analytics Become Crucial as Move to Cloud and SaaS Raises Complexity Bar

Transcript of a BriefingsDirect podcast on the role of log management and analytics as enterprises move to cloud computing and software as a service.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the changing nature of IT systems performance and the heightened expectations for application delivery from those accessing applications as services.

The requirements and expectations on software-as-a-service (SaaS) providers are often higher than for applications traditionally delivered by enterprises for their employees and customers. Always knowing what's going on under the IT hood, being proactive in detection, security, and remediation, and keeping an absolute adherence to service level agreements (SLAs), are the tougher standards a SaaS provider deals with.

Increasingly, this expected level of visibility, management, and performance will apply to those serving up applications as services regardless of their hosting origins or models.

Here to provide the full story on how SaaS is making all applications' performance expectations higher, and how to meet or exceed those expectations is Jian Zhen, senior director of product management at LogLogic. Welcome to the show Jian.

Jian Zhen: Thank you for having me.

Gardner: We're also joined by Phil Wainewright, an independent analyst, director of Procullux Ventures, and SaaS blogger at ZDNet and ebizQ. Welcome back to the show, Phil.

Phil Wainewright: Glad to be here, Dana.

Gardner: Phil, let’s start with you. The state of affairs in IT is shifting. Services are becoming available from a variety of different models and hosts. We're certainly hearing a lot about cloud and private cloud. I suppose the first part of this that caught the public's attention was this whole SaaS notion and some successes in the field for that.

Maybe you could help us understand how the world has changed around SaaS infrastructure, and what implications that has for the IT department?

Wainewright: One thing that's happening is that the SaaS infrastructure is getting more complicated, because more choice is emerging. In the past people might have gone to one or two SaaS vendors in very isolated environments or isolated use cases. What we're now finding is that people are aggregating different SaaS services.

They're maybe using cloud resources alongside of SaaS. We're actually looking at different layers of not just SaaS, but also platform as a service (PaaS), which are customizable applications, rather than the more packaged applications that we saw in the first generation of SaaS. We're seeing more utility and cloud platforms and a whole range of options in between.

That means people are really using different resources and having to keep tabs on all of them. Where in the past all of an IT organization's resources were under its own control, they now have to operate in this more open environment, where trust and visibility into what's going on are major factors.

Gardner: Do you think that the type of application delivery that folks are getting from the Web will start to become more the norm in terms of what delivery mechanisms they encounter inside the firewall from their own data center or architecture?

Wainewright: If you're going to take advantage of SaaS properly, then you need to move to more of a service-oriented architecture (SOA) internally. That makes it easier to start to aggregate or integrate these different mashups, these different services. At the end of the day, the end users aren't going to be bothered whether the application is delivered from the enhanced data center or from a third-party provider outside the firewall, as long as it works and gives them the business results they're looking for.

Gardner: Let's go to Jian Zhen at LogLogic. How does this changing landscape in IT and in services delivery affect those who are responsible for keeping the servers running, both from the host as well as the receiving end in the network, and those who are renting or leasing those applications as services?

Zhen: Phil hit the nail on the head earlier when he mentioned that IT not only has to keep track of resources within their own environment, but now has to worry about all these resources and applications outside of their environment that they may or may not have control over.

That really is one of the fundamental changes and key issues for current IT organizations. You have to worry not only about who is accessing the information within your company firewall, but now you have all this data sitting outside of the firewall in another environment. That could be a PaaS, as Phil said, or it could be a SaaS application that's sitting out there. How do you control that access? How do you monitor that access? That's one of the key issues that IT has to worry about.

Obviously, there are data governance issues and activity monitoring issues. Now, from a performance and operational perspective, you have to worry: Are my systems performing? Are these applications, platforms, or utilities that I am renting performing to my spec? How do I ensure that the service providers can give me the SLAs that I need?

Those are some of the key issues that IT has to face when they are going outside of this corporate firewall.

Gardner: I suppose if it were just one application that you knew you were getting as a service, if something would go wrong, you might have a pretty good sense of who is responsible and where, but we are very rapidly advancing toward mixtures, hybrids, multiple SaaS providers, different services that come together to form processes. Some of these might be on premises, and some of them might not be.

It strikes me that we're entering a time when finger pointing might become rampant if something goes wrong, who is ultimately responsible, and under whose SLA does it fall?

Phil, from your perspective, how important will it be to gain risk, compliance, and security comfort, by being able to quickly identify who is the source of any issue?

Wainewright: That's vitally important, and this is a new responsibility for IT. To be honest, Dana, you're a little bit generous to the SaaS providers when you say that if you only dealt with one or two, and if something went down, you had a fair idea of what was going on. What SaaS providers have been learning is that they need to get better at giving more information to their customers about what is going wrong when the service is not up or not performing as expected. The SaaS industry is still learning about that. So, there is that element on that side.

On the IT side, the IT people have spent too much time worrying about reasons why they didn't want to deal with SaaS or cloud providers. They've been dealing with issues like what if it does go down, or how can I trust the security? Yes, it does go down sometimes, but it's up 99.7 percent or 99.9 percent of the time, which is better than most organizations can afford to do with their own services.

Let's shift the emphasis from, "It's broken, so I won't use it," to a more mature attitude that says, "It will be up most of the time, but when it does break, how do I make sure that I remain accountable, as the IT manager, the IT director, or the CIO? How do I remain accountable for those services to my organization, and how do I make sure that I can pinpoint the cause of the problem and get it rectified as quickly as possible?"

Gardner: Jian, this offers a pretty significant opportunity, if you, as a vendor and a provider of services and solutions, can bring visibility and help quickly decide where the blame lies, but I suppose more importantly, where the remediation lies. How do you view that opportunity, and what specifically is LogLogic doing?

Zhen: We talked to a lot of customers who were either considering or actually going into the cloud or using SaaS applications. One of the great quotes that we recently got from a customer is, "You can outsource responsibility, but not accountability." So, it fits right into what Phil what was saying about being accountable and about your own environment.

The requirement to comply with government regulations and industry mandates really doesn't change all that much just because of SaaS or because a company is going into the cloud. What it means is that the end users are still responsible for complying with Sarbanes-Oxley (SOX), payment card industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), and other regulations. It also means that these customers will expect the same type of reports that they get out of their own systems.

IT organizations are used to transparency in their own environment. If they want to know what's happening in their own environment, they can get access to it. They can at least figure out what's going on. As you go into the cloud and use some of the SaaS applications, you start to lose some of that transparency, as you move up the stack. Phil mentioned earlier, there's infrastructure as a service, PaaS, SaaS. As you go up the stack, you're going to lose more and more of that transparency.

From a service-provider perspective, we need these providers to offer more transparency and more information as to what's happening in their environment: who has access, and who actually did access the information. LogLogic can help these service providers get that kind of information and potentially even provide the reports for their end users.

From a user's perspective, there is that expectation. They want to know what's going on and who is accessing the data. So, the service providers need to have the proper controls and processes in place, and need to continuously monitor their own infrastructure, and then provide some of these additional reports and information to their end customers as needed.

Gardner: LogLogic is in the business of collating and standardizing information from a vast array of different systems through the log files and other information and then offering reports and audit capabilities from that data. It strikes me that you are now getting closer to what some people call business intelligence (BI) for IT, in that you need to deal almost in real time with vast amounts of data, and that you might need to adjust across boundaries in order to gain the insights and inference.

Do you at LogLogic cotton to this notion of BI for IT, and if so, what might we expect in the future from that?

Zhen: BI for IT or IT intelligence, as I have used the term before, is really about getting more information out of the IT infrastructure; whether it's internal IT infrastructure or external IT infrastructure, such as the cloud.

Traditionally, administrators have always used logs as one of the tools to help them analyze and understand the infrastructure, both from a security and operational perspective. For example, one of the recent reports from Price Waterhouse, I believe, says that the number one method for identifying security incidents and operational problems is through logs.

LogLogic can provide the infrastructure and the tools to help customers gather the information and correlate different log sources. We can provide them that information, both from an internal and external perspective. We work with a lot of service providers, as you know, companies like SAVVIS, VeriSign, and Verizon Business Services, to provide the tools for them to analyze service provider infrastructures as well.

A lot of that information can be gathered into a central location, correlated, and presented as business intelligence or business activity monitoring for the IT infrastructure.
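As a rough illustration of the kind of centralized correlation Zhen describes, here is a minimal sketch. The log formats, usernames, and hostnames are all invented for the example; it simply normalizes events from two hypothetical sources into one stream, groups them by user, and flags after-hours access:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical, simplified log lines from two different sources.
firewall_logs = [
    "2008-11-20T02:14:09 ALLOW user=jsmith dst=payroll-db",
    "2008-11-20T09:30:12 ALLOW user=akumar dst=crm-app",
]
app_logs = [
    "2008-11-20T02:15:01 LOGIN user=jsmith app=payroll",
    "2008-11-20T09:31:44 LOGIN user=akumar app=crm",
]

def parse(line, source):
    # Split the timestamp off, then read key=value fields.
    ts_str, rest = line.split(" ", 1)
    fields = dict(kv.split("=") for kv in rest.split() if "=" in kv)
    return {"ts": datetime.fromisoformat(ts_str),
            "source": source,
            "user": fields["user"]}

# Normalize both sources into one event stream, then correlate by user.
events = [parse(l, "firewall") for l in firewall_logs] + \
         [parse(l, "app") for l in app_logs]

by_user = defaultdict(list)
for e in events:
    by_user[e["user"]].append(e)

# Flag users with any activity outside business hours (08:00-18:00).
after_hours = {u for u, evts in by_user.items()
               if any(e["ts"].hour < 8 or e["ts"].hour >= 18 for e in evts)}
print(sorted(after_hours))  # → ['jsmith']
```

A real log-management platform does this normalization and correlation at scale and across far messier formats, but the principle, one common event model across many sources, is the same.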

Gardner: Phil, the amount of data that we can extract from these systems inside the service providers is vast. I suppose what people are looking for is the needle in the haystack. Also, as you mentioned, it probably behooves these providers to offer more insights into how well they did or didn't do.

What's your take on this notion of BI for IT, and does it offer the SaaS providers an opportunity to get a higher level of insight and detail about what is going on within their systems for the assurance and risk mediation for their customers?

Wainewright: Yes, it does. This is an area where we are going to see best practices emerge. We're in a very early stage. Talking about keeping logs reminds me of what happened in the early days of Web sites and Web analytics. When people started having Web sites, they used to create these log files, in which they accumulated all this data about the traffic coming to the site. Increasingly, it became more difficult to analyze that traffic and to get the pertinent information out.

Eventually, we saw the rise of specialist Web-traffic analytics vendors, most of them, incidentally, providing their services as SaaS focused on helping the Web-site managers understand what was going on with their traffic.

IT is going to have to do the same thing. Anyone can create a log file, dump all the data into a log, and say that they've got a record of what's been going on. But that's the technically easy part. The difficult thing, as Jian said, is actually doing the business analytics and the BI to see what's going on and what the information means.

Increasingly, it comes back to IT accountability. If your service provider does go down, and if the logs show that the performance was degrading gradually over a period of time, then you should have known that. You should have been doing the analysis over time, so that you were ahead of that curve and were able to challenge the provider before the system went down.

If it's a good provider, which comes back to the question you asked, then the provider should be on top of that before the customer finds out. Increasingly, we'll see the quality of reporting that providers are doing to customers go up dramatically. The best providers will understand that the more visibility and transparency they provide the customers about the quality of service they are delivering, the more confidence and trust their customers will have in that service.

Gardner: As we mentioned, the expectations are increasing. The folks who rent an application for a few dollars a month actually have higher expectations on performance than perhaps far more expensive applications inside a firewall and the traditional delivery mechanisms.

Wainewright: That's right, Dana. People get annoyed when Gmail goes down, and that's free. People do have these high expectations.

Gardner: Perhaps we can meet those expectations, even as they increase, but even more important for these providers is the cost at which they deliver their services. Utilization rates, the amount of energy required per task, and metrics like that, surfaced through these log files and this BI, will decide their margins and how competitive they are in what we expect to be a fairly competitive field. In fact, we are starting to see signs of marketplace and auction types of activities around who can put up a service for the least amount of money, which, of course, will put more downward pressure on margins.

I've got to go back to Jian on this one. We can certainly provide for user expectations and SLAs, but ultimately how well you run your data center as a service provider dictates your survival ability or viability as a business.

Zhen: You're absolutely right. One of the things that service providers, SaaS providers, or cloud providers have always talked about is the economy of scale. Essentially, that's doing more with less in order to understand your IT infrastructure and understand your customer base. This is what BI is all about, right? You're analyzing your business, your user base, the user access, and all that information in trying to come up with some competitive advantage to either reduce cost or increase efficiency.

All that information is in logs, whether they are logs spewed out by your IT infrastructure or logs instrumented using agents or application performance monitoring types of tools. That information is there, and you need to be able to automate and enhance the way things are done. So, you need to understand and see what's going on in the environment.

Analyzing all those logs gives you a critical capability: not only managing hundreds or thousands of systems and making them more efficient, but bringing that BI throughout. Seeing how your users are accessing, reacting to, or changing your system makes it more efficient and faster for the user and, at the same time, reduces the cost of managing the infrastructure, as well as of doing business.

So, the need to understand and see what's going on is really driving the need to have better tools to do system analysis.

Gardner: Well, how about that Phil? With apologies to Monty Python, every electron is important, right?

Wainewright: Well, it certainly can be. I think the other benefit of providers monitoring this information is that, if they can build out a track record and demonstrate that they are providing better service, then maybe that's a way of defending themselves, of being able to justify asking higher prices than they might otherwise have done.

If the pricing is going to go down because of competitive pressures, there will be differential pricing according to the quality that providers can show they have a track record for delivering.

Zhen: I definitely agree with that. Being able to provide better SLAs, being able to provide more transparency, audit transparency, are things that enterprises care about. As many reports have mentioned, it's one of the biggest issues that's preventing enterprises from adopting the cloud or some of these SaaS applications. Not that the enterprises are not adopting, but the movement is still very slow.

The main reasons are security and transparency. As SaaS providers or service providers start providing a lot more information based on the data that they analyze, they can provide better SLAs, both from an uptime and performance perspective, not just uptime. A lot of the SLAs today just talk about uptime. If they can provide a lot of that information by analyzing the information that they already have -- the log data, access data, and what not -- that’s a competitive advantage for the providers. They can charge a higher price, and often, enterprises are willing to pay for that.

Wainewright: I've been speaking to enterprise customers, and they are looking for better information from the providers about those performance metrics, because they want to know what the quality of service is. They want to know that they're getting value for money.

Gardner: Well, we seem to have quite a set of pressures. One, to uphold performance, provide visibility, reduce risk, and offer compliance and auditing benefits. On the other side, it's pure economics. The more insight and utilization you have, and the more efficiently you can run your data centers, the more you can increase your margin and scale out to offer yet more services to more types of customers. It seems pretty clear that there's a problem set and a solution set.

Jian, you mentioned that you had several large service providers as customers. I don’t suppose they want all the details about what happens inside their organizations to come out, but perhaps you have some use case scenarios. Do you have examples of how analytics from a system’s performance, vis-à-vis log data, helps them on either score, either qualitatively in terms of performance and trust, and more importantly, over time, their ability to reap the most efficiency out of their system?

Zhen: These are actually partners of LogLogic. We've worked with these service-provider partners to provide managed services or cloud services for log management to the end customers. They're using it both working with the customers themselves, as well as using it internally.

Often, the use cases are really around compliance and security. That’s where the budget is coming from. Compliance is the biggest driver for some of these tools today.

However, some of the reports I mentioned, especially from Enterprise Strategy Group (ESG), one of the fastest-growing use cases for log management is operational use. This means troubleshooting, forensic analysis, and being able to analyze what's going on in the environment. But, the biggest driver today for purchasing that type of log-management solution is still compliance -- being able to comply with SOX, PCI, HIPAA, and other regulations.

Gardner: Let’s wrap up with some crystal-ball gazing. First, from Phil. How do you see this market shaking out? I know we're under more economic pressure these days, given the pending or imminent global recession, but it seems to me that it could be a transformative pressure, a catalyst, toward more adoption of services, and keeping application performance at the lowest possible cost. What's your sense of where the market is going?

Wainewright: It’s a terrible cliché, but it’s about doing more with less. It may be a cliché, but it’s what people are trying to do. They've got to cut costs as organizations, and, at the same time, they have to actually be more agile, more flexible, and more competitive.

That means a lot of IT organizations are looking to SaaS and they're looking to cloud computing, because this is the way of getting resources without a massive outlay and starting to do things with a relatively low risk of failure.

They're finding that budgets are tight. They need to get things done quickly. Cloud or SaaS allows them to do that, and therefore there's a rosy future, even in bleak economic conditions, for this type of offering.

There are still a lot of worries among IT people as to the reliability, security, privacy compliance, and all the other factors around SaaS. Therefore, the SaaS providers have to make sure that they're monitoring and reporting on all of that. Likewise, the IT people, for their own peace of mind, need to make their own arrangements, so that they can keep an eye on their side as well. I think everyone is going to be tracking and monitoring each other.

The upside is that we're going to get more enterprise-class performance and enterprise-class infrastructure being built around the cloud services and the SaaS providers, so that enterprises will be able to have more confidence. So, at the end of the economic cycle, once people start investing again, I think we'll see people continue to invest in cloud services and SaaS, not because it's the low-cost option, but because it's the proven option that they have confidence in.

Gardner: Jian Zhen, how do you and LogLogic see the market unfolding? Where do you think the opportunities lie?

Zhen: I definitely agree with Phil. With the current economic environment, a lot of enterprises will start looking at SaaS and cloud services seriously and consider them.

However, enterprises are still required to be compliant with government regulations and industry mandates, and that's not going to go away. What the service providers and the SaaS providers can do to attract these customers is to make themselves more attractive: become compliant with some of these regulations, provide more transparency, and give people a view into who is accessing the data and how they protect it.

Amazon did a great thing, which was to release a white paper on some of their security practices. It's very high-level, but it's a good start. Service providers need to start thinking more along the lines of how to attract these enterprise customers, because the enterprise customers are seriously considering SaaS services.

Phil had an article a while back calling for a SaaS code of conduct. Phil, one of the things you should definitely add there is a requirement that service providers offer this transparency. That's something service providers can use to offer, essentially, a competitive advantage to their enterprise customers.

Gardner: Now, you sit at a fairly advantageous point, or a catbird's seat, if you will, on this regulatory issue. As enterprises seek more SaaS and cloud services for economic and perhaps longer-term strategic reasons, do we need to rethink some of our compliance and regulatory approaches?

We have a transition in the United States in terms of the government. So, now is a good time, I suppose, to look at those sorts of things. What, from your perspective, should change in order to allow companies to more freely embrace and use cloud and SaaS services, when it comes to regulation and compliance?

Zhen: As far as changing the regulations, I'm not sure there's a lot to change. We've seen SOX become a very high-level and very costly regulation to comply with. However, we've also seen PCI. That's much more specific, and companies and even service providers can adopt and use some of these requirements.

Gardner: That's the payment card issue, right?

Zhen: Correct. The PCI data-security standard is a lot more specific as to what a company has to do in order to be compliant with it. Actually, one of the appendixes is specifically for service providers. A lot of service providers have used, for example, a Statement on Auditing Standards (SAS) 70 Type II report as one of the things they show the customer to demonstrate that they are compliant. However, I don't think SAS 70 Type II is sufficient, mainly because the controls are described by the service providers themselves.

Essentially, they set their own requirements and they say, "Hey, we meet these requirements." I don't think that's sufficient. It needs to be something that's more of an industry standard, like PCI, but maybe a little bit different, and definitely more specific as to what the service providers need to do.

On top of that, we need some kind of information on when security incidents happen with service providers. One of the things that 44 states have today is data-breach notification laws. Those laws obviously don't apply directly to SaaS providers, but in order to provide more transparency, there may need to be some standard or some processes for how breaches are reported and handled.

Some of these things certainly will help enterprises be more comfortable in adopting the services.

Gardner: Well, there are topics there, Phil, for about 150 blog entries: this whole notion of how to shift regulation and compliance in order to suit a cloud economy.

Wainewright: Yeah, it's going to be a difficult issue for the cloud providers to adapt to, but a very important one. This whole issue of SAS 70 Type II compliance, for example. If you're relying on a service provider for part of the services that you provide, then your SAS 70 Type II needs to dovetail with their SAS 70 Type II processes.

That’s the kind of issue that Jian was alluding to. It's no good just having SAS 70 Type II, if the processes that you've got are somehow in conflict with or don't work in collaboration with the service providers that you are depending on. We have to get a lot smarter within the industry about how we coordinate services and provide accountability and audit visibility and trackability between the different service providers.

Gardner: Very good. We've been discussing requirements and expectations around SaaS providers, looking at expected increases and demands for visibility, and management and performance metrics. Helping us to better understand these topics -- and I'm very happy that they joined us -- are Jian Zhen, senior director of product management at LogLogic. Thanks for your input, Jian.

Zhen: Thank you, Dana.

Gardner: Also Phil Wainewright, independent analyst, director of Procullux Ventures, and SaaS blogger at ZDNet and ebizQ. Always good to have you here Phil, thank you.

Wainewright: Thanks, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Transcript of a BriefingsDirect podcast on the role of log management and analytics as enterprises move to cloud computing and SaaS. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Thursday, November 06, 2008

Implementing ITIL Requires Log Management and Analytics to Help IT Operations Gain Efficiency and Accountability

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on how to run your IT department well by implementing proven standards and methods, and particularly leveraging the Information Technology Infrastructure Library (ITIL) prescriptions and guidelines.

We’ll talk with an expert on ITIL and why it’s making sense for more IT departments and operations around the world. We’ll also look into ways that IT leaders can gain visibility into systems and operations to produce the audit and performance data trail that helps implement and refine such frameworks as ITIL.

We’ll examine the use of systems log management and analytics in the context of ITIL and of managing IT operations with an eye to process efficiency, operational accountability, and systems behaviors, in the sense of knowing a lot about the trains, in order to help keep them running on time and at the lowest possible cost.

To help us understand these trends and findings we are joined by Sudha Iyer. She is the director of product management at LogLogic. Welcome to the show, Sudha.

Sudha Iyer: Thank you.

Gardner: We’re also joined by Sean McClean. He is a principal at KatalystNow in Orlando, Florida. It's a firm that handles mentoring, learning, and training around ITIL and tools used to implement ITIL. Welcome to the show, Sean.

Sean McClean: Thank you very much.

Gardner: Let's start by looking at ITIL in general for those folks who might not be familiar with it. Sean, how are people actually using it and implementing it nowadays?

McClean: ITIL has a long and interesting history. It's a series of concepts that have been around since the 1980s, although a lot of people will dispute exactly when it got started and how. Essentially, it started with the Central Computer and Telecommunications Agency (CCTA) of the British government.

What they were looking to do was create a set of frameworks that could be followed for IT. Throughout ITIL's history, it has been driven by a couple of key concepts. If you look at almost any other business or industry, accounting for example, it’s been around for years. There are certain common practices and principles that everyone agrees upon.

IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT.

In recent times, there has been a lot more focus on that, particularly in two general areas. One, ITIL has had multiple revisions. Initially, it was a drive to handle support and delivery. Now, we are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business. That has kind of been the constant focus of ITIL.

Gardner: So, it's really about maturity of IT as a function that becomes more akin to other major business types of functions or management functions.

McClean: Absolutely. I think it's interesting, because anyone in the IT field needs to remember that we are in a really exciting time and place. Number one, because technology revises itself on what seems like a daily basis. Number two, because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks around what we all agree is the best way to handle things.

As I said, in areas like accounting or sales, those things are consistent. They stay that way for eons, but this one is a new and changing environment for us.

Gardner: Are there any particular stumbling blocks that organizations have as they decide to implement ITIL? When you are doing training and mentoring, what are the speed bumps in their adoption pattern?

McClean: A couple of pieces are always a little confusing when people look at ITIL. Organizations assume that it's something you can simply purchase and plug into your organization. It doesn't quite work that way. As with any kind of framework, it's there to provide guidance and an overall common thread or a common language. But the practicality of taking that common thread or common language and then incorporating or interpreting it in your business is sometimes hard to get your head around.

It's interesting that we have the same kind of confusion when we just talk. I could say the word “chair,” and the picture in your head of what a chair is and the picture in my head of what a chair is are slightly different.

It's the same when we talk about adopting a framework such as ITIL that's fairly broad. When you apply it within the business, things like business governance and auditing and compliance rules have to be considered and interpreted within that ITIL framework. A lot of times, people who are trying to adopt ITIL struggle with that.

If we are in the healthcare industry, we understand that we are talking about incidents, or we understand that we are talking about problems. We understand that we are talking about certain things that are identified in the ITIL framework, but we have to align ourselves with rules within the Health Insurance Portability and Accountability Act (HIPAA). Or, if we are an accounting organization, we have to comply with a different set of rules. So it's that element that's interesting.

Gardner: Now, what's interesting to me about the relationship between ITIL and log and systems analytics is that ITIL is really coming from the top-down, and it’s organizational and methodological in nature, but you need information, you need hard data to understand what's going on and how things are working and operating and how to improve. That's where the log analytics comes in from the bottom-up.

Let's go to Sudha. Tell us how a company like LogLogic uses ITIL, and how these two come together -- the top-down and the bottom-up?

Iyer: Sure. That's actually where the rubber meets the road, so to speak. As we have already discussed, ITIL is generally a guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are these sets of policies with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.

There is a tight fit there with what we provide as our log-management platform. LogLogic has been around for a number of years and is the leader in the log management industry. It allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on: their storage capacity and planning for the future, how many more firewalls are required, or the usage pattern of a particular server in the organization.

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in.
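As a concrete, hypothetical illustration of the usage-pattern metrics Iyer mentions, the sketch below (invented log format and hostnames, not LogLogic's actual data model) counts requests per server per hour from a normalized log stream, the kind of crude metric an operator could feed into capacity planning:

```python
from collections import Counter
from datetime import datetime

# Hypothetical syslog-style entries: timestamp, host, message.
log_lines = [
    "2008-10-01T09:05:00 web01 GET /index",
    "2008-10-01T09:20:00 web01 GET /report",
    "2008-10-01T10:02:00 web02 GET /index",
    "2008-10-01T09:45:00 web01 GET /login",
]

# Count requests per (host, hour) bucket.
usage = Counter()
for line in log_lines:
    ts_str, host, _msg = line.split(" ", 2)
    hour = datetime.fromisoformat(ts_str).hour
    usage[(host, hour)] += 1

# Report the usage pattern per server.
for (host, hour), n in sorted(usage.items()):
    print(f"{host} {hour:02d}:00 -> {n} requests")
```

Run against real log volumes, the same bucketing idea surfaces which servers are hot at which hours, which is exactly the "do more with less" signal the discussion keeps returning to.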

Gardner: Back to you, Sean. When companies are trying to move beyond baseline implementation and really start getting some economic benefits, which of course are quite important these days from their ITIL activities, what sort of tools have you seen companies using? To what degree do you need to dovetail your methodological and ITIL activities with the proper tools down in the actual systems?

McClean: When you're starting to talk about applying the actual process to the tools, that's the space that's the most interesting to me. It's that element where you need some common thread that you can pull through all of those tools.

Today, in the industry, we have countless different tools that we use, and we need common threads that can pull across all of those different tools and say, “Well, these things are consistent and these things will apply as we move forward into these processes.” As Sudha pointed out, having an underlying log system is a great way to get that started.

The common thread in many cases across those pieces is maintaining the focus on the business. That's always where IT needs to be more conscious and to be constantly driving forward. Ultimately, where do these tools fit to follow the business, and how do these tools provide the services that ultimately support the business in doing the things we are trying to get done?

Does that address the question?

Gardner: I think so. Sudha, tell us about some instances where LogLogic has been used and ITIL has been the focus or the context of its use. Are there some general use-case findings? What have been some of the outcomes when these two bottom-up, top-down approaches come together?

Iyer: That's a great question. The bottom line is the customers, and we have a very large customer base. It turns out, according to some surveys we have done in our customer base, that the biggest driver for a framework such as ITIL is compliance. The importance of ITIL for compliance has been recognized, and that is the biggest impact.

As Sean mentioned earlier, it's not a package that you buy and plug into your network and there you go, you are compliant. It's a continuous process.

What some of our customers have figured out is that adopting our log-management solutions allows them to create better control and visibility into what is actually going on on their networks and their systems. From many angles, whether it's a security professional or an auditor, they’re all looking at whether you know what's going on, whether you were able to mitigate anything untoward that's happening, and whether there is accountability. So, we get feedback in our surveys that control and visibility have been the top driver for implementing such solutions.

Another item that Sean touched on, reducing IT cost and improving service quality, was the other driver. When they look at a log-management console, they can see how many admin accesses were denied and that it happened between 10 p.m. and midnight. They quickly alert, get on the job, and try to mitigate the risk. This is where they have seen the biggest value and return on investment (ROI) on implementations of LogLogic.
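The late-night denied-access scenario Iyer mentions can be sketched as a simple watch-window check. Everything here -- the event tuples, the window, and the threshold of three -- is a hypothetical illustration, not LogLogic's alerting logic.

```python
from datetime import datetime, time

# Hypothetical parsed events: (timestamp, user, action).
events = [
    (datetime(2008, 6, 1, 22, 15), "admin", "ACCESS_DENIED"),
    (datetime(2008, 6, 1, 23, 5),  "admin", "ACCESS_DENIED"),
    (datetime(2008, 6, 1, 23, 40), "admin", "ACCESS_DENIED"),
    (datetime(2008, 6, 1, 14, 0),  "admin", "ACCESS_DENIED"),
]

def denied_in_window(events, start=time(22, 0), end=time(23, 59), threshold=3):
    """Raise an alert when denied admin accesses inside the watch
    window reach the threshold; daytime denials are ignored here."""
    hits = [e for e in events
            if e[2] == "ACCESS_DENIED" and start <= e[0].time() <= end]
    return len(hits) >= threshold, len(hits)

alert, count = denied_in_window(events)
print(alert, count)  # → True 3
```

The design choice worth noting is that the alert fires on a count within a window rather than on any single event, which is what lets operators "get on the job" for a pattern instead of chasing one-off noise.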

Gardner: Sean, the most recent version of ITIL, Version 3, focuses, as you were alluding to, on IT service management, with IT behaving like a service bureau, where it is responsible, almost on a market-forces basis, to its users, its constituents, in the enterprise. This increasingly involves service-level agreements (SLAs) and contracts, either explicit or implicit.

At the same time, it seems as if we’re engaging with a higher level of complexity in our data centers: increased use of virtualization and increased use of software-as-a-service (SaaS) type services.

What's the tension here between the need to provide services with high expectations and a contract agreement and, at the same time, this built-in complexity? Is there a role for tools like LogLogic to come into play there?

McClean: Absolutely. There is a great opportunity with regard to tools such as LogLogic from that direction. ITIL Version 2 focused simply on support and delivery, those two key areas. We are going to support the IT services, and we are going to deliver along the lines of these services.

ITIL Version 2 started to talk a lot about alignment of IT with the business, because a lot of the time IT carries on and does things without necessarily understanding what the business is doing. An IT department focuses on email, but it isn't necessarily looking at the fact that email is supporting whatever it is the business is trying to accomplish, or how well that service does so.

As we moved into ITIL Version 3, they started trying to go beyond simply saying it's an element of alignment and moved the concept of IT into an area where it's a part of the business. Therefore, it's offering services within and outside of the business.

One of the key elements in the new manuals in ITIL V3 is the talk of service strategy, and it's a hot topic among the ITIL community: this push toward a strategic look at IT, and developing services as if you were your own business.

IT is looking and saying, “Well, we need to develop our IT services as a service that we would sell to the business, just as any other organization would.” With that in mind, it's all driving toward the question of how we can turn our assets into strategic assets. If we have a service and it's made up of an Exchange server, or we have a service and it's made up of three virtual machines, what can we do with those things to make them even more valuable to the business?

If I have an Exchange server, is there some way that I can parcel it out or farm it to do something else that will also be valuable?

Now, with LogLogic's suite of tools, we’re able to pull log information about those assets. That's when you can start investigating how to make the existing assets more value-driven for the organization's business.

Gardner: Back to you, Sudha. Have you had customer engagements where you have seen that this notion of being a contract service provider puts a great deal of responsibility on them, that they need greater insight and, as Sean was saying, need to find even more ways to exploit their resources, provide higher level services, and increase utilization, even as complexity increases?

Iyer: I was just going to add to what Sean was describing. You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized. Or, it's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time.” That's where the open API, the open platform that LogLogic is, comes into play.

Today, IT is heavily into the service-oriented architecture (SOA) methodology. So, we ask, “Do you have to actually have a console login to understand what's going on in your enterprise?” No. You may be a storage administrator located far from the data center where a LogLogic solution is deployed, but you still want to analyze and predict how storage capacity is going to be used over the next six months or a year.

The open API, the open LogLogic platform, is a great way for these other entities in an organization to leverage the LogLogic solution in place.
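The capacity-planning question Iyer raises, projecting storage usage six months out from collected log data, can be sketched with a simple least-squares trend line. The monthly figures below are hypothetical, and this is an illustration of the analysis idea, not of LogLogic's actual API.

```python
# Hypothetical monthly storage usage in GB, as might be pulled from a
# log-management platform; a least-squares line projects usage ahead.
def project_usage(monthly_gb, months_ahead=6):
    """Fit y = a + b*x over the observed months and extrapolate."""
    n = len(monthly_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, monthly_gb))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    # Last observed month is index n - 1; project months_ahead beyond it.
    return intercept + slope * (n - 1 + months_ahead)

usage = [120, 135, 149, 166, 180, 195]  # last six months, GB (illustrative)
print(round(project_usage(usage)))  # → 285
```

In practice the raw numbers would come over the platform's API rather than being typed in, which is exactly the point of an open platform: the storage administrator can run this analysis without a console login.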

Gardner: Another thing that has impressed me with ITIL over the years is that it allows for sharing of information on best practices, not only inside of a single enterprise but across multiple ones and even across industries and wide global geographies.

In order to better learn from the industry's hard lessons and mistakes, you need to be able to share across common denominators, whether it's APIs, measurements, or standards. I wonder if the community-based aspect of log behaviors and system behaviors, and sharing them, also plays into that larger ITIL method of general industry best practices. Any thoughts along those lines, Sean?

McClean: It's really interesting that you hit on that piece, because globalization is one of the biggest drivers for getting ITIL moving. More and more businesses have started reaching outside their national borders, whether we call them offshore resources, outsourced resources, or however you want to refer to them.

As we become more global, businesses are looking to leverage other areas. The more you do that, the larger you grow your business in trying to make it global, the more critical it is that you have a common ground.

Back to that illustration of the chair, when we communicate and we think we are talking about the same thing, we need some common point, and without it we can't really go forward at all. ITIL becomes more and more valuable the more and more we see this push towards globalization.

It’s the same with a common thread or shared log information for the same purposes. The more you can share that information and bring it across in a consistent manner, then the better you can start leveraging it. The more we are all talking about the same thing or the same chair, when we are referring to something, the better we can leverage it, share information, and start to generate new ideas around it.

Gardner: Sudha, anything to add to that in terms of community, and the fact that many of these systems are outputting the same logs? It's making that information available in the proper context that becomes the value-add.

Iyer: That's right. Let's say you are Organization A and you have vendor relationships and customer relationships outside your enterprise. So, you’ve got federated services. You’ve got different kinds of applications that you share between these two different constituents -- vendors and customers.

You probably already have an SLA with these entities, and you want to make sure you are delivering on these operations. You want to make sure there is enough uptime. You want to grow toward a common future where your technologies are not far behind, and sharing this information and making sure of what you have today is very critical. That's where the actual value is.

Gardner: Let's get into some examples. I know it's difficult to get companies to talk about sensitive systems in their IT practices. So perhaps we could keep it at the level of use-case scenarios.

Let's go to Sean first. Do you have any examples of companies that have taken ITIL to the level of implementation with tools like log analytics, and do you have some anecdotes or metrics of what some of the experiences have been?

McClean: I wish I had metrics. Metrics are the one thing that seems to be very hard to come by in this area. I can think of a couple of instances where organizations were rolling out ITIL implementations. In implementations where I am engaged, specifically in mentoring, one of the things I try to get them to do is to dial into the community and talk to other people who are also implementing the same types of processes and practices.

There’s one particular organization out in the Dallas-Fort Worth, Texas area. When they started getting into the community, even though they were using different tools, the underlying principles that they were trying to get to were the same.

In that case they were able to start sharing information across two companies in a manner that was saying, “We do these same things with regard to handling incidents or problems and share information, regardless of the tool being set up.”

Now, in that case I don't have specific examples of them using LogLogic, but what invariably came out in these discussions was that what we need underneath is the ability to get proactive and start preventing these incidents before they happen. Then, we need metrics and some kind of reporting system, where we can start checking issues before they occur and getting the team on board to fix them before they happen. That's where they started getting into log-like tools and looking at using log data for that purpose.

Iyer: That corroborates one of the surveys we developed and conducted last quarter. Organizations reported that the challenges of implementing ITIL were twofold.

The first was the process of implementation and the skill set that it required. They wanted to make sure there was a baseline, and measuring the quality of improvement was the biggest impediment.

The second one was measuring the result of this process improvement. You get your implementation of the ITIL process itself, but what did it get you? Where were you before, and where did you end up after the implementation?

I guess when you were asking for metrics, you were looking for those concrete numbers, and that's been a challenge, because you need to know what you need to measure, but you don't know that, because you are not yet skilled enough in the ITIL practices. Then, you learn from the community, from the best-of-breed case studies on the Web sites and so forth, you go your merry way, and the baseline numbers get collected for the very first time from the log tools.
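The baseline-and-improvement measurement Iyer describes reduces to a small calculation once the log tools supply the numbers. The weekly incident counts below are invented for illustration; the shape of the comparison is the point.

```python
from statistics import mean

# Hypothetical weekly incident counts taken from log reports: the first
# list is the baseline captured before an ITIL rollout, the second after.
before = [42, 38, 45, 40]
after = [30, 28, 33, 29]

def improvement(baseline, current):
    """Percent reduction against the baseline -- the concrete number
    that surveys and auditors keep asking for."""
    b, c = mean(baseline), mean(current)
    return (b - c) / b * 100

print(round(improvement(before, after), 1))  # → 27.3
```

Without the baseline list, collected before the process change, the second number is meaningless, which is exactly the "where were you before" problem described above.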

Gardner: I imagine that it's much better to get early and rapid insights from the systems than to wait for the SLAs to be broken, for user surveys to come back and say, “We really don't think the IT department is carrying its weight.” Or, even worse, to get outside customers or partners coming back with complaints about performance or other issues. It really is about early insights and intervention, which seems to dovetail well with what ITIL is all about.

McClean: I absolutely agree with that. Early in my career within ITIL, I had a debate with a practitioner on the other side of the pond about SLAs. I had indicated that it's critical to get the business engaged in the SLA immediately.

His first answer was no, it doesn't have to happen that way. I was flabbergasted. You provide a service to an organization without an SLA first? I thought “This can't be. This doesn't make sense. You have to get the business involved.”

When we talked through it and got down to real cases, it turned out that it's not that he felt the SLA didn't need to be negotiated with the business. What he meant was that we need to get data and reports about the services that we are delivering before we go to the customer, the customer in this case being internal.

His point was that we need to get data and information about the service we are delivering, so that when we have the discussion with a business about the service levels we provide, they have a baseline to offer. I think that's to Sudha's point as well.

Iyer: That's right. Actually, it goes back to one of the opening discussions we had here about aligning IT to the business goals. ITIL helps organizations make the business owners think about what they need. They do not assume that the IT services are just going to be there, and it's not an afterthought. It's a part of that collective working toward common success.

Gardner: Let's wrap up our discussion with some predictions or look into the future of ITIL. Sean, do you have any sense of where the next directions for ITIL will be, and how important is it for enterprises that might not be involved with it now to get involved, so that they can be in a better position to take advantage of the next chapters?

McClean: The last part is the most critical. People who are not engaged or involved in ITIL yet will find they are starting to drop out of a common language, one that enables you to do just about everything else you do with regard to IT in your business.

If you don't speak the language and the vendors that provide the services do, then you have a hard time understanding what it is the vendors are offering. If you don't speak the language and you are trying to share information, then you have a hard time moving forward in that sense.

It’s absolutely critical for businesses and enterprises to start understanding the need for adoption. I don't want to paint it as if everybody needs to get on board with ITIL, but you need to get into it and be aware of it, so that you can help drive its future directions.

As you pointed out earlier, Dana, it's a common framework, but it's also commonly contributed to. It's very much an open framework, so if a new way of doing things comes up, is shared, and makes sense, it will probably be the next thing that's adopted. It's just like the English language, where new terms and phrases are developed all the time. It's very important for people to get on board.

In terms of what's the next big front: when you have a broad framework like this that says, “Here are common practices, best practices, and IT practices,” and the industry matures, I think we will see a lot of steps in the near future where people are looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And how do I quantify a business with regard to adhering to that framework?”

There has been a little bit of that, and certainly we have ITIL certification processes and all of those, but I think we are going to see more drive to understand and formalize that in upcoming years.

Gardner: Sudha, it certainly seems like a very auspicious pairing, the values that LogLogic provides and the type of organizations that would be embracing ITIL. Do you see ITIL as an important go-to market or a channel for you, and is there in fact a natural pairing between ITIL-minded organizations and some of the value that you provide?

Iyer: Actually, LogLogic believes that ITIL is one of those strong frameworks that IT organizations should be adopting. To that effect, we have been delivering ITIL-related reporting since we first launched the Compliance Suite. It has been an important component of our support for IT organizations looking to improve their productivity.

In today’s climate, it's very hard to predict how IT spending will be affected. The more we can do to give customers visibility into their existing infrastructure, networks, and so on, the better off it is for the customer and for ourselves as a company.

Gardner: We’ve been discussing how enterprises have been embracing ITIL and improving the way that they produce services for their users. We’ve been learning more about visibility and the role that log analytics and systems information plays in that process.

Helping us have been our panelists. Sudha Iyer is the director of product management at LogLogic. Thanks very much, Sudha.

Iyer: Thank you. It's a pleasure, to be sure.

Gardner: Sean McClean, principal at KatalystNow, which mentors and helps organizations train and prepare for ITIL and its benefits. It’s based in Orlando, Florida. Thanks very much, Sean.

McClean: Thank you. It’s been a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.