Tuesday, June 26, 2012

HP Expert Chat Explores How Insight Remote Support and Insight Online Bring Automation, Self-Solving Capabilities to IT Problems

Transcript of a BriefingsDirect expert chat with HP on new frontiers in automated and remote support.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on new approaches to data center support, remote support, and support automation.

Data centers must do whatever it takes to make businesses lean, agile, and intelligent. Modern support services then need to be able to empower the workers and IT personnel alike to maintain peak control, and to keep the systems and processes performing reliably at lowest cost.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. To help find out more about how to best implement improved and productive IT support processes, I recently moderated an HP Expert Chat session with Tommaso Esmanech, Director of Automation Strategies at HP Technology Services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Tommaso has more than 16 years of HP IT support experience, and has been a leader in designing new innovations in support automation.

In our discussion now, you’ll hear the latest on how HP is revolutionizing support to offer new innovations in support automation and efficiency.

As part of our discussion, we're also joined by two other HP experts, Andy Claiborne, Usability Lead for HP Insight Remote Support, and Paddy Medley, Director of Enterprise Business IT for HP Technology Services.

Our overall discussion begins now with a brief overview from me of the data center agility market, and the need for improved IT support capabilities.

I begin by looking at why industry and business leaders are forcing a rethinking of data centers and their support. Agility is the key nowadays. The speed of business has really never been faster, and it needs to be ever more responsive. It seems that even more time compression is involved in reacting to customers. And reacting to markets now is more than essential, it's about survival. Those that can't keep up are in a pretty tough -- even perilous -- situation.

Modern data centers therefore must serve many masters, but ultimately the data center is a tool of business, and it must therefore perform at the speed of business. For example, nowadays the impacts of big data are demanding that decisions are increasingly data-driven. A lot more data needs to be tapped and mined. Decisions need to be made based on data -- and those business decisions need to be conducted with ongoing visibility, performance analytics, and, because time is important, in near real-time.

But even as data centers support these new levels of agility and analysis, they also need to become cost-reduction centers. Modern IT must do more for less, and that extends especially to ongoing operations and support, which for many people are their largest long-term costs in their total cost equation.

Big data requirements

Data centers are not only supporting many types of converged infrastructure and, increasingly, virtualized workloads. They're also supporting big data requirements -- as we pointed out, data continues to explode -- but they must do all this efficiently, with increased automation as a key component of that efficiency. Moving toward lower energy costs is increasingly important as well.

These new requirements -- high efficiency, the best in performance management, and operational governance -- are all essential to delivering never-failing reliability. We can also move now toward proactive types of support, to continue the ongoing improvement and to keep systems meeting those high expectations.

In a nutshell, data centers must do whatever it takes to make businesses lean, agile, and intelligent, as businesses innovate and excel in their fast-changing markets. Modern support services need to empower workers and IT personnel alike to maintain peak control, even within an ecosystem of support, so constituents can keep these systems and processes performing reliably, at the lowest possible cost.

Fortunately, today's modern data centers are like no others before. For the first time, data centers can accommodate both the interrelated short-term tactical imperatives and the long-term strategic requirements of their dynamic businesses.

By delivering fit-for-purpose utilization and converged infrastructure control -- and by putting a priority on energy conservation and automated support -- total costs are no longer spiraling out of control. By doing all of this correctly -- managing your data center for efficiency and putting in proactive support to continue operational efficiency -- you can gain huge payoffs.

But there are big challenges in getting there as well. So it's important to execute properly to keep that efficiency continuing and building over time. This is, after all, a journey. So today, we're going to learn about how modern data centers are being built for business demands first and foremost, and we'll see how converged infrastructure methods and technologies are being used to retrofit older data centers into fleet, responsive engines of innovation.

We'll also hear specifically how HP is redefining modern data-center support, enabling far more insight into performance and operations, and modernizing through efficiency projects like Voyager, Moonshot, and Odyssey -- the big initiatives at HP that we've heard quite a bit about, and that are changing the very definition of the data center.

Moreover, we're going to see how HP Technology Services places a proactive edge on service support. And they’re pioneering support automation and remote support, with all of this designed to make IT more responsive so that the businesses themselves can stay adaptive.

I now have the pleasure of introducing our main speaker, Tommaso Esmanech, Director of Automation Strategies at HP in the Technology Services Group. He's going to provide an overview of how HP is revolutionizing support to offer new innovation in support automation. Tommaso leads the deployment and business impact of Web services implementation, change management, and technologies intended to deliver faster and more customer-oriented services via the Internet.

Support automation

Tommaso Esmanech: Thank you, Dana, and good day to everyone joining today. Before we dive into how HP is implementing support automation and enabling a new and a next generation of data centers, we need to understand what HP is trying to achieve with support automation.

Our intent is to automate the entire support process, eliminate manual work, and improve productivity across the entire enterprise. This involves finding solutions for software and hardware, and making hardware and software work seamlessly together while providing a best-in-class customer experience.

What we need to understand is that the world is changing. Customers are using devices that now provide a new, innovative experience. The front end is becoming easier. Customers demand integrated capabilities and are requesting a seamless experience, though the back end, the data center, is still complex, intricate, and supplied by multiple vendors.

You have network, storage, and management software that need to start working together. We began the journey about 18 months ago at HP to make that change, and we've called it Converged Infrastructure. HP took on the journey mostly because we're the only provider in the industry that supplies all the components to make the data center run seamlessly. We're the only provider of data-center network solutions, storage, servers, and management software.

Let’s put this in context of support automation. When you have hardware and software working together and you’re supplying services within that chemistry, you achieve a powerful position for customers. Furthermore, if you're able to automate the entire support and service process, you provide a win-win situation for you, our customers, our HP partners, and for HP, of course.

Now, let’s sit back and look at how this support has changed throughout the years. Support used to be very manual. A lot of the activities used to reside on site where a very qualified workforce, customer engineers and system engineers, would interact to resolve and manage situations.

In the early '90s, we saw a change with infrastructure support moving from decentralized to centralized global and regional centers, moving routine activities into those centers and providing a new role for the customer engineers by focusing on value-added infrastructure and capabilities.

In the '90s, we saw the explosion of the Internet. The basic task was to move sales, service, self-solve knowledge bases, chat, and case management to the Web. A lot of these activities were still manual, relying on human activity to determine the root cause of a problem.

In the 2000s, we saw more growth of machine-to-machine diagnostics. Now, imagine that we can completely revolutionize that experience. We can integrate the entire support delivery process, leverage the machine-to-machine experience, combine it with customer-controlled options and information, and really blend remote support, onsite, phone, Web, and machine-to-machine into a new automated experience. We believe that unimaginable efficiency can be achieved.

Gardner: Tommaso, I just have a quick question. As we talk about support automation, how is this actually reaching the customer? How do these technologies get into the sites where they’re needed, and what are some of the proof points that this is making an impact?

Intelligent devices

Esmanech: Let me talk about how we’re bringing the support automation to the customer. It starts with how we build intelligence and connectivity into the devices. You probably followed the announcement in February of our new ProLiant servers, our Generation 8 servers.

We have basically embedded more support capabilities into the servers' DNA. We call it Insight Online. As of December 2012, we will be able to support the existing installed base in a similar fashion. This provides the customer a truly one-stop-shop experience for the entire IT data center.

Now that it is easier to utilize and take advantage of an automated support infrastructure, what are the key points? You don't necessarily have to make a phone call. You don't have to wait on hold or provide a problem description. All those activities are automated, because the machine tells us how it's feeling and what its health status is.

Furthermore, if we compare our support infrastructure to standard human interaction and technical support, we've seen a 66 percent improvement in problem resolution. All these numbers are great for your business.

How much does it cost in downtime? What if your individual servers are impacting your factory? For us, it's about keeping your systems up and running, making sure that you meet the customer commitments, and delivering your products on time.

You may say, “Well, machine-to-machine support automation existed before.” Yes, some of it did. What we added just recently is a new customer experience. Previously, the management of the infrastructure and access to information about how it was performing were very much limited to local management, restricted to the technical few who knew how to use the tools and how to read the results.

With Insight Online, accessible through the Web, we now provide secure, personalized anytime/anywhere access to the information. We're totally changing the dynamics from few who had access to those who need to have access to the information. That reduces high learning times that were necessary before, and moves to the user-friendly, innovative, and integrated content that our customers are requesting.

Furthermore, Insight Online is integrated in real-time with the back end. It's not just a report or dashboard of information that is routinely updated. It truly becomes a management tool through which you can view the infrastructure.

One of the other key aspects of Insight Online, this new Web experience, is that we didn't want to create a new portal. We made a conscious decision to integrate it with the existing capabilities that you're using for basic support tasks, like accessing a knowledge base, downloading drivers and patches, and downloading documentation. The access to the information has to be seamless.

We've also leveraged HP Passport, the identification methodology that you use within your HP experience, providing one infrastructure and not multiple access points.

Gardner: Tommaso, can you give us a bit more detail about how it all comes together, the server management and the support experience?

Customer connectivity

Esmanech: It starts with the connectivity on the customer side. We have a new Generation 8 with embedded DNA that directly connects to the HP back end through Insight Remote Support. Through Insight Remote Support, we're able to collect information and provide alerts about events, warranty, case-management status, and collect all the information necessary for us to deliver on the customer commitments.

In this new version, we've embedded new functions. For example, we allow you to identify the HP service partner that is managing your environment; it could be HP, or it could be a certified HP service partner. We have authentication through HP Passport that permits access to the information on Insight Online. Last but not least, we've achieved a faster installation process, eliminating a lot of the hurdles that made adoption more difficult. It's now significantly easier to adopt Insight Online.

What's important to recognize is that, as we collect this bulk of knowledge and information on how these products are performing, Insight Remote Support does rule matching and event correlation.

It not only provides, as we say, traffic-light alerts. You're able to correlate an event with other events to propose a multipurpose action and, in the end, trigger the appropriate delivery and support processes. For example, we can automatically send the right part to you in case you need to manage the device. We link with the standard support processes.
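As an illustration only, a rule-matching engine of this kind correlates the events reported by a device and maps a matched pattern to a proposed delivery action. This is a minimal sketch: the event codes, rules, and actions here are hypothetical, not HP's actual implementation.

```python
# Illustrative sketch only -- event codes, rules, and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    device_id: str
    code: str        # e.g. "DISK_PREDICTIVE_FAIL"
    severity: str    # "green", "yellow", "red" (traffic-light style)

# Hypothetical rule table: a set of event codes observed together on one
# device maps to a proposed support action.
RULES = {
    frozenset({"DISK_PREDICTIVE_FAIL"}): "dispatch replacement drive",
    frozenset({"FAN_DEGRADED", "TEMP_HIGH"}): "dispatch fan assembly",
}

def correlate(events):
    """Group events by device, then match each group against the rule table."""
    by_device = {}
    for ev in events:
        by_device.setdefault(ev.device_id, set()).add(ev.code)
    actions = {}
    for device, codes in by_device.items():
        for pattern, action in RULES.items():
            if pattern <= codes:  # every code in the pattern was observed
                actions[device] = action
    return actions

events = [
    Event("srv-01", "FAN_DEGRADED", "yellow"),
    Event("srv-01", "TEMP_HIGH", "red"),
    Event("srv-02", "DISK_PREDICTIVE_FAIL", "yellow"),
]
print(correlate(events))
```

The key point the sketch shows is that two individually minor events (a degraded fan plus a high temperature) can together trigger a different action than either event alone.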

When information flows from the customer side into HP support, our staff has access to the customer's environment through the Insight Online dashboard. This provides alerts and information about how the devices are performing and automatically links warranties, informing the staff of when they're going to expire, so you can take more proactive action about renewing them. It also automatically links support cases to events, and with one click, you can navigate to the website.

One new feature of Insight Online is access for our HP partners. I talked earlier about identifying the partner that is actually working on the device. What we have now is a new partner view, again through HP Support Center and Insight Online, using a new tab called My Customers. Partners can now be part of the entire interaction by managing devices on behalf of the customer.

You don't have to install any of your own software. You don't have to develop it. We are providing the tools to be more productive right from the start: by installing HP servers and HP infrastructure -- data, network, storage -- you get new tools that give you more efficiency.

HP Support Center with Insight Online also provides access for multiple types of users. You could be an account manager who manages infrastructure and is about to meet the customer to talk about how that infrastructure is performing. You log onto Insight Online and review the information.

Your HP partner can automatically view the information before even going on site and taking actions on a customer device. You will have everything accessible. If users complain that the infrastructure is not performing, you will view the management software and know what is actually going on.

You can actually gain that without having to be in the environment. It kind of gives you your life back; that is the way I would like you to see it. Now, let's also look at this in terms of security. You have information flowing from your data center back into HP and now accessible online.

Security and privacy

First of all, security and privacy are extremely important. We actually check our privacy policy against the requirements of all the countries in which we do business. Security is highly scrutinized. We've been audited and certified for our security, and it's extremely important for us to take care of your security concerns.

Gardner: Tommaso, one of the things I hear quite a bit from folks is that they’re trying to understand how this all works in a fairly complex environment, like a data center, with many people involved with support. There are individuals working on the customer IT infrastructure internally, self-maintainers as well, within that group.

But they’re also relying on partners, and there are other vendors and other devices and equipment and technologies involved. So how does the support automation capability that you have been describing address and manage a fairly fragmented support environment like that?

Esmanech: It is indeed one of the questions we asked ourselves when we started looking at how to solve today's problem. How do we give customers something more than just management software? It's all about the users that need to access the information.

As I said before, access through a management console is limited to the few that can have access to that environment, because they're within the network or they have the knowledge how to use the tools. With the new experience, by providing cloud-based service in support automation, we're able to provide tools to the customer to enable access to the right people to do the right job.

Insight Online lets you share devices, or groups of devices, with multiple users through its Web-based capabilities. The customer creates and manages those groups. So you're in control of setting up those groups, saying who has the right to view the information and what they are able to do with it.

Another important aspect is security when employees move on. It's part of life. You have somebody working for you, and tomorrow he's going to move to another organization. You don't want that individual to have access to your information any longer. So we've given you the ability to control who is accessing information and, when necessary, to remove a user's right to go into HP Support Center Insight Online and see your environment. So it's not only providing access, but also controlling access.
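To make the access model concrete, here is a purely illustrative sketch of group-based device sharing with revocation. The class, group names, and device IDs are hypothetical; this is not HP's actual access-control implementation, just the general pattern being described.

```python
# Illustrative sketch only -- not HP's actual access-control model.
# A customer defines groups of devices and grants users view rights;
# revoking a user removes their access to every group at once.

class DeviceGroups:
    def __init__(self):
        self.groups = {}   # group name -> set of device ids
        self.members = {}  # group name -> set of user ids

    def create_group(self, name, devices):
        self.groups[name] = set(devices)
        self.members[name] = set()

    def grant(self, name, user):
        """Give a user the right to view the devices in a group."""
        self.members[name].add(user)

    def revoke(self, user):
        """Employee moves on: strip their access from every group."""
        for users in self.members.values():
            users.discard(user)

    def visible_devices(self, user):
        """All devices the user is currently allowed to view."""
        seen = set()
        for name, users in self.members.items():
            if user in users:
                seen |= self.groups[name]
        return seen

acl = DeviceGroups()
acl.create_group("production", ["srv-01", "srv-02"])
acl.grant("production", "alice")
print(sorted(acl.visible_devices("alice")))
acl.revoke("alice")
print(sorted(acl.visible_devices("alice")))
```

The design choice worth noting is that rights attach to groups rather than to individual devices, so revoking one user is a single operation no matter how large the environment is.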

Let me take another look at how things are changing. We have this easy-to-adopt Insight Remote Support. You have this new access methodology, and you have all this knowledge, information, and content flowing from the customer environment into the hands of the right people to keep the system up and running.

If you are under warranty, which is the minimal requirement to take advantage of this infrastructure, you still have a self-solve capability. You have to figure out what you have to do in some cases. While there's information provided, it's still up to you.

We've created a new portfolio of services that is taking advantage of this new knowledge and infrastructure to provide new value to the customer.

Proactive care

On the technology side, we need to look at the Proactive Care service. First of all, a technical account manager is assigned as a single point of contact. Several reports are prepared and made available to the customer, and incident reports are reviewed with the technical account manager.

This allows them to review configuration, performance, and security, match them against best practices, and understand whether the current software versions will keep the infrastructure up and running at the optimal level.

I want to close with a few takeaways. First of all, products and services have come together to provide an innovative and exciting user experience, helping to guarantee 24x7 coverage and providing anywhere/anytime, cloud-based, and secure access to support, while managing who can receive such information.

We've combined this with a new portfolio that takes advantage of all the HP expertise and knowhow. Now, partners, customers, and HP experts work together to dramatically increase uptime and achieve that 66 percent efficiency improvement.

This concludes our main presentation, and I want to turn it back to you, Dana, for our Q&A session.

Gardner: Thank you, Tommaso. I'd like to introduce to our audience a couple more experts that we have with us today.

We're here with Andrew Claiborne, Usability Lead for HP Insight Remote Support. Andy has developed HP remote support solutions for a half-dozen years within HP’s internal development labs. He also developed portions of the HP Insight Remote Support capabilities with a special focus on usability.

We're also here with Paddy Medley, Director of Enterprise Business IT for HP Technology Services. Paddy has more than 25 years of experience in the R&D of technology solutions for the HP services organization, and he is responsible for the formulation and execution of the technology solutions that underpin the delivery of HP Technology Services. Welcome to you both.

Let me start with you, Paddy, about licensing. Can we use the full functions of iLO 4 and the new HP SIM without any licensing issues?

Eliminate licensing issues

Paddy Medley: The good news, Dana, is that what we're trying to do with the solution here is to make it as pervasive as possible and to eliminate licensing issues. HP SIM is essentially a product attribute. Once a customer purchases a server or storage device from HP, or has such a device under a service contract, they are entitled to HP SIM by default.

iLO comes in two formats, the standard format and the advanced format. The standard format is effectively free, and the advanced format is for a fee. The advanced format has additional facilities, such as support for virtual media, directory support, and so on.

Gardner: Thank you. We have a question here directed at Insight Remote Support. It’s about the software. They're asking, is it included, and is it difficult to install?

Medley: The preface of the first answer applies to this answer as well. What we've done with our overall solution is make it as easy to install as possible, with a huge amount of human-factors effort behind that. At its most basic level, what's required is the Insight Remote Support software, and that needs to be installed on a Windows-based system -- either a physical Windows host or a Windows guest on VMware. That's pretty pervasive.

The actual install process is pretty straightforward and very intuitive. As I said, it's an area where we’ve gone through extensive human factors to make that as easy as possible to install.

The other part of that is that if the customer already has Insight Manager installed, we'll inherit its features; there is an integration point there. For instance, if Insight Manager has already discovered a number of devices in the customer's environment, we'll inherit those in Insight Remote Support, and for pertinent events occurring on those systems, we'll trace them through Insight Manager into Insight Remote Support and back to HP.

Gardner: Andy Claiborne, a question for you. Our viewers say that they're working to modernize their infrastructure and virtualize their environment. They'd like to implement support automation like Insight Remote Support, but they feel the cost is too high. What does it cost to implement this?

Andy Claiborne: Previous versions of Insight Remote Support were very challenging to get installed, especially at large customer sites. Trying to address that has been one of the key features that we've been trying to bake into our latest release of our support automation tools.

If you have just a couple of Gen8 ProLiants that you want to deploy in your environment and support using our support automation solutions, those systems are able to connect directly to HP, and that capability is just baked into their firmware. So it's really straightforward to set those up.

Hosting device

If you have a bunch of legacy devices in your environment, you’d have to set up what we call a hosting device, which is one system that sits in your environment that listens to all of your devices and sends service events back to HP. For our latest release, we've dramatically reduced the amount of time that it takes to set up, install, and configure the hosting device and implement remote support in your environment.
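Conceptually, a hosting device is a store-and-forward relay: it collects service events from local devices and passes them to the vendor's back end, buffering them if the uplink is unavailable. The following is a purely illustrative sketch of that role; the class, event fields, and delivery mechanism are hypothetical, not HP's actual software or protocol.

```python
# Illustrative sketch only -- not HP's actual hosting-device software.
from collections import deque

class HostingDevice:
    def __init__(self, send):
        self.send = send          # callable that delivers one event upstream
        self.queue = deque()      # store-and-forward buffer

    def receive(self, event):
        """Called when a monitored device reports a service event."""
        self.queue.append(event)
        self.flush()

    def flush(self):
        """Try to deliver buffered events in order; stop if the link is down."""
        while self.queue:
            event = self.queue[0]
            try:
                self.send(event)  # e.g. an HTTPS POST to the back end
            except ConnectionError:
                return            # link down: keep buffering, retry later
            self.queue.popleft()

# Usage: simulate a link outage, then recovery.
delivered = []
link_up = [False]

def send(event):
    if not link_up[0]:
        raise ConnectionError
    delivered.append(event)

hd = HostingDevice(send)
hd.receive({"device": "srv-07", "code": "PSU_FAIL"})  # buffered: link is down
link_up[0] = True
hd.flush()                                            # delivered on retry
```

The sketch also illustrates why, as noted later in this discussion, the hosting device must stay powered on and reachable: it is the single conduit between the monitored environment and the vendor.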

In the labs, we have cases that used to take our expert testers 45 minutes to get through. Our testers can now get through them in five minutes. So it should be a dramatic improvement, and it should be relatively easy.

Gardner: Here's a related question. How soon can we recover the upfront cost of implementing HP support automation? I think this is really getting to the return-on-investment (ROI) equation.

Claiborne: We look at two aspects. What does it cost to deploy it, and what benefit do you get from having remote support? As we said, the cost is greatly reduced from previous releases.

The benefit, as Tommaso mentioned, is that in looking at our case resolution data across thousands of cases that have been opened, we see a 66 percent reduction in problem resolution time. When you think about just how incredibly expensive it is if one of your critical systems goes down, and how much it costs for every second that that system is down, the benefits can be huge. So the payoff should be pretty quick.

Gardner: Okay, Tommaso, a question for you. They ask, why is Insight Remote Support mandatory for proactive care?

Esmanech: Think about the amount of data that we need to collect to deliver Proactive Care. If we were to do all that activity manually, the value proposition of Proactive Care -- event and revision management -- would be almost impossible to offer. So we automate it. Through the entire support process and the collection of the data, we're able to provide a price point that is very attractive and a great value proposition for our customers.

A customer can, as part of our portfolio, choose Foundation Care instead, but of course the price point and the value it provides are going to be different.

Gardner: Here is a question that gets to the heart of the issue about your getting data from inside of other people's systems. They ask, our company has very strict security requirements. How does HP ensure the security of this data?

Esmanech: That is really one of the most-asked questions. Once we start talking with the security experts at the customer site, we're able to resolve all of their concerns.

Our security is multilayer. It starts with information collected at the customer site. First of all, the customer has visibility into everything that we collect. When we collect it and transfer it to HP’s back end, all that information is encrypted. When we talk about providing access on Insight Online through the Web, the access goes through HTTPS, so it's encrypted access of information.

For passwords, for example, a minimum number of characters is required for an alphanumeric password. Also, the customer knows who is accessing and viewing his devices. Last but not least, we have certified our environment end-to-end for eTrust, one of the most important security certifications for these types of services and infrastructure.

Product support


Gardner: Paddy, a question from an organization with ProLiant servers as well as HP storage and networking products. Will Insight Remote Support cover all of those products, or just the ProLiant servers?

Medley: We've had our initial release of the new Insight Remote Support and Insight Online solution. The initial solution covers Gen8 products only. In parallel with that, we're working on the second release, and that will be coming out in the summer.

That will, in effect, provide similar support for all of our legacy devices across the network, storage, and server spaces, with the exception of three product lines, which we are looking at delivering in a future release. Our objective here is to have pervasive coverage across all of our enterprise-based products.

Gardner: Okay, is there an upgrade path for Insight Remote Support, so that older versions can gain some of the new capabilities?

Medley: There is indeed. We have our legacy remote support solution, which has very significant usage in customer sites. We're providing an upgrade path to customers to migrate from that legacy solution to our new solution, and that’s part of the bundle that will go with the summer release that I just spoke about.

Gardner: Andy, we have a question here from another user. They have a lot of ProLiant servers running Insight Remote Support today, and they are purchasing some of the new ProLiant Gen8s. Will different versions of Insight Remote Support interact, and if so, how would that work?

Claiborne: A lot of you might have spent a lot of time and energy deploying our current generation of remote support tools, and you're wondering what it does to the mix when we add a Gen8 ProLiant.

First, if you're happy with your current set of features, you can monitor the Gen8 ProLiants with the current Insight Remote Support tools, just as you would with any other ProLiant using agents running on the operating system. If you want to get some of the benefits of the new HP Insight Online portal or use the baked-in firmware-enabled remote support features of the new Gen8 ProLiants, you would have to upgrade to the latest version of Insight Remote Support, and we’ve tried to make this as easy as possible. Today, we have Remote Support Standard and Remote Support Advanced.

Our next release of Remote Support, Version 7.0.5, will allow most Remote Support Standard customers and some Remote Support Advanced customers to upgrade automatically. We made this upgrade as seamless as possible. It should be hands-off. We will import all of your device data, credentials, site information, contact information, and event history, into our new tool.

Also, we’ve gone through extensive testing to make sure that, for example, if you had an Open Service event in your current Version 5 solution and you upgrade to Version 7, the service event will still be visible in your user interface and you’ll be able to get updates for it.

Hands-off upgrade

For the remainder of Remote Support Advanced customers -- if you're monitoring mission-critical devices like an XP Array or a Dynamic Smart Cooling installation -- support for those will come in the subsequent release, Version 7.1. With that, we will also implement a seamless, hands-off, comprehensive upgrade process.

Gardner: A user asks, "Do I need a dedicated server to run Insight Remote Support?"

Claiborne: If you're running Insight Remote Support, you have this hosting device in your environment that listens to events from all of your devices in the environment. That doesn't need to be a dedicated server and it doesn't need to be running on HP hardware either. You can run that on any computer that meets the minimum system requirements, and you can even run that on a VMware box.

We end up doing a lot of our testing in the lab in VMware systems, and we’ve realized that a lot of you out there are probably implementing VMware systems in your customer environments. So VMware is supported as well.

The one thing to remember, though, is that this box is the conduit for service events from your environment to HP. So you need to make sure that the box is available and turned on and that it's not a box that’s going to be accidentally powered off over the weekend or something like that.

Gardner: Back to Tommaso, and the question is, what is the difference between Insight Online and Insight Remote Support?

Esmanech: That’s come up before. The easy way to describe them is that Insight Online is the web access point for Insight Remote Support. It's part of the entire support information ecosystem. Insight Remote Support does have a management console, where you can view events and devices, but access to it is limited to within the environment, within the VPN, and to the few people who know how to manage the environment.

You also have to recognize that Insight Remote Support goes beyond just a management console. It has event correlation and it collects all the data. As Andy said, it's a conduit back to HP. The conduit back to HP leads to Insight Online. The way it is now, there are two systems, and they're part of the same ecosystem.

Gardner: Tommaso, you mentioned self-solve services. What are those, and what did you mean?

Esmanech: We define self-solve as those activities and capabilities through which customers can find solutions to problems on their own. For example, when you go to a support website, you're accessing a knowledge base, finding articles and information on how to troubleshoot, or solutions to the problem. Downloading drivers would also be a component of self-solve.

By themselves, they're not services that we sell, but they're part of our services support portfolio. It's part of how we do business.

Some of the self-solve capabilities may be available to customers with contracts, versus customers who have a warranty or don't even have an HP device, but we give the customer the ability to solve problems by themselves.

Future direction

Gardner: Next one to you, Paddy. This is sort of a big question. They are asking, can you predict HP support automation's future direction for the next 10 years? Can you look at your crystal ball and tell us what people should expect in terms of some of the capabilities to come?

Medley: We're seeing a number of trends in the industry. We talked earlier about the converged infrastructure of storage, servers, and networks into single racks, and converged management of that environment.

We’re seeing a move to virtualization. Storage continues to grow at a rapid rate, and hardware continues to become more and more reliable. Against that backdrop, the future is different from the past in terms of service and service need. We’re seeing a greater need for interoperability, revision and configuration management, and for areas like performance and security.

In other words, we're seeing a move toward a greater need for proactive, as well as reactive, service support. The beauty of the Insight Online solution is that it provides us a framework to go down that path. It provides the basic framework for remote event monitoring -- reactive monitoring when events occur, getting those events back to HP -- but also for delivering proactive service.

What we're doing with the solution here is that, as we collect configuration and event information from customer environments, that information is securely transported back to HP and loaded into a database against a defined data model.

We’re bringing together all the reference data associated with the products that we support, and then providing a set of analytics that analyzes the collected data against that reference data, producing recommendations, actions, and event management. That aggregation, and the ability to do the analysis in that aggregated back end, is really providing us with a key differentiator.

And then, all of that information is presented through the Insight Online portal, along with our knowledge bases, forums, and other reference data. So it's that whole aggregation that’s really the sweet spot with this overall solution.

Gardner: Well, that sounds very exciting. I'm afraid we’ll have to leave it there. A huge thanks to Tommaso Esmanech, Andy Claiborne and Paddy Medley.

I’d also like to thank you, our audience, for taking your time, and I hope this was helpful and useful for you. I'm Dana Gardner, Principal Analyst at Interarbor Solutions. Goodbye until the next HP Expert Chat session.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect expert chat with HP on new frontiers in automated and remote support. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Friday, June 22, 2012

Learn How Enterprise Architects Can Better Relate TOGAF and DoDAF to Bring Best IT Practices to Defense Contracts

Transcript of a BriefingsDirect enterprise IT thought leadership podcast on how governments are using multiple architectural frameworks.

Register for The Open Group Conference
July 16-18 in Washington, D.C.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the Open Group Conference this July in Washington, D.C. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout these discussions.

The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains. Today, we’re here to focus on EA, and how governments in particular are using various frameworks to improve their architectural planning and IT implementations.

Joining us now to delve into this area is one of the main speakers at the July 16 conference, Chris Armstrong, President of Armstrong Process Group. His presentation will also be live-streamed free from The Open Group Conference.

Chris is an internationally recognized thought leader in EA, formal modeling, process improvement, systems and software engineering, requirements management, and iterative and agile development.

Chris represents the Armstrong Process Group at The Open Group, the Object Management Group (OMG), and the Eclipse Foundation. Chris also co-chairs The Open Group Architecture Framework (TOGAF) and Model Driven Architecture (MDA) process modeling efforts, as well as the TOGAF 9 Tool Certification program, all at The Open Group.

At the conference, Chris will examine the use of TOGAF 9 to deliver Department of Defense (DoD) Architecture Framework or DoDAF 2 capabilities. And in doing so, we'll discuss how to use TOGAF architecture development methods to drive the development and use of DoDAF 2 architectures for delivering new mission and program capabilities. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

So with that, we now welcome to BriefingsDirect, Chris Armstrong.

Chris Armstrong: Great to be here, Dana.

Gardner: Tell our viewers about TOGAF, The Open Group Architecture Framework, and DoDAF. Where have they been? Where are they going? And why do they need to relate to one another more these days?

Armstrong: First of all, we look at TOGAF as a set of essential components for establishing and operating an EA capability within an organization. It contains three of the four key components of any EA capability.

First, the method by which EA work is done, including how it touches other life cycles within the organization and how it’s governed and managed. Then, there's a skills framework that talks about the skills and experiences that the individual practitioners must have in order to participate in the EA work. Then, there's a taxonomy framework that describes the semantics and form of the deliverables and the knowledge that the EA function is trying to manage.

One-stop shop

One of the great things that TOGAF has going for it is that, on the one hand, it's designed to be a one-stop shop -- namely, providing everything that an end-user organization might need to establish an EA practice. But it does acknowledge that there are other components, predominantly in the various taxonomies and reference models, that end-user organizations may want to substitute or augment.

It turns out that TOGAF has a nice synergy with other taxonomies, such as DoDAF, as it provides the backdrop for how to establish the overall EA capability, how to exploit it, and put it into practice to deliver new business capabilities.

Frameworks, such as DoDAF, focus predominantly on the taxonomy, mainly the kinds of things we’re keeping track of, the semantics relationships, and perhaps some formalism on how they're structured. There's a little bit of method guidance within DoDAF, but not a lot. So we see the marriage of the two as a natural synergy.

Gardner: So their complementary natures allow for more particulars on the defense side, while the overall framework covers the implementation method and skills for how this works best. What has been the case up until now? Have these not been complementary? Is this something new, or are we just learning to do it better?

Armstrong: I think we’re seeing the state of the industry advance, with governments, both in the United States and abroad, looking to embrace global industry standards for EA work. Historically, particularly in the US government, a lot of defense agencies and their contractors have focused on a minimalistic compliance perspective with respect to DoDAF: in order to get paid for this work, or be authorized to do it, one of the requirements is that we must produce DoDAF deliverables.

People are doing that because they've been commanded to do it. We’re seeing a new level of awareness. There's some synergy with what’s going on in the DoDAF space, particularly as it relates to migrating from DoDAF 1.5 to DoDAF 2.

Agencies need some method and technique guidance on exactly how to come up with the particular viewpoints that are going to be most relevant, and how to exploit what DoDAF has to offer in a way that advances the business, as opposed to solely being conformant or compliant.

Gardner: So that mindset has prevented folks from enjoying more of the benefit side, rather than just the compliance side. Have there been hurdles, perhaps cultural ones, because of the landscape of these different companies and their inability to have that boundary-less interaction? What’s been the hurdle? What’s prevented this from being more beneficial at that higher level?

Armstrong: Probably overall organizational and practitioner maturity. There certainly are a lot of very skilled organizations and individuals out there. However, we're trying to get them all lined up with the best practice for establishing an EA capability and then operating it and using it for strategic business advantage, something that TOGAF defines very nicely and that the DoDAF taxonomy and work products fit into very effectively.

Gardner: Help me understand, Chris. Is the discussion that you’ll be delivering on July 16 primarily for TOGAF people to better understand how to implement vis-à-vis DoDAF, is it the other direction, or is it a two-way street?

Two-way street

Armstrong: It’s a two-way street. One of the big things that particularly the DoD space has going for it is that there's quite a bit of maturity in the notion of formally specified models, as DoDAF describes them, and the various views that DoDAF includes.

We’d like to think that, because of that maturity, the general TOGAF community can glean a lot of benefit from the experience they’ve had: what it takes to capture these architecture descriptions, and some of the finer points about managing those assets. People within the general TOGAF community are always looking for case studies and best practices that demonstrate that what other people are doing is something they can do as well.

We also think that the federal agency community also has a lot to glean from this. Again, we're trying to get some convergence on standard methods and techniques, so that they can more easily have resources join their teams and immediately be productive and add value to their projects, because they’re all based on a standard EA method and framework.

Gardner: As I mentioned, The Open Group Conference is going to be looking at EA, transformation, security, and supply-chain issues. Does the ability to deliver DoDAF capabilities with TOGAF, and TOGAF 9 in particular, also come to bear on some of these issues about securing supply chain, transforming your organization, and making a wider and more productive use of EA?

Armstrong: Absolutely, and some of that’s very much a part of the new version of DoDAF that’s been out for a little while, DoDAF 2. The current version is 2.02 and 2.03 is being worked on, as we speak.

One of the major changes between DoDAF 1 and DoDAF 2 is the focus on fitness for purpose. In the past, a lot of organizations felt that it was their obligation to describe all the architecture viewpoints that DoDAF suggests, without necessarily taking a step back and asking, "Why would I want to do that?"

So it’s trying to make the agencies think more critically about how they can be the most agile -- namely, what’s the least amount of architecture description we can invest in that has the greatest possible value. Organizations now have the discretion to determine what fitness for purpose is.

Then, there's the whole idea in DoDAF 2 that the architecture is supposed to be capability-driven. That is, you’re not just describing architecture because you have some tools that happen to be DoDAF-conforming; there's a new business capability that you’re trying to inject into the organization through capability-based transformation, which is going to involve people, process, and tools.

One of the nice things that TOGAF’s architecture development method has to offer is a well-defined set of activities and best practices for deciding how you determine what those capabilities are and how you engage your stakeholders to really help collect the requirements for what fit for purpose means.

Gardner: As with the private sector, it seems that everyone needs to move faster. I see you’ve been working on agile development. With organizations like the OMG and Eclipse, is there something about doing this well -- bringing the best of TOGAF and DoDAF together -- that enables greater agility and speed when it comes to completing a project?

Register for The Open Group Conference
July 16-18 in Washington, D.C.

Different perspectives

Armstrong: Absolutely. When you talk about what agile means to the general community, you may get a lot of different perspectives and a lot of different answers. Ultimately, we at APG feel that agility is fundamentally about how well your organization responds to change.

If you take a step back, that’s really what we think is the fundamental litmus test of the goodness of an architecture. Whether it’s an EA, a segment architecture, or a system architecture, the architects need to think thoughtfully and considerately about what things are almost certainly going to happen in the near future, anticipate them, and work them into the architecture in such a way that when those changes occur, the architecture can respond in a timely, relevant fashion.

We feel that, while a lot of people think agile is just a pseudonym for not planning, not making commitments, and going around in circles forever, we call that chaos, another five-letter word. Agile, in our experience, really demands rigor and discipline.

Of course, a lot of the culture of the DoD brings that rigor and discipline, along with the experience that community has had, in particular, with formally modeling architecture descriptions. That sets up those government agencies to act agilely much more than others.

Gardner: On another related topic, The Open Group has been involved with cloud computing. We’ve also seen some reference materials and even movement towards demanding that cloud resources be used by the government at large through NIST.

But, I imagine that the DoD is also going to be examining some of these hybrid models. Is there something about a common architectural approach that also sets the stage for that ability, should one decide to avail themselves of some of these cloud models?

Armstrong: On the one hand, the cloud platform has a lot to offer both government and private organizations, but without trivializing it too much, it’s just another technology platform, another paradigm, and a great demonstration of why an organization needs to have some sort of capability in EA to anticipate how to best exploit these new technology platforms.

Gardner: Moving a bit more towards some examples. When we think about using TOGAF 9 to deliver DoD architecture framework capabilities, can you explain what that means in real terms? Do you know of anyone that has done it successfully or is in the process? Even if you can’t name them, perhaps you can describe how something like this works?

Armstrong: First, there has been some great work done by the MITRE organization through their work in collaboration at The Open Group. They’ve written a white paper that talks about which DoDAF deliverables are likely to be useful in specific architecture development method activities. We’re going to be using that as a foundation for the talk we’re going to be giving at the conference in July.

The biggest thing that TOGAF has to offer is that a nascent organization that’s jumping into the DoDAF space may just look at it from an initial compliance perspective, saying, "We have to create an AV-1, and an OV-1, and a SvcV-5," and so on.

Providing guidance

TOGAF will provide the guidance for what EA is. Why should I care? What kind of people do I need within my organization? What kind of skills do they need? What kind of professional certification might be appropriate to get all of the participants on the same page, so that when we’re talking about EA, we’re all using the same language?

TOGAF also, of course, has a great emphasis on architecture governance and suggests that, as soon as you’re first standing up your EA capability, you need to put into your plan how you're going to operate and maintain these architectural assets once they’ve been produced, so that you can exploit them in a reuse strategy moving forward.

So, the preliminary phase of the TOGAF architecture development method provides those agencies best practices on how to get going with EA, including exactly how an organization is going to exploit what the DoDAF taxonomy framework has to offer.

Then, once an organization or a contractor is charged with doing some DoDAF work, because of a new program or a new capability, they would immediately begin executing Phase A: Architecture Vision, and follow the best practices that TOGAF has to offer.

Just what is that capability that we’re trying to describe? Who are the key stakeholders, and what are their concerns? What are their business objectives and requirements? What constraints are we going to be placed under?

Part of that is to create a high-level description of the current, or baseline, architecture and then the future target state, so that all parties have at least a coarse-grained idea of where we're at right now and what our vision is of where we want to be.

Because this is really a high-level requirements and scoping set of activities, we expect that it's going to be somewhat ambiguous. As the project unfolds, they're going to discover details that may cause some adjustment to that final target.

Gardner: Chris, do you foresee that for a number of these organizations that have been involved with DoDAF mainly in the compliance area, being compliant is going to lead them into a larger consumption, use, and exploitation of EA? Or will the majority of organizations trying to move toward government work as contractors already have a background?

Is there a trend here? It seems to me that if you’re going to have to do this to be compliant, you might as well take advantage of it and extend it across your organization for a variety of very good reasons.

Armstrong: Exactly. We’ve actually had a recent experience with a defense contractor who, for many years, has been required, as a program conformance requirement, to deliver DoDAF-compliant content. They're now saying, "We get all that, and that’s all well and good, but through that process, we’ve come to believe that EA, in its own right, is a good thing for us and our organization."

Internalize best practices

So, we're seeing defense contractors being able to internalize some of these best practices, and really be prepared for the future so that they can win the greatest amount of business and respond as rapidly and appropriately as possible, as well as how they can exploit these best practices to affect greater business transformation across their enterprises.

Gardner: Of course the whole notion of fit for purpose ultimately is high productivity, lower cost, and therefore passing on more of those savings to your investors.

Armstrong: A lot of government organizations are really looking at their bottom line, trying to trim costs, and increase efficiency and operation excellence. EA is a proven best practice to deliver that.

Gardner: We mentioned that your discussion of these issues on July 16 will be live-streamed for free, but you’re also doing some pre-conference and post-conference activities -- webinars, and other things. Tell us how this is all coming together and, for those who are interested, how they can take advantage of all of it.

Armstrong: We’re certainly very privileged that The Open Group has offered us this opportunity to share this content with the community. On Monday, June 25, we'll be delivering a webinar that focuses on architecture change management in the DoDAF space, particularly how an organization migrates from DoDAF 1 to DoDAF 2.

I'll be joined by a couple of other people from APG, David Rice, one of our Principal Enterprise Architects who is a member of the DoDAF 2 Working Group, as well as J.D. Baker, who is the Co-chair of the OMG’s Analysis and Design Taskforce, and a member of the Unified Profile for DoDAF and MODAF (UPDM) work group, a specification from the OMG.

We’ll be talking about things that organizations need to think about as they migrate from DoDAF 1 to DoDAF 2. We'll be focusing on some of the key points of the DoDAF 2 meta-model, namely the rearrangement of the architecture viewpoints and the architecture partitions and how that maps from the classical DoDAF 1.5 viewpoint, as well as focusing on this notion of capability-driven architectures and fitness for purpose.

We also have the great privilege after the conference to be delivering a follow-up webinar on implementation methods and techniques around advanced DoDAF architectures. Particularly, we're going to take a closer look at something that some people may be interested in, namely tool interoperability and how the DoDAF meta-model offers that through what’s called the Physical Exchange Specification (PES).

We’ll be taking a look a little bit more closely at this UPDM thing I just mentioned, focusing on how we can use formal modeling languages based on OMG standards, such as UML, SysML, BPMN, and SoaML, to do very formal architectural modeling.

One of the big challenges with EA is that, at the end of the day, EA comes up with a set of policies, principles, assets, and best practices that talk about how the organization needs to operate and realize new solutions within that new framework. If EA doesn’t have a hand-off to the delivery method, namely systems engineering and solution delivery, then none of this architecture stuff makes a bit of difference.

Driving the realization


We're going to be talking a little bit about how DoDAF-based architecture description and TOGAF would drive the realization of those capabilities through traditional systems engineering and software development methods.

Gardner: Well, great. For those who are interested in learning more about this, perhaps they are coming from the TOGAF side and wanting to learn more about DoDAF or vice-versa, do you have any suggestions about how to get started?

Are there places where there are some good resources that they might use to begin the journey in one direction or the other -- maybe starting from scratch on both -- that would then lead them to better avail themselves of the information that you and The Open Group are going to be providing in the coming weeks.

Armstrong: On APG’s website, we have a free web-based tutorial introducing EA and TOGAF. It’s about 60 minutes or so, and it's designed for people who have some familiarity with this content but would like a little deeper dive. That’s one resource.

Of course, there is The Open Group’s website. I'm not sure that I would refer people to the TOGAF 9.1 specification as the first starting point, although there is some really good content in the first introductory chapter. But there's also a manager’s guide, or executive guide, that can provide a somewhat higher-level view of EA from a business perspective, as opposed to an architect practitioner’s perspective.

Of course, there is quite a bit of content out there on the DoD architecture framework and other government frameworks.

Gardner: Thank you so much. I'm afraid we are going to have to leave it there. We’ve been talking with Chris Armstrong, President of the Armstrong Process Group on how governments are using multiple architecture frameworks to improve their architecture planning and IT implementation.

This was a lead-in to his Open Group presentation on July 16, which, I would like to point out, will be live-streamed for free. He's going to be discussing using TOGAF 9 to deliver DoDAF 2 capabilities, and Chris will be exploring the ways that the various architecture frameworks, from either perspective, complement one another as we go forward in this field.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference, July 16-20 in Washington, D.C. You’ll hear more from Chris and many other global leaders on the ways that IT and EA support enterprise transformation. His presentation will also be live-streamed free from The Open Group Conference.

A big thanks to Chris Armstrong for this fascinating discussion. I really look forward to your presentation in Washington, and I encourage our readers and listeners to attend that conference and learn more either in person or online. Thank you, sir.

Armstrong: You are more than welcome, Dana, and thanks so much for the opportunity.

Gardner: You’re very welcome. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through these thought leader interviews. Thanks again for listening and come back next time.

Register for The Open Group Conference
July 16-18 in Washington, D.C.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect enterprise IT thought leadership podcast on how governments are using multiple architectural frameworks.
Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Monday, June 18, 2012

Le Moyne College Accelerates IT Innovation with Help from VMware View VDI Solution Provider SMP

Transcript of a sponsored podcast discussion on how a mid-sized college harnessed server virtualization as a stepping stone to VDI.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how higher education technology innovator, Le Moyne College in upstate New York, has embraced several levels of virtualization as a springboard to client-tier virtualization benefits.

We'll see how Le Moyne worked with technology solutions provider Systems Management Planning, Inc. to make the journey to deep server virtualization and then move to virtual desktop infrastructure (VDI), and we'll see how they've done that in a structured, predictable fashion.

Learn here how a medium-sized, private college like Le Moyne teamed with a seasoned technology partner to quickly gain IT productivity payoffs via VDI, amid the demanding environment and high expectations of a college campus. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here to share the virtualization journey story is Shaun Black, IT Director at Le Moyne College in Syracuse, New York. Welcome, Shaun.

Shaun Black: Good morning, and thanks for having me, Dana. It's wonderful talking to you.

Gardner: We're glad to have you. We're also here with Dean Miller, Account Manager at Systems Management Planning or SMP based in Rochester, New York. Hello, Dean.

Dean Miller: Good morning, Dana.

Gardner: Shaun, let me start with you. I'm thinking that doing IT at a college comes with its own unique challenges. You have a lot of very smart people. They're able to communicate well. They're impassioned with their goals and tasks. Is doing IT there like being in a crucible? And if it's a tough environment, given the user expectations, why did you choose to go to VDI quickly?

Black: I think you characterized it very well, Dana. There is tremendous diversity in the college and university environment. Our ability to be responsive as an IT organization is incredibly crucial, given the range of different clients, constituents, and stakeholders that we have. These include our students, faculty, administrators, fundraisers, and the like. There's a wide variety of needs that they have, not to mention the higher education expectations in a very much open environment.

We've been leveraging virtual technology now for a number of years, going back to VMware Desktop, VMware Player, and the like. Then, in 2007 we embraced ESX Virtual Server Technology and, more recently, the VMware VDI to help us meet those flexibility needs and make sure that the staff that we have are well aligned with the expectations of the college.

Gardner: Why don't you give us a sense of the size, how large of an organization you are? For people who aren’t familiar with Le Moyne, maybe you can tell us a little bit about the type of college you are.

Equal footing

Black: Le Moyne is a private, Catholic, Jesuit institution located in Syracuse, New York. We have about 500 employees and we educate roughly 4,000 students on an annual basis. We're the second youngest of the 28 Jesuit colleges and universities nationally. Some of our better-known peers are Boston College, Gonzaga, and Georgetown, but we like to think that we're on an equal footing with our older and more esteemed colleagues.

Gardner: And you're no newbie to virtualization, but you've moved aggressively. And now you're in the process of moving to VDI. Maybe you can just give us a brief history of how virtualization has been an important part of your IT landscape.

Black: It started for us back in the early 2000s, and was motivated by our management information systems program, our computer science-related programs, and their need for very specialized software.

A lot of that started with using movable hard drives in very specific computing labs. As we progressed with them, and their needs continued to evolve, we found that the solutions we had weren't flexible enough. They needed more and different servers for very specific needs.

From an IT workforce perspective, we were having the same problem most organizations have. We were spending a tremendous amount of time keeping existing systems working. We were finding that we weren't able to be as responsive to the academic environments, and to some degree, were potentially becoming an impediment in moving forward the success of the organization.

We started experimenting with it initially within a few classrooms and then realized that this is a great technology.



Virtualization was a technology that was out there. How could we apply this to our server infrastructure, where we were spending close to six months a year having one of our people swapping out servers?

We saw tremendous benefits from that: increased flexibility and an increased ability for our staff to support the academic mission. Then, as we started looking in the last couple of years, we saw similar demands on the desktop side, with requirements for new software and discussions of new academic programs. We recognized that VDI technology was out there and was another opportunity for us to embrace technology to help propel us forward.

Gardner: And so given that you had a fairly good backing in virtualization generally -- and a very demanding and diverse set of requirements for your users -- tell me about how Systems Management Planning, or SMP, came into play and what the relationship between you two is?

Black: Our relationship with SMP and the staff there has been critical from back in 2006-2007, when we began adopting server virtualization. With a new technology, you try to bring in a new environment. There are learning and assimilation curves. To get the value out of that, to get the bang for the buck as quickly as possible, we wanted to identify a partner to help us accelerate into leveraging that technology.

They helped us in 2007 in getting our environment up, which was originally intended to be an 18-month transition of server virtualization. After they helped us get the first few servers converted within a couple weeks, we converted the rest of our environment within about a two-month period, and we saw tremendous benefits in server virtualization.

Front of the list

When we started looking at VDI, we had a discussion with a number of different partners. SMP was always at the front of our list. When we got to them, they just reinforced why they were the right organization to move forward with.

They had a complete understanding of desktop virtualization and how it affects the entire infrastructure of an environment: not just the desktop itself, but the server infrastructure, storage infrastructure, and network infrastructure.

They were the only organization we talked to, from the start, that began with that kind of discussion of what the implications are from a technology perspective, but also understanding what the implications are, and why you want to do this from a business perspective, and particularly an education perspective.

They are already working with a number of different higher education institutions in the New York region. So they understood education. It's just a perfect partnership, and again, they brought very experienced people to help us through the process of assimilating and getting this technology implemented as quickly as possible and putting it to good use.

Gardner: Dean Miller at SMP, how typical is Le Moyne's experience, in terms of the pilot, moving toward server virtualization and then starting to branch out and take advantage of that more holistic approach that Shaun just described that will then lead to some of these VDI benefits? Is this the usual path that you see in the market?

Miller: It is, and we like to see that path, because you don't want to disappoint your users with the virtual desktop. They just want to do their job and they don't want to be hung up with something that's slow. You want to make sure that you roll out your virtual desktops well, and you need the infrastructure behind that to support that.

So yes, they started with a proof of concept, which was a limited installation, really just within the IT department, to get their own IT people up to speed, experimenting with ThinApp and ThinApping applications. That went well. The next step was to go to the pilot, which was a limited rollout with some of the more savvy users. That seemed to go pretty well, and then we went for a complete implementation.

It's fairly typical, and it was a pleasure working with this team. They recognized the value of VDI and they made it happen.

Gardner: And is there anything unusual or specific to Le Moyne in this regard?

Miller: No, I don't think there was anything unusual. It went pretty smoothly. We've been doing quite a few rollouts, and it went well.

Gardner: Tell us a bit about SMP. What type of organization are you? Are you regional, or do you operate across the country or the globe? Tell us a little more about your services and your company.

Focus on data center

Miller: We're Systems Management Planning. We're a women-owned company. We're headquartered in Rochester, New York, and were founded in 1997. Our focus is in the data center, implementing virtualization, both server and desktop virtualization, storage virtualization, and networking.

Our expertise in VMware and its complementary technologies has allowed us to grow at a rate of about 30 percent year over year. We're recognized in the "Rochester Business Journal Top 100." This past year, we were ranked number six, based on growth.

We have offices in Rochester, Albany, and Orlando, Florida, and we use virtual desktops throughout our organization. This gives us the ability to spin up desktops or remote offices quickly. You could say we practice what we preach.

It's a technical organization. In fact, we have more engineers than salespeople on staff, which in my experience is pretty unusual. And we have more technical certifications than any partner in upstate or western New York that I know of. I'm pretty sure of that.

VMware has recognized SMP as a premier partner. We're also on the VMware technical advisory board and we're really proud of that fact. We work closely with VMware, and they bounce a lot of ideas and things off our engineering team. So, in a nutshell, that’s SMP.

Gardner: Shaun, Dean has brought up an interesting point. If you're going to do VDI, you've got to do it right. If word gets out across the campus that the apps are slow or the storage isn't sufficient, it's going to sound the death knell for the cause.

What did you do to make sure that that initial rollout was successful and that the performance was at or better than the previous methods? Then, tell us a little bit about what users came back with in terms of their impressions.

Black: It's what we continue to do, because we are still in the process of rolling this out and we will be for another 12 months. That’s probably the key component, as Dean mentioned.

We've been very methodical about going through an initial proof of concept, evaluating the technology, and working with SMP. They've been great at informing us of what some of the challenges might be and at architecting the underlying infrastructure, the servers and the network.

Again, this is an area where SMP has informed us of the kinds of challenges that people have in virtual desktop environments, and how to build an environment that’s going to minimize the risk of the challenges, not the least of which are bandwidth and storage.

Methodical fashion

Then, we're being very deliberate about how we roll this out, and to whom, specifically so that we can try to catch some of these issues in a very methodical fashion and adjust what we're doing.

We specifically built the environment with excess capacity of roughly a third, to support business growth as well as some variations in utilization and unexpected needs. You do everything you can in IT to anticipate what your customers are going to be doing, but we all know that on a day-to-day basis things change, and those changes can have pretty dramatic consequences.

So we try to factor in enough headroom to make sure that those kinds of situations wouldn't negatively impact us. But the biggest thing is really just being very methodical and measured in rolling these technologies out.
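The headroom rule Shaun describes can be sketched with some simple arithmetic. This is an illustrative sketch only: the function name and figures are assumptions for illustration, not Le Moyne's actual sizing model; the 150-user figure echoes the first production phase mentioned later in the discussion.

```python
import math

# Illustrative sketch of a "build in roughly a third of excess capacity"
# sizing rule. The function name and numbers are hypothetical; they are
# not Le Moyne's actual capacity model.

def size_with_headroom(expected_peak: int, headroom: float = 1 / 3) -> int:
    """Provision enough capacity to cover expected peak demand plus
    a safety margin of `headroom` (default: one third extra)."""
    return math.ceil(expected_peak * (1 + headroom))

# e.g., planning around 150 concurrent production users:
capacity = size_with_headroom(150)  # -> 200 desktops provisioned
```

Provisioning the margin up front, as described, lets day-to-day swings in utilization be absorbed without an emergency hardware purchase.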

With regard to the members of the pilot team, I’ll give a lot of kudos and hats-off to them, because they suffered through a lot of the learning curve with us in figuring out what some of these challenges are. But that really helped us, as we got to what we consider the second phase of the pilot this past fall. We were actually using a production environment with a couple of our academic programs in a couple of classrooms. Then we began to go into full production in the spring with our first 150 production users.

Gardner: And just to be clear, Shaun, what VMware products are you using? Are you up to vSphere 5 and View 5? Are you using the latest products?

Black: I understand that View 5.1 has recently been released. But at the time we rolled it out, vSphere, ThinApp, and View 5 were the latest and greatest, with the latest service patches and all, when we initially implemented our infrastructure in December.

It's one of the areas where we're going to be leveraging SMP on a regular basis, given that they're dealing with the upgrades more frequently. They're helping my staff stay current and make sure we're taking maximum advantage of the incremental features and major innovations that VMware adds.

Gardner: Now, as you're rolling this out, it's probably a bit early to come up with return on investment (ROI) or productivity improvement metrics for the VDI, but how about the server virtualization in general, and the modernization that you're undertaking for your infrastructure? Do you have a sense of whether this is an ROI type of benefit? What other metrics do you use to decide that this is a successful effort?

Black: Certainly, there's an ROI. There are a couple of different ways that we like to measure it. I'd like to think of it as both dollars and delight. From a server virtualization perspective, there's a dollar amount. We extended the lifecycle of our servers from a three-year cycle to five years. So we get some operational, as well as some capital, cost savings out of that extension.
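The dollars side of that measure can be illustrated with the straight-line arithmetic behind a longer refresh cycle. Only the three-to-five-year extension comes from the discussion; the unit cost and fleet size below are made-up numbers for illustration.

```python
# Illustrative arithmetic for extending a hardware refresh cycle from
# three years to five. Unit cost and fleet size are hypothetical.

def annualized_capex(unit_cost: float, fleet_size: int, lifecycle_years: float) -> float:
    """Straight-line annual capital cost of replacing `fleet_size`
    devices every `lifecycle_years` years."""
    return unit_cost * fleet_size / lifecycle_years

before = annualized_capex(6000, 40, 3)  # $80,000 per year on a 3-year cycle
after = annualized_capex(6000, 40, 5)   # $48,000 per year on a 5-year cycle
savings = before - after                # $32,000 per year, a 40 percent cut
```

The same calculation applies to the desktop side mentioned later, where a four-year PC lifecycle is stretched to about seven years.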

Most significantly, going to the virtual technology on the servers, one motivator for us on the desktop was what our people are doing. So it's an opportunity-cost question and that probably, first and foremost, is the fundamental measure I'm using. Internally, we're constantly looking at how much of our time are we spending on what we call "keep the lights on" activity, just the operations of keeping things running, versus how much time we're investing on strategic projects.

Free up resources

Second to that, we look at whether strategic projects are being slowed down because IT hasn't been able to resource them properly. From the perspective of virtualization, it has certainly allowed us to free up resources and reallocate them to things that the college deems more appropriate, rather than the standard kind of operational activities.

Then, in regard to the overall stability and functionality of the environment, there's what I think of as the delight factor: the number of issues and the types of outages that we've had as a result of virtualization technology, particularly on the server front. It's dramatically reduced the pain points, even from hardware failures, which are bound to happen. So generally, it has increased the overall satisfaction of our community with the technology.

On the desktop front, we were much more explicit in building a pro forma financial model. We're going forward with that, and the expectation is that, once we complete the rollout, we'll be able to reallocate a full-time equivalent employee. We won't have someone spending basically a year's worth of time every year just shuffling new PCs onto desktops.

We're also expecting, as a result, that we're going to be able to be much more responsive to the new requests that we have: the various new software upgrades, whether for Windows, Office, or any of the various packages used in the academic environment here.

So we're expecting that’s going to contribute to overall satisfaction on the part of both our students, as well as our faculty and our administrators, having the tools that they need to do their job in the databases and be able to take advantage of them.

Gardner: Just quickly, on the cost equation for your client hardware: are you going to continue to use PCs as these VDI terminals, or will you move at some point to thin or zero clients? What are the implications of that in terms of cost?

Black: We do intend to extend the existing systems. We had been on a four-year lifecycle, and we're expecting to extend our existing systems out to about seven years, then replace any of that equipment with thin or zero clients as those systems age out. Certainly, one of the benefits we did see of going virtual is the ability to continue to use that hardware for a longer period of time.

Gardner: Okay. Dean Miller, is the experience we're hearing from Le Moyne and Shaun indicative of the ROI and economics of virtualization generally? That is to say, a really good return on the server and infrastructure side, but perhaps higher financial benefits when you go to full VDI, when you can start to realize the efficiencies and cost reduction in administration?

Miller: Absolutely. Le Moyne College, specifically Shaun Black and his team, saw the value in virtualizing their desktops. They understood the savings in hardware cost, the energy cost, the administrative time, and benefits from their remote users. I think they got some very positive feedback from some of the remote users about View. They had a vision for the future of desktop computers, and they made it happen.

Gardner: In looking to the future, Shaun, is this setting you up for perhaps more ease in moving toward a variety of client endpoints? I'm thinking mobile devices. I'm thinking bring your own device (BYOD), with students working from campus but then remotely on the weekends from home, that sort of thing. How does this set you up in terms of some of these future trends around mobile, BYOD, and consumerization?

Laying the foundation

Black: It lays the foundation for our ability to do that. That was certainly in our thinking in moving to virtual desktops, but it wasn't what we regarded as a primary motivator. The primary motivator was how to do better what we've previously done, and that's what we built the financial model on. We see that just as a kind of incremental benefit, and there may be some additional costs that come with it that have to be factored in.

But we recognize that our students, faculty, and everyone else want to be able to use their own technology and, rather than having us issue devices to them, to be able to access the various software and tools more effectively and more efficiently.

It even opens up opportunities for new ways of offering our academic courses and the like. Whether it's distance learning or students working from home, those are things that are on our shortlist and our radar as opportunities that we can take advantage of because of the technology.

Gardner: Then, also looking at value from a different angle, is there anything about the VDI approach, the larger virtualization efforts that brings more control to your data, thinking about security, compliance, protecting intellectual property, storage, recovery, backup, even disaster recovery (DR). So how about going down that lane, if you will, of data lifecycle implications?

Black: That’s another great point, and again another one of the areas that was in our thinking in regard to the strategy. The idea, particularly for our mobile workers who have laptops, instead of them taking the data with them, to keep that data here on campus. We'll still provide them with the ability to readily access that and be just as effective and efficient as they currently are, but keeping the data within the confines of the campus community, and being able to make sure that’s backed up on a routine basis.

The security controls, better integration of View with our Windows server environment, and our authentication systems are all benefits that we certainly perceive as part of this initiative. It's not just a control perspective, but it's also being able to offer more flexibility to people, striking that balance better.

Gardner: Dean Miller, back to you. I should think that, given you have a large cross-section of customers, from global concerns and large US companies to small and medium-sized organizations like Le Moyne, these data lifecycle, control, and security issues must be a big driver. Is that what you're finding?

Miller: We’re seeing that in higher education as well as in Fortune 500s, even small and medium businesses (SMBs), the security factor of keeping all the data behind the firewall and in the data center, rather than on the notebooks out in the field, is a huge selling point for VDI and View specifically.

Gardner: Let's talk about lessons learned and sharing some of that. Shaun, if you were to do this over again, or you wanted to provide some insights to somebody just beginning their virtualization journey, are there any thoughts, any 20/20 hindsight conclusions, that you would share with them?

Black: For an organization that's our size, a medium business, I'd say to anybody to be looking very hard at this, and to be looking at doing it sooner rather than later. Obviously, every institution has its own specific situation, and there are upfront capital costs that have to be considered in moving forward with this. But if you want to do it right, you have to make some of the capital investment to make that happen.

Sooner rather than later


But, for anybody, sooner rather than later. Based on the data we've seen from VMware, we were in the front five percent of adopters. With VDI, I think we're somewhere in the front 15 percent or so.

So, we're a little behind where I’d like to be, but I think we’re really at the point where mainstream adoption is really picking up. Anyone who isn’t looking at this technology at this point is likely to find themselves at a competitive disadvantage by not realizing the efficiency that this technology can bring.

Gardner: Let me just explore that a bit more. What are the competitive advantages for doing this now?

Black: For us, it really gets down to, as I said earlier, opportunity cost and strategic alignment. If your staff, from an IT perspective, are focused not on helping your organization move forward but just on keeping the existing equipment running, you're not really contributing maximally to move your organization forward.

So to the extent that you can reallocate those resources toward strategic type initiatives by getting them off of things that can be done differently and therefore done more effectively, any organization welcomes that.

Gardner: I guess I'm thinking too that getting all your ducks lined up on the infrastructure, getting the planning in place, and having these rollout milestones set and ready to be implemented frees you up to start thinking more about applications, moving your effort from support to that innovative level.

Again, we talked about changing the types of applications, whether it's in delivery, maybe moving toward multitenancy or private cloud types of models. Before we sign off, any thoughts about the long-term implications for your ability to be lean and agile vis-à-vis your application set?

Black: There's a lot of debate on this, but I've told many individuals on the campus, including my vice president, that I expect this to very likely be the last time that Le Moyne is required to make this kind of investment in capital infrastructure. The next time, in five years or whatever, the market will be matured enough that we could go to a desktop-as-a-service type environment and have the same level of flexibility and control.

So we can really focus on the end services that we're trying to provide, the applications, and on the implications for those and for academics, as opposed to the underlying technology. Letting an organization that has the time and focus on the technology maintain that underlying infrastructure takes advantage of their competencies and allows us to focus on our core business.

We’re hoping that there's an evolution. Right now, we are talking with various organizations with regard to burst capacity, DR-type capabilities and also talking about our longer term desires to outsource even if some of the equipment is posted here, but ultimately, get most of the technology and underlying infrastructure in somebody else’s hands.

Insight question

Gardner: Dean, I just want to run that same kind of insight question by you. Clearly, Shaun has a track record, but you've seen quite a bit more across different types of organizations. Is there a bit of advice that you would offer to companies as they're beginning to think about virtualization as a holistic strategy for IT? What are some good concepts to keep in mind as you're beginning?

Miller: Well, that’s interesting. We were talking about virtual desktops, maybe two-and-a-half, three years ago. We started training on it, but it really hadn't taken off for the last year-and-a-half. Now, we’re seeing tremendous interest in it.

Initially, people were looking at savings for a hardware cost and administrative cost. A big driver today is BYOD. People are expecting to use their iPad, their tablet, or even their phone, and it's up to the IT department to deliver these applications to all these various devices. That’s been a huge driver for View and it's going to drive the View and virtual desktop market for quite a while.

Gardner: I am afraid we'll have to leave it there. We've been talking about how higher education technology leader, Le Moyne College in upstate New York, has embraced server-level virtualization as a springboard to client-tier virtualization benefits, and we heard how technology solutions provider, SMP, helped them make that journey in a structured, predictive way.

I’d like to thank our guests for joining us on this BriefingsDirect podcast. We’ve been here with Shaun Black, IT director at Le Moyne College. Thank you so much, Shaun.

Black: Thank you.

Gardner: And we’ve been here with Dean Miller, Account Manager at SMP. Thank you, Dean.

Miller: Thanks, Dana. Thanks for the opportunity.

Gardner: You’re welcome. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast discussion on how a mid-sized college harnessed server virtualization as a stepping stone to VDI. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in: