Tuesday, June 02, 2009

Mainframes Provide Fast-Track Access to Private Cloud Benefits for Enterprises, Process Ecosystems

Transcript of a BriefingsDirect podcast on the role and benefits of mainframes and their position as private cloud infrastructure in today's efficiency-minded enterprises.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on how mainframes can help enterprises reach cloud-computing benefits faster.

We'll be looking at what defines cloud computing, with an emphasis on private clouds, or those computing models that enterprises can control on-premises, but that also provide cloud-like efficiency at lower cost and a heightened ability to deliver services that support agile business processes.

We'll examine how new developments in mainframe automation and management allow for cloud-computing advantages and the ability to solve some of the more contemporary computing challenges.

To help us understand how mainframe is the cloud, we're joined by Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit. Welcome to the show, Chris.

[UPDATE: CA's purchase today of some assets of Cassatt bolsters the role of mainframes and CA's management capabilities as foundations for private cloud efficiencies.]

Chris O'Malley: Dana, thank you very much. I'm glad to be here.

Gardner: Chris, we've heard a tremendous amount about cloud computing and there's a buzz around this whole topic. From your perspective, what makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization you just discussed. It's a big topic of discussion right now, especially given the recession we're sitting in.

It's very visible physically that there are many, many servers that support the ongoing operations of the business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree are these servers being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution to the problem to start bringing a scale of virtualization to optimize the overall data center to what has been done on the mainframe for years and years.
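As a back-of-the-envelope sketch of the consolidation math behind those utilization numbers (all figures here are hypothetical illustrations, not vendor data):

```python
import math

def hosts_needed(n_servers, avg_utilization, target_utilization=0.80):
    """Estimate how many well-packed virtualized hosts could absorb
    n_servers running at avg_utilization, assuming identical per-box
    capacity and freely divisible workloads (a simplification)."""
    total_load = n_servers * avg_utilization  # load in whole-server units
    return math.ceil(total_load / target_utilization)

# 100 servers idling at 10 percent utilization collapse onto 13 hosts
# packed to 80 percent -- the kind of ratio that gets a CFO's attention.
print(hosts_needed(100, 0.10))  # -> 13
print(hosts_needed(100, 0.05))  # -> 7
```

The 80 percent packing target is an assumption for illustration; real consolidation planning has to account for peak overlap, memory, and I/O, not just average CPU load.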

We're now seeing the availability of the technology -- VMware is an example -- to start to create almost mainframe-like environments on the distributed side. So, there's both a business need to reduce the cost of computing and increase efficiency, and a moment when the technologies are becoming increasingly available for customers to manage distributed environments or open systems in a way similar to the mainframe.

Gardner: I suppose there's also an issue around integration. When people talk about cloud computing, we hear them refer to it as an application-development or platform-as-a-service (PaaS) affair. We also hear software as a service (SaaS) or just great delivery of the applications. Then, there's this notion of infrastructure fabric or infrastructure as a service (IaaS).

But, to relate and manage all of those things is something we haven't yet seen in this whole cloud market. I imagine that at a private level, if you were to use mainframe and associated technologies, you might start to see some of those integration points among these different levels or aspects of cloud computing.

O'Malley: You're right. It's a maturity curve that we're going through, and it's very likely that larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. It's a fixed asset that was very expensive way back, or at least relatively expensive, so they tried to get as much out of it as they possibly could. From its beginning, it was virtualized.

You see the same big customers, though, having application needs outside of what they've done themselves. What customer relationship management (CRM) and salesforce.com have done creates a duality of the mainframe acting as a cloud and using SaaS to support how they work their markets. It's very important that those things start to become integrated. CRM obviously feeds into things like order entry, so those efforts have to be tied together.

As you go through this maturity cycle, there is always a level of effort to integrate these things. The viability of things like salesforce.com, CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

Gardner: So, that would lead, at some point, to a cloud of clouds and hybrid models. We've been worried about integration vertically and now horizontally. I suppose we'll have to start worrying about it across organizational boundaries as well.

Barriers to adoption

O'Malley: Absolutely. There are other barriers that exist as well. The distributed environment and the open-system environment, in terms of its genesis, was the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlogs and that innovation wasn't coming into play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

It's not just the technical barriers and the complexity of it. It's a cultural shift of an acceptance by players across the business. They all start to use a shared commodity in fulfilling their needs, and the recession helps that. Good CEOs and good CFOs never let a recession go to waste. They explain to their executive management, "We need a greater level of efficiency. We need to transform our thinking, so that we can start to take advantage of these technologies, decrease our overall cost, and increase our ability to serve our market."

They are not just technical issues. There is also people's disposition on the way IT should be run. That has to change as well.

Gardner: I suppose we've gone along with the pendulum swing, from centralized, to decentralized, and now we're coming back. I've spoken to a number of people that say the shortcomings of distributed computing are, in fact, the set of requirements for cloud computing. Do you agree with that?

O'Malley: I absolutely do. This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. Recently, I was with an international customer. They took me on a data center tour, and one of the first things I see is an air conditioning unit the size of a school bus. I see walls that are three-and-a-half feet thick, poured concrete. I see cabling that looks like it weighs tons and football fields of floor space. In the midst of the tour, somebody tells me, "Here is a blade server that cost us next to nothing."

The difficulty in bringing and using these things in an efficient fashion, the cost of all those moving parts, and everything that has to be managed individually, rather than in a virtualized form, has caused a scale of waste that you cannot hide.

Time and time, I hear there is not a CEO or a CFO interested in adding yet another square foot of data-center floor space or adding people to manage the environment at a scale equal to the increasing capacity. They should be getting economies of scale and are just not seeing it.

You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: A lot of what attracts people to the cloud model -- because it is still rather amorphous, and not well-defined -- is this notion of elasticity. That's both, as you say, to help on utilization when it's low, but also to allow for the spikes to be managed externally or to take workloads and apply them across multiple machines in the case of a private cloud.

O'Malley: Exactly.

Gardner: How do you see this attraction to elasticity of compute resources and infrastructure? How does that relate to where the modern mainframe is?

On-demand engine

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has now created an infrastructure in which, as your needs grow, you can turn on additional engines that are already housed in the box. With peak processing in December around the retail uptake -- it will happen again here in the not too distant future -- or a quarter end for most organizations, they have the capacity to turn engines on and off and then be charged effectively, like a utility.

With the z10, IBM has a platform that is effectively an in-house utility and, obviously, outsourcers offer that option in a purer fashion. This is not the mainframe your grandpa bought in 1976. It had always been a strong platform in terms of being able to drive high degrees of utilization. You don't see a bad mainframe customer. They're all at 95 percent throughput on those processors.

Now, with the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks, but not pay for it all year long. So, that's strength. Obviously, with companies like Salesforce.com, that's an option on the distributed side as well. You're paying for only that which you need at a given moment.
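The utility-style charging O'Malley describes can be sketched with a toy cost model. The engine counts and monthly rate below are made-up illustrations, not IBM pricing:

```python
# Compare provisioning for the yearly peak all year long versus
# paying only for the engines actually switched on each month.

def fixed_cost(peak_engines, monthly_rate, months=12):
    """Buy enough capacity for the December peak and carry it all year."""
    return peak_engines * monthly_rate * months

def on_demand_cost(monthly_engines, monthly_rate):
    """Pay per month only for the engines turned on that month."""
    return sum(engines * monthly_rate for engines in monthly_engines)

# Hypothetical retailer: ten engines needed only for the December
# peak, four engines the other eleven months.
load = [4] * 11 + [10]
print(fixed_cost(10, 1000))        # -> 120000
print(on_demand_cost(load, 1000))  # -> 54000
```

Under these illustrative numbers, paying only for capacity in use cuts the bill by more than half, which is the economic pull behind capacity on demand.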

Gardner: Another issue that I've encountered in exploring these cloud issues is a common idea that this is for commodity-level services -- email, maybe some business applications, sales-force automation, CRM, for example. But, those peaks and troughs are also something that affect mission-critical applications, particularly if they're batch or something to be done at a certain frequency.

How do you take advantage of the compute capacity, when you're in between those frequencies and those batches? Do you see cloud computing as something that is destined for commodity-level IT, or is this something that also makes a great deal of sense for the most mission-critical types of transactions and applications?

O'Malley: As it specifically relates to mainframe, it absolutely does. The mainframe has always been the home, if you're a manufacturer, for your logistics, which sit on the mainframe. It's a core process to the organization.

If you're a bank, the ATMs, the DDAs (demand deposit accounts), all of that stuff tends to be mainframe apps. You're right. There's strong variability in the types of processing that is, in fact, being done. The hardware gives you the capacity to handle those things and reduce your consumption in a way that affects your cost.

Gardner: It's the virtualization, management, and governance of what's going on with the infrastructure that's the genesis of this elasticity. I think what you're describing is a value-add on top of the platform.

O'Malley: Absolutely. The mainframe has always been very good at resilience and security. The attributes required for a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be. We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers, who are doing great work with them. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks brought up with the Web, think of cloud applications as being Web applications, built with scripting languages, perhaps delivered with rich interfaces, but primarily Web applications.

But, there's nothing to say that a Web application, a client-server application, a virtualized application, or even a virtualized desktop -- referred to as virtualized desktop infrastructure (VDI) -- can't find a place on a mainframe that supports different applications and different platforms beneath those applications.

Moving away from green screen

O'Malley: Correct. As an example, Linux runs on the mainframe. Just to take what you're saying a little bit deeper and state the obvious, one of the knocks on the mainframe is that it's the home of green screens. It was put to me recently by a customer that it's like showing garlic to a vampire. They just don't see that as the answer to the future, and it's not driving them to want to work on a platform that looks like it came out of 2001: A Space Odyssey or something.

Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

So, CA is taking a lead in re-engineering our toolset to look more like a Mac than a green screen. We have a brand-new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May. The first thing that we're coming out with is a common service that looks in every way like an InstallShield for the mainframe.

If you were to walk up to a 22-year-old system programmer and look over their shoulder, there's no way that you'd see any difference between what they were working on and what somebody may be working on on the open-system side.

So, you're right that the mainframe technologically can do a lot, if not everything, you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy z/OS environment and bring that up to 2009 and beyond. CA, through our strategy of Mainframe 2.0, is in fact making that happen relative to the usage of our technology, but ultimately in terms of how the day-to-day workers interact with the mainframe, making it, we believe, even more productive than what they're accustomed to on a distributed platform.

Gardner: It sounds as if we're really dealing with semantics as it addresses infrastructure. If you have a person who's been in the business for several decades and has some experience, and you want to reassure them, you could say, "Well, it's running on the mainframe," and they'll probably feel good about that. For somebody a little bit younger, you might say, "Well, it's running on the private cloud." It's really the same thing.

O'Malley: Absolutely. I listened to a VMware presentation the other day, and they were, I think, speaking with ADP. They described the cloud. At the end of it, they said, "We've had a cloud for 40 years. It's called 'the mainframe.'" But, you're right. It becomes semantics at that point. People will think differently. The mainframe has an image that will be altered dramatically with what we're doing with Mainframe 2.0.

It has its virtues, but it has its limits. The open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of work you're doing, without really even knowing whether it's a mainframe application -- either in z/OS or z/Linux -- or Linux on the open-system side or HP-UX. That's where things are going. At that point, the cloud delivers on the promise being touted at the moment.

Gardner: What about this? Going back to the issue of integration, if there has been this long-term ability to manage virtualized instances on the mainframe, eventually, as we get into this cloud-of-clouds and hybrid-model future scenario, the buck must stop some place.

There's going to need to be one throat to choke somewhere, even if the services are emanating from a variety of sources. Is it a stretch to think that your on-premises mainframe that's being used as a cloud would also become a hub, rather than a spoke, in terms of how you would govern, manage, and integrate across multiple cloud types of implementation?

Benefits of centralization

O'Malley: One of the aspects that's wonderful about the mainframe is that the scale of discipline allows a very few people to manage a very large environment. That's been developed over 40 years and really is the benefit of this centralized model.

Increasingly, we're seeing customers come to the conclusion that there are certain things -- security and storage management for example -- that have been perfected in terms of their optimization and efficiency on the mainframe.

You're right. They're thinking of how to take certain disciplines that would probably be best done by the hub, the mainframe, to manage the overall environment. That's definitely what we're thinking about from a strategy perspective. Security and storage management are two strong examples of disciplines that could be run from the mainframe for the whole data center.

Gardner: We've discussed some of the issues around expense and the economics around utilization, control, and lower risk with governance and security. We've also addressed the perception, the gap, if you will, on culture and age -- "my grandfather's mainframe" and that sort of thing.

But, there's also this nagging concern in the market around skills and whether the mainframe needs to be sunset because of a lack of support, or whether it's going to become, as we just described, the hub for the future. What is it that you bring to your clients in order to ameliorate their concerns around this skills issue?

O'Malley: There are two dimensions to it. One, we have to transform the technology, because we can't be naive. There is an 18-year-old man or woman out there someplace who's about to get into college.


They're going to have to see a renewed mainframe that is more like what they've been accustomed to, if we're going to have them invest a college career to develop their skills and pursue a career in the mainframe space.

They're used to intuitive interfaces that they don't need a manual for and that they can dig into. They eventually get into the depths of it, but they need a nice entry point into it. They need something that, through just their generalized knowledge, they can get into. A green screen is the opposite of that. It's a heavy-lifting exercise in the front end.

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it. One is that there's a vacuum of talent in that space. So, you've got a career escalator within mainframe that is just not available to you on the distributed side, and we're trying to set the example.

Our first technology within Mainframe 2.0, which I talked about, is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school and they developed that in Java on a mainframe.

We're trying to set the example for what you can do in terms of bringing college students, making them effective, and having them do new and creative things on a platform that, at least in the recent history, they hadn't seen a lot of. They can get a sense of confidence between the dynamic of CA redressing the platform and our showing a formula to bring in college students, rapidly make them effective, and have them actually deliver technology that changes the way this platform is managed forever. It changes a lot of people's thinking and gives confidence to our customers and management.

We're also going on the road. I'm speaking at many universities, talking to both existing computer science students, as well as high school students that plan to go to those universities. I'm talking about making the mainframe one that's a friendly platform to them, if you will, and talking about the career opportunities that are offered to them.

Just to give you a sense of the amazement, we have 25-year-old people in Prague who have written lines of code that, within the next 12 months, will be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity. But, we've got to get those two dimensions right. We've got to show that the platform is friendly. It's one where we have a formula to bring new college students in, make them effective, and then get the word out there, so that more and more students look at this as a career option for them.

Gardner: I'm just curious. When you speak to high school and college students, are there any particular skill sets that put them into the right track for what they need for mainframes, or is it just mainstream computer science?

A need for urgency

O'Malley: It's mainstream computer science, but there's a need for a level of urgency to get things done. The product that we're coming out with in May, Mainframe Software Manager, was written from beginning to end in less than 12 months. One of the things that this project taught us was the capacity of these students to come out and connect with customers. There has been some atrophy in terms of our capacity to communicate -- being able to understand customer needs, what the issues are, and then being able to apply new paradigms.

They have no fear. We need almost a level of innocence in looking at things in a far different way, which the students can bring, and then working very hard in a systematic way, in conjunction with having transparency with customers, to never make a mistake. We can't go down a cul-de-sac with these kinds of activities -- developing the communication skills, the technical skills, and the discipline to master what I've just described. Those are the big things that we're looking for.

I'll be honest with you. With this younger crowd, there's a lot they don't know, but there is a new dimension that they bring and a level of innovation and creativity that we didn't have without them.

Gardner: They're not intimidated easily, right?

O'Malley: They're not intimidated, and they look at things differently. What others may say can never be done, shouldn't be done, or isn't necessary, they say, "That ain't right." A month later, they're doing something that almost creates shock and awe from customers. It's a wonderful thing for me to be part of and to witness.

Gardner: Let's look at some examples, if you have any, of organizations that heard the cloud model's attributes, requirements, and benefits, wanted to get there quickly, and probably had some things in place. Do we have examples of taking the mainframe model and elevating it to the cloud model in terms of how it's being utilized? Are there metrics of success as to how that works?

O'Malley: For a long time the higher-end mainframe customers aggressively used their big iron to do things in the way you've described. What's more interesting is that recently we're seeing smaller customers start to look at cloud, more specifically virtualization, being pushed to the mainframe in unconventional ways.

We have an insurance company up in Minneapolis that ran SAP, which is a financial system that competes against Oracle, and they elected initially to run it in client-server fashion. They ran the database server under DB2 on their z/OS. They ran the application server on an Intel platform. They got to a point where they required an upgrade to that application.

Usually customers follow conventional wisdom. They do what they always did. They upgrade their hardware in place and they leave the application as it was. In this case, this company has a charter to sell insurance only in the state of Minnesota. As a result of that, when Target stores let people go because of the recession, it's not like they can go to Wisconsin and sell somebody else insurance to increase their overall revenue. Cost efficiency, cost per member, is not just an IT issue. It’s a CEO issue.

So, rather than just upgrading this application with all they have, they said, "Let's pause and take a hard look at this environment. Let's look at options and see if there are better things we could be doing to serve the business."

Ultimately, they decided on bringing the application server up to z/Linux, encasing all of SAP in a single server and effectively creating an internal cloud for SAP to handle the scalability requirements and drive down cost.

Some interesting things happen when you bring it up to the mainframe. There's no physical network at that point. It's all hypersockets. So, it has drastically reduced the cost from a networking standpoint.

As you talked about earlier, z/OS effectively becomes a hub for the effort of management. The few people who did system-programmer-type functions on the mainframe could now do it for what is a consolidated distributed environment, where they brought 40 servers up to the mainframe.

The thing that's also interesting is that, because of the maturity of virtualization on the mainframe, you don't just consolidate the 40 SAP servers; you can also share the environment with Web services and other applications. This is much, much more difficult to do on the distributed side with things like VMware.

Now, they've gotten nearly all their distributed environment up to the mainframe. On that platform, things like disaster recovery, where it was extremely difficult to bring up the environment when they did their testing, now come up in 90 minutes. In fact, it takes half an hour to bring it up and an hour of certification and validation, and they're up and running.

They've seen effectively half the cost, with a greater level of security, resilience, and all the things that the mainframe offers. You saw things like that in the big banks and the big insurance companies that had the capacity, people, and skills to do it.

You seldom saw that on the smaller end, but, given the recession and the maturity of the platform, the innovation that's been brought to the mainframe, all the enhancements that have taken place over the last eight years, and the efforts that CA is doing, it's making people look at it differently. That is, I think, a perfect example of a cloud up and running, and making a massive difference to support an organization's charter, which is to serve their customers at the lowest possible cost.

Gardner: I should think that that's not only going to pay back in the short term but will improve over time, as they need to do patches, administration, and upgrades. They'll have a smaller set, or perhaps even a singular application set, to apply those to. They get the benefits of what a SaaS provider can do, but we're now bringing this downstream to a smaller company that can deliver its own on-demand model.

O'Malley: Absolutely. The evil in IT is moving parts and too many of them. The more that you can reduce change and reduce the need to manage change, the more you're going to reduce your overall cost.

The recession eventually will end, and you're right. The people who have taken these steps to drive efficiency are going to be in a much better competitive position when we come out of this recession, not only to grow at the rate their customers do, but to do it in a more cost-effective fashion than their competitors.

Gardner: Well, we've covered a lot of territory in terms of understanding some of the issues, the attractiveness of cloud. We've talked about the fact that it's still immature, but that there are a number of elements in the requirements list for cloud that are in place and simply need to be applied. We've discussed some of the issues around age, expense, and skill sets that are being addressed.

I want to thank our guest today, Chris O'Malley. He is the executive vice president and general manager for CA's Mainframe Business Unit. I appreciate your time, Chris.

O'Malley: Dana, thank you very much.

Gardner: We've been learning about how mainframes can help enterprises reach cloud benefits faster, and how in many respects the mainframe is already the cloud. I want to thank the sponsor of this discussion, CA, for underwriting its production. This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.


Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.