
Thursday, June 23, 2011

Private Clouds: Debunking the Myths That Can Slow Adoption

Transcript of a sponsored podcast on the misconceptions that slow some enterprises from embracing private cloud models.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Get a complimentary copy of the Forrester Private Cloud Market Overview from Platform Computing.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on debunking myths on the road to cloud-computing adoption.

The popularity of cloud concepts and the expected benefits from cloud computing have raised expectations. Forrester now predicts that cloud spending in the global IT market will grow from $40 billion to $241 billion over the next 10 years, and yet there's still a lot of confusion about the true payoffs and risks associated with cloud adoption. IDC has its own numbers.

Some enterprises expect to use cloud and hybrid clouds to save on costs, improve productivity, refine their utilization rates, cut energy use and eliminate gross IT inefficiencies. At the same time, cloud use should improve their overall agility, ramp up their business-process innovation, and generate better overall business outcomes. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

To others, this sounds a bit too good to be true, and a backlash against a silver-bullet, cloud-hype mentality is inevitable and probably healthy. Yet we find that there is also unfounded cynicism about cloud computing and undeserved doubt.

So, where is the golden mean, a proper context for real-world and likely cloud value? And, what are the roadblocks that enterprises may encounter that would prevent them from appreciating the true potential for cloud, while also avoiding the risks?

We're here to identify and debunk some myths, for better or worse, that can cause confusion and hold IT back from embracing cloud models sooner rather than later. We'll also define some clear ways to get the best out of cloud virtues without stumbling.

Here to join me on our discussion about the right balance of cloud risk and reward are Ajay Patel, a Technology Leader at Agilysys, and he's in Chicago. Welcome to the show, Ajay.

Ajay Patel: Thank you very much, Dana.

Gardner: We're also here with Rick Parker, IT Director for Fetch Technologies, and he's in El Segundo, Calif. Welcome, Rick.

Rick Parker: Good morning.

Gardner: We're also here with Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing, and he joins us from Boston. Welcome, Jay.

Jay Muelhoefer: Glad to be here.

Looking at extremes?

Gardner: Jay, let me start with you. I want to try to understand a little bit from your perspective, being deeply involved with cloud, particularly private cloud. Are we looking at extremes?

On one hand, we have people that see this as a golden, wonderful opportunity to change IT fundamentally. On the other hand, we have folks that seem to be grounded in risk about security and data, and think that the cost will probably be even higher. So where's the right balance? Are they both right or they both wrong? How do you see it?

Muelhoefer: They're both right in some ways. Yes, there are risks that people are confronting today, but there’s also lots of opportunity. Right now, it's a golden time to be evaluating the concept of cloud and private cloud.

In 2009, I think a lot of people were looking at cloud and saying, "Okay, this is an interesting technology. Is this really something that's going to come to fruition?" In 2010, there was a lot of research, with a lot of the early adopters dipping their toes into cloud and what the benefits could be.

But, 2011 is really where that tension is moving from "Is this possible" to "How do I take advantage of it for my own organization?" Google and Amazon have really reset the bar for how IT services are delivered in the marketplace. If internal organizations don't start meeting the needs of their business constituencies, whether it’s a development, test or even production user, they're going to look elsewhere to consume those resources. So, we've hit an inflection point, and that’s going to make it an exciting time.

Gardner: Ajay Patel, how about from your perspective? Do you see this mostly through the lens of opportunity, or do the risks merit being a bit conservative?

Patel: Looking at it from a systems-integrator (SI) perspective, what we're seeing is that the customer base, the end-users, are ready to take the leap to cloud. The technologies are there. The capabilities of the cloud management software, the key part of deploying private clouds, are there -- but the fear of security concerns is keeping them from jumping to it. I am very confident that the technology and the industry are ready to take customers to the next phase of private clouds.

Gardner: We'll get to some of those fears in a little while, when we look at various myths and perhaps what is supporting them or what needs to be debunked.

Rick Parker, how about you? What are you seeing? What are you hearing in the field? Do most people seem to think that the good or the benefits outweigh the risks, or are many people still on the fence?

No standard definition

Parker: The biggest issue is the lack of knowledge, because there isn't a standard definition of what a private cloud network is comprised of. If you don't know what it is, then you can't possibly build one yourself. Because there isn't a standard definition that the majority of people are aware of, that leads to an enormous amount of confusion.

Then, when marketing gets hold of it and applies the term to many different things that aren't even cloud related, that obscures the issue even further. So, I see a basic lack of knowledge as the issue for private cloud deployments more than anything.

Gardner: So, we're working toward refining that understanding and, that way, being able to have a better sense of where our risks and rewards are. Of course, we hear that IT is focusing on a sense of lost control, that a third-party public cloud gets between them and their users.

We also hear about a lack of trust, that these cloud providers are not proven. They say that they're going to do what they do, but if they don’t, the IT department is still going to be left holding the bag or being held responsible. There is, of course, as you mentioned security, vulnerability, confidentiality, and privacy issues, particularly around data.

Let's begin to tackle some of the underlying myths that substantiate these concerns, ameliorate them, or help folks get the good without suffering the ills. We have a series of myths, and I'll take the first one to you, Rick.

There's an understanding that, as we are trying to define it, virtualization is private cloud and private cloud is virtualization. Clearly, that's not the case. Help me understand what you perceive in the market as a myth around virtualization and what should be the right path between virtualization and a private cloud?

Parker: Private cloud, to put a usable definition to it, is a web-manageable virtualized data center. What that means is that through any browser you can manage any component of the private cloud. That's opposed to virtualization, which could just be a single physical host with a couple of virtual machines (VMs) running on it and doesn't provide the redundancy and cost-effectiveness of an entire private cloud or its ease of management.

So there is a huge difference between virtualization and use of a hypervisor versus an entire private cloud. A private cloud is comprised of virtualized routers, firewalls, and switches, in a true data center, not a server room. There are redundant environmental systems, like air-conditioning and Internet connections. It's comprised of an entire infrastructure, not just a single virtualized host.

Gardner: And is there a certain level of virtualization required? We hear some common rates for server workloads of 20 to 30 percent. Is there a certain point in your adoption of server virtualization where you're almost inevitably heading toward a cloud? Are there people who have 80 percent virtualization and perhaps have no interest in, or will never get to, the cloud? How does the rate of adoption for virtualization perhaps impact the likelihood of adopting private cloud infrastructure?

Parker: Moving to a private cloud is inevitable, because the benefits so far outweigh the perceived risks, and the perceived risks are more toward public cloud services than private cloud services.

Gardner: We’ve talked a little bit about fear of loss of control. Perhaps bringing private cloud infrastructure and models to bear on a largely virtualized server infrastructure would provide even more control, better security, and a reduction in some of these risks. Is there a counter-intuitive effect here that cloud will give you better control and higher degrees of security and reliability?

Redundancy and monitoring

Parker: I know that to be a fact, because the private cloud management software and hypervisors provide redundancy and performance monitoring that a lot of companies don't have by default. You don't get performance monitoring across a wide range of systems just by installing a hypervisor; you get it by going with a private cloud management system and the use of VirtualCenter, which supports live migration between physical hosts.

It also provides uptime/downtime monitoring, reporting, and capacity planning that most companies don't even attempt, because these systems are generally out of their budget.

Gardner: I wonder if you wouldn't mind telling us, Rick, a little bit about Fetch Technologies. You're the IT Director there. Tell us a little bit about your organization.

Parker: Fetch Technologies is a provider of data as a service, which is probably the best way to describe it. We have a software-as-a-service (SaaS) type of business that extracts, formats, and delivers Internet-scale data. For example, two of our clients are Dow Jones and Shopzilla.

Gardner: Let’s go next to Ajay. A myth that I encounter is that private clouds are just too hard. "This is such a departure from the siloed and monolithic approach to computing that we'd just as soon stick with one server, one app, and one database," we hear. "Moving toward a fabric or grid type of affair is just too hard to maintain, and I'm bound to stumble." Why would I be wrong in assuming that as my position, Ajay?

Patel: One of the main issues that the IT management of an organization encounters on a day-to-day basis is the ability of their current staff to change the principles of how they manage day-to-day operations. The operational ability for an IT management staff to operate a private cloud is there.

The training and the discipline need to be changed. The fear of the operations being changed is one of the key issues that IT management sees. They also think of staff attrition as a key issue. By doing the actual cloud assessment, by understanding what the cloud means, it's closer to home to what the IT infrastructure team does today than one would imagine through the myth.

For example, virtualization is a key fundamental need of a private cloud -- virtualization at the server, network, and storage levels. All the enterprise providers at the server, network, and storage levels are creating a virtualized infrastructure for you to plug into your cloud-management software and deliver those services to an end-user without issues -- and in a single pane of glass.

Gardner: When you say a single pane of glass, I think you are talking about the manageability, the fact that these highly virtualized environments can be automated and that you can probably oversee many, many more instances of servers and runtime environments with fewer people. Is that what you mean?

Patel: Absolutely. If you look at some of the metrics that are used by managed service companies, SIs, and outsourcing companies, they do what the end-user companies do, but they do it much cheaper, better, and faster.

More efficient manner

How they do it better is by creating the ability to manage several different infrastructure portfolio components in a much more efficient manner. That means managing storage as a virtualized infrastructure -- tiered storage, the network, the servers, not only the Windows environment but the Unix and Linux environments as well -- and putting all of that in the hands of the business owners.

Gardner: This is probably where we hear a lot about the cost containment issues. We're talking about higher utilization, lower energy, and better footprint, when it comes to facilities and so forth. Is this what you're seeing, that those who do cloud properly, that put in the proper management and administration, are actually getting some cost-benefits? There might be an upfront cost associated, but it’s the operational ongoing costs that are probably the most important, and that's where the real value is.

Patel: Absolutely. Another thing is that it's not even the upfront cost you need to be concerned about. Today, with money so hard to come by for a corporation, people need to look at not just return on investment (ROI), but return on invested capital.

You can deploy private cloud technologies on top of your virtualized infrastructure at a much lower cost of entry than if you were to just expand by building islands of test and dev environments, application by application, project by project.

Gardner: I'd like to hear more about Agilysys. What is your organization, and what is your role there as a technology leader?

Patel: I am the technology leader for cloud services across the US and UK. Agilysys is a value-added reseller, as well as a system integrator and professional services organization that services enterprises from Wall Street to manufacturing to retail to service providers, and telecom companies.

Gardner: And do you agree, Ajay, with Forrester Research and IDC when they show such massive growth? Do you really expect that cloud, private cloud, and hybrid cloud are all going to grow so rapidly over the next several years?

Patel: Absolutely. The only difference between a private cloud and a public cloud, based on what I'm seeing out there, is the fear of bridging the gap between what the end-user attains via a private cloud inside their own four-walled data center and the ability of the public cloud to give the end-user security and the comfort level that their data is secure. So, absolutely, private to hybrid to public is definitely the way the industry is going to go.

Gardner: Jay at Platform, you're thinking about myths that have to do with adoption, different business units getting involved, lack of control, and cohesive policy. This is probably what keeps a lot of CIOs up at night, thinking that it’s the Wild West and everyone is running off and doing their own thing with IT. How is that a myth and what does a private cloud infrastructure allow that would mitigate that sense of a lot of loose cannons?

Muelhoefer: That’s a key issue, when we start thinking about how our customers look to private cloud. It comes back a little bit to the definition that Rick mentioned. Does virtualization equal private cloud -- yes or no? Our customers are asking for the end-user organizations to be able to access their IT services through a self-service portal.

Key element

That’s a key element that we see being added on top of virtualization. But, a private cloud isn’t just virtualization, nor is it one virtualization vendor. It’s a diverse set of services that need to be delivered in a highly automated fashion. Because it's not just one virtualization, it's going to be VMware, KVM, Xen, etc.

A lot of our customers also have physical provisioning requirements, because not all applications are going to be virtualized. People do want to tap in to external cloud resources as they need to, when the costs and the security and compliance requirements are right. That's the concept of the hybrid cloud, as Ajay mentioned. We're definitely in agreement. You need to be able to support all of those, bring them together in a highly orchestrated fashion, and deliver them to the right people in a secure and compliant manner.

The challenge is that each business unit inside the company typically doesn't want to give up control. They each have their own IT silos today that meet their needs, and they are highly overprovisioned.

Some of those can be at 5 to 10 percent utilization, when you measure it over time, because they have to provision everything for peak demands. And, because you have such a low utilization, people are looking at how to increase that utilization metric and also increase the number of servers that are managed by each administrator.

You need to find a way to get all the business units to consolidate all these underutilized resources. By pooling, you could actually get effects just like when you have a portfolio of stocks. You're going to have a different demand curve by each of the different business units and how they can all benefit. When one business unit needs a lot, they can access the pool when another business unit might be low.
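The pooling effect Jay describes with the stock-portfolio analogy can be illustrated with a quick simulation. This is a hypothetical sketch -- the demand figures are invented for illustration -- but it shows why pooled capacity is smaller than the sum of siloed capacities: business units peak at different times, so the combined peak is less than the sum of the individual peaks.

```python
import random

random.seed(1)

# Hypothetical hourly demand (in servers) for three business units
# whose peaks happen to fall at different times of day.
units, hours = 3, 24
demand = [[random.randint(5, 50) for _ in range(hours)] for _ in range(units)]

# Siloed model: each business unit provisions for its own peak demand.
siloed_capacity = sum(max(d) for d in demand)

# Pooled model: provision once, for the peak of the combined demand.
pooled_capacity = max(sum(d[h] for d in demand) for h in range(hours))

print(siloed_capacity, pooled_capacity)
assert pooled_capacity <= siloed_capacity  # pooling never needs more capacity
```

Unless every unit peaks in the same hour, the pooled figure comes in well under the siloed one, which is the utilization gain Jay is pointing to.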

But, the big issue is how you can do that without businesses feeling like they're giving up that control to some other external unit, whether it's a centralized IT within a company, or an external service provider? In our case, a lot of our customers, because of the compliance and security issues, very much want to keep it within their four walls at this stage in the evolution of the cloud marketplace.

So, it’s all about providing that flexibility and openness to allow business units to consolidate, but not giving up that control and providing a very flexible administrative capability. That’s something that we've spent the last several years building for our customers.

Gardner: So, the old way of distributing physical IT offers business units control, but at a high price. With increasing security vulnerabilities, it's hard to get comprehensive security and network-performance benefits when there's so much scattered infrastructure. The balance, then, is that we want to let them feel they are enabled. Perhaps private cloud can do that.

Muelhoefer: It's all about being able to support that heterogeneous environment, because every business unit is going to be a little different and is going to have different needs. By allowing them to have control, but within defined boundaries, you could have centralized cloud control, where you give them their resources and quotas for what they're initially provisioned, and you could support costing and charge-back, and provide a lot more visibility into what's happening.

You get all of that centralized efficiency that Ajay mentioned, but also having a centralized organization that knows how to run a larger scale environment. But then, each of the business units can go in and do their own customized self-service portal and get access to IT services, whether it's a simple OS or a VM or a way to provision a complex multi-tier application in minutes, and have that be an automated process. That’s how you get a lot of the cost efficiencies and the scale that you want out of a cloud environment.

Gardner: And those business units would also have to watch their costs and maybe have their own P&L. They might start seeing their IT costs as shared services or charge-backs, and get out of the capital-expense business, so it could actually help them in their business when it comes to cost.


Still in evolution

Muelhoefer: Correct. Most of our customers today are very much still in evolution. The whole trend towards more visibility is there, because you're going to need it for compliance, whether it’s Sarbanes-Oxley (SOX) or ITIL reporting.

Ultimately, the business units of IT are going to get sophisticated enough that they can move from being a cost center to a value-added service center. Then, they can start doing that granular charge-back reporting and actually show at a much more fine level the value that they are adding to the organization.

Parker: Different departments, by combining their IT budgets and going with a single private cloud infrastructure, can get a much more reliable infrastructure. By combining budgets, they can afford SAN storage and a virtual infrastructure that supports live VMotion.

They get a fast response, because by putting a cloud management application like Platform's on top of it, they have much more control, because we're providing the interface to the different departments. They can set up servers themselves and manage their own servers. They have a much faster "IT response time," so they don't really have to wait for IT's response through a help-desk system that might take days to add memory to a server.

IT gives end-users more control by providing a cloud management application and also gives them a much more reliable, manageable system. We've been running a private cloud here at Fetch for three years now, and we've seen this. This isn’t some pie-in-the-sky kind of thing. This is, in fact, what we have seen and proven over and over.

Gardner: I asked both Ajay and Rick to tell us about their companies. Jay, why don’t you give us the overview of Platform Computing? It’s based in Toronto and it’s been in the IT business for quite some time.

Muelhoefer: Platform Computing is headquartered in Toronto, Canada and it's about an 18-year-old company. We have over 2,000 customers, and they're spread out on a global basis.

We have a couple of different business units. One is enterprise analytics, the second is cloud, and the third is HPC grids and clusters. Within the cloud space, we offer a cloud management solution for medium and large enterprises to build and manage private and hybrid cloud environments.

The Platform cloud software is called Platform ISF. It's all about providing the self-service capability to end-users to access this diverse set of infrastructure as a service (IaaS), and providing the automation, so that you can get the efficiencies and the benefits out of a cloud environment.

Gardner: Rick, let's go back to you. I've heard this myth that private clouds are just for development, test, and quality assurance (QA). Developers really like cloud. They have unique characteristics as users: lots of uneven demand when they test or need to distribute applications for development and bring them back from those teams. So is that right? Was cloud really formed by developers, and is it getting too much notoriety, or is there something else going on, and it's for test, dev, and a whole lot more?

Beginning of the myth

Parker: I believe that myth just came from the initial availability of VMware and that’s what it was primarily used for. That’s the beginning of that myth.

My experience is that a private cloud isn't for a specific use case. A well-designed private cloud should and can support any use case. We have a private cloud infrastructure, and on top of this infrastructure we can deliver development resources and test resources and QA resources, but they're all sitting on top of the base infrastructure of a private cloud.

But, there isn't just a single use case. It’s detrimental to define use cases for private cloud. I don't recommend setting up a private cloud for dev only, another separate private cloud for test, another separate private cloud for QA. That’s where a use case mentality gets into it. You start developing multiple private clouds.

If you combine those resources and develop a single private cloud, that lets you divide up the resources within the infrastructure to support the different requirements. So, it’s really backward thinking, counter-intuitive, to try to define use cases for private cloud.

Gardner: How about learning from that heritage, though? It’s almost like New York. If you can do it there, you can do it anywhere. Is there something to be said for private cloud supporting the whole test, dev, and deploy or dev/ops type of lifecycle means that it’s probably going to be quite capable at supporting any number of workloads?

Parker: Correct. We run everything on our private cloud. Our goal is 100 percent virtualization of all servers, of running everything on our private cloud. That includes back-office corporate IT -- Microsoft Exchange, services like domain controllers, SharePoint -- and all of these systems run on top of our private cloud out of our data centers.

We don't have any of these systems running out of an office, because we want the reliability and the cost savings that our private cloud gives us by deploying these applications on servers in the data center, where these systems belong.

Muelhoefer: Some of that myth is maybe because the original evolution of clouds started out in the area of very transient workloads. By transient, I mean something like a demonstration environment, or somebody that just needs a development environment for a day or two. But we've seen a transition across our customers, where they also have longer-running applications that they're putting into production types of environments, and they don't want to have to over-provision them.

If at the end of the quarter you need a certain capacity of 10 units, you don't want those 10 units sitting there as resource hogs throughout the entire quarter. You want to be able to flex up and flex down according to the requirements and the demand. Flexing requires a different set of technology capabilities: having the right sets of business policies and defining your applications so they can dynamically scale. I think that's one of the next frontiers in the world of cloud.
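The flexing policy Jay describes can be sketched as a simple control rule. This is a hypothetical illustration -- the thresholds and the function are invented, and real cloud managers such as Platform ISF apply far richer business policies -- but it captures the idea of flexing up under load and releasing capacity when demand falls off:

```python
def flex(current_instances, utilization, min_instances=2, max_instances=10,
         high_water=0.80, low_water=0.30):
    """Return the new instance count: add capacity when the pool runs hot,
    release it when demand falls off, never leaving the allowed band."""
    if utilization > high_water:
        return min(current_instances + 1, max_instances)
    if utilization < low_water:
        return max(current_instances - 1, min_instances)
    return current_instances

# End-of-quarter spike: scale up, then give the capacity back afterward.
assert flex(4, 0.95) == 5   # hot -> add an instance
assert flex(4, 0.10) == 3   # idle -> release one
assert flex(4, 0.50) == 4   # in band -> hold steady
```

The interesting design question is less the rule itself than the application side: as Jay notes, the application has to be defined so it can actually absorb or shed instances dynamically.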

Gardner: Jay, I suppose that's particularly important for organizations in the business-to-consumer (B2C) space that have Web apps and other systems facing their retail or consumer bases. These could flex based on demand or even seasonal fluctuations, and certainly a much more cost-efficient way to attack that problem would be through cloud infrastructure.

Flexing capability

Muelhoefer: We've seen with our customers that there is a move toward different application architectures that can take advantage of that flexing capability in Web applications and Java applications. They're very much in that domain, and we see that the next round of benefits is going to come from the production environments. But it does require you to have a solid infrastructure that knows how to dynamically manage flexing over time.

It’s going to be a great opportunity for additional benefits, but as Rick said, you don't want to build cloud silos. You don't want to have one for dev, one for QA, one for help desk. You really need a platform that can support all of those, so you get the benefits of the pooling. It's more than just virtualization. We have customers that are heavily VMware-centric. They can be highly virtualized, 60 percent-plus virtualized, but the utilization isn’t where they need it to be. And it's all about how can you bring that automation and control into that environment.

Gardner: Next myth, it goes to Ajay. This is what I hear more than almost any other: "There is no cost justification. The cloud is going to cost the same or even more. Folks that seem to think that this is really going to have a long-term benefit are kidding themselves. We've seen this in the past with other shifts in computing. They always claim it's going to cost less, but it never does." So, there is some cynicism out there, Ajay. Why is that cynicism unjustified?

Patel: One of the main things that proves that myth untrue is that when you build a private cloud, you're pooling the capabilities of the IT technology that was building the individual islands of environments. On top of that, you're increasing utilization. Today, I believe overall virtualization in the industry is less than 40 percent; the remaining 60 percent is unvirtualized.

Even if you take 30 percent as the average utilization -- it's 15-20 percent in the Windows environment -- by putting it on a private cloud, you're increasing utilization to 60, 70, 80 percent. If you can hit 85 percent utilization of the resources, you're buying that much less of every piece of hardware, software, storage, and network.

When you pool all the different projects together, you build one environment. You put the right infrastructure in place, with the ability to service your business the way you do successfully today. You end up saving minimally 20 percent, if you just keep the current service level agreements (SLAs) and current deliverables the way they are today.

But, if you retrain your staff to become cloud administrators -- to essentially become more agile in the ability to create the workloads that are virtual-capable versus standalone-capable -- you get much more benefit, and your cost of entry is minimally 20-30 percent lower on day one. Going forward, you can get more than 50 percent lower cost.
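Ajay's utilization arithmetic can be made concrete with a back-of-the-envelope calculation using his own figures of 30 percent siloed utilization versus 85 percent pooled utilization. The total workload number below is invented purely for illustration; only the ratio matters.

```python
# Total sustained workload to serve, in arbitrary "server-units" of work.
workload = 300

# Siloed model: servers average roughly 30% utilization.
siloed_capacity = workload / 0.30   # 1000 units of capacity needed

# Private cloud: pooled servers run at roughly 85% utilization.
cloud_capacity = workload / 0.85    # about 353 units of capacity needed

savings = 1 - cloud_capacity / siloed_capacity
print(f"capacity needed drops by {savings:.0%}")  # prints "capacity needed drops by 65%"
```

At those utilization figures the hardware footprint shrinks by about two-thirds, which is consistent with (indeed more aggressive than) the conservative 20-50 percent cost reductions Ajay cites, since real savings are diluted by migration and management costs.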

Gardner: I would imagine that for large organizations, in some cases, their constraints, their physical plants, their large brick-and-mortar data centers are at capacity. So this isn't simply saving costs operationally, but frees up capacity that they can use for other activities, and therefore not have to build additional data centers. That could be a huge savings.

Patel: It's killing two birds with one stone. Not only can you reclaim the elasticity of a 100,000-square-foot data-center facility, but you can now put in two to three times more compute capacity without breaking the barriers of power, cooling, and heating and all the other components. And by having cloud within your data center, the disaster-recovery capabilities of cloud failover are inherent in the framework of the cloud.

You no longer have to worry about individual application-based failover. Now, you're looking at failing over an infrastructure instead of applications. And, of course, the framework of cloud itself gives you a much higher availability from the perspective of hardware up-time and the SLAs than you can obtain by individually building sets of servers with test, dev, QA, or production.

Gardner: Ajay, when we talk about cost, I suppose another important criterion here is comparing old processes and methods to the new. Are there any metrics that you've been able to gather about how private cloud compresses and/or improves on how IT has done things?

Days to hours

Patel: Operationally, beyond the initial setup of the private cloud environment, the cost to IT goes down drastically. Based on our interactions with end-users and our cloud providers, the time to stand up an environment drops from anywhere between 11 and 15 days down to three or four hours.

Those 11 to 15 days start with the hardware sitting on the dock in the old infrastructure deployment model, versus the cloud model. When you break them down into individual components, it takes one to three days just to build the server, rack it, power it, and connect it.

Today, within the private cloud environment, it takes 10 minutes to install the operating system; it used to take one to two days, maybe two-and-a-half, depending on the patches and add-ons. And setting up the dev environments at the application layer, starting from a template available within the private cloud, goes down from days to 30-60 minutes.
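Patel's before-and-after numbers can be tallied with a quick sketch. The per-step durations below are assumptions chosen from the ranges he cites, not measured figures:

```python
# Illustrative tally of the deployment-time comparison above.
# Per-step durations are assumptions drawn from the cited ranges.

HOURS_PER_DAY = 8  # assume an 8-hour working day

traditional = {
    "build, rack, power, and cable the server": 2 * HOURS_PER_DAY,  # 1-3 days
    "install the operating system":             2 * HOURS_PER_DAY,  # 1-2.5 days
    "set up the application/dev environment":   2 * HOURS_PER_DAY,  # "days"
}

private_cloud = {
    "provision a VM from the shared pool":       0.5,      # no hardware on the dock
    "install the OS from an image":              10 / 60,  # "10 minutes today"
    "build the dev environment from a template": 1.0,      # 30-60 minutes
}

totals = {
    "traditional": sum(traditional.values()),
    "private cloud": sum(private_cloud.values()),
}
print(totals)  # days of elapsed work shrink to a couple of hours
```

Even before counting procurement lead time, the working-hours total collapses by more than an order of magnitude, which is the gap Patel is describing.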

When you combine all that, the operational efficiency you gain definitely puts your IT staff at a much greater advantage than your competitor.

Gardner: Ajay just pointed out that there is perhaps a business-continuity benefit here. If your cloud is supporting infrastructure, rather than individual apps, you can have failover, reliability, redundancy, and disaster recovery at that infrastructure level -- and therefore across the board.

Is that something that you're seeing your customers use, or is there a hybrid benefit as well? That's a roundabout way of asking: what's the business-continuity story, and does it perhaps provide a stepping stone to hybrid computing models?

Parker: To backtrack just a little bit, at Fetch Technologies, we've cut our data-center cost in half by switching to a private cloud. That's just one of the cost benefits that we've experienced.

Going back to the private cloud cost, one of the myths is that you have to buy a whole new set of cloud technology -- cloud hardware -- to create a private cloud. That's not true. In most cases, a number of the components of a private cloud are just redeployed existing hardware, because the cloud network is more a matter of configuration than of specific cloud hardware.

In other words, you can reconfigure existing hardware into a private cloud. You don't necessarily need to buy anything, and there is really no such thing as specific cloud hardware. There are some hardware systems and models that are more optimal in a private cloud environment, but that doesn't mean you need to buy them to start. You can use the initial cost savings from virtualization to pay for more optimal hardware later, but you don't have to start with the most optimal hardware to build a private cloud.

As far as business continuity goes, what we've found is that the benefit is more for uptime maintenance than for reliability, because most systems are fairly reliable. You don't have servers failing on a day-to-day basis.

Zero downtime

We have systems -- at least one server -- that have been up for two years with zero downtime. For updating firmware, we can VMotion virtual machines off to other hosts, upgrade the host, and then VMotion those virtual servers back onto the upgraded host, so we have zero-downtime maintenance. That's almost more important than reliability, because reliability is generally fairly good.
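The maintenance pattern Parker describes can be sketched as a simple drain-upgrade-restore loop. The `Host` class and `migrate()` function below are hypothetical stand-ins for a real hypervisor API such as VMware's VMotion, shown only to make the sequence of steps concrete:

```python
# Sketch of zero-downtime host maintenance via live migration:
# drain the host, upgrade it while empty, then move the VMs back.

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []          # VMs currently running on this host
        self.firmware = "v1"

def migrate(vm, src, dst):
    """Live-migrate a VM; the guest keeps running throughout."""
    src.vms.remove(vm)
    dst.vms.append(vm)

def upgrade_with_zero_downtime(target, spare, new_firmware):
    evacuated = list(target.vms)
    for vm in evacuated:            # drain the target host
        migrate(vm, target, spare)
    target.firmware = new_firmware  # safe: the host is now empty
    for vm in evacuated:            # restore the original placement
        migrate(vm, spare, target)

a, b = Host("esx-a"), Host("esx-b")
a.vms = ["web01", "db01"]
upgrade_with_zero_downtime(a, b, "v2")
print(a.firmware, a.vms)  # v2 ['web01', 'db01']
```

The guests never stop; only their placement changes while the firmware is applied.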

Gardner: Rick, at Fetch Technologies, we've been talking about cloud computing at an almost abstract level, but for end users -- the folks who are actually using these applications -- are there some important benefits that we haven't looked at yet?

Parker: Yes. The response that we got from the QA engineers that we rolled out Platform to was that it was the greatest thing since sliced bread, because they're able to deploy new virtual machines when they wanted to, when they needed them. They could change the configuration of the virtual machines.

They weren't waiting for IT to respond to requests. The almost ecstatic feedback from the end-users was unlike that for all but a very few other applications we've deployed. That was extremely important.

Gardner: Jay Muelhoefer at Platform, is there another underlying value here -- that moving to private cloud puts you in a better position to start leveraging hybrid cloud, that is to say, more SaaS, using third-party clouds for specific IaaS, or perhaps over time moving part of your cloud into their cloud?

Is there a benefit in terms of gaining expertise around private cloud that sets you up to enjoy some of the benefits of the more expansive cloud models?

Muelhoefer: That's a really interesting question, because one of the main reasons that a lot of our early customers came to us was because there was uncontrolled use of external cloud resources. If you're a financial services company or somebody else who has compliance and security issues and you have people going out and using external clouds and you have no visibility into that, it's pretty scary.

We offer a way to provide a unified view of all your IT service usage, whether it's inside your company being serviced through your internal organization or potentially sourced through an external cloud that people may be using as part of their overall IT footprint. It's really the ability to synthesize and figure out -- if an end user is making a request, what's the most efficient way to service that request?

Is it to serve up something internally or externally, based upon the business policies? Is it using very specific customer data that can't go outside the organization? Does it have to use a certain type of application where there's a latency issue about how it's served? It's being able to provide a lot of business-policy context about how to best serve that request, whether the objective you're working against is cost, compliance, or security.
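The placement decision Muelhoefer describes amounts to a small policy function over each request. The rule set and field names below are illustrative assumptions, not Platform ISF's actual API:

```python
# Sketch of policy-driven workload placement: route each request to an
# internal or external cloud based on compliance, latency, and cost.
# Field names and rules are illustrative, not a real product's API.

def place(request):
    if request.get("contains_restricted_data"):
        return "internal"   # compliance: data cannot leave the company
    if request.get("latency_sensitive"):
        return "internal"   # avoid WAN latency to an external cloud
    if request.get("internal_capacity_free", 0) >= request["cores"]:
        return "internal"   # cheapest when internal capacity is idle
    return "external"       # otherwise burst to a public cloud

print(place({"contains_restricted_data": True, "cores": 4}))   # internal
print(place({"cores": 64, "internal_capacity_free": 16}))      # external
```

Real policy engines evaluate far richer attributes, but the shape is the same: business rules first, then cost-based placement.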

That’s one key thing. Another important aspect we do see in our customers is the disaster recovery and reliability issue is very important. We've been working with a lot of our larger customers to develop a unique ability to do Active/Active failover. We actually have customers that have applications that are running real-time across multiple data centers.

So, in the case of not just the application going down, but an entire data center going down, they would have no loss of continuity of those resources. That’s a pretty extreme example, but it goes to the point of how important meeting some of those metrics is for businesses and for making the cost justification.

Stepping stone

Gardner: We started out with some cynicism, risks, and myths, but it sounds like private clouds are not only a stepping stone; they're attainable today. The cost structure sounds very attractive, certainly based on Rick's and Ajay's experiences.

Jay, where do you start with your customers for Platform ISF, when it comes to ease of deployment? Where do you start that conversation? I imagine that they are concerned about where to start. There is a big set of things to do when it comes to moving towards virtualization and then into private cloud. How do you get them on a path where it seems manageable?

Muelhoefer: We like to engage with the customer and understand their objectives and what's bringing them to look at private cloud. Is it the ability to be much more agile and deliver applications in minutes to end users, is it more on the cost side, or is it a mix of the two? It's engaging with them on a one-on-one basis and/or working with partners like Agilysys, where we can build out that roadmap for success. That typically involves understanding their requirements and doing a proof of concept.

Something that’s very important to building the business case for private cloud is to actually get it installed and working within your own environment. Look at what types of processes you're going to be modifying in addition to the technologies that you’re going to be implementing, so that you can achieve the right set of pooling.

Maybe you’re a very VMware-centric shop, but you don’t want to be locked into VMware, so you want to look at KVM or Xen for non-production-type use cases. Are you looking at how you can make yourself more flexible and leverage those external cloud resources? How can you bring physical machines into the cloud and do it at the right price point?

A lot of people are looking at the licensing issue of cloud, and there are a lot of different alternatives, whether it's per VM, which is quite expensive, or other alternatives like per socket and helping build out that value roadmap over time.

For us, we have a free trial on our website that people can use. They can also go to our website to learn more which is http://www.platform.com/privatecloud. We definitely encourage people to take a look at us. We were recently named the number one private cloud management vendor by Forrester Research. We are always happy to engage with companies that want to learn more about private cloud.

Gardner: Very good. We’ve covered quite a bit of a ground, but we're out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on debunking myths on the road to cloud computing adoption. I want to thank our guests. We've been joined by Ajay Patel, the Technology Leader at Agilysys. Thanks so much, Ajay.

Patel: Thank you very much for your time, Dana.

Gardner: And, Rick Parker, IT Director at Fetch Technologies. Thank you, sir.

Parker: You’re welcome.

Gardner: And last, Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing. Thank you, Jay.

Muelhoefer: Thanks Dana. I appreciate it.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Get a complimentary copy of the Forrester Private Cloud Market Overview from Platform Computing

Transcript of a sponsored podcast on the misconceptions that slow some enterprises from embracing private cloud models. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:

Wednesday, February 03, 2010

CERN’s Evolution to Cloud Computing Portends Revolution in Extreme IT Productivity?

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on some likely directions for cloud computing based on the exploration of expected cloud benefits at a cutting edge global IT organization.

We are going to explore the thinking on how cloud computing both the private and public varieties might be useful at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today, Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Welcome, Tony.

Tony Cass: Pleased to meet you.

Gardner: We’re also here with Steve Conway, Vice President in the High Performance Computing Group at IDC. Welcome, Steve.

Steve Conway: Thanks. Welcome to everyone.

Gardner: And, we're also here with Randy Clark, Chief Marketing Officer at Platform Computing. Welcome Randy.

Randy Clark: Thank you. Glad to be here.

Gardner: Over the last several years, we've seen cloud computing become quite popular as a concept. It remains largely confined to experimentation, but this notion of private cloud computing is being scoped out by many large and influential enterprises as well as large early adopters like CERN.

Let me go to you Steve Conway. What's the difference between private and public cloud and how far away are any tangible benefits of cloud computing from your perspective?

Already here

Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

Gardner: So, this notion of having a cohesive approach to computing and blending what you do on premises with these other providers isn't just pie in the sky. This is really something people are serious about.

Conway: Well, CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Gardner: Randy Clark, from your perspective, how many customers of Platform Computing would you consider to be seriously evaluating what we now refer to as public or private cloud?

Clark: We have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily a third and possibly more.

Gardner: Tony Cass, let's go to you at CERN. Tell us first a little bit about CERN for those of our readers who don’t know that much or aren't that familiar. Tell us about the organization and what it does, and then we can start to discuss your perceptions about cloud.

Cass: We're a laboratory that exists to enable, initially Europe’s and now the world’s, physicists to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? They're really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

Gardner: So, to look at the IT problem and unpack it a little bit. You're dealing with such enormous amounts of data. You’ve been in the distribution of these workloads for quite some time. Maybe you could explain a little bit the evolution of how you've distributed and managed such extreme workload?

No central management

Cass: If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes, and people had to submit the workloads -- the analysis or the Monte Carlo simulations -- that their experiments needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

A lot of the grid academics have focused on understanding and exploring what could be done with the grid as an idea. What we've been focusing on is making it work -- pushing the envelope not in terms of the technology, but in terms of scale, to make sure that it works for the users. We connect the sites, we run tens of thousands of jobs a day across them, and gradually we've run through a number of exercises to distribute the data at gigabytes a second.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

Gardner: As I understand it, the interest you have in cloud isn’t strictly a matter of ripping and replacing, but augmenting what you're already doing vis-a-vis these grid models.

Cass: Exactly. The grid solves the problem in which we have data distributed around the world, and it will send jobs to the data. But there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks a batch slot will become available at site A first. But maybe a slot becomes available at site B while my job is stuck at site A. Somebody else who comes along later actually gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull my job down to that site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.

These skeleton jobs also install software, because they don’t really trust the sites to have installed the software correctly. So, there's a lot of inefficiency there. This is symptomatic of a more general problem. Batch workers are good at sharing resources that are relatively static, but not when the demand for resource types changes dynamically.

So, we’re looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to work in virtualization and cloud with the grid, which knows where the data is.

Gardner: Steve Conway, you’ve been tracking HPC for some time at IDC. Maybe you have some perceptions on how CERN is a leading adopter of IT over the years, the types of problems they're solving now, or the types of problems other organizations will be facing in the future. Could you tell us about this management issue and do you think that this is going to become a major requirement for cloud computing?

World technology leader

Conway: Starting with CERN, their scientists have earned multiple Nobel prizes over the years for their work in particle physics. As you said before, CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors, called CMS and ATLAS. ATLAS alone generates a petabyte of data per second when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale of the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, as Tony said, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute about $16 billion as a market.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.

Gardner: How important is this issue that Tony brought up about being able to manage in a dynamic environment and not just more predictable static batch loads?

Conway: It’s the single biggest challenge we see, not only for cloud computing; it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Gardner: Randy Clark, as you look at the marketplace and see organizations like CERN changing their requirements, what, in your thinking, is the most important missing part from what you would do in management with HPC and now cloud? What makes cloud different, from a management perspective?

Dynamic resources

Clark: It’s what Tony said, which is having the resources be dynamic not static. Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

Gardner: Let’s go back to Tony Cass. Tell me what you’re doing with cloud in terms of exploration. I know you’re not in a position to validate, or you haven’t put in place, any large-scale implementation or solutions that would lead the market. But, I’m very curious about what the requirements are. What are the problems that you're trying to solve that you think cloud computing specifically can be useful in?

Cass: The specific problem that we have is to deliver the most physics we can within the fixed budget and the fixed amount of resources. These are limited either by money or by data-center cooling and generally are much less than the experiment wants. The key aim is to deliver the most cycles we can and the most efficient computing we can to the physicists.

I said earlier that we're looking at virtualization to do this. We’ve been exploring how to make sure that jobs can work in a virtual environment, and that we can instantiate virtual machines (VMs) as necessary, according to the different experiments submitting workloads at any one time -- integrating the instantiation of VMs with the batch system.

Once we got that working, we figured that the real problem was managing the number of VMs. We have something like 4,000 boxes, but if you have a VM per core, plus a few spares, then it can easily get to 60,000, 70,000, or 80,000 VMs. Managing these is the problem that we're trying to explore now, moving away from “can we do it” to “can we do it on a huge scale?”
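Cass's estimate follows directly from the hardware scale he mentions, assuming roughly one VM per processor core. Only the 4,000-box figure comes from the discussion; the core count and spare fraction below are assumptions:

```python
# Back-of-the-envelope check of the VM counts mentioned above.
boxes = 4000
cores_per_box = 16        # assumed -- not stated in the discussion
spare_fraction = 0.10     # "plus a few spares"

vms = boxes * cores_per_box * (1 + spare_fraction)
print(int(vms))  # lands in the 60,000-80,000 range Cass cites
```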

Gardner: Are you yet at the point where you want to manage the VMs under your own control, and perhaps start deploying virtualized environments and workloads in someone else’s cloud and managing them in a complementary way?

Cass: There are two aspects to that. The resources in our community are at other sites, and all of the sites are very independent. They are also academic environments. So, they are exploring things in their own way as well. At the moment, we're looking at how you can reliably send a virtual image that's generated at one place to another site.

Amazon does this, but there are tight constraints in the way they manage that cluster, because they built it thinking about this. Universities maybe didn’t build their own cluster in a way that separates that out from some of the other computing they're doing. So, there are security and trust implications there that we are looking at. That will be a thing to collaborate on long-term.

More cost effective

Certainly, if we configure things in our own way, when we look at a cloud environment, perhaps it will be more cost-effective for us to purchase only the equipment we need for the average workload and then buy resources from Amazon or other providers for the peaks. But there are interesting things you have to explore about the fact that the data is not at Amazon, even if they have the cycles.

There are so many things that we’re thinking about. The one we’re focusing on at the moment is effectively managing the resources that we have here at CERN.

Gardner: Steve Conway, it sounds as if CERN has, with its partnered network, a series of what we might call private-cloud implementations and they're trying to get them to behave in concert at what we might call at a public cloud level. That exercise could, as with the World Wide Web, create some de-facto standards and approaches that might, in fact, help what we call hybrid cloud computing moving forward. Does that fairly surmise where we are?

Conway: That’s right. There are going to have to be more rigorous open standards for clouds. What Tony was talking about at CERN is something that we see elsewhere. People are turning to public clouds today -- "turning to" meaning just exploring at this point -- as a way to handle overload work and surge workloads.

The Internet itself is a pretty high-latency network, if you think of it that way. People are looking to send out the portions of the workload that don't have a lot of communication dependencies -- particularly inter-processor communication dependencies -- because the latency doesn't support those.

But, we're seeing some smaller and medium-size businesses looking to public clouds as a way to avoid having to purchase their own internal resources, clusters for example, and also as a way of avoiding having to hire experts who know how to operate them. For example, engineering services firms don't have those experts in house today.

Gardner: Back to you, Tony Cass. I know this is still a bit hypothetical, but if the standards were in place and you were able to go to a third-party cloud provider for some of these spikes or occasional dynamically generated workloads that exceed your current on-premises capabilities, would this be a financial boon to you, where you could protect your pricing and decide the right supply-and-demand fit for these extreme computing problems?

Cass: It would certainly be a boon. The possibility is being demonstrated by experiments based at Brookhaven that run CPU-intensive simulations, where they don't need much data transfer or data access. They have been able to run simulations cost-effectively on EC2.

Although their cycles, compared to some of the things we're doing, are more expensive, if we don't have to buy all of the resources, we could certainly save money. Another aspect goes beyond money in some sense. If you need to get something finished for a conference, and you're desperately trying to decide whether or not you’ve discovered the Higgs, it's not that money is no object, but you can get the resources from a cloud much more quickly than you can install capacity at CERN. So both aspects are definitely of interest.

Gardner: Randy Clark, this makes a great deal of sense from the perspective of a large research organization. But, we're not just talking about specific workloads. We're talking about workloads that will be common across many other vertical industries or computing environments. Can you name a few, or mention some from your experience, where we should expect the same sorts of economic benefits to play out?

Different use cases

Clark: What we're seeing is across industries. Financial services is certainly taking a leadership role. There's a lot going on in the semiconductor or electronic industry. Business intelligence (BI) is across industries and government. So, across industries, we see different use cases.

To your point, these use cases are enterprise applications to run the business, and we're seeing that in Java applications, test and development environments, and traditional HPC environments.

That's something driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally and, to your point, in enterprise and HPC applications.

Gardner: Steve Conway, thinking about these large datasets, Randy brought up BI, and that, of course, means warehousing, data analytics, and advanced analytics. A lot of organizations are creating datasets at a scale never anticipated, never mind seen before, things from sensors, mobile devices, network computing, or social networking.

How do we bring together these compute resources -- the raw power -- with these large data sets? I think this is an issue where CERN might also be a bellwether, in managing these large data sets and the compute power and bringing them architecturally into alignment.

Conway: BI is one of those markets that, in its attributes, straddles the world of HPC and enterprise computing just as financial services does, in the sense that they have workloads that don't have a whole lot of communication dependencies. They don't need networks with very low latency, for the most part.

You see organizations like the University of Phoenix, which has 280,000 online students, that have already made this evolution -- in this case, with Platform helping them out -- from clusters to grid computing today. Now, they're looking toward cloud computing as a way to take them further.

You also see that not just on the private-sector side. One of the other active customers that's really looking in that same direction is the Centers for Disease Control (CDC), which has moved from clusters to grid computing.

What you're seeing here is people who have already stepped through the earlier stages of this evolution. They've gone from clusters to grid computing for the most part and now are contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.

Gardner: Tell us about some of the solutions that you're bringing to market, or now need to bring to market, around management and other issues. Where have you found that the rubber hits the road, in terms of where people can take this in real time? What's the current state of the art? Rather than talking about hypotheticals, what's now possible when it comes to moving from cluster and grid to the revolution of cloud?

Interaction of technologies

Clark: What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic: the ability to add virtualization on the resource side, and then, on the service side, to make it Internet accessible, have a service catalog, and move from providing IT support to delivering IT as a truly competitive service.

The state of the art is that you can get the best of Amazon (ease of use, cost, and accessibility) combined with the configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Gardner: Randy, what are some typical ways that you're seeing people getting started, when they want to make these leaps from evolutionary progression to revolutionary paybacks? Where do they start making that sort of catalytic difference?

Clark: The evolution is the technology, as Steve said. The revolution is in the approach architecturally to how to get to that new spot.

Taking a step back, we see customers thinking architecturally about how they want that management layer to work. What is that management layer going to mean to them going forward? And can they quickly identify a set of applications and resources and get started?

So, there is an architecture piece to it, thinking about what the future will hold, but then there is a very pragmatic piece -- let's get going, let's engage, let's build something and be able to scale that out over time. We saw that approach in grid computing. We're encouraging folks to think, but then also to get started.

Gardner: Tony Cass at CERN, what are your next steps? Where would you expect to be heading next as you explore the benefits and possible real-world opportunities?

Cass: We’re definitely concentrating for the moment on how we exploit resources effectively here. The wider benefits we'll have to discuss with our community.

Gardner: What would you like to see happen next?

Focusing on delivery

Cass: What I would like to see happen next is a definite cloud environment at CERN, where we move from something that we're thinking about to something that is in operation, where we have the ability to use resources that aren’t primarily dedicated to physics computing to deliver cycles to the experiments. I'd like to see a cloud, a dynamically evolving environment, in our computer center. We’re convinced it's possible, but delivering that is what we’re focusing on.

Gardner: Steve Conway, where do you see things headed next? What are the next steps that we should look for, as we move from that evolutionary progression to more of a revolutionary productivity?

Conway: It's along a couple of dimensions. One is the dimension of people actually working in these environments. In that sense, the CERN-Platform collaboration is going to help drive the whole state of the art forward over the next period of time.

The other, as Randy mentioned before, is that the evolution of standards is going to be important. For example, right now, one of the barriers to public-cloud computing is vendor lock-in, where the clouds, the Amazons, the Yahoos, and so forth, are not necessarily interoperable. People are a little bit concerned about trusting their data there. The evolution of standards is going to accelerate this trend.

Gardner: Why don’t I give the last word today to Randy? Tell us about some information that's available out there for folks who are looking to explore and take some first steps toward this more revolutionary benefit.

Clark: I'd encourage everybody to visit our website. There are a number of white papers, webinars, and webcasts that we've done with customers to highlight use cases within development, test, and production environments. I'd point people to the resource page on our website, www.platform.com.

Gardner: I want to thank our guests. This has been a very interesting discussion, and I certainly look forward to following what CERN does, because I do think that they’re going to be a leader in terms of what many others will end up doing in B2B cloud computing.

Thank you to Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Thank you, sir.

Cass: Thank you.

Gardner: And also a good, big thank you to Steve Conway, Vice President in the High Performance Computing Group at IDC. Thank you, Steve.

Conway: Thanks.

Gardner: And also, of course, thank you to Randy Clark, Chief Marketing Officer at Platform Computing.

Clark: Thank you for the opportunity.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast on what likely outcomes we can expect from cloud computing and architecture, on the progression from grid to cloud computing, and moving into a more revolutionary set of benefits. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: