
Tuesday, October 07, 2014

MIT Media Lab Computing Director Details the Virtues of Cloud Computing for Agility and DR

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you directly from the VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT strategy discussions.

We’re here in San Francisco the week of August 25 to explore the latest developments in hybrid cloud computing, user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the MIT Media Lab in Cambridge, Massachusetts, and how it's exploring the use of cloud and hybrid cloud computing and enjoying such benefits as speed, agility, and disaster recovery (DR).

To learn more about how the MIT Media Lab is using cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. Welcome.

Michail Bletsas: Thank you. 

Gardner: Tell us about the MIT Media Lab. How big is the organization? What’s your charter?

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab and, unlike a normal IT department, we're kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. You're going to see many more devices here than in other departments. We're on a pretty complex network and we run a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, we still build exotic stuff, but we're using commodity components. VMware, for us, is a major piece of this strategy, because it allows us to utilize our resources more efficiently and to rein in the server proliferation that we, and everybody else, have experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between virtual machines (VMs), physical computers, and devices, but there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to easily deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don't like long learning curves, because we don't have the resources and we do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.

Gardner: So your requirements are to support those 350 people with dynamic workloads, many devices. What is it that you needed to do in your data center to accommodate that? How have you created a data center that’s responsive, but also protects your property, and that allows you to reduce your security risk?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it's on the order of 3,500 users, as opposed to 350. It's very dynamic in that it changes month to month. An important attribute of an environment like this is that we can't have too many restrictions. We don't have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn't impose too many restrictions on what operating systems you use or how the VMs are configured. That's why we find that solutions like the general public cloud are, for us, applicable only to a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: Of course, virtualization sounds like a great fit when you have such dynamic, different, and varied workloads. But what about taking advantage of public and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover? What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use a hybrid cloud right now that's three-tiered. MIT has a very large campus, with extensive digital infrastructure running our operations across the board. We also have facilities that are either all the way across campus or across the river in a large colocation facility in downtown Boston, and we take advantage of those for first-level DR.

A solution like vCloud Air allows us to plan for a real disaster scenario, where something really catastrophic happens at the campus. We use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It’s a second level for us. We have our own VMware infrastructure and then we can migrate loads to our central organization. They're a much larger organization that takes care of all the administrative computing and general infrastructure at MIT at their own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].

So it’s a very seamless transition using the same tools. The important attribute here is that, when an operation this small, with 10 people, has to deal with such a complex set of resources, you can't do it unless you have a consistent user interface that allows you to migrate those workloads using tools that you already know and are familiar with.

We couldn’t do it with another solution, because the learning curve would be too steep. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole new set of tools and APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: I'd like to explore a little bit more this three-tiered cloud using common management, common APIs. It sounds like you're essentially taking metadata and configuration data, the things that will be important to spin back up an operation should there be some unfortunate occurrence, and putting that into that public cloud, the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that tend to drive a lot of our websites. A lot of our internal systems drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

It's lucky for us, because we're not such a big operation. We're relatively small, so you can include everything, including all the methods and the programs that you need to access and manipulate that data, within a small set of VMs. You don't normally run them out of those VMs, but you can keep them packaged in a way that, in a DR scenario, gives you easy access to them.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the old days, we didn't have that multi-tiered cloud in place. All we had was backups in remote data centers. If something happened, you had to go in there, find some unused hardware that was similar to what you had, restore your backup, and so on.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this." It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It's a combination of having visibility and nice dashboards that reveal patterns of activity that you might not be aware of, even in an environment that's not as large as ours.

Gardner: We’re here at VMworld 2014. There's been quite a bit of news, particularly in the vCloud Air arena. We've talked and heard about betas for ObjectStore and for virtual private cloud. Are these of interest to you now that you’ve done a hybrid cloud using DR-as-a-service? Does anything else intrigue you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don’t run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don’t have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and fully under our control, is very important for us.

It allows for much faster DR. We can do DR across town right now using the same IPs, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and things like that. That is important.

The other trend that is also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to put all of its storage on very high-end systems, having a little bit of expensive flash storage and then a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

Gardner: The benefits of having a highly virtualized environment -- including the data center and the end-user computing endpoints -- give you that flexibility of taking workloads and apps from development to test to deployment. So there's a common infrastructure approach there, but also a common infrastructure across cloud, hybrid cloud, and DR.

So it’s sort of a snowball effect. The more virtualization you're adopting, the more dynamic and agile you can be across many more aspects of IT.

Bletsas: For us, experimentation is the most important thing. Spinning up a large number of VMs to do a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. What we get out of that is flexibility, agility, and speed of operations.

In the old days, you had to procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in them. We still go from time to time, but not as often as we used to, and that's very liberating. It's also very liberating for people like me, because it allows me to do my work anywhere.

Gardner: Very good. I'm afraid we’ll have to leave it there. We’ve been discussing the virtues of cloud computing and hybrid cloud computing with the MIT Media Lab. I’d like to thank our guest, Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab in Cambridge, Mass. Thanks so much.

Bletsas: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the 2014 VMworld Conference in San Francisco.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Monday, September 23, 2013

Navicure Gains IT Capacity Optimization and Performance Monitoring Using VMware vCenter Operations Manager

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed advanced virtualization to meet the demands of an ever-growing business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2013 VMworld Conference in San Francisco. We're here the week of August 26 to explore the latest in cloud-computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout the series of VMware-sponsored BriefingsDirect discussions.

Our next innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. We're going to hear how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're using dashboards and other analysis to tame IT complexity, and gain better return on their IT investments, please join me in welcoming Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia. Welcome, Donald. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Donald Wilkins: Glad to be here.

Gardner: Tell us a little bit about why your organization is focused on taming complexity. Is this focus a result of cost, of complexity itself, or both?

Wilkins: At Navicure, we've been focused on scaling a fast-growing business. If you incorporate very complex infrastructure, it becomes more difficult to scale. So we're focused on technologies that are simple to implement, yet have a lot of headroom for growth across the storage, the infrastructure, and the software we use. We do that in order to be able to scale at the rate we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 timeframe. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system. We're growing at a rate of 8,000 to 10,000 physicians a year, and it's healthy growth. We don't want to grow too fast, so as not to water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: As I understand it, you're an Internet-based medical claims clearinghouse. Tell us what that boils down to. What is that you do?

Revenue cycle management

Wilkins: Claims clearinghouses have been around for a couple of decades now. We've evolved from that claims-clearinghouse model to what we refer to as revenue cycle management, a term we pioneered early on as we started the company.

We take transactions from physicians and send them to the insurance companies. That's what the clearinghouse model is. But on top of that product, we added a lot of value-added services and a lot of analytics around those transactions to help providers generate more revenue: they get paid faster, and they get paid the first time through the system.

It was very costly for transactions to be delayed weeks because of poorly submitted transactions to the insurance company or denials because they coded something wrong.

We try to catch all of that, so that they get paid the first time through. That's the return on investment (ROI) that our customers are looking for when they look at our products: to lower their accounts-receivable (AR) days and to increase their bottom-line revenue.

Gardner: Tell us a little bit about your IT environment. What do you have in your data center? Then, we'll get to how you've been able to better manage it.

Wilkins: The first thing we did at Navicure, when we started the company, was to decide that we didn't want to be in the data-center business. We wanted to use a colocation provider that does that work at a much higher level than we ever could. We wanted to focus on our product and let the colo focus on what they do.

They serve us from an infrastructure standpoint, and then we can focus on our products and build a good product. With that, we adopted, very early on, the grid approach, or the rack approach. This means that we wanted to build a foundational structure that we could just build on as we grew the business and the transaction volume.

That terminology has changed over the years, and it can be referred to as software-defined infrastructure today, but back then it meant that we wanted infrastructure with a grid approach, so we could plug in more modules and components to scale out as we scaled up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.

Gardner: And how did you begin your path to virtualization, and how did that progress into this more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we had enough headroom in our infrastructure that it wasn't a big issue, but about four years in, we started realizing that we were going to hit a point where we would have to start ramping up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we had to start adding infrastructure. Virtualization came along at such a time that we could add capacity virtually faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, and it allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. And getting better management means further optimization, which further reduces costs. That also, of course, maintains their performance requirements. How did you then focus in on managing and optimizing this over time?

Wilkins: Well, one of the things we try to do, when we look at products and services, is to keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate. We look at all the different tools to make that happen. This has led us to vendors like VMware, as they have also tried to simplify their product offerings with their new products.

Gardner: Which products you are using? Maybe you could be more specific about what's working best for you?

Wilkins: For years, we've been doing monitoring with network-based monitoring tools. Those drive only so much value. They give us things like uptime alerting and responsiveness, but only after issues happen. We want to evolve to be more proactive in our approach to monitoring.

It’s not so much about how we can fix a problem when there is one. It’s more of, let’s keep the problem from happening to start with. That's where we've looked at some products for that. Recently we've actually implemented vCenter Operations Manager.

That product gives us a different twist than other SNMP monitoring tools do: not just a history of what's going on, but also an analysis of how that history will change in the future, based on our historical trends.

New line-up

Gardner: Of course, here at VMworld, we're hearing vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is particularly of interest to you, and have you had a chance to look at over?

Wilkins: I haven’t had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanisms inside the product that drive the most value, making sure that we can budget a year in advance for the infrastructure expansion we need to meet demand.

Gardner: What sort of paybacks are there? Do you have any sense on a metrics or ROI basis? What you have been able to gain maybe through virtualization generally, and then the improved operations of those of workloads over time?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it's hard to pin a number on it. We're growing so fast that we're just trying to keep up with demand, and we've been meeting and exceeding that.

Really, the ROI is that our customers aren't experiencing major troubles from our infrastructure not expanding fast enough. That's our goal: to drive high availability and low downtime for our infrastructure, and we can do that with VMware and their products and services.

Gardner: How about looking to the future, Donald? Do you have any sense of whether things like disaster recovery or mobile support, perhaps even hybrid cloud services, will be something you would be interested in as you grow further?

Wilkins: We're a current customer of Site Recovery Manager. That's a staple in our virtual infrastructure and has been since 2008. We've been using that product for many years. It drives all of the planning and the testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product and services for years, and we couldn’t do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

As a small business, the value of that becomes a little harder to prove from a dollar standpoint. Some of those features like remote working come into play as office space continues to be expensive. It's something we will be looking at to expand our operations, especially as we have more remote employees working. Desktop virtualization is going to be a critical component for that.

Gardner: How about some 20/20 hindsight. If there were other folks that were ramping up on virtualization, or getting to the point where complexity was becoming an issue for them, do you have any thoughts on getting started or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may bring themselves to light.

I had a very trusted partner when I started this in 2005-2006. They actually just sat with me and worked with me, with no compensation whatsoever, to help work through virtualization. They made it such an easy value that it just became, "I've got to do this, because there's no way I can sustain this level of operational expense and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner. It has to be a partner that has your best interest in mind, and not so much a new product to sell. It’s going to be somebody that brings a lot to the table, but, at the same time, helps you help yourself and lets you learn these products, so that you can actually implement it and research it on your own to see what value you can bring into the company.

It’s easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, you realize you have to do this, and you'll do whatever it takes to get it in there, because it will make your life easier.

Gardner: How about specific advice for mid-market organizations, not too large? Is there something about dashboards, or a single pane of glass, that eases getting a sense, as the head of IT, of all the systems in your organization? Is there anything in particular that helps on that visualization basis that you would recommend others consider?

Wilkins: Well, vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

While that product allows you to do a lot of research into various problems and services, to drill down from the cluster level into the virtual machine level and find out where your problems and pain points are, it also allows you to more quickly isolate the issue. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute resources, or network resources.

That's where we're seeing value out of the product, because it allows me to go in during budget cycles and say that, looking at our infrastructure and current growth, we will be out of resources by a certain time and need to add this much, barring additional new products and services we may come up with. We're growing at this pace, and here are the numbers to prove it.

When you have that information in front of you, you can build a business case around it that educates the CFO and the finance people about what you have to deal with on a day-to-day basis to operate the business.

Gardner: It must feel good to have some sense of being future-proof; no matter what comes down the road, you're going to be prepared for it.

Wilkins: Most definitely.

Gardner: Well, great. We'll have to leave it there. We've been talking about how an organization is gaining better control and optimization over its IT infrastructure, and we've heard how Navicure has been deploying a comprehensive monitoring and operational management approach.

So a big thank you to our guest. We've been here with Donald Wilkins, Director of IT at Navicure. Thanks, Donald.

Wilkins: My pleasure. Thank you.

Gardner: And thanks to our audience for joining this special podcast coming to you from the recent 2013 VMworld Conference in San Francisco.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout the series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed virtualization to meet the demands of an ever-growing business. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
