
Thursday, September 22, 2016

How Cutting-Edge Storage Provides a Competitive Footing for Canadian Music Service Provider SOCAN

Transcript of a discussion on how Canadian non-profit SOCAN faced digital disruption and fought back with a successful storage modernizing journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on technology innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how Canadian nonprofit SOCAN faced digital disruption and fought back with a successful storage modernizing journey. We'll learn how adopting storage innovation allows for faster responses to end-user needs and opens the door to new business opportunities.

To describe how SOCAN gained a new competitive capability for its performance rights management business, we're joined by Trevor Jackson, Director of IT Infrastructure for SOCAN, the Society of Composers, Authors and Music Publishers of Canada, based in Toronto.

Welcome, Trevor.

Trevor Jackson: Hi, thank you for having me.
Gardner: The music business has changed a lot in the past five years or so. There are lots of interesting things going on with licensing models and people wanting to get access to music, but people also wanting to control their own art.

Tell us about some of the drivers for your organization, and then also about some of your technology decisions.

Jackson: We've traditionally been handling performances of music, which means radio stations, television, and movies. Over the last 10 or 15 years, with the advent of YouTube, Spotify, Netflix, and digital streaming services, we're seeing a huge increase in the volume of data that we have to digest and analyze as an organization.

Gardner: And what function do you serve? For those who might not be familiar with your organization or this type of organization, tell us the role you play in the music and content industries.

Play music ethically

Jackson: At a very high level, what we do is license the use of music in Canada. What that means is that we allow businesses through licensing to ethically play any type of music they want within their environment. Whether it's a bar, restaurant, television station, or a radio station, we collect the royalties on behalf of the creators of the music and then redistribute that to them.

We're a not-for-profit organization. Anything that we don't spend on running the business, which is the collecting, processing, and payment of those royalties, goes back to the creators or the publishers of the music.

Gardner: When you talk about data, tell us about the type of data you collect in order to accomplish that mission?

Jackson: It's all kinds of data. For the most part, it's unstructured. We collect it from many different sources, again radio and television stations, and of course, YouTube is another example.

There are some standards, but one of the challenges is that we have to do data transformation to ensure that, once we get the data, we can analyze it and it fits into our databases, so that we can do the processing on information.

Gardner: And what sort of data volumes are we talking about here?

Jackson: We're not talking about petabytes, but the thing about performance information is that it's very granular. For example, the files that YouTube sends to us may have billions of rows for all the performances that are played, as they're going through their cycle through the month; it's the same thing with radio stations.

We don't store any digital files or copies of music. It's all performance-related information -- the song that was played and when it was played. That's the type of information that we analyze.

Gardner: So, it's metadata about what's been going on in terms of how these performances have been used and played. Where were you two years ago in this journey, and how have things changed for you in terms of what you can do with the data and how performance of your data is benefiting your business?

Jackson: We've been on flash for almost two years now. About two and a half years ago, we realized that the storage area network (SAN) that we did have, which was a traditional tiered-storage array, just didn't have the throughput or the input/output operations per second (IOPS) to handle the explosive amount of data that we were seeing.

With YouTube coming online, as well as Spotify, we knew we had to do something about that. We had to increase our throughput.

Performance requirements

Gardner: Are you generating reports from this data at a certain frequency or is there streaming? How is the output in terms of performance requirements?

Jackson: We ingest a lot of data from the data-source providers. We have to analyze what was played, who owns the works that were played, correlate that with our database, and then ensure that the monies are paid out accordingly.

Gardner: Are these reports for the generation of the money done by the hour, day, or week? How frequently do you have to make that analysis?

Jackson: We do what we call a distribution, which is a payment of royalties, once a quarter. When we're doing a payment on a distribution, it’s typically on performances that occurred nine months prior to the day of the distribution.
Gardner: What did you do two and a half years ago in terms of moving to flash and solid state disk (SSD) technologies? How did you integrate that into your existing infrastructure, or create the infrastructure to accommodate that, and then what did you get for it?

Jackson: When we started looking at another solution to improve our throughput, we actually started looking at another tiered-storage array. I came to the HPE Discover [conference] about two years ago and saw the presentation on the all-flash [3PAR Storage portfolio] that they were talking about, the benefits of all-flash for the price of spinning disk, which was to me very intriguing.

I met with some of the HPE engineers and had a deep-dive discussion on how they were doing this magic that they were claiming. We had a really good discussion, and when I went back to Toronto, I also met with some HPE engineers in the Toronto offices. I brought my technical team with me to do a bit of a deeper dive and just to kick the tires to understand fully what they were proposing.

We came away from that meeting very intrigued and very happy with what we saw. From then on, we made the leap to purchase the HPE storage. We've had it running for about [two years] now, and it’s been running very well for us.

Gardner: What sort of metrics do you have in terms of technology, speeds and feeds, but also metrics in terms of business value and economics?

Jackson: I don’t want to get into too much detail, but as an anecdote, we saw some processes that we were running going from days to hours just by putting it on all-flash. To us, that's a huge improvement.

Gardner: What other benefits have you gotten? Are there some analytics benefits, backup and recovery benefits, or data lifecycle management benefits?

OPEX perspective

Jackson: Looking at it from an OPEX perspective, because of the IOPS that we have available to us, planning maintenance windows has actually been a lot easier for the team to work with.

Before, we would have to plan something akin to landing the space shuttle. We had to make sure that we weren’t doing it during a certain time, because it could affect the batch processes. Then, we'd potentially be late on our payments, our distributions. Because we have so many IOPS on tap, we're able to do these maintenance windows within business hours. The guys are happier because they have a greater work-life balance.

The other benefit that we saw was that all-flash uses less power than spinning disk. Less power means less heat and less need for floor space. Of course, speed is the number-one driving factor for a company to go all-flash.

Gardner: In terms of automation, integration, load-balancing, and some of those other benefits that come with flash storage media environments, were you able to use some of your IT folks for other innovation projects, rather than speeds and feeds projects?

Jackson: When you're freeing up resources from keeping the lights on, you're adding more value to the business. IT traditionally is a cost center, but now we can take those resources off of the mundane day-to-day tasks and put them onto projects, which is what we've been doing. We're able to add greater benefit to our members.

Gardner: And has your experience with flash in modernizing your storage prompted you to move toward other infrastructure modernization techniques, including virtualization, software-defined composable infrastructure, maybe hyper-converged? Is this an end point for you, or maybe a starting point?

Jackson: IT is always changing, always transforming, and we're definitely looking at other technologies.

Some of the big buzzwords out there, blockchain, machine learning, and whatnot are things that we’re looking at very closely as an organization. We know our business very well and we're hoping to leverage that knowledge with technology to further drive our business forward.

Gardner: We're hearing a lot of promising visions these days about how machine learning could be brought to bear on things like data transformation, making that analysis better, faster, and cheaper. So, that's pretty interesting stuff.

Are you now looking to extend what you do? Is the technology an enabler more than a cost center in some ways for your general SOCAN vision and mission?

Jackson: Absolutely. We're in the music business, but there is no way we can do what we do without technology; technically it’s impossible. We're constantly looking at ways that we can leverage what we have today, as well as what’s out in the marketplace or coming down the pipe, to ensure that we can definitely add the value to our members to ensure that they're paid and compensated for their hard work.

Gardner: And user experience and user quality of experience are top of mind for everybody these days.

Jackson: Absolutely, that’s very true.

Gardner: We'll have to leave it there. We've been learning how Canadian non-profit SOCAN faced digital disruption and fought back with a successful storage modernizing journey. And we've heard how adopting storage innovation is allowing for faster responses to end-user needs and has opened the door to new business opportunities.
So, please join me in thanking our guest, Trevor Jackson, Director of IT Infrastructure for SOCAN, the Society of Composers, Authors and Music Publishers of Canada, based in Toronto.

Thank you, Trevor.

Jackson: Thank you for having me.

Gardner: And I'd also like to thank our audience as well for joining us for this HPE Voice of the Customer Podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and please come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Canadian non-profit SOCAN faced digital disruption and fought back with a successful storage modernizing journey. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Tuesday, August 09, 2016

How Software-Defined Storage Translates into Just-in-Time Data Center Scaling

Transcript of a discussion on scaling benefits from improved storage infrastructure at a multi-tenant hosting organization.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how hosting provider Opus Interactive adopted a software-defined storage approach to better support its customers.

We'll learn how scaling of customized IT infrastructure for a hosting organization in a multi-tenant environment benefits from flexibility of hardware licensing, and gains the confidence that storage supply will always meet dynamic demand.

To describe how massive storage and data-center infrastructure needs can be met in a just-in-time manner, we're joined by Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Welcome, Eric.

Eric Hulbert: Thank you for having me, Dana.
Gardner: What were the major drivers when you decided to re-evaluate your storage, and what were the major requirements that you had?

Hulbert: Our biggest requirement was high-availability in multi-tenancy. That was number one, because we're service providers and we have to meet the needs of a lot of customers, not just a single enterprise or even enterprises with multiple business groups.

So we were looking for something that met those requirements. Cost was a concern as well. We wanted it to be affordable, but we needed it to be enterprise-grade with all the appropriate feature sets -- and, most importantly, it needed a scale-out architecture.

We were tired of the monolithic controller-bound SANs, where we'd have to buy a specific bigger size. We'd start to get close to where the boundary would be and then we would have to do a lift-and-shift upgrade, which is not easy to do with almost a thousand customers.

Ultimately, we made the choice to go to one of the first software-defined storage architectures, which is a company called LeftHand Networks, later acquired by HPE, and then some 3PAR equipment, also acquired by HPE. Those were, by far, the biggest factors while we made that selection on our storage platform.

Gardner: Give us a sense of the scale-out requirements.

Hulbert: We have three primary data centers in the Pacific Northwest and one in Dallas, Texas. We also have a little bit of space in New York for some of our East Coast customers, and one data center in San Jose, California. So, we have five data centers in total.

Gardner: Is there a typical customer, or a wide range of customers?

Big range

Hulbert: We have a pretty big range. Our typical customers are in the finance, travel and tourism, and hospitality industries. There are quite a few in there. Healthcare is a growing vertical for us as well.

Then, we rounded that out with manufacturing and a little bit of retail. One of our actual verticals, if you could call it a vertical, is the MSPs and IT companies, and even some VARs, that are moving into the cloud.

We enable them to do their managed services and be the "boots on the ground" for their customers. That spreads us into the tens of thousands of customers, because we have about 25 to 30 MSPs that work with us throughout the country using our infrastructure. We just provide the infrastructure as a service, and that's been a pretty fast-growing vertical for us.

Gardner: And then, across that ecosystem, you're doing colocation, cloud hosting, managed services? What's the mix? What’s the largest part of the pie chart in terms of the services you're providing in the market?

Hulbert: We're about 75 percent cloud hosting, specifically a VMware-based private cloud, a multi-tenant private cloud. It's considered public cloud, but we call it private cloud.

We do a lot of hybrid cloud, where we have customers that are doing bursting into Amazon or [Microsoft] Azure. So, we have the ability to get them either Amazon Direct Connect connections or Azure ExpressRoute connections into any of our data centers. Then, 20 percent is colocation, and about 5 percent for backup and disaster recovery (DR) rounds that out.

Gardner: Everyone, it seems, is concerned about digital disruption these days. For you, disruption is probably about not being able to meet demand. You're in a tight business, a competitive business. What’s the way that you're looking at this disruption in terms of your major needs as a business? What are your threats? What keeps you up at night?

Still redundant

Hulbert: Early on, we wanted a concurrently maintainable infrastructure, which also follows through with the data centers that we're at. So, we needed Tier 3-plus facilities that are concurrently maintainable. We wanted the infrastructure to be the same. We're not kept up at night, because we can take an entire section of our solution offline for maintenance. It could be a failure, but we're still redundant.

It's a little bit more expensive, but we're not trying to compete with the commodity hosting providers out there. We're very customized. We're looking for customers that need more of that high-touch level of service, and so we architect these big solutions for them -- and we host with a 100 percent up-time.

The infrastructure piece is scalable with scale-out architecture on the storage side. We use only HP blades, so that we just keep stacking in blades as we go. We try to stay a couple of blade chassis ahead, so that we can take pretty large bursts of that infrastructure as needed.

That's the architecture that I would recommend for other service providers looking for a way to make sure they can scale out and not have to do any lift-and-shift on their SAN, or even the stack and rack services, which take more time.

With rack servers, we'd have to cable all of them, versus cabling one blade chassis. Then, you can just slot in 16 blades quickly as you're scaling. That allows you to scale quite a bit faster.

Gardner: When it comes to making the choice for software-defined, what has that gotten you? I know people are thinking about that in many cases -- not just service providers, but enterprises. What did software-defined storage get for you, and are you extending your software-defined architecture to more parts of your infrastructure?

Hulbert: We wanted it to be software-defined because we have multiple locations and we wanted one pane of glass. We use HPE OneView to manage that, and it would be very similar for an enterprise. Say we have 30 remote offices; they want to put the equipment there, and the business units need to provision some servers and storage. We don't want to be going to each individual appliance, chassis, or application -- we want one place to provision it all.

Since we're dealing now with nearly a thousand customers -- and thousands and thousands of virtual servers, storage nodes, and all of that -- the chunklets of data are distributed across all of these. Being able to do that from one single pane of glass, from a management standpoint, is quite important for us.

So, it's that software-defined aspect, especially distributing the data into chunklets, that allows us to grow quicker, along with putting a lot of automation on the back end.
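The chunklet distribution Hulbert describes can be illustrated with a small placement sketch. This is a generic illustration only, not HPE's actual 3PAR implementation; the 256 MB chunklet size, the hash-based placement, and all names here are assumptions made for the example:

```python
# Illustrative sketch: break a volume into fixed-size "chunklets" and
# spread them across storage nodes, so capacity and throughput scale by
# adding nodes rather than by upgrading a single monolithic controller.
import hashlib

CHUNKLET_MB = 256  # hypothetical chunklet size


def place_chunklets(volume_id: str, volume_mb: int, nodes: list) -> dict:
    """Deterministically map each chunklet index of a volume to a node."""
    count = -(-volume_mb // CHUNKLET_MB)  # ceiling division
    placement = {}
    for i in range(count):
        # Hash the (volume, chunklet) pair so placement is stable and
        # roughly uniform across the available nodes.
        digest = hashlib.sha256(f"{volume_id}:{i}".encode()).digest()
        placement[i] = nodes[int.from_bytes(digest[:4], "big") % len(nodes)]
    return placement


nodes = ["node-a", "node-b", "node-c"]
placement = place_chunklets("vol-42", 2048, nodes)  # 2 GB volume -> 8 chunklets
print(len(placement))                   # 8
print(sorted(set(placement.values())))  # which nodes received chunklets
```

Because the mapping is deterministic, any management layer can recompute where a chunklet lives without a round trip to each array, which is one reason a single pane of glass over many nodes becomes tractable.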

We only have 11 system administrators and engineers on our team managing that many servers, which shows you that our density is pretty high. That only works well if we have really good management tools, and having it software-defined means fewer people walking to and from the data center.

Even though our data centers are manned facilities, our infrastructure is basically lights out. We do everything from remote terminals.

Gardner: And does this software-defined extend across networking as well? Are you hyper-converged, converged? How would you define where you're going or where you'd like to go?

Converged infrastructure

Hulbert: We're not hyper-converged. For our scale, we can’t get into the prepackaged hyper-converged product. For us, it would be more of a converged infrastructure approach.

As I said, we do use the c-Class blade chassis with Virtual Connect, which is software-defined networking. We do a lot of VLANs and things like that on the software side.

We still have some networking outside of that, out of band -- the network stacks -- because we're not just a cloud provider. We also do colocation and a lot of hybrid computing where people are connecting between them. So, we have to worry about Fibre Channel and iSCSI and connections into the SAN.

That adds a couple of other layers and a few extra management steps, but at our scale, it's not as if we're adding tens of thousands of servers a day, or even an hour, as I'm sure Amazon has to. So we can take that one small hit to pull that portion of the networking out, and it works pretty well for us.
Gardner: How do you see the evolution of your business in terms of moving past disruption, adopting these newer architectures? Are there types of services, for example, that you're going to be able to offer soon or in the foreseeable future, based on what you're hearing from some of the vendors?

Hulbert: Absolutely. One of the first ones I mentioned earlier was the ability for customers that want to burst into the public cloud to do Amazon Direct Connects. Even over the telecom providers' backbones, you're looking at 15 to 25 milliseconds of latency. For some of these applications, that's just too much latency, so it's not going to work.

Now, with the most recent announcement from Amazon, they put a physical Direct Connect node in Oregon, about a mile from our data-center facility. It's from EdgeConneX, who we partnered with.

Now, we can offer the lowest latency for both Amazon and Azure ExpressRoute in the Pacific Northwest, specifically in Oregon. That's really huge for our customers, because we have some that do a lot of public-cloud bursting on both platforms. So that's one new offering we're doing.

Another disruption, as we've heard, is around containers. We're launching a new container-as-a-service platform later this year based on ContainerX. That will allow us to do containers for either Windows or *nix platforms, regardless of what the developers are looking for.

We're targeting developers, DevOps guys, who are looking to do microservices to take their application, old or new, and architect it into the containers. That’s going to be a very disruptive new offering. We've been working on a platform for a while now because we have multiple locations and we can do the geographic dispersion for that.

I think it’s going to take a little bit of the VMware market share over time. We're primarily a VMware shop, but I don’t think it’s going to be too much of an impact to us. It's another vertical we're going to be going after. Those are probably the two most important things we see as big disruptive factors for us.

Hybrid computing

Gardner: As an organization that's been deep into hybrid cloud and hybrid computing, is there anything out there in terms of the enterprises that you think they should better understand? Are there any sort of misconceptions about hybrid computing that you detect in the corporate space that you would like to set them straight on?

Hulbert: The hybrid that people typically hear about is more like having on-premises equipment. Let's say I'm a credit union, and at one of the bank branches we decided to put three or four cabinets of our equipment in one of the vaults. Maybe they've added one UPS and one generator, but it's not at the enterprise level, and they're bursting to the public cloud for the things that make sense and meet their security requirements.

To me, that’s not really the best use of hybrid IT. Hybrid IT is where you're putting what used to be on-premises in an actual enterprise-level, Tier 3 or higher data center. Then, you're using either a form of bursting into private dedicated cloud from a provider in one of those data centers or into the public cloud, which is the most common definition of that hybrid cloud. That’s what I would typically define as hybrid cloud and hybrid IT.

Gardner: What I'm hearing is that you should get out of your own data center, use somebody else's, and then take advantage of the proximity in that data center, the other cloud services that you can avail yourself of.

Hulbert: Absolutely. The biggest benefit to them is at their individual locations or bank branches. This is the scenario where we used the credit union. They're going to have maybe one or two telco providers, and they're going to have their 100- or maybe 200-Mb-per-second circuits.

They're paying a pretty high premium for those, and when they get into one of these data centers, they're going to have the ability to have 10-gig, or even 40- or 100-gig, connected internet pipes, with a lot more headroom for connectivity at a better price point.

On top of that, they'll have 10-gig connection options into the cloud -- all the different cloud providers. Maybe they have an Oracle stack that they want to put on an Oracle cloud someday, along with their own on-premises equipment. The hybrid things get more challenging, because now they're not going to get the connectivity they need. Maybe they want to do Amazon or Azure, or maybe they want an Opus cloud.

They need faster connectivity for that, but they have equipment that still has usable life. Why not move that to an enterprise-grade data center and not worry about air-conditioning challenges, electrical problems, or whether it's secure?

All of these facilities, including ours, have every checkbox for the compliance and auditing that happens on an annual basis. Those things that used to be real headaches aren't the core of their business; they don't do those anymore. They can focus on what's core: the application and their customers.

Gardner: So proximity still counts, and probably will count for an awfully long time. You get benefits from taking advantage of proximity in these data centers, but you can still have, as you say, what you consider core under your control, under your tutelage and set up your requirements appropriately?

Mature model

Hulbert: It really comes down to the fact that the cloud model is very mature at this point. We've been doing it for over a decade. We started doing cloud before it was even called cloud; it was just virtualization. We launched our platform in late 2005, and it has proved out, time and time again, with 100 percent up-time.

We have one example of a large customer, a travel and tourism operator, that brings visitors from outside the US to the US. They do over $1 billion a year in revenue, and we host their entire infrastructure.

It's a lot of infrastructure and it’s a very mature model. We've been doing it for a long time, and that helps them to not worry about what used to be on-premises for them. They moved it all. A portion of it is colocated, and the rest is all on our private cloud. They can just focus on the application, all the transactions, and ultimately on making their customers happy.

Gardner: Going back to the storage equation, Eric, do you have any examples of where the software-defined storage environment gave you the opportunity to satisfy customers or price points -- either business or technical metrics that demonstrate how this new approach to storage fills out the cost equation?

Hulbert: In terms of the software-defined storage, the ability to easily provision the different sized data storage we need for the virtual servers that are running on that is absolutely paramount.

We need super-quick provisioning, so we can move things around. When you add in the layers of VMware, like storage vMotion, we can replicate volumes between data centers. Having that software-defined makes that very easy for us, especially with the built-in redundancy that we have and not being controller-bound like we mentioned earlier on.

Those are pretty key attributes, but on top of that, as customers are growing, we can very easily add more volumes for them. Say they have a footprint in our Portland facility and want to add a footprint in our Dallas, Texas facility and do geographic load balancing. It makes it very easy for us to replicate the applications between the two facilities, slowly adding on those layers as customers need to grow. It makes that easy for them as well.

Gardner: One last question: what comes next in terms of containers? What we're seeing is that containers have a lot to do with developers and DevOps, but ultimately I'd think that the envelope gets pushed out into production, especially when you hear about things like composable infrastructure. If you've been composing infrastructure in the earlier part of the process, in development, it takes care of itself in production.

Do you actually see more of these trends accomplishing that where production is lights-out like you are, where more of the definition of infrastructure and applications, productivity, and capabilities is in that development in DevOps stage?

Virtualization

Hulbert: Definitely. Over time, it is going to be very similar to what we saw when customers were moving from dedicated physical equipment into the cloud, which is really virtualization.

This is the next evolution, where we're moving into containers. At the end of the day, the developers, the product managers for the applications for whatever they're actually developing, don't really care what and how it all works. They just want it to work.

They want it to be a utility consumption-based model. They want the composable infrastructure. They want to be able to get all their microservices deployed at all these different locations on the edge, to be close to their customers.

Containers are going to be a great way to do that, because they take away the overhead of dealing with the operations side. So, the developers can just put these little APIs and the different things that they need where they need them. As we see more of that stuff pushed to the edge to get the eyeball traffic, that's going to be a great way to do it. With the ability to do even further bursting into the bigger public clouds worldwide, I think we can get to a really large scale in a great way.
Gardner: We'll have to leave it there. We've been learning how hosting provider Opus Interactive has adopted a software-defined storage approach to better support its customers. And we've heard how scale-out IT infrastructure for a hosting organization in a multi-tenant environment delivers big benefits.

So please join me in thanking our guest, Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Thank you, Eric.

Hulbert: Thank you very much. I appreciate it.

Gardner: And I'd also like to thank our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer Podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on scaling benefits from improved storage infrastructure at a multi-tenant hosting organization. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Monday, August 04, 2014

A Gift That Keeps Giving, Software-Defined Storage Now Demonstrates Architecture-Wide Benefits

Transcript of a BriefingsDirect podcast on the future of software-defined storage and how it will have an impact on storage-hungry technologies, especially VDI.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest podcast explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage.

The ability to choose low-cost hardware, to manage across different types of storage, and to radically simplify data storage via intelligent automation means a virtual rewriting of the economics of data.

But just as IT leaders seek to simultaneously tackle storage pain points of scalability, availability, agility, and cost -- software-defined storage is also providing significant strategic- and architectural-level benefits.

We're here now with two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage. Please join me now in welcoming our guests, Alberto Farronato, the Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware. Hello, Alberto.

Alberto Farronato: Hello, Dana. Glad to be here, thanks.

Gardner: We're also here with Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Welcome, Christos.

Christos Karamanolis: Thank you. Glad to be here.


Gardner: Alberto, we often focus on the speeds and feeds and the costs -- the hard elements -- when it comes to storage and modernization of storage. But what about the wider implications?

Software-defined storage is really changing something more fundamental than just data and economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How do you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.

How does the end user adapt their storage platform to the needs they have in terms of the capabilities of the hardware, the ratios of the different types of storage, and the networking, CPU, and memory resources needed to deliver their service as those needs evolve?

That's one part of flexibility, but there is another very interesting part, which is a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), with VMs being one way of packaging applications.

Today, customers virtualize environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an up-front investment in resources that they'll need over a long period. So they create the logical unit numbers (LUNs), file services, or whatever is needed, for a period that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.


So those two aspects of flexibility are the two fundamental aspects of any software-defined storage.
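A minimal sketch of that provisioning model, in hypothetical Python (the class and function names are illustrative, not VMware's API): the user states requirements declaratively, and resources are allocated only at the moment the application asks for them.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Declarative requirements: the user states *what* is needed, not *how*."""
    failures_to_tolerate: int  # availability requirement
    iops_target: int           # performance goal
    capacity_gb: int           # exactly what the application needs now

def provision(vm_name: str, policy: StoragePolicy) -> dict:
    """Allocate storage on demand, at the moment the VM is created.

    Contrast with the traditional model, where LUNs were pre-created
    and sized for demand guessed weeks to years in advance.
    """
    replicas = policy.failures_to_tolerate + 1  # one extra copy per tolerated failure
    return {
        "vm": vm_name,
        "replicas": replicas,
        "reserved_iops": policy.iops_target,
        "capacity_gb": policy.capacity_gb,
    }

# Storage comes into existence when the VM does -- nothing more, nothing less.
alloc = provision("web-01", StoragePolicy(failures_to_tolerate=1,
                                          iops_target=2000,
                                          capacity_gb=100))
print(alloc["replicas"])  # 2
```

The "failures to tolerate plus one" replica rule mirrors the common mirroring scheme in which tolerating n failures requires n + 1 copies; the actual placement logic in any real platform is far richer.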

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level? Is there a marker that we can point to that says, "This is actually changing things beyond just a technology sphere and into the business sphere?"

Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to the business needs -- and the changing business needs -- by delivering what your applications need, faster.

As Christos was saying, in the old model you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels -- performance, availability, and other things that your applications would require of storage -- spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on-the-fly for a policy approach -- you don’t have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

Gardner: As we gain more agility, that prompts more use of software-defined storage, or in your case, Virtual SAN. With that acceleration of adoption, we begin to see more beneficial consequences, such as better manageability of data as a lifecycle, perhaps operations being more responsive to developers so that a DevOps benefit kicks in.

Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers, but as a more generic trend, is that infrastructure administrators -- the guys who do the heavy-lifting in the data centers day in and day out, who manage much more beyond what is traditionally servers and applications -- are getting more and more into managing networks and data storage.

Find SDS technical insights and best practices on the VSAN storage blog.

Talking about changing models here, what we see is that tools have to be developed and software-defined storage is a key technology evolution behind that. These are tools for those administrators to manage all those resources that they need to make their day-to-day jobs happen.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage accessible to people who are not necessarily experts in the esoterics of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy level, automation, and intelligence when it comes to how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about it superficially, we're now going from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and able to span tens or sometimes hundreds of physical nodes or entities. Isn't complexity greater in the latter case?

The reality is that, whether out of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there has been a parallel evolution of ideas about how you manage your infrastructure, including the management of storage.

As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware in use at any point in time, details which may evolve over time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.


Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether the data is spread across multiple physical devices, across the network, or replicated asynchronously to a remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case they need to be able to create only two copies of the data, or whether it is of some low-end hardware for which that would require three or four copies of the data. All those things are determined automatically by the platform.
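The copy-count decision Christos describes can be sketched with invented reliability figures: the platform, not the user, derives how many copies a given availability target requires on the hardware actually present, so the same policy yields different layouts on different gear.

```python
def copies_needed(target_availability: float, device_availability: float) -> int:
    """Smallest number of independent copies whose combined availability
    meets the target, i.e. 1 - (1 - a)^n >= target."""
    per_copy_unavailability = 1.0 - device_availability
    n = 1
    combined_unavailability = per_copy_unavailability
    while 1.0 - combined_unavailability < target_availability:
        n += 1
        combined_unavailability *= per_copy_unavailability
    return n

# The same policy (five nines of availability), applied to different hardware:
print(copies_needed(0.99999, 0.999))  # highly available devices -> 2 copies
print(copies_needed(0.99999, 0.97))   # low-end devices -> 4 copies
```

The numbers are made up and the model ignores correlated failures; the point is only that the copy count is a consequence the platform computes, not something the user configures.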

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configuration of a disk array. If the requirements cannot be met, it is only because the necessary technologies are not yet incorporated into the storage platform.

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated around the VM and the policies that you create and you assign to the VMs as you create your VMs, as you scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.

That makes this system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system, the radically simple hypervisor-converged storage means bringing that idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?


Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). It leverages server-side components to deliver shared storage, and by virtue of using server-side resources, right off the bat there are significant savings you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array can be on the order of 80 percent cheaper when procured as a server-side component.

The other aspect that I would call out that reduces the overall CAPEX cost is more along the lines of this, as you said, consume on-demand approach or, as we put it in many other terms, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would get with a monolithic array. And as you scale, you scale both compute and IOPS, and that often goes hand in hand with the number of VMs that you are running in your cluster.

System growth
 
So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
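With made-up prices, the grow-as-you-go arithmetic looks like this: the monolithic model pays for projected peak capacity on day one, while the scale-out model adds node-sized increments only as demand materializes (the per-TB discount here loosely reflects the cheaper server-side components mentioned above).

```python
def upfront_cost(peak_capacity_tb: float, cost_per_tb: float) -> float:
    """Monolithic array: buy for the projected peak immediately."""
    return peak_capacity_tb * cost_per_tb

def scale_out_cost(demand_by_quarter, node_tb, node_cost):
    """Scale-out cluster: add a node only when demand outgrows what's deployed."""
    nodes, total = 0, 0.0
    for demand_tb in demand_by_quarter:
        while nodes * node_tb < demand_tb:
            nodes += 1
            total += node_cost
    return total

demand = [10, 14, 20, 26]              # TB needed in each quarter (invented)
print(upfront_cost(26, 1000))          # 26000: pay for the peak on day one
print(scale_out_cost(demand, 4, 1000)) # 7000.0: pay as demand materializes
```

The figures are illustrative only; the shape of the comparison (capital deferred and spent in small increments) is the point Alberto is making.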

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Christos, Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That's a very interesting point. As technologists, we sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, within their own part of the IT organization, without having to depend on the centralized storage organization in the company.


What we really see here is a shift in paradigm about how our customers use Virtual SAN today to enable them to have a much faster turnaround for trying new applications, new workloads, and getting them from test and dev into production without having to be constrained by the processes and the timelines that are imposed by a central storage IT organization.

This is a major achievement, and a major tool for VMware administrators in the field, which we believe is going to lead the way to much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: It sounds as if there's a simultaneous decentralized benefit here, similar to what we saw 30 years ago in manufacturing. Back in the day, you used to have an assembly line approach where one linear process would lead to another, but when you do simultaneous things, you can see a lot more productivity and innovation.

Do you think that there is a parallel between software modernization and manufacturing 30 years ago?

Managing storage

Karamanolis: Certainly we have a parallel here, taking into account the fact that the customers, the IT professionals that manage storage, understand the processes and the workflows without necessarily having to understand the internals of the technology that implement those workflows.

This is very much like being part of a production line and understanding the big picture, but without having to understand all the little details of every station of that production line. In both cases, you have a fundamental scalability benefit going down that path.

I say this being fully aware that the real world is demanding. I understand that there may be situations where the IT administrator, whether a VMware admin or a storage expert, has to jump into the situation and troubleshoot something that is going wrong.

He has to troubleshoot, for example, a performance issue, or understand what's happening under the covers when the requirements specified don't seem to match what they're getting.

And what we do is we deliver, together with Virtual SAN in an integrated fashion, sophisticated monitoring and reporting tools that help customers not only understand what's happening in their system, but also do an analysis of any situation end-to-end, all the way from the application, down to the VM, the hypervisor and the resources the hypervisor assigns to those VMs, and including the storage resources that are consumed at any point in time across the cluster.


Those are the tools that always have to come together with those simple models we're introducing, because you need to be able to handle those exceptional situations.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach you have a more granular way to control the service levels that you deliver to your internal customers, and a more efficient way to do it, by standardizing through policies rather than trying to standardize service levels over a category of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving, whether it’s in compliance to that particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.
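A sketch of that self-service model (the catalog, tier names, and functions are invented for illustration, not the actual vCloud Automation Center or OpenStack APIs): end users pick from predefined service categories, while IT keeps visibility and a compliance view over every allocation.

```python
# Hypothetical self-service catalog; tiers and fields are invented.
SERVICE_CATALOG = {
    "gold":   {"failures_to_tolerate": 2, "iops_limit": 5000},
    "silver": {"failures_to_tolerate": 1, "iops_limit": 2000},
}

PROVISIONED = []  # IT retains a record of every allocation for visibility

def request_storage(user: str, tier: str, capacity_gb: int) -> dict:
    """End users consume a predefined category of service;
    they choose a tier, never hardware."""
    allocation = {"user": user, "tier": tier,
                  "capacity_gb": capacity_gb, **SERVICE_CATALOG[tier]}
    PROVISIONED.append(allocation)
    return allocation

def compliance_report():
    """IT-side check: does each allocation still match its tier's policy?"""
    return [(a["user"], a["tier"],
             all(a[k] == v for k, v in SERVICE_CATALOG[a["tier"]].items()))
            for a in PROVISIONED]

request_storage("alice", "silver", 200)
print(compliance_report())  # [('alice', 'silver', True)]
```

In a real system the compliance check would compare policy against the live state of the storage, so drift would surface as a False entry rather than the always-compliant result this toy version produces.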

Gardner: I'm interested in hearing more examples about how this is being used. But before we go to that, there's one question that I get a lot as an analyst.

Perhaps it's because people come from different parts of IT, or they have specializations, but people say, "We have software-defined storage, we have software-defined networking, a highly virtualized data center, and the goal is to become a software-defined data center, but I don't necessarily understand how these come together in what order. How do I go about that?"


Help us understand the role and impact of software-defined storage in the context of a larger software-defined data center.

Karamanolis: This is a challenging question, and I don’t know how far I can go in answering this. What we're trying to do at VMware is allow our customers to experience the various concepts of software-defined data center in a piecemeal fashion.

They can address the most acute of their problems, whether those are the traditional compute-utilization questions or, more recently, network scalability and flexibility, or the need for an easy-to-adopt, low-cost storage platform. So, yes, we provide and fully support integration of all the software-defined aspects of the data center, in the three dimensions I mentioned.

We will soon be posting some demos of Virtual SAN working with NSX, for example. But we do not prescribe that an IT professional has to use Virtual SAN with NSX, or vice versa, and only in that way. Virtual SAN can be used on its own, with more traditional network configurations; NSX can replace that network infrastructure and will work seamlessly with Virtual SAN.

We see different paths of adoption by different customers. Some of the bigger enterprises, including financials, being more sophisticated and perhaps more forward-looking, are more aggressive with a total software-defined data center approach. Other customers are a bit more cautious and apply software-defined principles in the main areas they are concerned with.

Value proposition

Farronato: When you look at a product like Virtual SAN, one interesting finding, after the first three months that the product has been available, is that the value proposition is really resonating across pretty much all customer segments, from the smaller SMBs, all the way up to the larger enterprise customers.

While it’s difficult to comment on the exact sequence as to how software-defined data center has been deployed, it is interesting to see that a technology like Virtual SAN is resonating pretty much across all the market segments, and so it expresses a value proposition that is broadly applicable.

Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say they have deployed Virtual SAN and benefited in certain ways, indicative of what others should expect?

Farronato: Let me give you some statistics and some interesting facts. We can look at some of the early examples where, in the three months since the product became available, we've found significant success already in the marketplace, with a great start in terms of adoption from our customers.


We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model from the scale out, as well as the fact that the hyper-converged storage architecture is particularly suitable to address the storage issues of a VDI deployment.

DevOps, or if you want, preproduction environments, loosely defined as test/dev, is another area. There are disaster recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.

As I said, the 300 customers that we already have span the gamut in terms of size and industry: from large enterprises and banks down to smaller accounts and companies, including education and smaller SMBs.

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late-August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN in their production environment, on their data analytics platform. And there will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS to progress their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that's held people up is the impact on storage, and the costs associated with the storage to support VDI. But if you're able to bring down costs by 50 percent in some cases using software-defined storage, that radically changes the VDI equation. Isn't that the case, Christos, where you can now say that you can do VDI cheaper than almost any other approach to a virtualized desktop?

Karamanolis: Absolutely, and the cost of storage is the main impediment in organizations to implement a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage, as well as the performance you gain out of the storage.

Alberto already touched on the cost of the capacity, referring to the difference in prices one can get from server vendors and from the market, as opposed to similar hardware procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

Because of that, we can achieve very high performance goals while minimizing the CPU cycles consumed to serve those high I/Os per second. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every ESXi host to implement this distributed, software-driven storage controller.


It doesn't affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus using an external disk array.

And even though Virtual SAN's usage is capped at about 10 percent of local CPU and memory on those hosts, the consolidation ratio, the number of virtual desktops we run on those clusters, is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. This is the value of Virtual SAN in those environments.

Essentially, you get the needs, both capacity and performance of your VDI workloads, for a fraction of the cost you would pay for with a traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of a determination about 2014? Maybe this is the year that we turn the corner on VDI, and that becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center, VDI, and the timing with VMworld? Alberto?

Last barrier

Farronato: Certainly, one of the goals that we set for this Virtual SAN release was solving the VDI use case, eliminating probably the last barrier and enabling a broader adoption of VDI across the enterprise, and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference with respect to storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, some of the key initiatives that we are rolling out with our OEM partners around things such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld Conference for storage-related issues.


Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year where the vision that we've been talking about, us and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe that it is going to be strong evidence for the rest of the industry that software-defined storage is real, it is solving real-world problems, and it is here to stay.

Together with opening up to third parties, through the Virtual Volumes technology that Alberto mentioned, some of the management APIs that Virtual SAN uses in VMware products, we'll also be initiating an industry-wide effort to provide and offer software-defined storage solutions beyond just VMware and the early companies, mostly startups so far, that have been adopting this model. It's going to become a key industry direction.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on how one of the most costly and complex parts of any enterprise’s IT infrastructure, storage, is being dramatically changed by the accelerating adoption of software-defined storage.

And we've heard how IT leaders are simultaneously tackling storage pain points, such as scalability, availability, agility, and cost, while also gaining significant strategic and architectural level benefits through software-defined storage. Of course, probably the poster child application for that is VDI.

So a big thank you to our guests, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure, Storage, and Availability at VMware. Thank you so much, Alberto.

Farronato: Thank you. It was great being with you.

Gardner: And we've been joined also by Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Thanks so much, Christos.

Karamanolis: Thank you. It was a pleasure talking with you.

Gardner: And also a big thank you to our audience for joining us once again on BriefingsDirect. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and don't forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on the future of Virtual SAN and how it will have an impact on storage-hungry technologies, especially VDI. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
