Tuesday, August 30, 2011

VMworld Showcase: How ADP Dealer Services Benefits From VMware View in its Expanding Use of Desktop Virtualization

Transcript of a BriefingsDirect podcast on how one company, ADP, uses the latest VDI software to provide virtual workstations for ALM and quality services to application developers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the VMworld 2011 Conference in Las Vegas. We're here in the week of August 29 to explore the latest in cloud computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of VMware-sponsored BriefingsDirect discussions.

Our next VMware case study interview focuses on ADP Dealer Services and how they're benefiting from expanding use of desktop virtualization. We will learn about how ADP Dealer Services is enjoying increased security, better management, and higher productivity benefits as they leverage desktop virtualization across their applications development activities. [VMware is a sponsor of BriefingsDirect podcasts.]

To hear more about their story, we're joined by Bill Naughton, the Chief Information Officer at ADP Dealer Services. Welcome, Bill.

Bill Naughton: Thanks for having me.

Gardner: And we're also here with Shane Martinez, Director of Global Infrastructure at ADP Dealer Services. Welcome, Shane.

Shane Martinez: Thanks.

Gardner: Let me start with you, Bill. Why have you pursued VDI for AppDev? Why was this sort of a test case or the low-hanging fruit, the best place for you to try your desktop virtualization activities?

Naughton: We had an interesting problem to solve. The first issue was developer productivity, which is very important to us, because we do have a big software development engineering house that needs to be productive.

And we had issues where our traditional approach of putting them on the user-based plan was not giving them the creativity, flexibility, and productivity they needed to spin up new environments, or a free workspace where they could do what they needed to create products.

So we thought that a VDI solution, combined with quick provisioning and deprovisioning for development environments, would make them more productive and protect their normal day-to-day use of email, ERP, and Salesforce automation apps that they might need on the traditional production environment.

Gardner: How long have you been doing virtual desktop infrastructure work with your application development folks?

Lot of process

Naughton: It's been going on for probably about a year-and-a-half. We were looking at what was the right design and what was the process, because there is a lot of process involved with change management, with the provisioning and deprovisioning. So we did some pilots and now we're in full roll out and pretty excited about the results.

Gardner: That’s great. Maybe you could give us a sense of the scale here. Are we talking about hundreds or thousands? How many developers?

Naughton: We're talking over 1,000 technical people who will use the solution -- software engineers, QA type people, test people. And because ADP Dealer Services has a pretty big application portfolio, we're talking about hundreds of environments, thousands of servers that have kind of grown up over the years that support our R&D environment.

Gardner: This is probably a good time to learn more about ADP Dealer Services. Bill or Shane, could you give us the overview of your company? What you do?

Naughton: ADP is the world’s largest outsourced human resources, payroll, tax, and benefits company, started in 1949. It's about a $10 billion company, with 50,000 employees and close to 600,000 clients. It's one of Fortune’s most admired companies and one of only four companies with an AAA credit rating from Moody’s and Standard & Poor’s.

ADP Dealer Services is a division of ADP, about a $1.7 billion company serving the auto retail client base throughout the globe. It has about 8,000 employees and 25,000 clients, and it serves the auto retail and OEM auto manufacturing industries through software and services.

Gardner: So I imagine that the applications that you are creating for these dealers are very intensive in terms of data. Many different types of applications, custom apps, as well as more off-the-shelf or third-party, need to be integrated, so a fairly complex set, or am I getting this wrong?

Naughton: No, it's a very complicated set. You are right on the money. It's all the way from ERP systems that we develop for the industry, CRM applications, digital marketing applications, all the way to the telephony side of the business.

So there is hardware integration and third-party integration, but it’s mostly ground-up software development that is the core of the business: cloud computing apps, multi-tenant applications, and applications that tie into telephony systems and other applications through APIs. The core products are ground-up software development.

Gardner: So it's a highly technical undertaking and your developers are really on the front lines of making this business work for you. This isn’t a nice to have. This is mission critical across the board.

Creativity and freedom

Naughton: Absolutely. And you want to make sure the developers have as much creativity and freedom as they can possibly have.

At the same time, ADP being a public company and being a company that people entrust the data with, we need to have good security across our different platforms. So the challenge was to give the developers a platform where they could be creative, where they could be given a wide range of latitude of tools and technology and at the same time, protect their day-to-day compute that they needed for things like messaging or applications the managers need to administer the workforce.

Gardner: Bill, you are the CIO, you had this vision about how to empower and enable your developers, perhaps even cut some costs along the way, I can imagine that you went to Shane and said, "Make it happen." Is that how it happened or did Shane come to you and say, "Listen, I've got this great idea?"

Naughton: It was a joint effort between knowing that we wanted to do something different, knowing that the developers had unique needs, and knowing that security had definite requirements on how we protect from malware, how we protect from viruses, and how we patch and protect the environments. And then we had a cost consideration too, in that the sprawl of development environments that we provide to the CTO and his office was getting quite big.

So the combination of Shane being forward-looking at a solution, the requirements we had from the development community, and the security requirements from our GSO office brought it all together into something where we're going to try something a little bit different than traditional approaches.

Gardner: Shane, I'd like to hear your perspective on this, when you started moving towards desktop virtualization, maybe it was a lot to bite off at once, but has there been a virtuous adoption benefit cycle of some sort over time? How has this impacted you from the infrastructure point of view?

Martinez: There's been tremendous consumption. The adoption by the associate community has been wonderful. We were faced with a challenge where we had to present the development community with an environment which, as Bill mentioned, had the latitude for them to perform their job function and be creative again. They were re-empowered to do their jobs and had all of the operational benefits that a typical compute environment would give them.

In addition to just that environment flexibility, with the VDI View infrastructure we were also able to provide them with a compute environment that was more specifically designed to meet their needs.

As Bill mentioned, we have a litany of different applications and development communities, and each one's specific compute requirements are different. Using a technology like View that allows us to abstract from the hardware, we can create infrastructure specific to each one's needs.

Gardner: How far and wide have you taken this? Do you have just an internal AppDev organization to support? Do you have distributed or partnership organizations? How did you take this virtualized desktop benefit but manage it across a wide area network or a distributed environment?

Two discrete networks

Martinez: There was a complex challenge that obviously we had to overcome, which was how do we present this pretty powerful environment and construct to people who are distributed, not just across the continental United States, but globally. By creating a separate VRF instance in our wide area network, we were able to bifurcate our WAN and create two discrete networks. That second network, which effectively became a shadow of our production infrastructure, is where the VDIs and all of our lab environments live.

As Bill said, that separate environment is one that is specifically designed to meet the needs of our development community. By virtue of having VDI and View out there for them to access over the separate network, they then can reach it from anywhere within our global network. So we have associates that are distributed across all of our sites that have the ability to consume these resources that we made available.

Gardner: And they have been mostly happy with the latency issues and performance?

Martinez: Oh, very pleased. As a matter of fact, there are several different ways in which we allow them to consume it. The first is that they can access the assets directly. With the View client, they can access their remote workstation and work on it however they are comfortable.

In addition, though, they have the ability to check out that workstation and use it either locally or when they are remote on the road. They can use it on their own assets and then come back in and check it back into the library. It works very well for them.

Gardner: And for them to be happy and to continue to use these for more and more of their work, I have to imagine that this provides you with some benefits on the back-end, managing configuration, upgrades, updates, and security. How does it work from the perspective of getting benefits, not only from the productivity of the user, but in terms of your management of important things like data?

Martinez: There are two great benefits. The first, from an administrative standpoint, is purely the FTE consumption. I have a very small staff that is designed to manage this specific environment. Currently, we're managing 300-400 workstations per administrator, so we get a very high administrator-to-associate density from a support standpoint.

In addition, we can create and deploy workstations exceedingly fast, at rates of up to 50 or 60 a day.

In addition to that, there's the server administration, as Bill mentioned, with Lab Manager and the accompanying technologies from VMware that we use. This small team is also able to manage in excess of 2,000 servers for the same group of developers and the development community.

Naughton: It's really important that we try to provide a service to the development community that they send a case in and Shane’s team does the provisioning, deprovisioning for them. We spin the environments up real quick and deprovision and reclaim the space. So we get efficiency there.

Service component

The service by the admin is taken care of -- the whole process that they need for new environments. You want to make sure the environments get taken care of. So they do both of that. There is a service component to it that we think is important.

Gardner: You're referring here to your application development activities, but your R&D and lab, are they separate? Do they overlap? How does that work, and what have you been using to support them both with VDI?

Martinez: There are two different environments, as Bill mentioned, throughout the lifecycle of creating a new product. Our development community obviously has to create and write code, but as we become more of a cloud-based service provider to the auto, truck, and marine industries around the world, we interact more and more with the Internet.

So that lab, that test environment, needs to be very dynamic as we create new product, release it, and have it interact with the Internet and some of the OEMs and external parties that have access to that.

As a result of that, this environment also is able to provide us with a very secure, remote location that is separate from our ERP applications, our standalone Salesforce automation applications, etc., where we can have people connect and test product, beta product, alpha product even, in a place that poses no risk to the rest of our infrastructure.

Gardner: Sounds very interesting. So it's a lab that you can open up to a lot of people, but feel low risk in doing so.

Martinez: Yes, absolutely.

Naughton: Fully segmented.

Martinez: Think about it as kind of a puppet per se, where the View client is the only connectivity between our production infrastructure and this lab environment, where the only protocol that we allow to traverse the firewall that segments these two environments is that very specific View client. For all of their activities interacting with the lab, it stays contained in the lab, thus securing the rest of our infrastructure.
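The single-protocol segmentation Martinez describes can be sketched as a firewall policy. Everything below is an assumption for illustration: the subnets are invented, and 4172 is the port PCoIP commonly uses, not necessarily ADP's actual configuration.

```shell
# Hypothetical sketch only: deny all traffic between the production
# network (10.1.0.0/16, assumed) and the lab segment (10.2.0.0/16,
# assumed), then allow only View/PCoIP traffic (port 4172, assumed).
iptables -P FORWARD DROP
iptables -A FORWARD -s 10.1.0.0/16 -d 10.2.0.0/16 -p tcp --dport 4172 -j ACCEPT
iptables -A FORWARD -s 10.1.0.0/16 -d 10.2.0.0/16 -p udp --dport 4172 -j ACCEPT
# Allow replies back from the lab only for established View sessions.
iptables -A FORWARD -s 10.2.0.0/16 -d 10.1.0.0/16 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

The effect is the containment Martinez describes: lab traffic cannot initiate anything toward production, so whatever happens in the lab stays in the lab.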

Gardner: I heard you mention the cloud word. Are you using vSphere, or how are you supporting the cloud? Second, is there going to be some synergy between what you are doing with VDI as primarily a server-based activity and that cloud, so that they might be able to play off one another at some point?

Martinez: Absolutely. As we as an organization continue to abstract our operating systems and the applications from the hardware that underlies it, it allows us to become more flexible in how we deliver compute, and application services, both to our internal associates as well as to our external clients.

Private cloud

So ADP has undertaken a great deal of effort in order for it to create its own private cloud infrastructure and the View client and the vSphere environment really is an adjunct to that strategy.

Gardner: All right. One other area that I've heard folks mention, when it comes to the benefit of more centralized control and management, is in the disaster recovery and business continuity aspects. Are you able to also feel lower risk in terms of how you can back up and maintain continuity regardless of external factors for both your application development activities as well as production?

Martinez: Absolutely. By virtue of compressing a great deal of this very critical data and intellectual property into an environment that is virtualized and abstracted by virtue of all the benefits you get with just a virtual environment, vMotion, etc., our data and our environment are much more highly available.

In addition, by virtue of the design, the way in which it’s architected, by bringing all this critical data together, we then can better manage it through a variety of ways that we manage our DR. However, this has really been the stepping stone for us to begin to compress and consolidate all of our distributed lab environments across the world.

Gardner: It almost sounds like a snowball effect: the more you do this, the more you can avail it; the more you can avail it, the more you can apply it, and so on and so forth. Does that overstate the case when it comes to virtualization?

Naughton: No. Shane worked with some of the more forward-looking and toughest R&D owners we have -- Hamid Mirza, our CTO and Mark Rankin, the VP of Engineering for our core products, a person who has very demanding requirements -- and they started at the places where we felt we had the most benefit.

So he has evangelized what we have done. That’s really helped with adoption across the business and it's really starting to gain momentum.

Gardner: Let's look at some of the business outcomes. Do you have any metrics about whether you're able to see improved timing when it comes to your development and test or lab activities? Are you seeing higher quality in your applications, and can you attribute that in any way to any of these? Are there business or productivity benefits that you can measure?

Martinez: From a business standpoint, we've stopped the technical infrastructure sprawl that we had in our lab environment. It was lots of small purchases for servers, for backup infrastructure, for commodity items. That has stopped. So there's a business benefit just in halting that purchasing and infrastructure sprawl.

The provisioning and deprovisioning has compressed the cycles of rote activity that we had in the past. Developing software is a complicated process, so we've automated the steps that we could through provisioning and deprovisioning.

Relieved the burden

In terms of all the connectivity challenges for developers, where they had to get to environments and manage those environments, we have relieved that burden. They have the client, it spins up, and they are ready to go instantaneously, versus a lot of traversing and a lot of custom configuration just to get the environments working.

Gardner: Same question to you, Bill. What’s the business payback for this so far?

Naughton: This had an ROI, and sometimes ROIs on infrastructure are difficult because it's enabling technology. But it met our criteria, and the ROI was pretty quick. We have certain criteria before we make any investment. This one fell right in line with them, and it's delivering what it's supposed to.

Gardner: There's one last area to get into. We're almost out of time, but we hear a lot these days about mobile. Is there anything about what you've done with virtualization and desktop virtualization that you think might allow you to go out and bring your apps and business processes to a wider range of devices? I know that might not be the case for the workstation, but maybe on the collaboration and workflow aspects?

Martinez: Absolutely. The environment has very powerfully allowed us to open up our compute activities at the end-user, associate level, so that they can consume applications that typically wouldn’t be available to them on a pad, tablet device, or even a smartphone. Now, by virtue of being able to access those particular workstations in that environment with the View client, they now can consume those applications that don’t have something specifically written for a tablet or a smartphone.

So effectively, they use that remote View workstation as a jump host that allows them to interact with any application. So we are no longer bound by the restrictions that a tablet or a smartphone may normally present to our associates.

In addition to that access, we're able to do it securely. Historically, if you wanted to allow a tablet or a smartphone to interact with applications, it had to do so straight from the Internet, and it was very difficult unless the person was connected to your network.

Now with this View application, we can disconnect, check out a workstation, allow it to securely VPN in, and then interact with all of our applications in the infrastructure, via a mechanism that the associate is comfortable with, and an interface that they have historically worked with. So our adoption rates have been very high.

Naughton: What Shane is describing is for our internal users who need applications that we provide internally to our workforce. From the product development side of the house, what’s been exciting about what we have put together is that, as they come up with mobile platforms, as they want to do native development, or they want to go to HTML5, we'll be able to spin up those environments for new technology for them to test and write code against very quickly. In the past we would have to set up a mobile platform, set up a gateway, or put up an environment that would do native apps.

Quick spin-up

What we have done here is enabled test, QA, and development very quickly for new technology like mobile. We have actually put mobile product in the marketplace against our core applications, and we are able to spin up those environments very quickly.

Gardner: Here at VMworld, we're hearing a lot about the new View 5.0. I understand you've all seen a little bit of that, maybe as a beta. Do you have any impressions about anything in it in particular that’s enticing, that is of interest, or that you've actually had a chance to try out a bit?

Martinez: Some of the greatest benefits that we see coming down the pike from the new product release are going to be specifically around the protocols it will support. For some of the features and functionality that can be difficult over high-latency links on a wide area network, improved and tighter protocols, PC-over-IP as an example, will bring huge benefits to our associates.

Part of the challenge is that when you abstract the associate from their local interface, the WAN and high-latency links can get in the way. We have no challenges with this today, but I can see, as we go into more and more remote markets and need to support developing regions, where links can be exceedingly pricey or very poor in quality, this will be a huge benefit to our associates.

Gardner: You had some thoughts on this as well, Bill?

Naughton: Yeah. Depending on where the profiling ends up, that’s also important, because as we get into different user bases in our associate community, profiles are going to be an important piece that will help with faster adoption and the ability to bring more of our workforce onto a VDI solution.

Gardner: Last question before we wrap up. I imagine too that your success in using VDI for application development is a harbinger of expanding this into other parts of ADP Dealer Services or maybe even ADP at large. Any thoughts about whether you're a proof point that others will look to in terms of taking VDI into even more of your organization?

Naughton: At our payroll division and in our corporate office, they're looking at different solutions and have VDI solutions in production. Obviously, the benefits of administrative productivity improvements with patching, deployment, rollouts, and streaming applications are all exciting developments.

We have probably gone deepest here at home, in our application development areas. But I think there are some pretty strong use cases in more of our transaction-based functions, like customer support and internal sales, where there are high transaction volumes and a VDI solution would be very helpful.

Gardner: We've been talking about how ADP Dealer Services has been enjoying increased security, better management, and higher productivity benefits as they use desktop virtualization across their applications development lifecycle.

Please join me in thanking our guests. We've been here with Bill Naughton. He is the CIO of ADP Dealer Services. Thanks so much, Bill.

Naughton: Thank you.

Gardner: And Shane Martinez, Director of Global Infrastructure at ADP Dealer Services. Thanks to you too, Shane.

Martinez: Thanks.

Gardner: And also thanks to our audience for joining this special podcast coming to you from the 2011 VMworld Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how one company, ADP, uses the latest VDI software to provide virtual workstations for ALM and quality services to application developers. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Wednesday, January 09, 2019

How Global HCM Provider ADP Mines an Ocean of Employee Data for Improved Talent Management

Transcript of a discussion on how advances in infrastructure, data access, and AI combine to produce a step-change in human capital analytics and new business services.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories.

Our next big data analytics and artificial intelligence (AI) strategies discussion explores how human capital management (HCM) services provider ADP unlocks new business insights from vast data resources.

With more than 40 million employee records to both protect and mine, ADP is in a unique position to leverage its business data network for unprecedented intelligence on employee trends, risks, and productivity.

ADP is entering a bold new era in talent management by deploying advanced infrastructure to support data assimilation and refinement of a vast, secure data lake as foundations for machine learning (ML).

Stay with us now as we unpack how advances in infrastructure, data access, and AI combine to produce a step-change in human capital analytics. With that, please join me in welcoming Marc Rind, Vice President of Product Development and Chief Data Scientist at ADP Analytics and Big Data. Welcome to BriefingsDirect, Marc.

Marc Rind: Thank you, Dana.

Gardner: We’re also here with Dr. Eng Lim Goh, Vice President and Chief Technology Officer for High Performance Computing and Artificial Intelligence at Hewlett Packard Enterprise (HPE). Welcome, Dr. Goh.

Dr. Eng Lim Goh: Thank you for having me.


Gardner: Marc, what's unique about this point in time that allows organizations such as ADP to begin to do entirely new and powerful things with its vast data?

Rind: What’s changed today is the capability to take data -- and not just data that you originally collect for a certain purpose, I am talking about the “data exhaust” -- and to start using that data for purposes that are not the original intention you had when you started collecting it.

We pay one in six full-time employees in the US, so you can imagine the data that we have around the country, and around the world of work. But it's not just data about how they get paid -- it's how they are structured, what kinds of teams they are in, advances, bonuses, the types of hours that they work, and everything across the talent landscape. It's data that we have been able to collect, curate, normalize, and then aggregate and anonymize, and we've started using it to build some truly fascinating insights that our clients are able to leverage.

Gardner: It's been astonishing to me that companies like yours are now saying they want all of the data they can get their hands on -- not just structured data, but all kinds of content, and bringing in third-party data. It's really “the more, the merrier” when it comes to the capability to gather entirely new insights.

The vision of data insight

Rind: Yes, absolutely. Also there have been advances in methodologies to handle this data -- like you said, unstructured data, non-normalized data, taking data from across hundreds of thousands of our clients, all having their own way that they define, categorize, and classify their workforces.

Now we are able to make sense of all of that -- across the board -- by using various approaches to normalize, so that we can start building insights across the board. That’s something extremely exciting for us to be able to leverage.

Gardner: Dr. Goh, it's only been recently that we have been able to handle such vast amounts of data in a simplified way and at a manageable cost. What are partners like HPE bringing to the table to support these data platforms and approaches that enable organizations like ADP to make analytics actionable?

Goh: As Marc mentioned, these are massive amounts of data, not just the data you intend to keep, but also the data exhaust. He also mentioned the need to curate it. So the idea for us in terms of data strategy with our partners and customers is, one, to retain data as much as you can.

Secondly, we ensure that you have the tools to curate it, because there is no point in having massive amounts of data over decades if, when you need it to train a machine, you don't know where all of the data is. You need to curate it from the beginning, and if you have not, start curating your data now.

The third area is to federate. So retain, curate, and federate. Why is the third part, to federate, important? As many huge enterprises evolve and grow, a lot of the data starts to get siloed. Marc mentioned a data lake. This is one way to federate, whereby you can cut across the silos so that you can train the machine more intelligently.

We at HPE build the tools to provide for the retention, curation, and federation of all of that data.
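As a toy illustration of the retain/curate/federate idea, the sketch below queries across two invented "silos" without moving the data. The silo names and record fields are hypothetical, not ADP's or HPE's actual schema.

```python
# A toy illustration of "retain, curate, federate": each silo keeps its
# own records; a thin federation layer assembles a cross-silo view
# without copying the data. Silo names and schemas are hypothetical.

SILO_PAYROLL = {"E1": {"salary": 70000}, "E2": {"salary": 85000}}
SILO_TALENT = {"E1": {"last_promotion": "2018-03"}, "E2": {"last_promotion": "2017-01"}}

def federated_lookup(employee_id, silos):
    """Assemble one view of an employee from every silo that knows them."""
    view = {}
    for name, silo in silos.items():
        if employee_id in silo:
            view[name] = silo[employee_id]  # tag each record with its source silo
    return view

silos = {"payroll": SILO_PAYROLL, "talent": SILO_TALENT}
print(federated_lookup("E1", silos))
```

The point of the sketch is the federation step Dr. Goh names: the machine-learning layer sees one coherent record per employee even though the underlying data stays siloed.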

Gardner: Is this something you are seeing in many different industries? Where are people leveraging ML, AI, and this new powerful infrastructure? 

Goh: It all begins with what I call the shift. The use of these technologies emerged as industries shifted away from making prediction decisions using rules and scientific, law-based models.

Then came a recent reemergence of ML, where instead of being based on laws and rules, you evolve your model more from historical data. So data becomes important here, because the intelligence of your model is dependent on the quantity and quality of the data you have. And by using this approach you are seeing many new use cases emerge, of using the ML approach on historical data.

One example would be farming. Instead of spraying the entire crop field, they just squirt specifically at the weeds and avoid the crops.

Gardner: This powerful ML example is specific to a vertical industry, but talent management insights can be used by almost any business. Marc, what has been the challenge to generate talent management insights based on historical data?

Rind: It’s fascinating because Dr. Goh’s example pertains to talent management, too. Everyone that we work with in the HCM space is looking to gain an advantage when it comes to finding, keeping, and retaining their best talent.

We look at a vast amount of employment data. From that, we can identify people who ended up leaving an organization voluntarily versus those who stayed and grew, why they were able to grow, based on new opportunities, promotions, different methods of work, and by being on different teams. Similar to the agriculture example, we have been able to use the historical data to find patterns, and then identify those who are the “crops” and determine what to do to keep them happier for longer retention.
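
As a rough illustration of mining historical employment records for attrition patterns -- synthetic data and a deliberately simplified feature, not ADP's actual models -- the same idea can be sketched in a few lines:

```python
from collections import defaultdict

# Synthetic historical records: (promoted_in_last_2_years, left_voluntarily).
history = [
    (True, False), (True, False), (False, True), (False, True),
    (True, False), (False, False), (False, True), (True, True),
    (False, True),
]

# "Learn" from history: voluntary-attrition rate with and without
# a recent promotion.
left_by_promo = defaultdict(list)
for promoted, left in history:
    left_by_promo[promoted].append(left)

rates = {k: sum(v) / len(v) for k, v in left_by_promo.items()}

# In this toy sample, employees without a recent promotion left far more often.
print(round(rates[False], 2), round(rates[True], 2))  # 0.8 0.25
```

A production system would use many more features and a proper model, but the mechanism is the one described above: historical outcomes reveal which patterns precede someone leaving versus staying and growing.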

This is a big shift in the talent management space. We are leveraging vast data -- but not presenting too much data to an HCM professional. We spend a lot of time handling it on their behalf so the HCM professional and even managers can have the insights pushed to them, rather than be bombarded with too much data.

At the end of the day, we are using AI to say, “Hey, here are the people you should go speak with. Or this manager has a lot of high-risk employees. Or this is a critical job role that you might see higher than expected turnover with.” We can point the managers in that direction and allow them to figure out what to do about it. And that's a big shift in simplifying analysis, and at the same time keeping the data secure.

Data that directs, doesn’t distract 

Goh: What Marc described is very similar to what our customers are doing by converting their call-center voice recordings into text. They then anonymize it, yet still gain the ability to figure out the sentiment of their customers.

The sentiment analysis of the text -- after converting from the voice recording -- helps them better understand churn. In the telco industry, for example, they are very concerned about churn, which means a customer leaving you for another vendor.

Yes, it’s very similar. First you go through a massive amount of historical data, then use smart tools to convert the data to make it usable, and then a different set of tools analyzes it all -- to gain such insights as the sentiment of your customers.
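
A toy version of that pipeline -- anonymize first, then score sentiment -- might look like this in Python. The lexicon and redaction rules are crude stand-ins for real speech-to-text and PII tooling, included only to make the two-stage flow concrete:

```python
import re

# Tiny lexicon-based sentiment scorer over anonymized call-center text.
POSITIVE = {"great", "helpful", "happy", "thanks"}
NEGATIVE = {"cancel", "frustrated", "terrible", "leaving"}

def anonymize(text: str) -> str:
    # Scrub long digit runs (account numbers) and capitalized name-like
    # tokens -- a crude stand-in for real PII redaction.
    text = re.sub(r"\b\d{4,}\b", "<NUM>", text)
    return re.sub(r"\b[A-Z][a-z]+\b", "<NAME>", text)

def sentiment(text: str) -> int:
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

call = "Hi, this is Alice, account 883412. I am frustrated and thinking of leaving."
clean = anonymize(call)
print(clean)
print(sentiment(clean))  # -2: two negative cues, no positive ones
```

Negative-leaning scores across a customer's calls would then feed a churn model, as in the telco example above.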

Gardner: When I began recording use case discussions around big data, AI, and ML, I would talk to organizations like refineries or chemical plants. They were delighted if they could gain a half-percent or a full percent of improvement. That alone meant billions of dollars to them.

But you all are talking about the high-impact improvement for employees and talent. It seems to me that this isn’t just shaving off a rounding number of improvement. Marc, this type of analysis can make or break a company's future.

So let's look at the stakes here. When we talk about improving talent management, this isn’t trivial. This could mean major improvement for any company.

Learn How IT Best Supports 

The Greatest Data Challenges

Rind: Every company. Any leader of an organization will tell you that their most important resource is the people that work for the company. And that value is not an easy thing to measure.

We are not talking about how much more we can save on our materials, or how to be smarter in electricity savings. You are talking about people. At the end of the day, they are not a resource as much as they are human beings. You want to figure out what makes them tick, gain insight into where people need to be growing, and where you should spend the human time with them.

Where the AI comes in is to provide that direction and offer suggestions and recommendations on how to keep those people there, happy and productive.

Another part of keeping people productive is in automating the processes necessary for managers. We still have a lot of users punching clocks, managing time, approving pay cards, and processing payroll. There are a lot of manual things that go on, and there is still a lot of paperwork.

We are using AI to simplify and make recommendations to handle a lot of those pieces, so the HR professional can be focused on the human part -- to help grow careers rather than be stuck processing paperwork and running reports.

Cost-effective AI, ML has arrived 

Gardner: We’re now seeing AI and ML have a major impact on one of the most important resources and assets a company can have, human capital. At the same time, we’re seeing the cost and complexity of the IT infrastructure that supports AI go down, thanks to things like hyperconverged infrastructure (HCI), the lower cost of storage, the capability to mirror, back up, and protect whole data centers -- as well as ongoing improvements in composable infrastructure.

Are we at the point where the benefits of ML and AI are going up while the cost and composability of the underlying infrastructure are going down?

Goh: Absolutely. That’s the reason we have a reemergence of AI through machine learning of historical data. These methods were already available decades ago, but the infrastructure was just too costly to amass enough data -- and you couldn’t get enough compute power to go through that data -- for the machine to be intelligent. Only now has the required infrastructure come down enough in cost that you see this reemergence of ML.

If one were to ask why there has been a surge in AI in the last few years, the answer is the lower cost of compute capability. We have reached the point where it is cost-effective enough to amass the data. And because of the Internet, the data has become much more easily accessible in the last few years.

Gardner: Marc, please tell us about ADP. People might be familiar with your brand through payroll processing, but there's a lot more to it.

Find, manage, and keep talent 

Rind: At ADP, or Automatic Data Processing, data is our middle name. We’ve been working at a global scale for 70 years, now with $12 billion in revenue and supporting over 600,000 businesses -- ranging from multinational corporations to three-person small businesses. We process $2 trillion in payroll and taxes, running about 40 million employee records per month. The amount of data we have been collecting is across the board, not just payroll.

Talent management is a huge thing now in the world of work -- to find and keep the best resources. Moving forward, there is a need to understand innovative engagement of that workforce, to understand the new world of pay and micro-pay, and new models where people are paid almost immediately.

The contingent workforce means a work market where people are moving away from traditional jobs. So there are lots of different areas within the world of payroll processing and talent management. It has really gotten exciting.

All of this -- optimizing your workforce -- also brings a better understanding of how to save the organization from lost dollars. Because of the amounts of data, we can inform a client not just on, “Okay, this is what your cost of turnover is, based on who is leaving, how long it takes them to get productive again, and the cost of recruiting.”

We can also show how your HCM compares against others in your field. It's one thing to share some information. It’s another to give an insight on how others have figured this out or are handling this better. You gain the potential to save more by learning about other methods out there that you should explore to improve talent retention.

Once you begin generating cost savings for an organization -- be it in identifying people who are leaving, getting them on-boarded better, or reducing cost from overtime -- it shows the power of the insights and of having that kind of data. And that’s not just about your own organization, but about how you compare to your peers.

So that’s very exciting for us.

All-access data analytics

Goh: Yes, we are very keen to get such reports on intelligence with regard to our own talent. It has become very difficult to hire and retain data scientists focused on ML and AI. These reports can help in hiring, and in understanding whether those people are satisfied in their jobs.

Rind: That’s where we see the future of work, and the future of pay, going. We have the organization, the clients, and the managers -- but at the end, it’s also data insights for the employees. We are in a new world of transparency around data. People understand more, they are more accepting of information as long as they are not bombarded with it.

As an employee, your partner in your career growth and your happiness at work is your employer. That’s the best partnership, where the employer understands how to put you into the right place to be more productive and knows what makes you tick. There should be understanding of the employees’ strengths, to make sure they use those strengths every day, and anticipate what makes them happier and more productive employees.

Those conversations start to happen because of the data transparency. It’s really very exciting. We think this data is going to help guide the employees, managers, and human resources (HR) professionals across the organizations.

Gardner: ADP is now in a position where your value-added analysis services are at the level where boards of directors and C-suite executives will be getting the insights. Did that require a rethinking of ADP’s role and philosophy?

Rind: Through our journey we discovered that providing insights to the HR professional is one thing. But we realized that to fully unleash and unlock the value in the data, we needed to get it into the hands of the managers and executives in the C-suite.

And the best way to do that was to build ADP’s mobile app. It has been in the top three most-downloaded applications in the business section of the iTunes Store. People initially got the application to check their pay stubs and manage their deductions, et cetera. But now, that application is starting to push insights up to managers and executives about their organization and what’s going on.

A key part was to understand the management persona. They are busy running their organizations, and they don’t have the time to pore through the data like a data scientist might to find the insights.

So we built our engine to find and highlight the most important critical data points based on their statistical significance. Do you have an outlier? Are you in the bottom 10 percent as an organization in such areas as new hire attrition? Finding those insights and pushing them to the manager and executive gets them these headlines.
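
The statistical-significance filter described here can be sketched simply: surface a "headline" only when a metric is a genuine outlier against a peer benchmark. The numbers and the z-score threshold below are invented for illustration, not ADP's actual engine:

```python
from statistics import mean, stdev

# Benchmark: new-hire attrition rates (fraction leaving in the first year)
# across peer organizations, plus one organization to evaluate.
peers = [0.08, 0.10, 0.11, 0.09, 0.12, 0.10, 0.13, 0.09, 0.11, 0.10]
org_rate = 0.21

def percentile_rank(value, sample):
    """Fraction of the benchmark at or below this value."""
    return sum(v <= value for v in sample) / len(sample)

def is_outlier(value, sample, z_threshold=2.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    return abs(value - mean(sample)) / stdev(sample) > z_threshold

# Push a headline only when the number is statistically notable.
if is_outlier(org_rate, peers):
    pct = percentile_rank(org_rate, peers)
    print(f"Headline: new-hire attrition at the {pct:.0%} percentile of peers")
```

Ordinary values produce no headline at all, which is the point: the manager sees only the outliers worth acting on.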


Next, as they interact with the application, we gain intelligence about what’s important to that manager or executive. We can then push out the insights related to what’s most important to them. And that’s where we see these value-added services going. An executive is going to care about some things differently than a supervisor or a line manager might.

We can generate the insights based on their own data, when they need it, through the application -- versus them having to go in and get it. I think that push model is a big win for us, and we are seeing a lot of excitement from our clients as they start using the app.

Gardner: Dr. Goh, are you seeing other companies extend their business models and rethinking who and what they are due to these new analytics opportunities?

Data makes all the difference

Goh: Yes, yes, absolutely. The industry has shifted from one where your differentiated assets were your methods and filed patents, to one where your differentiated asset is the data. Data becomes your defensible asset, because from that data you can build intelligent systems that make better decisions and better predictions. So you see that trend.

In order for this trend to continue, the infrastructure must be there to continually reduce cost, so you can handle the growing amounts of data and not have the cost become unmanageable. This is why HPE has gone with the edge-to-cloud hybrid approach, where the customer can implement this amassing of data in a curated and federated way. They can handle it in the most cost-effective way, depending on their operating or capital budgets.

Gardner: Marc, you have elevated your brand and value through trend analysis around pay equity and turnover, and by gaining more executive insights around talent management. But that wouldn’t have been possible without the right technology in place.

What do you have under the hood? And what choices have you made to support this at the best cost?

Rind: We build everything in our own development shop. We collect all the data on our Cloudera [big data lake] platform. We use various frameworks to build the insights and then push those applications out through our ADP Data Cloud.

We have everything open via a RESTful API, so those insights can permeate throughout the entire ADP ecosystem -- everyone from a practitioner getting insights as they on-board a new employee and on out to the recruiting process. So having that open API is a critical part of all of this.
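
As a hedged sketch of what consuming such an API could look like: the base URL, endpoint path, and query parameters below are entirely hypothetical, since the transcript confirms only that a RESTful API exists, not its actual routes or schema.

```python
import urllib.request
from urllib.parse import urlencode

# Hypothetical base URL and resource layout -- invented for illustration.
BASE_URL = "https://api.example.com/hcm/v1"

def build_insight_request(org_id: str, metric: str, token: str) -> urllib.request.Request:
    """Construct (but do not send) a GET request for one org's insight feed."""
    url = f"{BASE_URL}/orgs/{org_id}/insights?{urlencode({'metric': metric})}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )

req = build_insight_request("acme-42", "new_hire_attrition", "TOKEN")
print(req.full_url)
```

The value of the open-API design is exactly this shape: any tool in the ecosystem -- onboarding, recruiting, a dashboard -- can pull the same insights with one authenticated GET.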

Gardner: Dr. Goh, one of the things I have seen in the market is that the investments companies like ADP make in infrastructure for big data analytics and AI set in motion a virtuous adoption cycle. The investment to process the data leads to improved analytics, which brings more interest in consuming those analytics, which in turn leads to the need for more data and more analytics.

It seems to me like it’s a gift that keeps giving and it grows in value over time.

Steps in the data journey 

Goh: We group our customers on this AI journey into three different groups: early, started, and advanced. About 70 percent of our customers are in the early phase, about 20 percent are in the started phase, where they have already started on a project, and about 10 percent are in the advanced phase.

The advanced-phase customers are like the automotive customers who are already working on autonomous vehicles but would like us to come in and help them with the infrastructure to deal with the massive amounts of data.

But the majority of our customers are in the early phase. When we engage with them, the immediate discussion is about how to get started. For example, “Let’s pick a low-hanging fruit that has an outcome that’s measurable; that would be interesting.”

We work with the customer to decide on an outcome to aim for, for the ML project. Then we talk about gaining access to the data. Do they have sufficient data? If so, does it take a long time to clean it out and normalize it, so you can consume it?

After that phase, we start a proof of concept (POC) for that low-hanging fruit outcome -- and hopefully it turns out well. From there the early customer can approach their management for solid funding to get them started on an operational project.
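
The clean-and-normalize step mentioned above is often the bulk of a first POC. A minimal example -- invented records, with a drop-incomplete-rows pass followed by min-max scaling of one numeric column:

```python
# A minimal clean-and-normalize pass over raw records before a POC:
# drop rows with missing values, then min-max scale a numeric column.
raw = [
    {"emp_id": 1, "tenure_months": 6},
    {"emp_id": 2, "tenure_months": None},   # incomplete -> dropped
    {"emp_id": 3, "tenure_months": 30},
    {"emp_id": 4, "tenure_months": 18},
]

clean = [r for r in raw if r["tenure_months"] is not None]
lo = min(r["tenure_months"] for r in clean)
hi = max(r["tenure_months"] for r in clean)
for r in clean:
    # Scale into [0, 1] so features on different units are comparable.
    r["tenure_scaled"] = (r["tenure_months"] - lo) / (hi - lo)

print([round(r["tenure_scaled"], 2) for r in clean])  # [0.0, 1.0, 0.5]
```

Real pipelines handle far messier data, but this is the shape of the work: until every record is complete and on a common scale, the model training that follows cannot start.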

That’s typically how we do it. It always starts with the outcome: what are we aiming to train this machine to do? Once it has gone through the learning phase, what is it they are trying to achieve, and would that achievement be meaningful for the company? A low-hanging-fruit POC doesn’t have to be that complex.

Gardner: Marc, any words of wisdom looking back with 20/20 hindsight? When it comes to the investments around big data lakes, AI, and analytics, what would you tell those just getting started?

Rind: Much to Dr. Goh’s point, picking a manageable project is a very important idea. Go for something that is tangible, and that you have the data for. It's always important to get a win instead of boiling the ocean, to prove value upfront.

A lot of large organizations, instead of building a data lake, end up with a bunch of data puddles, because different groups each build their own.

We have committed to consolidating all of the data into a single data lake. The reason is that you can quickly connect data that you would never have thought to connect before. Understanding what the sales and service process is, and how that might impact or inform the product -- or vice versa -- is only possible if you start putting all of your data together. Once you get it together, work on connecting it up. That’s key to opening up the value across your organization.
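
A tiny sketch of why the single lake pays off: joining formerly siloed sales and service records on a shared key makes a churn pattern visible that neither silo shows alone. All data below is invented:

```python
# Two formerly siloed datasets, keyed by customer ID, plus churn outcomes.
sales = {
    "c1": {"product": "basic"},
    "c2": {"product": "premium"},
    "c3": {"product": "basic"},
    "c4": {"product": "basic"},
}
service = {
    "c1": {"complaints": 0},
    "c2": {"complaints": 5},
    "c3": {"complaints": 4},
    "c4": {"complaints": 4},
}
churned = {"c2", "c3"}

# Connect the silos: one record per customer with fields from both systems.
joined = {
    cid: {**sales[cid], **service.get(cid, {"complaints": 0}),
          "churned": cid in churned}
    for cid in sales
}

# The cross-silo pattern: heavy complainers churn. Flag the ones still here.
at_risk = [cid for cid, row in joined.items()
           if row["complaints"] >= 3 and not row["churned"]]
print(at_risk)  # ['c4']
```

The sales silo alone sees products; the service silo alone sees complaints; only the joined view reveals who to call before they leave.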

Connecting the data dots 

Goh: It helps you connect more dots.

Gardner: The common denominator here is that there is going to be more and more data. We’re starting to see the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) bring in even more data.

Even in talent management, there are more ways of gathering ever more data about what people are doing, how they are working, and what their efficacy is in the field -- especially across organizational boundaries, like contingent workforces, where you can measure what people are doing and then pay them accordingly.

Marc, do you see ever more data coming online to then need to be measured about how people work?

Rind: Absolutely! There is no way around it. There are still a lot of disconnected points of data, for sure. The connection points are going to just continue to be made possible, so you get a 360-degree view of the world at work. From that you can understand better how they are working, how to make them more productive and engaged, and bringing flexibility to allow them to work the way they want. But only by connecting up data across the board and pulling it all together would that be possible.

Gardner: We haven’t even scratched the surface of incentivization trends. That more data allows you to incentivize people on a micro basis, in near-real time, is such an interesting new chapter. We will have to wait for another day, another podcast, to get into all of that.

I’m afraid we’ll have to leave it there. We’ve been discussing how global human capital management services provider ADP has unlocked new business insights and services from its vast data resources. And we have learned that by deploying the most advanced infrastructure for AI, a new era is dawning for talent management.

Please join me now in thanking our guests, Marc Rind, Vice President of Product Development and Chief Data Scientist at ADP Analytics and Big Data. Thank you so much, Marc.

Rind: Thank you for having me, Dana.

Gardner: And we have been joined too by Dr. Eng Lim Goh, Vice President and Chief Technology Officer for High Performance Computing and Artificial Intelligence at HPE. Thank you so much, Dr. Goh.

Goh: Thank you, Dana. Thank you, Marc.

Rind: Great, thank you.

Gardner: And a big thank you as well to our audience for joining us for this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your own IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how advances in infrastructure, data access, and AI combine to produce a step-change in human capital analytics and new business services. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.

You may also be interested in: