
In the final installment of our AI at Work series, partner Guy Brenner and senior counsel Jonathan Slowik tackle a critical issue: mismatches between how artificial intelligence (or AI) tools are designed and how they are actually used in practice. Many AI developers emphasize their rigorous efforts to eliminate bias, reassuring employers that their tools are fair and objective, but a system designed to be bias-free can still produce biased outcomes if used improperly. Tune in as we explore real-world examples of these risks and what employers can do to ensure they are leveraging AI responsibly.

Listen to the podcast.


Guy Brenner: Welcome to The Proskauer Brief: Hot Topics in Labor and Employment Law. I’m Guy Brenner, a partner in Proskauer’s Employment Litigation & Counseling group, based in Washington, D.C. I’m joined by my colleague, Jonathan Slowik, a special employment law counsel in the practice group, based in Los Angeles. This is the final installment of our initial multi-part series detailing what employers need to know about the use of artificial intelligence, or AI, when it comes to employment decisions, such as hiring and promotions. Jonathan, thank you for joining me today.

Jonathan Slowik: It’s great to be here, Guy.

Guy Brenner: So if our listeners haven’t heard the earlier installments of the series, we encourage you to go back and listen to them. In part one, we go through what we hope is a useful background about what AI is and the solutions it offers to employers. In part two, we talk about issues with training data and how those can lead to biased or otherwise problematic outputs from AI tools. In part three, we discuss so-called black box issues; in other words, issues that arise due to the fact that it may be difficult to understand the inner workings of many advanced AI systems. Today’s episode is about mismatches between the design of an AI tool and how the tool is used in practice. Jonathan, for background, AI developers generally put a lot of effort into eliminating bias from their products, isn’t that right?

Jonathan Slowik: Yes, that’s right. And that’s a major selling point for a lot of these developers. Employers obviously have a great interest in ensuring that they’re deploying a tool that’s not going to create bias in an unintended way. And so, if you go to just about any of these developers’ websites, you can find statements or even full pages about the efforts and lengths they’re going through to ensure that they’re putting out products that are bias-free. And this should provide some measure of comfort for employers; it’s clearly something the developers are competing on. But even if a product is truly bias-free, it could still produce biased results if it’s deployed in a way that the developer didn’t intend. To make this concrete, I want to go through a few examples. First, suppose an employer instructs its resume scanner to screen out applicants who live more than a certain distance from the workplace, perhaps on the theory that those people are less likely to be serious candidates for the position. And if you remember from part one of this series, hiring managers are overwhelmed with applications these days, given the ability to submit resumes at scale on platforms like LinkedIn or Indeed. Guy, do you see any problem with this particular screening criterion?

Guy Brenner: Well, Jonathan, I can see the attractiveness of it. And I can also see how AI can make something like this, which hiring managers may have thought of in the past, possible when it otherwise would be impossible, just by virtue of the speed and efficiency of AI and its ability to do things in a matter of seconds. And it sounds unbiased and objective, and it’s a rational basis for trying to cull through the numerous resumes that employers are inundated with whenever they’re trying to fill a position. But the fact is that many of the places in which we live are highly segregated by race and ethnicity. So depending on where the workplace is located, this kind of approach might disproportionately screen out legitimate candidates of certain races, even though that may not be the intent.

Jonathan Slowik: Right. And this is something you could do manually; a hiring manager could just decide to toss out all the resumes from a certain zip code. But doing it with technology increases the risk. A hiring manager doing this manually might start to notice a pattern at some point and realize that the screening criterion was creating an unrepresentative pool. The difference with using software to do this kind of thing is that it can be done at scale, very quickly, and the software only shows you the output. And so, the same hiring manager doing this with technology might screen out mostly racial minorities and have no idea that that was even the case. All right, next hypothetical. What if an employer uses a tool that tries to verify candidates’ backgrounds by cross-referencing social media, and then boosts candidates whose backgrounds are verifiable in that way? Any issues with that one?

Guy Brenner: Well, the one that comes to mind is, I mean, I don’t think this is a controversial proposition that, generally speaking, younger applicants are more active on social media than older applicants. And I think that’s exacerbated depending on which platform we’re talking about.

Jonathan Slowik: We actually have data on that, so it’s not just a stereotype. Pew Research has issued data confirming what I think all of us suspect.

Guy Brenner: Right. And so it’s not hard to imagine an enterprising plaintiff’s lawyer arguing that a screening tool like this may have a disparate impact on older applicants. I would also be concerned if the scoring takes into account other information on social media pages that could be used as a proxy for discriminatory decisions.

Jonathan Slowik: Okay, one more hypothetical. Suppose an employer trying to fill positions for a call center uses a test that tries to predict whether the applicant would be adept at handling distractions under typical working conditions. And suppose this call center includes a lot of background noise. So this is clearly a screening mechanism that’s testing something job-related; the employer wants to see how the person is going to perform under the conditions they will actually face once they’re in the job. Is there any problem with this kind of test?

Guy Brenner: Well, first, like any other test, you’d want to know whether the test itself has a disparate impact on any particular group, and you would want to have it validated. But I would also want to know whether the company had considered that some applicants might be entitled to a reasonable accommodation. For example, you can imagine someone who’s neurodiverse performing poorly on this type of simulation, but doing just fine if they were provided with noise-canceling headphones.

Jonathan Slowik: For sure. And this is something the EEOC has issued guidance about. Many of these types of job skills simulations are designed to test an applicant’s ability to perform tasks under typical working conditions, as the employer assumed in this example. But what the EEOC has made clear is that many employees with disabilities don’t work under typical working conditions, because they work with reasonable accommodations. So for that reason, over-reliance on a test, without considering its impact on people with disabilities and whether it should allow for accommodations, is potentially problematic.

Guy Brenner: Well, thanks, Jonathan, and to those listening, thank you for joining us on The Proskauer Brief today. We hope you found this series informative. And please note that as developments warrant, we will be recording new podcasts to help you stay on top of this fascinating and ever-changing area of the law and technology. Also, please be sure to follow us on Apple Podcasts, YouTube Music, and Spotify so you can stay on top of the latest hot topics in labor and employment law.

Guy Brenner

Guy Brenner is a partner in the Labor & Employment Law Department and leads the Firm’s Washington, D.C. Labor & Employment practice. He is head of the Government Contractor Compliance Group, co-head of the Counseling, Training & Pay Equity Group and a member of the Restrictive Covenants, Trade Secrets & Unfair Competition Group. He has extensive experience representing employers in both single-plaintiff and class action matters, as well as in arbitration proceedings. He also regularly assists federal government contractors with the many special employment-related compliance challenges they face.

Guy represents employers in all aspects of employment and labor litigation and counseling, with an emphasis on non-compete and trade secrets issues, medical and disability leave matters, employee/independent contractor classification issues, and the investigation and litigation of whistleblower claims. He assists employers in negotiating and drafting executive agreements and employee mobility agreements, including non-competition, non-solicit and non-disclosure agreements, and also conducts and supervises internal investigations. He also regularly advises clients on pay equity matters, including privileged pay equity analyses.

Guy advises federal government contractors and subcontractors on all aspects of Office of Federal Contract Compliance Programs (OFCCP) regulations and requirements, including preparing affirmative action plans, responding to desk audits, and managing on-site audits.

Guy is a former clerk to Judge Colleen Kollar-Kotelly of the U.S. District Court for the District of Columbia.

Jonathan Slowik

Jonathan Slowik represents employers in all aspects of litigation, with a particular emphasis in wage and hour class, collective, and representative actions, including those under the Private Attorneys General Act (PAGA). He has defended dozens of class, collective, and representative actions in state and federal trial and appellate courts throughout California and beyond. In addition to his core wage and hour work, Jonathan has defended employers in single-plaintiff discrimination, harassment, and retaliation cases, and in labor arbitrations. Jonathan also regularly advises clients on a wide range of compliance issues and on employment issues arising in corporate transactions.

Jonathan has deep experience representing clients in the retail and hospitality industries, but has assisted all types of clients, including those in the health care, telecommunications, finance, media, entertainment, professional services, manufacturing, sports, nonprofit, and information technology industries.

Jonathan is a frequent contributor to Proskauer’s California Employment Law Blog and has written extensively about PAGA on various platforms. He has been published or quoted in Law360, the Daily Journal, the California Lawyer, the Northern California Record, and the UCLA Law Review.

Jonathan received his B.A., magna cum laude, from the University of Southern California in 2007 and his J.D. from UCLA School of Law in 2012, where he was a managing editor of the UCLA Law Review.