By Ken Wang
With the economy opening back up and a post-pandemic hiring boom around the corner, more and more employers will turn to artificial intelligence (AI) and algorithms to cut costs and streamline hiring and recruitment. Massive data collection, machine learning, and other advanced computational techniques are transforming traditional pre-employment assessments, helping employers gauge the skills, aptitude, and fit of prospective workers.
So, what do these new technologies mean for workers?
To explore this question, the Department of Fair Employment and Housing will be hosting the first-ever virtual public hearing on algorithms and bias this Friday, April 30th. The hearing will run from 10:00 AM to 3:00 PM PT, with the employment section from 10:00 AM to 11:40 AM. You can RSVP using this link and tune in via Zoom.
As employers increasingly automate hiring and other HR functions, it is imperative that we explore the growing role of algorithms in the workplace and assess whether our existing labor and employment laws adequately protect workers’ rights. For example, websites like Facebook use vast amounts of user information to target ads to a precise audience. Unlike a traditional ad in the paper’s classified section, visible to anyone who picks up the paper, the new world of micro-targeting means you only see the opportunities that are targeted to you. Facebook’s “Affinity Group” feature categorizes users based on interests and demographics, allowing advertisers to precisely target their desired audience. An employer can use this to limit ad delivery to specific age bands, such as users aged 18 to 38, meaning older workers never even see the job posting.
Consider Facebook’s “Lookalike Audience” feature, which lets an ad buyer import a “sample audience”; drawing on a variety of data points, including demographic information, Facebook then generates a target audience that “looks like” that sample. Used in the employment context, a sample audience with skewed demographics, such as the staff of a tech firm that is overwhelmingly white, young, and male, will produce a target audience that is similarly skewed.
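Facebook’s actual matching model is proprietary, but a minimal sketch of the general idea, nearest-neighbor matching on user features, shows why a skewed sample produces a skewed audience. Every user, feature, and number below is hypothetical.

```python
from statistics import mean

def distance(user, centroid, features):
    """Euclidean distance over the matching features only."""
    return sum((user[f] - centroid[f]) ** 2 for f in features) ** 0.5

def lookalike(sample, pool, features, size):
    """Return the `size` pool users closest to the sample's centroid:
    the "lookalike audience"."""
    centroid = {f: mean(u[f] for u in sample) for f in features}
    return sorted(pool, key=lambda u: distance(u, centroid, features))[:size]

# Hypothetical user pool. Matching uses age and two interest scores; gender is
# carried along only to audit the output and is never used for matching.
# (Toy simplification: features are not normalized, so age dominates.)
pool = [
    {"age": 24, "tech": 0.9, "care": 0.1, "gender": "M"},
    {"age": 26, "tech": 0.8, "care": 0.2, "gender": "M"},
    {"age": 29, "tech": 0.9, "care": 0.3, "gender": "F"},
    {"age": 47, "tech": 0.7, "care": 0.6, "gender": "F"},
    {"age": 52, "tech": 0.5, "care": 0.7, "gender": "F"},
    {"age": 58, "tech": 0.4, "care": 0.8, "gender": "M"},
]

# A "sample audience" exported from a young, male-dominated workforce.
sample = [
    {"age": 25, "tech": 0.9, "care": 0.1},
    {"age": 27, "tech": 0.8, "care": 0.2},
]

audience = lookalike(sample, pool, features=("age", "tech", "care"), size=3)
print([(u["age"], u["gender"]) for u in audience])
# [(26, 'M'), (24, 'M'), (29, 'F')] -- young and mostly male, like the sample
```

Gender never enters the matching step, yet the output mirrors the sample anyway: the protected trait rides in on correlated features.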
You’d be right to wonder how these features could possibly withstand scrutiny under our anti-discrimination laws. In 2019, Facebook settled a lawsuit brought by national civil rights groups and agreed to make significant changes to the way these features are used for housing, employment, and credit ads.
These Facebook features present obvious issues, but other forms of algorithmic hiring tools can make discrimination virtually impossible to detect. Our friends at Upturn have written a helpful overview of the kinds of tools being deployed at each step of the “hiring funnel,” the process by which prospective applicants turn into new hires. Here are a few examples at each stage:
- At the “screening” stage:
- Employers may use employee assessment tools to measure the skill, personality, or other traits of applicants. For example, one such tool asks applicants to play games that measure traits such as processing speed, memory, and perseverance. The data is then used to predict and rank which applicants best match the top performers in the employer’s existing workforce (a minimal sketch of this ranking logic appears after this list).
- This means that the applicants selected to advance will likely mirror the demographics of the employer’s existing workforce.
- At the “interviewing” stage:
- Employers may record an applicant’s interview using technology capable of facial recognition. The system captures verbal responses, tone, and facial expressions, then analyzes word choice, enthusiasm, and other criteria to predict future job performance.
- This means that applicants who do not speak English as a first language may score poorly, regardless of their actual qualifications, because their speech patterns and word choice differ from those the system was trained on.
- At the “selection” stage:
- Employers may use automated background-check tools that trawl the internet for information in order to flag potential risks in hiring an applicant. For example, a tool can automatically analyze an applicant’s social media history to estimate how likely that person is to engage in toxic behavior. Such a tool may also surface information that is otherwise protected from disclosure (a toy version of this failure mode appears after this list).
- This means that an otherwise qualified applicant may be denied a job because a tool misinterpreted an old social media post or revealed private health information.
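As promised above, here is a minimal sketch of the screening-stage ranking. The traits, scores, and scoring rule are hypothetical (commercial vendors’ models are proprietary); the point is structural: ranking applicants by similarity to incumbent top performers rewards whoever most resembles the incumbents.

```python
from statistics import mean

# Hypothetical game-derived trait scores for the employer's current best
# performers. If those incumbents are a homogeneous group, this benchmark
# encodes that homogeneity, including traits that merely correlate with
# demographics.
top_performers = [
    {"speed": 0.90, "memory": 0.70, "persistence": 0.60},
    {"speed": 0.80, "memory": 0.80, "persistence": 0.50},
]

# Average the incumbents' traits into a single benchmark profile.
benchmark = {t: mean(p[t] for p in top_performers) for t in top_performers[0]}

def match_score(applicant):
    """Negative distance to the benchmark: higher means a closer match."""
    return -sum((applicant[t] - benchmark[t]) ** 2 for t in benchmark) ** 0.5

applicants = [
    {"name": "A", "speed": 0.84, "memory": 0.76, "persistence": 0.56},
    {"name": "B", "speed": 0.60, "memory": 0.95, "persistence": 0.90},
]

for a in sorted(applicants, key=match_score, reverse=True):
    print(a["name"], round(match_score(a), 3))
# A -0.017 -- nearly identical to the incumbents, so ranked first
# B -0.474 -- strong scores, but a different profile, so ranked behind
```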
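And here is a toy version of the selection-stage failure mode: a context-blind keyword flagger. Real background-check tools use more elaborate text analysis, and the terms and posts below are invented, but the misreadings are the same in kind.

```python
# Hypothetical, context-blind keyword flagger. Real background-check tools use
# more elaborate text models, but misreadings of this kind are the core risk.
RISK_TERMS = ("kill", "fight", "sick")

def flag_post(post):
    """Return every "risk" term appearing in a post, with no sense of context."""
    text = post.lower()
    return [term for term in RISK_TERMS if term in text]

posts = [
    "Absolutely killed it in my 5k this morning!",       # harmless slang
    "Been sick all month; so glad my treatment worked.", # health information
]

for post in posts:
    print(flag_post(post), "<-", post)
# ['kill'] <- harmless slang flagged as violent
# ['sick'] <- surfaces health information an employer is not entitled to probe
```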
These are just a few examples of how AI and algorithms are being used in the employment context. What’s clear is that there is far more we don’t know, and we hope that Friday’s hearing is the first of many to explore this important topic. As worker advocates, we must ensure that our state agencies stay engaged on these issues and that our employment laws develop in a way that keeps pace with these evolving technologies. In addition to Friday’s hearing, you can also view an issue briefing CELA recently co-hosted on this same topic here.
About Ken Wang, Esq.
Legislative Policy Associate, California Employment Lawyers Association