
Event Recap: How AI Can Help People Back to Work

by Eline Chivot

During a discussion on the role and use of artificial intelligence (AI) in human resources (HR) hosted by the Center for Data Innovation, panelists agreed that AI creates a range of opportunities for companies, hiring managers, employees, and applicants, and those deploying the technology should address concerns about discrimination and bias, particularly through transparency, communication, and a mix of industry-led best practices and policy enforcement.

There is a broad range of AI applications in HR, such as sourcing and shortlisting candidates, screening resumes or social media profiles, assessing candidates in video interviews, automating the onboarding process, and predicting employee retention. 

Markellos Diorinos, CEO of Bryq, noted that in recent years AI has moved from experimentation to mainstream use in recruiting as AI tools have become more advanced and accessible. According to Ben Mones, founder and CEO of Fama.io, this growing use also stems from the need to automate workflows and processes in order to stay competitive, in this case by finding top talent, which is scarce in a globalized, saturated job market. Another reason behind the growing adoption of AI is that HR has shifted from being viewed as a cost center to a field of expertise that can increase organizations' productivity. AI serves both of these purposes.

AI tools for recruiting and hiring can indeed improve efficiency and productivity, as screening large volumes of resumes quickly and effectively remains one of the biggest challenges. Lindsey Zuloaga, director of data science at HireVue, explained how AI can address skill shortages across key industries by enabling companies to find qualified people for hard-to-fill positions. Eline Chivot, senior policy analyst at the Center for Data Innovation, noted that the hardest part of recruitment for talent acquisition leaders is identifying the right candidates in a large applicant pool.

AI tools can save time for candidates who would otherwise apply for jobs they are not qualified for or miss out on opportunities they may be interested in. In addition, hiring managers can spend less time on mundane tasks and more time connecting with applicants. AI is repairing a broken hiring process that benefited few people: In-person interviews are an imperfect tool for selecting employees, and resumes do not allow employers to assess candidates for future job performance, soft skills, or compatibility.

AI can increase diversity in teams and reduce discrimination. Standardizing the process with AI, such as by asking all candidates the same questions during interviews, gives people who are traditionally underrepresented or overlooked a better chance of being treated fairly. For instance, one of HireVue’s clients increased the diversity of its teams by 16 percent by using automated assessments and classifiers. Chivot and Zuloaga recalled that our brains are, after all, the ultimate black box, and while concerns often focus on algorithms being biased, algorithms could replace another biased system: people.

Andrea Glorioso, principal policy officer for the future of work at the European Commission’s DG Connect, concurred with Zuloaga and Diorinos that AI in recruiting and hiring carries risks such as discrimination and biased selection. Glorioso acknowledged that these issues existed before, and that we know how to address them: through the law and the courts, or through industrial relations and labor unions. As technologies enter the equation, however, the ways to obtain information or appeal decisions may become less clear. To address these concerns, the European Commission is preparing rules that could limit the use of AI in certain sectors and for certain activities that present potential risks.

Glorioso suggested that greater transparency from industry could alleviate this apprehension and prevent mistaken assumptions about how the technology is used. Panelists responded that companies are increasingly communicating with customers and the public. For instance, for assessments based on pre-recorded video interviews, HireVue discloses which aspects are taken into consideration, what data is collected, with whom it is shared, and which assessments use AI. Mones noted that explanations of solutions based on automated decision-making are not always human-readable; it may not be useful to explain how they operate to candidates who are unfamiliar with machine learning. In addition, when engaging with customers considering those solutions, it is important for companies providing AI tools to set expectations about the data and insights that can be shared, and about the permissible uses of the software.

Addressing concerns about AI should involve a mix of policy enforcement and private- and public-sector responsibility. Industry-led self-regulation and internal procedures that increase transparency can be useful. For example, HireVue is working with independent third-party auditors to assess algorithmic fairness, risk, and legal aspects, and has an expert advisory board that serves as internal self-regulation. Mones noted that existing public policy and regulations on privacy and data protection, such as the GDPR (the EU’s privacy law), the U.S. Fair Credit Reporting Act (FCRA), and ban-the-box laws, already provide relevant safeguards.

There are many potential opportunities to use AI in HR, and policymakers should support efforts to expand adoption of this technology to help workers and jobseekers.
