SeeTalent Insights

Generative AI in HR: Are we allowed to use it?

Airlie Hilliard

Generative AI is currently at the peak of the AI hype cycle, with many having high hopes for its transformative potential. The release of ChatGPT in November 2022 marked a significant inflexion point for generative AI, putting it in the hands of everyday people and not just developers. Since then, various providers have made their chatbots available to the public, including Google’s Gemini (previously Bard) and Anthropic’s Claude, and a number have released multimodal generative AI solutions, such as OpenAI’s text-to-video model Sora and Stability AI’s Stable Diffusion text-to-image model. The availability of these powerful models with user-friendly interfaces has fuelled a number of innovative applications, including in HR, where generative AI is being used on both the candidate side and employer side.

How is generative AI being used by job candidates?

Given its widespread availability, some job candidates are harnessing generative AI in their job applications and during the assessment process to support their performance, with many sharing their “hacks” on social media. For example, some candidates are using AI to draft their cover letters, and 45% of job seekers have reportedly used AI to craft, edit, or improve their CVs. There are even startups emerging that specifically offer solutions for CV and cover letter crafting. 

Other candidate applications of generative AI include LinkedIn profile optimisation and interview prep. Specific apps have also been created to transcribe interview questions in real-time and generate a response for candidates, highlighting the need to ensure that recruitment funnels have more AI-resistant assessments, such as game and image-based assessments that do not have as many verbal or text-based elements. 

How is generative AI being used in HR?

On the HR side, generative AI has a number of applications such as for writing job descriptions and job ads, with the technology potentially able to increase gender inclusivity by removing gender-coded language. It is also being used by employers to screen resumes, provide feedback to candidates, and coordinate interview scheduling. It can even be used during offboarding to provide personalised instructions and feedback to employees as they depart the organisation, and some have trialled the use of large language models to infer personality from video interviews

While generative AI can be useful for automating repetitive tasks and personalising communications, its use in HR can raise some ethical, as well as legal, concerns that employers must keep in mind to ensure that they are not inadvertently disadvantaging candidates or threatening the integrity and validity of their recruitment process. 

What to keep in mind when using generative AI in recruitment decision-making

It is vital that there is evidence of the validity of selection procedures not only to ensure that it conforms with equal opportunity laws and stands up to legal scrutiny, but also to maximise the value of the tool. Part of this stems from having a good understanding of the tool, what it measures, and how. 

Although algorithms can complicate this since they can identify patterns in data that might be somewhat unintuitive to humans, with a well-designed tool, we should at least know the inputs and how they link to the outputs. If we have access to the model’s inputs and outputs, this also allows us to easily test for adverse impact by examining whether outputs vary by subgroup through quantitative testing. 

Since generative AI models are rarely created in-house and are typically off-the-shelf models or fine-tuned versions of them, knowing how they make decisions can be more challenging since less is known about the model in the first place. This can make it more difficult to provide evidence for the validity of models, particularly if using them for broad tasks that are not grounded in validated frameworks. For example, asking a large language model to simply recommend which candidates to move forward in the hiring process based on their CVs could be less valid and useful compared to asking a large language model to make a recommendation based on how well their experience in previous roles aligns with the job description that was developed based on job analysis. In other words, it’s about how the generative AI is used that can affect how ethical and useful it is for making recruitment decisions. 

Moreover, generative AI models are trained on vast amounts of data sourced from the internet, the majority of which was created by humans. However, humans can often be influenced by stereotypes and unconscious biases, which may influence the outputs of large language models if they reflect the same biases they were trained on. Since generative AI tools often produce qualitative outputs, biases can be more subtle and harder to detect compared to quantitative outputs. Even if the output is quantitative, such as asking the LLM to rate candidates out of 10 or rank them, we have very limited knowledge about the scale construction compared to assessments specifically constructed for recruitment decisions. Furthermore, the outputs can also be affected by hallucinations and toxicities, which can reduce their validity and utility and can adversely affect candidates if the outputs are not checked. 

Although researchers have made progress toward measuring and mitigating these issues with generative AI, there is still a way to go. This progress is also being made in a general sense and not specifically with recruitment decisions in mind, so bodies such as SIOP have not yet been able to publish guidelines for acceptable statistical procedures for evaluating the outputs of generative AI. 

In summary, generative AI can be used in recruitment decisions but is better used as a supporting resource for decision-making that is verified using other solutions, rather than as the sole factor used to make decisions. 

What to keep in mind when using generative AI to enhance candidate experience

While candidate experience enhancement is less regulated compared to making selection decisions, it is still important to scrutinise the use of generative AI here. When interacting with candidates, inaccurate outputs or even rude outputs could be damaging to your reputation and could frustrate users, impacting the candidate experience. It might also impact how likely that candidates are to accept a job offer, which has important implications for the pool of talent you have access to. Some ways to overcome this are by testing models for toxicity, hallucinations, and efficacy and using techniques such as red teaming to see how well the model can withstand malicious actors.

When interacting with candidates, the information that large language models process should also be considered. For example, if using large language models to schedule interviews with clients, this might expose the language model to the candidate’s contact details such as their email address and phone number. Given that experiments have shown that the models’ alignment can be circumvented, resulting in the reproduction of training data that can contain personal and identifying information, the personal information that large language models have access to must be limited to only the information that is essential for it to carry out the required task to enhance the candidate experience. 

If using generative AI to interact with candidates, this should also be disclosed to increase transparency and trust, but also so that candidates themselves can limit the information that they share with generative AI models to lessen the risk of data leaks. 

Keep best practices in mind when using generative AI in recruitment

Generative AI can be used in a number of innovative and valuable ways during the recruitment process, but it must also be used with AI ethics best practices in mind. The use of generative AI is still subject to the rigorous scrutiny of other hiring practices, so it should be used in a way that is supported by evidence in order to get the best value out of it. 

 

Schedule a demo to find out how SeeTalent ethically uses generative AI in our augmented reporting.

Related Posts

How SeeTalent’s AI-Augmented Reporting Can Transform Your Talent Assessments

Airlie Hilliard

Psychometric assessments are an essential talent management tool. They can be used during recruitment to evaluate job applicants and identify the strongest talent as well as during the talent management lifecycle for applications such as coaching and development. However, the utility of psychometric assessments for both test-takers and talent managers is reduced if reports are poor quality, too high-level, do not provide personalised insights, or are missing altogether.

Read More

Why you should use work sample tests in your recruitment funnel instead of CVs

Airlie Hilliard

Almost every job opening asks applicants to provide their CV and/or fill in an application form that largely contains information that would be included in a CV, such as education and past experience. This is typically the first step in the process. Due to the large volume of applicants, mechanisms are used to screen applicants and reduce the size of the applicant pool so that resources can be invested into applicants that demonstrate more promise. While this makes intuitive sense, evidence shows us that CVs are bad predictors of future job performance.

Read More

How AI-Powered Work Sample Tests Can Help You Find Top Talent

Airlie Hilliard

Identifying top talent can be difficult, especially if you have ineffective or inefficient selection assessments. Luckily, there is over a century of research into the best predictors of future job performance, with work samples consistently indicated to be one of the strongest predictors of performance. Work sample tests, also known as work simulations, work previews or virtual job tryouts, are used to evaluate how applicants perform with a job-relevant task that is aligned with what their responsibilities would be if they were to get the role.

Read More
Skip to content