In my first blog post in this series, I shared the history behind HireVue Assessments, which use machine learning to reduce bias in recruiters' decision-making when screening job applicants. This approach addresses the problems inherent in traditional multiple-choice pre-hire assessment tests, which are time-consuming and all too often a deal-breaker for the candidates who have the most options. HireVue has now built a system that aims to accurately predict job success from a structured video interview, making the screening process both faster and fairer.
The team at HireVue is composed of data scientists, industrial/organizational (IO) psychologists, and engineers. To create a custom HireVue Assessments model for a particular job role at a customer site, we use input gathered from a recorded video interview. To ensure that it’s effective at predicting success, our IO psychologists need to understand the job role in-depth — and most importantly, what it takes to perform well (or not) in that role. We use this understanding to train our model. When a candidate is assessed, HireVue Assessments provide a score that indicates how well that candidate is likely to perform in the role.
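To make that flow concrete, here is a minimal, purely illustrative sketch of this kind of pipeline in Python. It is not HireVue's implementation: the features, data, and scikit-learn model are placeholders standing in for interview-derived inputs and a job-performance label.

```python
# Illustrative only: train a model on interview-derived features from people
# whose job performance is known, then score a new candidate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: placeholder interview features for one person;
# each label: whether that person performed well in the role.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score a new candidate: an estimate of how likely they are to perform well.
candidate_features = rng.normal(size=(1, 5))
score = model.predict_proba(candidate_features)[0, 1]
print(f"Assessment-style score for this candidate: {score:.2f}")
```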
It’s important to note that HireVue Assessments don’t replace all person-to-person interviews, nor do they determine whom to hire. These scores provide decision support for recruiters, who then decide which candidates to move along to the person-to-person phases of the recruitment process.
Our process for building models follows the best practices of industrial/organizational psychology (the science behind traditional assessments) and machine learning:
The HireVue Assessments model — the algorithm — is looking for the same things that you or I would notice if we were conducting an interview. This includes which words are used, the type and meaning of statements made, and facial movements. Because these cues vary across cultures, we train each model using data from people of the same culture as the candidates it will assess. See our Japanese assessments model press release for one example.
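As a rough illustration of those two ideas — transcript-level signals and per-culture models — the Python sketch below extracts a few toy features from an interview transcript and routes them to a locale-specific scorer. All feature names, weights, and locales are invented for this example.

```python
# Assumption-laden sketch, not HireVue's code: toy transcript features plus a
# per-culture model registry, so each locale is scored by its own model.
from collections import Counter

def extract_features(transcript: str) -> dict:
    """Toy transcript features: word usage and a crude statement count."""
    words = transcript.lower().split()
    counts = Counter(words)
    return {
        "num_words": len(words),
        "num_statements": transcript.count(".") + transcript.count("?"),
        "collaboration_terms": sum(counts[w] for w in ("team", "we", "together")),
    }

# Hypothetical registry: each locale's model is trained only on interview data
# from that culture (simple stand-in scoring functions here).
models_by_locale = {
    "en-US": lambda f: 0.10 * f["collaboration_terms"] + 0.010 * f["num_statements"],
    "ja-JP": lambda f: 0.12 * f["collaboration_terms"] + 0.008 * f["num_statements"],
}

def score_candidate(transcript: str, locale: str) -> float:
    return models_by_locale[locale](extract_features(transcript))

print(score_candidate("We worked as a team. Together we shipped the project.", "en-US"))
```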
Our data scientists and IO psychologists design HireVue Assessments algorithms to include only the data points proven to be predictive of job performance, and to leave out the things that don't matter. This eliminates consideration of many characteristics that can unintentionally distract or influence human evaluators.
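A minimal sketch of that selection step is shown below, assuming a simple univariate filter from scikit-learn rather than whatever method is actually used in production: only features with a measurable statistical relationship to the performance label are retained, and the rest are dropped before training.

```python
# Illustrative feature selection: keep only features that relate to the
# job-performance label; discard the rest. Data here is synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                     # 10 candidate features
y = (X[:, 2] - X[:, 7] + rng.normal(scale=0.8, size=300) > 0).astype(int)

selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
kept = selector.get_support(indices=True)
print("Indices of features retained as predictive:", kept)

X_reduced = selector.transform(X)                  # only predictive features remain
```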
It’s critical that companies creating artificial intelligence algorithms be committed to a code of ethics. At HireVue, we have been dedicated to doing good science in a methodical, fair, and ethical way since Day One, and you have our commitment that we will remain dedicated going forward.
In part three of this blog-post series, I address the issues and misconceptions about algorithmic bias in more detail.