
AI-driven assessments

Charles Sanchez

Digital life is augmenting human capacities and disrupting age-old human activities. Code-driven systems now reach more than half of the world's inhabitants with ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

It is predicted that networked artificial intelligence will amplify human effectiveness while also threatening human autonomy, agency, and capabilities. The possibilities are wide-ranging: computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition, and language translation. These "smart" systems, in communities, vehicles, buildings, utilities, farms, and business processes, will save time, money, and lives and offer individuals a more customized future.

When it comes to assessment, AI is already part of many candidate and employee psychometric assessments. This can range from realistic chatbot-style conversations with candidates in situational judgment tests (SJTs) to proven algorithm-based decisions derived from analyzing candidate responses to test questions. The use of AI in assessment is not new: personality questionnaires have been scored and 'interpreted' by expert-developed algorithms for a long time. However, this is just the beginning. AI in assessment is growing at a rapid pace. Most companies now use AI to deliver a better user experience, and even edtech platforms like Byju's use AI chatbots to give students a better learning experience.

When it comes to AI, it is easy to feel lost in the sea of technical terms, so here are a few key concepts behind the use of AI in assessment.

Robotic process automation: This is achieved by gathering and codifying expert knowledge and then programming the system with an 'if/then' rule-based approach. However, a rule-based system cannot learn and improve without being given explicit instructions. Computer-generated interpretive reports in talent assessment use this technology.
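To make the 'if/then' idea concrete, here is a minimal sketch of a rule-based interpretive report. The trait names and score thresholds are hypothetical examples for illustration, not any vendor's actual rules.

```python
# A minimal 'if/then' rule-based interpretive report.
# Trait names and thresholds are hypothetical examples.

def interpret_trait(trait: str, score: float) -> str:
    """Map a normalized trait score (0-10) to a canned interpretation band."""
    if score >= 7:
        band = "high"
    elif score >= 4:
        band = "moderate"
    else:
        band = "low"
    return f"{trait}: {band} ({score:.1f}/10)"

candidate_scores = {"Conscientiousness": 8.2, "Extraversion": 3.5, "Openness": 5.9}
print("\n".join(interpret_trait(t, s) for t, s in candidate_scores.items()))
```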

Machine learning: A computer cannot think for itself, but statistical tools can enable a system to make predictions from given data and to improve those predictions over time. This is used in data analysis to create predictive people analytics that help employers make better talent assessments.
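As a rough illustration of predictive people analytics, the sketch below fits a simple model to synthetic data and scores a new candidate. The features, labels, and choice of scikit-learn's logistic regression are assumptions made for demonstration, not a description of any real assessment product.

```python
# A minimal predictive people-analytics sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [numerical reasoning, verbal reasoning, SJT score], each 0-10.
X = rng.uniform(0, 10, size=(200, 3))
# Hypothetical outcome label: 1 = later rated successful in the role.
y = (X.sum(axis=1) + rng.normal(0, 3, size=200) > 16).astype(int)

model = LogisticRegression().fit(X, y)

new_candidate = np.array([[7.5, 6.0, 8.0]])
print("Predicted probability of success:", model.predict_proba(new_candidate)[0, 1])
```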

Pattern matching: This AI technique checks a sequence of responses to determine whether there is a pattern. It can be used to carry out some 'human' tasks, such as recognizing faces or identifying emotions.
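One simple example, assuming Likert-style questionnaire responses, is flagging 'straight-lining' (the same answer repeated many times in a row). The check and threshold below are hypothetical quality rules, not any test publisher's actual method.

```python
# Pattern matching on a sequence of questionnaire responses:
# flag long runs of identical answers for human review.

def longest_run(responses):
    """Return the length of the longest run of identical consecutive responses."""
    longest = current = 1
    for prev, curr in zip(responses, responses[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

answers = [3, 3, 3, 3, 3, 3, 3, 2, 4, 3, 3, 5]
if longest_run(answers) >= 6:  # hypothetical threshold
    print("Possible straight-lining detected; flag for human review.")
```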

Natural language processing: This uses text and speech analytics to extract underlying meaning. It can be applied to analyze candidates' spoken answers to interview questions.
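As a toy illustration of extracting signals from free text, the sketch below counts hypothetical competency keywords in an interview answer. Real NLP pipelines rely on trained language models rather than hand-written word lists, so treat this only as a sketch of the idea.

```python
# A toy text-analytics sketch: count competency-related keywords in an answer.
import re

# Hypothetical keyword lists, not a validated competency dictionary.
COMPETENCY_KEYWORDS = {
    "teamwork": {"team", "collaborated", "together", "we"},
    "leadership": {"led", "organised", "delegated", "initiative"},
}

def competency_signals(answer: str) -> dict:
    tokens = set(re.findall(r"[a-z']+", answer.lower()))
    return {comp: len(tokens & words) for comp, words in COMPETENCY_KEYWORDS.items()}

response = "I led a small team and we collaborated closely to deliver the project."
print(competency_signals(response))  # {'teamwork': 3, 'leadership': 1}
```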

By combining these techniques, AI can play a crucial role in analyzing and interpreting vast amounts of applicant data.

Beyond these technological considerations, a few challenges have to be addressed:

Defensibility: Standardized, 'plug-and-play' AI systems won't differentiate an employer's brand; if competitors use the same systems, they will all be chasing the same talent. Moreover, these systems use deep learning networks that learn as they go. This may sound promising, but it makes it difficult to explain why an applicant was accepted or rejected. Such systems can lead to selection decisions that cannot be defended, leaving employers vulnerable to litigation. Only custom AI systems offer the ability to make transparent and defensible selection decisions.

Time: Custom AI systems mirror human behavior and replicate the best practices of assessors and raters. To achieve this, the system must be pre-fed with relevant information. It can take up to six months to 'train' an AI system to assess candidates in the same way that the assessors and raters want, and managing this lead time can be a significant challenge.

Ethics: The ethical question is how much support should be taken from an AI system. For example, are assessors happy for an AI system to reject applicants outright, or would they prefer it to flag unsuitable candidates for review so that a person can check the applicant's details? How to use AI ethically will be a key consideration for many assessors. AI's role should be restricted to providing additional information and enhancing efficiency: assessors should always set the objectives of the assessment, and AI can then deliver helpful detail at various stages of the selection process to support a final decision.

Data handling: AI can analyze vast amounts of data, but the results may be misinterpreted or even deliberately abused. Good data-handling practices will be essential not just for confidentiality but also for maintaining the organization's reputation. AI should be used carefully and responsibly to help predict which candidates are suitable for the role.

There is no reason to expect that a human mind can keep up with an artificial intelligence machine in the future. Over the last decade, AI has also been incorporated into education, whether to support the analysis of human behavior in teaching and learning contexts, as a didactic resource combined with other technologies, or as a tool for assessing students. Is artificial intelligence less than our intelligence?

Find out more about artificial intelligence on Byju’s.
