Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task

Publication date
2024
Type
01A - Journal article
Parent work
Scientific Reports
Volume
14
Issue / Number
9736
Publisher / Publishing institution
Nature
Place of publication / Event location
London
Abstract
Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of AI advice (2a and 2b: heatmaps and 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. The independent variables were regressed on task performance, perceived advice quality and confidence ratings. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of source and explainability of advice on the dependent variables were limited. The lack of reduction in participants’ overreliance on inaccurate advice when the systems’ predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
ISSN
2045-2322
Language
English
Created during FHNW affiliation
No
Publication status
Published
Review
Peer review of the complete publication
Open access category
Closed
Citation
Cecil, J., Lermer, E., Hudecek, M., Sauer, J., & Gaube, S. (2024). Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Scientific Reports, 14, Article 9736. https://doi.org/10.1038/s41598-024-60220-5