Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task
Publication date
2024
Type
01A - Article in a scientific journal
Parent work
Scientific Reports
Volume
14
Issue / Number
9736
Publisher / Publishing institution
Nature
Place of publication
London
Abstract
Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of AI advice (2a and 2b: heatmaps and 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. The independent variables were regressed on task performance, perceived advice quality and confidence ratings. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of source and explainability of advice on the dependent variables were limited. The lack of reduction in participants’ overreliance on inaccurate advice when the systems’ predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
Subject area (DDC)
150 - Psychology
004 - Computer science, Internet
Date of last check
2024-10-16 12:34:49
ISSN
2045-2322
Language
English
Created during FHNW affiliation
No
Publication status
Published
Review
Peer review of the entire publication
Open access status
Closed
Citation
CECIL, Julia, Eva LERMER, Matthias HUDECEK, Jan SAUER and Susanne GAUBE, 2024. Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Scientific Reports. 2024. Vol. 14, No. 9736. DOI 10.1038/s41598-024-60220-5. Available at: https://irf.fhnw.ch/handle/11654/47593