Hochschule für Angewandte Psychologie FHNW
Permanent URI for this collection: https://irf.fhnw.ch/handle/11654/1
4 results
Collection: Search results
Publication: Insights on the current state and future outlook of AI in health care: expert interview study (JMIR Publications, 2023)
Authors: Hummelsberger, Pia; Koch, Timo K.; Rauh, Sabrina; Dorn, Julia; Lermer, Eva; Raue, Martina; Hudecek, Matthias; Schicho, Andreas; Colak, Errol; Ghassemi, Marzyeh; Gaube, Susanne
Background: Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development.
Objective: This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date.
Methods: For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for the thematic analysis, along with topic modeling to identify latent topics.
Results: Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. For the future, our experts predicted both various threats and many opportunities related to AI technology in the health care sector.
Conclusions: Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
Type: 01A - Article in a scientific journal

Publication: Early and later perceptions and reactions to the COVID-19 pandemic in Germany: on predictors of behavioral responses and guideline adherence during the restrictions (Frontiers Research Foundation, 2021)
Authors: Lermer, Eva; Hudecek, Matthias; Gaube, Susanne; Raue, Martina; Batz, Falk
In March 2020, the German government enacted measures on movement restrictions and social distancing due to the COVID-19 pandemic. As this situation was previously unknown, it raised numerous questions about people's perceptions of and behavioral responses to these new policies. In this context, we were specifically interested in people's trust in official information, predictors of self-prepping behavior and of health behavior to protect oneself and others, and determinants of adherence to social distancing guidelines. To explore these questions, we conducted three studies in which a total of 1,368 participants were surveyed across Germany between March 2020 and April 2021 (Study 1: N=377, March 2020; Study 2: N=461, April 2020; Study 3: N=530, April 2021). Results showed striking differences in the level of trust in official statistics, depending on the source. Furthermore, all three studies showed congruent findings regarding the influence of different factors on the respective behavioral responses. Trust in official statistics predicted behavioral responses in all three studies; however, it influenced adherence to social distancing guidelines only in 2021, not in 2020. Furthermore, adherence to social distancing guidelines was associated with higher acceptance of the measures and with being older. Being female and being less right-wing oriented were positively associated with guideline adherence only in the studies from 2020. In 2021, political orientation moderated the association between acceptance of the measures and guideline adherence. This investigation is one of the first to examine perceptions and reactions during the COVID-19 pandemic in Germany across one year and provides insights into important dimensions that need to be considered when communicating with the public.
Type: 01A - Article in a scientific journal

Publication: Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays (Nature, 2023)
Authors: Gaube, Susanne; Suresh, Harini; Raue, Martina; Lermer, Eva; Koch, Timo K.; Hudecek, Matthias; Ackery, Alun D.; Grover, Samir C.; Coughlin, Joseph F.; Frey, Dieter; Kitamura, Felipe C.; Ghassemi, Marzyeh; Colak, Errol
Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare. However, the impact of AI-generated advice on physicians' decision-making is underexplored. In this study, physicians received X-rays with correct diagnostic advice and were asked to make a diagnosis, rate the advice's quality, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologist. Overall, receiving annotated advice from an AI resulted in the highest diagnostic accuracy. Physicians rated the quality of AI advice higher than human advice. We did not find a strong effect of either manipulation on participants' confidence. The magnitude of the effects varied between task experts and non-task experts, with the latter benefiting considerably from correct explainable AI advice. These findings raise important considerations for the deployment of diagnostic advice in healthcare.
Type: 01A - Article in a scientific journal

Publication: Differences in risk perception between hazards and between individuals (Springer, 2018)
Authors: Visschers, Vivianne; Siegrist, Michael; Raue, Martina; Lermer, Eva; Streicher, Bernhard
How people think about a hazard often deviates from experts' assessment of its probability and severity. The aim of this chapter is to clarify how people perceive risks. We thereby focus on two important research lines: (1) research on the psychometric paradigm, which explains variations between the perceptions of different risks, and (2) research on factors that may determine an individual's perception of a risk (i.e., perceived benefits, trust, knowledge, affective associations, values, and fairness). Findings from studies about various risks (e.g., genetically modified organisms, food additives, and climate change) are reviewed in order to provide practical implications for risk management and communication. Overall, this chapter shows that the roles of benefit perception, trust, knowledge, affective associations, personal values, and fairness are not always straightforward; different factors appear to be involved in the perception of different hazards. We recommend that practitioners, when they encounter a new hazard, consult previous studies about similar hazards in order to identify the factors that describe the public's perception of the new hazard.
Type: 04A - Contribution to an edited volume