Hochschule für Angewandte Psychologie FHNW

Permanent URI for this collection: https://irf.fhnw.ch/handle/11654/1


Collection: Search results

Now showing 1 - 10 of 32
  • Publication
    Konsumentscheidungen und Konsumpsychologie
    (18.03.2024) Tobler, Christina
    06 - Presentation
  • Publication
    Unsicherheit. Globale Herausforderungen psychologisch verstehen und bewältigen
    (Reinhardt, 2022) Lermer, Eva; Hudecek, Matthias
    Whether the Covid-19 pandemic, fake stories, or political upheavals: dealing with uncertainty is a central challenge of everyday human life. Although many unsettling events of the past (e.g., solar eclipses) could eventually be explained, we cling to our old patterns of thought and behavior when confronted with new situations of uncertainty. These patterns are shaped by phenomena such as distorted perception and (self-)overconfidence. This book contributes to a competent handling of uncertainty. Drawing on psychological knowledge, it makes thought processes and interactions easier to understand so that we can (re)act more reflectively in the future. The book is a plea for a new enlightenment, with an appeal to each individual's responsibility to use their own understanding.
    02 - Monograph
  • Publication
    Insights on the current state and future outlook of AI in health care: expert interview study
    (JMIR Publications, 2023) Hummelsberger, Pia; Koch, Timo K.; Rauh, Sabrina; Dorn, Julia; Lermer, Eva; Raue, Martina; Hudecek, Matthias; Schicho, Andreas; Colak, Errol; Ghassemi, Marzyeh; Gaube, Susanne
    Background: Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development.
    Objective: This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date.
    Methods: For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for the thematic analysis, along with topic modeling to identify latent topics.
    Results: Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. In the future, our experts predicted both various threats and many opportunities related to AI technology in the health care sector.
    Conclusions: Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
    01A - Article in a scientific journal
  • Publication
    Predicting acceptance of autonomous shuttle buses by personality profiles: a latent profile analysis
    (Springer, 2023) Schandl, Franziska; Fischer, Peter; Hudecek, Matthias
    Autonomous driving and its acceptance are becoming increasingly important in psychological research as the use of autonomous functions and artificial intelligence in vehicles grows. In this context, attention is increasingly turning to potential users, whose acceptance is the basis for the successful establishment and use of autonomous vehicles (AVs). Numerous studies show an association between personality variables and the acceptance of AVs. This makes it all the more relevant to identify potential user profiles in order to adapt AVs to the needs of potential user groups and to market them effectively. Our study therefore addressed the identification of personality profiles of potential AV users. A sample of 388 subjects answered questions about their intention to use autonomous buses, their sociodemographics, and various personality variables. Latent profile analysis identified four personality profiles that differed significantly from each other in their willingness to use AVs. Overall, potential users with lower anxiety and higher self-confidence were more open toward AVs. Technology affinity as a trait also contributed to differentiating potential user profiles and AV acceptance. The profile solutions and their correlations with intention to use proved replicable in cross-validation analyses.
    01A - Article in a scientific journal
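Latent profile analysis on continuous indicators, as used in the shuttle-bus study above, is commonly approximated with a Gaussian mixture model whose profile count is chosen by an information criterion. A minimal sketch on synthetic data (indicator names, group means, and the two-profile structure are all invented):

```python
# Sketch: approximating a latent profile analysis with a Gaussian mixture
# model on synthetic personality scores (data and profile count are invented).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy indicators per subject: anxiety, self-confidence, technology affinity
low_anx = rng.normal([2.0, 4.0, 4.0], 0.5, size=(200, 3))
high_anx = rng.normal([4.0, 2.0, 2.0], 0.5, size=(188, 3))
scores = np.vstack([low_anx, high_anx])  # 388 subjects, 3 indicators

# Compare candidate numbers of profiles via BIC (lower is better)
bics = {k: GaussianMixture(k, random_state=0).fit(scores).bic(scores)
        for k in range(1, 5)}
best_k = min(bics, key=bics.get)

# Refit with the selected number of profiles and assign memberships
gm = GaussianMixture(best_k, random_state=0).fit(scores)
profiles = gm.predict(scores)  # profile label per subject
print(best_k, np.bincount(profiles))
```

A full LPA workflow would additionally compare profile solutions on entropy and interpretability, and then relate profile membership to the outcome (here, intention to use).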
  • Publication
    Voraussetzungen für die erfolgreiche Nutzung von agilen Methoden und agiler Führung im Schulkontext
    (Springer, 2022) Hudecek, Matthias; Fischer, Julia; Stricker, Tobias
    04A - Contribution to an edited volume
  • Publication
    Surfing in the streets: How problematic smartphone use, fear of missing out, and antisocial personality traits are linked to driving behavior
    (Public Library of Science, 2023) Hudecek, Matthias; Lemster, Simon; Fischer, Peter; Cecil, Julia; Frey, Dieter; Gaube, Susanne; Lermer, Eva
    Smartphone use while driving (SUWD) is a major cause of accidents and fatal crashes. This serious problem is still too little understood to be solved. Therefore, the current research aimed to contribute to a better understanding of SUWD by examining factors that have received little or no attention in this context: problematic smartphone use (PSU), fear of missing out (FOMO), and the Dark Triad. In the first step, we conducted a systematic literature review to map the current state of research on these factors. In the second step, we conducted a cross-sectional study and collected data from 989 German car drivers. A clear majority (61%) admitted to using the smartphone while driving at least occasionally. Further, the results showed that FOMO is positively linked to PSU and that both are positively associated with SUWD. Additionally, we found that Dark Triad traits are relevant predictors of SUWD and other problematic driving behaviors; in particular, psychopathy is associated with committed traffic offenses. Thus, the results indicate that PSU, FOMO, and the Dark Triad are relevant factors for explaining SUWD. We hope to contribute to a more comprehensive understanding of this dangerous phenomenon with these findings.
    01A - Article in a scientific journal
  • Publication
    Where a psychopathic personality matters at work: a cross-industry study of the relation of dark triad and psychological capital
    (BioMed Central, 2023) Stephan, Birgit; Lechner, Dominik; Stockkamp, Mariella; Hudecek, Matthias; Frey, Dieter; Lermer, Eva
    Background: The concepts of the Dark Triad and Psychological Capital (PsyCap) have been extensively researched separately, but until one recent study, their interrelation had not been investigated. The purpose of this study was to uncover differences in the relationship between the two concepts across industries.
    Methods: In total, 2,109 German employees across 11 industries completed a questionnaire on the Dark Triad (narcissism, psychopathy, and Machiavellianism) and PsyCap. Multiple regression analyses were used to test the association between the two concepts across industries.
    Results: Levels of narcissism, psychopathy, and PsyCap generally differed between industries; no significant differences were found for Machiavellianism. While narcissism related positively to PsyCap in all industry sectors, psychopathy showed a negative relation to PsyCap in only some sectors. For the architecture, automotive, and consulting industries, psychopathy did not significantly predict PsyCap.
    Conclusions: We argue that differing expectations of employees per industry make it easier or harder for different personalities to assimilate to the work context (homogeneity hypothesis), as measured by PsyCap. Future studies should investigate this further with other variables such as person-organization fit. This study was, however, the first to simultaneously investigate the Dark Triad and PsyCap among employees and their respective industries. It extends previous findings by revealing differences in both concepts across and within industry sectors. The study can help to reconsider in which industries Dark Triad personality affects PsyCap as an antecedent of workplace outcomes such as work satisfaction or job performance.
    01A - Article in a scientific journal
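The multiple regression analyses named in the cross-industry study above can be sketched as an ordinary least squares fit of PsyCap on the three Dark Triad traits; the synthetic data and coefficient values below are invented for illustration and do not reproduce the study's results:

```python
# Sketch: OLS regression of PsyCap on Dark Triad traits
# (synthetic data; variable names and true coefficients are invented).
import numpy as np

rng = np.random.default_rng(1)
n = 300
narcissism = rng.normal(3, 1, n)
psychopathy = rng.normal(2, 1, n)
machiavellianism = rng.normal(3, 1, n)
# Toy outcome: PsyCap rises with narcissism, falls with psychopathy
psycap = 4 + 0.4 * narcissism - 0.3 * psychopathy + rng.normal(0, 0.5, n)

# Design matrix with intercept; fit via least squares
X = np.column_stack([np.ones(n), narcissism, psychopathy, machiavellianism])
beta, *_ = np.linalg.lstsq(X, psycap, rcond=None)
print(dict(zip(["const", "narcissism", "psychopathy", "machiavellianism"],
               beta.round(2))))
```

In the study's design, such a model would be fit separately per industry (or with industry interaction terms) to compare the trait-PsyCap associations across sectors.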
  • Publication
    Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays
    (Nature, 2023) Gaube, Susanne; Suresh, Harini; Raue, Martina; Lermer, Eva; Koch, Timo K.; Hudecek, Matthias; Ackery, Alun D.; Grover, Samir C.; Coughlin, Joseph F.; Frey, Dieter; Kitamura, Felipe C.; Ghassemi, Marzyeh; Colak, Errol
    Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare. However, the impact of AI-generated advice on physicians’ decision-making is underexplored. In this study, physicians received X-rays with correct diagnostic advice and were asked to make a diagnosis, rate the advice’s quality, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologist. Overall, receiving annotated advice from an AI resulted in the highest diagnostic accuracy. Physicians rated the quality of AI advice higher than human advice. We did not find a strong effect of either manipulation on participants’ confidence. The magnitude of the effects varied between task experts and non-task experts, with the latter benefiting considerably from correct explainable AI advice. These findings raise important considerations for the deployment of diagnostic advice in healthcare.
    01A - Article in a scientific journal
  • Publication
    Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task
    (Nature, 2024) Cecil, Julia; Lermer, Eva; Hudecek, Matthias; Sauer, Jan; Gaube, Susanne
    Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of AI advice (2a and 2b: heatmaps and 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. The independent variables were regressed on task performance, perceived advice quality and confidence ratings. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of source and explainability of advice on the dependent variables were limited. The lack of reduction in participants’ overreliance on inaccurate advice when the systems’ predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
    01A - Article in a scientific journal