Hudecek, Matthias

Surname
Hudecek
First name
Matthias
Name
Matthias Hudecek

Search results

Now showing 1 - 10 of 22
  • Publication
    You may fail but won’t quit? Linking servant leadership with error management culture is positively associated with employees’ motivational quality
    (Taylor & Francis, 09.10.2024) Hudecek, Matthias; Grünwald, Klara C.; Von Gehlen, Johannes; Lermer, Eva; Heiss, Silke F. [in: Cogent Business & Management]
    01A - Article in a scientific journal
  • Publication
    Patient:innen und KI: Eine Frage der Perspektive bei der Bewertung von KI bei medizinischen Online-Diensten
    (Frankfurt University of Applied Sciences, 2024) Lermer, Eva; Gaube, Susanne; Cecil, Julia; Kleine, Anne-Kathrin; Kokje, Eesha; Frey, Dieter; Hudecek, Matthias; Klein, Barbara; Rägle, Susanne; Klüber, Susanne [in: Künstliche Intelligenz im Healthcare-Sektor]
    04A - Contribution to an edited volume
  • Publication
    Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task
    (Nature, 2024) Cecil, Julia; Lermer, Eva; Hudecek, Matthias; Sauer, Jan; Gaube, Susanne [in: Scientific Reports]
    Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of AI advice (2a and 2b: heatmaps and 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. The independent variables were regressed on task performance, perceived advice quality and confidence ratings. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of source and explainability of advice on the dependent variables were limited. The lack of reduction in participants’ overreliance on inaccurate advice when the systems’ predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
    01A - Article in a scientific journal
  • Publication
    If It Concerns Me: An Experimental Investigation of the Influence of Psychological Distance on the Acceptance of Autonomous Shuttle Buses
    (University of California Press, 2024) Schandl, Franziska; Lermer, Eva; Hudecek, Matthias [in: Collabra: Psychology]
    Autonomous vehicles (AVs) will revolutionize our everyday mobility in the future. However, the prerequisite for this is that the technology is accepted by the population. Currently, AVs are still difficult to grasp for many people, i.e., the topic of autonomous driving is psychologically distant. In other contexts, it has been shown that this psychological distance or proximity can be used to influence product perception. However, the influence of psychological distance has never been investigated in the AV context. To address this research gap, we investigated the impact of psychological distance on the intention to use (ITU) AVs. We manipulated psychological distance in a 2x2x2 scenario-based experiment (N = 2114) on two different dimensions and additionally varied driving modality for comparison purposes: subjects either imagined themselves or an average person (social distance) using either a traditional or autonomous bus (driving modality) either today or in ten years (temporal distance). Our results showed a main effect of driving modality and social distance, with higher ITU for AVs and the average person. Temporal distance interacted with social distance to affect ITU. Interestingly, psychological distance also affected ITU for traditional buses with a similar interaction pattern. Thus, our study suggests that psychological distance affects the ITU of buses in general rather than AV technology. Providers can benefit from framing AVs as temporally close and providing as concrete, detailed information as possible. Future research should examine the underlying mechanisms (e.g., a shift in bus use priorities) that can explain why social distance plays an important role, particularly in future scenarios.
    01A - Article in a scientific journal
  • Publication
    Fine for others but not for me: The role of perspective in patients’ perception of artificial intelligence in online medical platforms
    (Elsevier, 2024) Hudecek, Matthias; Lermer, Eva; Gaube, Susanne; Cecil, Julia; Heiss, Silke F.; Batz, Falk [in: Computers in Human Behavior: Artificial Humans]
    01A - Article in a scientific journal
  • Publication
    Where a psychopathic personality matters at work: a cross-industry study of the relation of dark triad and psychological capital
    (BioMed Central, 2023) Stephan, Birgit; Lechner, Dominik; Stockkamp, Mariella; Hudecek, Matthias; Frey, Dieter; Lermer, Eva [in: BMC Psychology]
    Background: The concepts of the Dark Triad and Psychological Capital (PsyCap) have been researched extensively in isolation, but until one recent study their interrelation had not been investigated. The purpose of this study was to uncover differences in the relationship between the two concepts across industries. Methods: In total, 2,109 German employees across 11 industries completed a questionnaire on the Dark Triad (narcissism, psychopathy, and Machiavellianism) and PsyCap. Multiple regression analyses were used to test the association between the two concepts across industries. Results: Levels of narcissism, psychopathy, and PsyCap generally differed between industries. No significant differences were found for Machiavellianism. While narcissism related positively to PsyCap in all industry sectors, psychopathy showed a negative relation to PsyCap in only some sectors. For the architecture, automotive, and consulting industries, psychopathy did not significantly predict PsyCap. Conclusions: We argue that different expectations of employees per industry make it easier or harder for different personalities to assimilate to the work context (homogeneity hypothesis), as measured by PsyCap. Future studies should investigate this further with other variables such as person-organization fit. This study was, however, the first to simultaneously investigate the Dark Triad and PsyCap among employees and their respective industries. It extends previous findings by revealing differences in both concepts across and within industry sectors. The study can help to reconsider in which industries Dark Triad personality affects PsyCap as an antecedent of workplace outcomes such as work satisfaction or job performance.
    01A - Article in a scientific journal
  • Publication
    Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays
    (Nature, 2023) Gaube, Susanne; Suresh, Harini; Raue, Martina; Lermer, Eva; Koch, Timo K.; Hudecek, Matthias; Ackery, Alun D.; Grover, Samir C.; Coughlin, Joseph F.; Frey, Dieter; Kitamura, Felipe C.; Ghassemi, Marzyeh; Colak, Errol [in: Scientific Reports]
    Artificial intelligence (AI)-generated clinical advice is becoming more prevalent in healthcare. However, the impact of AI-generated advice on physicians’ decision-making is underexplored. In this study, physicians received X-rays with correct diagnostic advice and were asked to make a diagnosis, rate the advice’s quality, and judge their own confidence. We manipulated whether the advice came with or without a visual annotation on the X-rays, and whether it was labeled as coming from an AI or a human radiologist. Overall, receiving annotated advice from an AI resulted in the highest diagnostic accuracy. Physicians rated the quality of AI advice higher than human advice. We did not find a strong effect of either manipulation on participants’ confidence. The magnitude of the effects varied between task experts and non-task experts, with the latter benefiting considerably from correct explainable AI advice. These findings raise important considerations for the deployment of diagnostic advice in healthcare.
    01A - Article in a scientific journal
  • Publication
    Surfing in the streets: How problematic smartphone use, fear of missing out, and antisocial personality traits are linked to driving behavior
    (Public Library of Science, 2023) Hudecek, Matthias; Lemster, Simon; Fischer, Peter; Cecil, Julia; Frey, Dieter; Gaube, Susanne; Lermer, Eva [in: PLOS ONE]
    Smartphone use while driving (SUWD) is a major cause of accidents and fatal crashes. This serious problem is still too little understood to be solved. Therefore, the current research aimed to contribute to a better understanding of SUWD by examining factors that have received little or no attention in this context: problematic smartphone use (PSU), fear of missing out (FOMO), and the Dark Triad. In the first step, we conducted a systematic literature review to map the current state of research on these factors. In the second step, we conducted a cross-sectional study and collected data from 989 German car drivers. A clear majority (61%) admitted to using the smartphone while driving at least occasionally. Further, the results showed that FOMO is positively linked to PSU and that both are positively associated with SUWD. Additionally, we found that Dark Triad traits are relevant predictors of SUWD and other problematic driving behaviors; in particular, psychopathy is associated with committed traffic offenses. Thus, the results indicate that PSU, FOMO, and the Dark Triad are relevant factors for explaining SUWD. With these findings, we hope to contribute to a more comprehensive understanding of this dangerous phenomenon.
    01A - Article in a scientific journal
  • Publication
    Predicting acceptance of autonomous shuttle buses by personality profiles: a latent profile analysis
    (Springer, 2023) Schandl, Franziska; Fischer, Peter; Hudecek, Matthias [in: Transportation]
    Autonomous driving and its acceptance are becoming increasingly important in psychological research as the application of autonomous functions and artificial intelligence in vehicles increases. In this context, potential users are increasingly being considered, which is the basis for the successful establishment and use of autonomous vehicles. Numerous studies show an association between personality variables and the acceptance of autonomous vehicles. This makes it all the more relevant to identify potential user profiles in order to adapt autonomous vehicles to the needs of potential user groups and to market them effectively. Our study therefore addressed the identification of personality profiles for potential users of autonomous vehicles (AVs). A sample of 388 subjects answered questions about their intention to use autonomous buses, their sociodemographics, and various personality variables. Latent profile analysis was used to identify four personality profiles that differed significantly from each other in their willingness to use AVs. Overall, potential users with lower anxiety and greater self-confidence were more open toward AVs. Technology affinity as a trait also contributed to the differentiation of potential user profiles and AV acceptance. The profile solutions and their correlations with the intention to use proved replicable in cross-validation analyses.
    01A - Article in a scientific journal
  • Publication
    Insights on the current state and future outlook of AI in health care: expert interview study
    (JMIR Publications, 2023) Hummelsberger, Pia; Koch, Timo K.; Rauh, Sabrina; Dorn, Julia; Lermer, Eva; Raue, Martina; Hudecek, Matthias; Schicho, Andreas; Colak, Errol; Ghassemi, Marzyeh; Gaube, Susanne [in: JMIR AI]
    Background: Artificial intelligence (AI) is often promoted as a potential solution to many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development. Objective: This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date. Methods: For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for the thematic analysis, along with topic modeling to identify latent topics. Results: Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. For the future, our experts predicted both various threats and many opportunities related to AI technology in the health care sector. Conclusions: Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
    01A - Article in a scientific journal