4 Guidelines for Human–AI Interaction and User Experience
Helmut Degen
Siemens Corporation, Foundational Technologies, Princeton, NJ, USA
4.1 Introduction
We are experiencing an AI summer characterized by rapid advancements in machine learning, natural language processing, and foundation models, including large language models. AI systems have achieved unprecedented capabilities. They assist radiologists in breast cancer diagnostics (McKinney et al., 2020), aid in drug discovery (Zhavoronkov et al., 2020), and reach accuracy levels in protein-folding prediction comparable to those of trained laboratory experts (Callaway, 2022; Jumper et al., 2021). AI systems have also outperformed humans in Go (Chen, 2016; Silver et al., 2016). They are even used to create simulated interviews with deceased individuals, such as an interview with Wislawa Szymborska, the Polish author and 1996 Nobel Prize in Literature laureate (Higgins, 2024).
In addition to their impressive capabilities, AI systems possess another "quality": their outcomes can sometimes be incorrect from a human perspective. In this chapter, we refer to this phenomenon as the "uncertainty premise." Because of these systems' unprecedented capabilities and reach, incorrect outcomes can have significant negative impacts on humans and their environment.
Several underlying reasons contribute to the uncertainty premise. Four of these reasons are rooted in the characteristics of machine learning models and their operational mechanisms: (1) the reliance ...