How do we understand machines that talk to us?

Illustration: a robot with two thought bubbles (image: Canva).

Duration:
01.01.2024–31.12.2027

How do people interact with machines that use large language models (LLMs)? Do we interpret the ‘utterances’ of LLMs in the same way we understand each other?

Contact persons

About the research initiative

Many scholars take Gricean inferentialism (Grice 1957, 1967) to be a fundamental account of human communication. According to this perspective, understanding verbal utterances, whether spoken or written, involves making inferences to determine the speaker's intended meaning.

This process requires integrating contextual information and background knowledge with the syntactic structure obtained from parsing sentences. However, this raises a perplexing question: How can we effectively engage in conversational exchanges with interlocutors that lack communicative intentions, such as LLMs?

Purpose

By combining theoretical and experimental research, we aim to answer several key questions:

  1. Is inferentialism about communication correct for our interactions with LLMs?
  2. Is a unified account of the interpretation of human and LLM utterances possible, or are they understood in fundamentally different ways?
  3. How do children perceive their interactions with LLMs?

Participants

Funding

Faculty of Humanities, strategic initiatives 2024–2028.

Published July 1, 2024 10:10 AM - Last modified July 1, 2024 10:10 AM