
Conversations allow humans to communicate their thoughts, feelings and ideas to others. This in turn enables them to learn new things, deepen their social connections, and co-operate with peers to solve specific tasks.
Understanding how the human brain makes sense of what is said during conversations could inform the development of brain-inspired computational models.
Conversely, machine learning-based agents designed to process and respond to user queries in various languages, such as ChatGPT, could help to shed new light on the organization of conversational content in the brain.
Researchers at the University of Osaka and the National Institute of Information and Communications Technology (NICT) carried out a study exploring how the brain derives meaning from spontaneous conversation. To do so, they combined the large language model (LLM) underpinning ChatGPT with functional magnetic resonance imaging (fMRI) data collected while people talked with each other.
Their findings, published in Nature Human Behaviour, offer valuable new insight into how the brain allows humans to interpret language during real-time conversations.
“Our long-term goal is to understand how the human brain enables everyday life. Because language-based conversation is one of the most fundamental expressions of human intellect and social interaction, we set out to investigate how the brain supports natural dialogue,” Shinji Nishimoto, senior author of the paper, told Medical Xpress.
“Recent advances in large language models such as GPT have provided the quantitative tools needed to model the rich, moment-by-moment flow of linguistic information, making this study possible.”
As part of their study, Nishimoto and his colleagues carried out an experiment involving eight human participants, who were asked to converse spontaneously about specific topics.
As they engaged in conversation with an experimenter, the participants’ brain activity was monitored using fMRI, a widely used neuroimaging technique that picks up changes in blood flow in the brain.
“We measured brain activity using fMRI while participants engaged in spontaneous conversations with an experimenter,” explained Masahiro Yamashita, first author of the paper.
“To analyze the content of these conversations, we converted each utterance into numerical vectors using GPT, a core component of ChatGPT. To capture different levels of linguistic hierarchy—such as words, sentences, and discourse—we varied the timescale of analysis from 1 to 32 seconds.”
Using the GPT computational model, the researchers created numerical representations of the language used by the participants during conversations. These representations allowed them to predict how strongly the brains of different individuals responded both as they spoke and as they listened to the person they were conversing with.
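The paper's actual analysis pipeline is not reproduced here, but the general approach it describes — per-utterance embedding vectors, averaged over windows of 1 to 32 seconds to capture words, sentences, and discourse, then fed into a regularized linear "encoding model" that predicts brain responses — can be sketched roughly. In this illustration the embeddings and the "voxel" response are synthetic stand-ins, and the ridge-regression fit is a generic choice, not necessarily the authors' exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-second utterance embeddings (in reality these would come
# from a GPT-style model and be much higher-dimensional).
T, D = 200, 16
embeddings = rng.standard_normal((T, D))

def timescale_features(X, window):
    """Average features over a trailing window of `window` seconds,
    approximating the integration of words into sentences and discourse."""
    out = np.empty_like(X)
    for t in range(len(X)):
        out[t] = X[max(0, t - window + 1):t + 1].mean(axis=0)
    return out

# Timescales from 1 to 32 seconds, as described in the study.
windows = [1, 2, 4, 8, 16, 32]
X = np.hstack([timescale_features(embeddings, w) for w in windows])

# Synthetic "voxel" response generated from the features (illustration only).
true_w = rng.standard_normal(X.shape[1])
y = X @ true_w + 0.1 * rng.standard_normal(T)

# Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
pred = X @ w

# Encoding models are typically evaluated by how well predictions
# correlate with measured brain activity.
r = np.corrcoef(pred, y)[0, 1]
print(f"prediction correlation: {r:.3f}")
```

In studies of this kind, a separate model is usually fit for each timescale (or each set of features), and the brain regions where each model predicts activity well indicate where the corresponding level of linguistic structure is represented.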
“An increasing body of research suggests that the meanings of spoken and perceived language are represented in overlapping brain regions,” said Yamashita.
“However, in real conversations, what I say and what you say must be distinguishable, and little is known about how this distinction is made. Our study revealed that the brain integrates words into sentences and discourse differently during speech production compared to comprehension.”
The results suggest that the brain employs different strategies to construct meaning during conversation, depending on whether it is producing speech or comprehending what another person is saying. This observation adds to our understanding of the intricate processes that allow humans to draw meaning from everyday conversation.
In the future, the work by Nishimoto, Yamashita and their colleague Rieko Kubo could inspire other research teams to investigate brain processes using a combination of LLMs and neuroimaging data.
“In my next studies, I would like to explore how the brain selects what to say from many possible options during real-time conversation,” added Yamashita. “I am particularly interested in how these decisions are made so rapidly and efficiently in the context of natural conversations.”
Written by Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.
More information:
Masahiro Yamashita et al, Conversational content is organized across multiple timescales in the brain, Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02231-4.
© 2025 Science X Network
Citation:
Neurocomputational study sheds light on how the brain organizes conversational content (2025, July 3)
retrieved 3 July 2025
from https://medicalxpress.com/news/2025-06-neurocomputational-brain-conversational-content.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
