
Humans can remember various types of information, including facts, dates, events and even intricate narratives. Understanding how meaningful stories are stored in people’s memory has been a key objective of many cognitive psychology studies.
Researchers at the Institute for Advanced Study, Emory University and the Weizmann Institute of Science recently set out to model how humans represent meaningful narratives and store them in their memory, using mathematical objects known as “random trees.” Their paper, published in Physical Review Letters, introduces a new framework for the study of human memory processes, which is rooted in mathematics, computer science and physics.
“Our study was aimed at solving a key challenge, namely creating a mathematical theory of human memory for meaningful material, such as narratives,” Misha Tsodyks, senior author of the paper, told Medical Xpress. “The kind of consensus in the field is that narratives are too complex for such a theory to be possible, but I believe we proved this to be wrong—that despite the complexity, there are statistical trends in the way people recall narratives that can be predicted by a few simple basic principles.”
To model the representation of meaningful memories using random trees, Tsodyks and his colleagues ran recall experiments on a large number of subjects recruited via online platforms such as Amazon Mechanical Turk and Prolific, using narratives borrowed from the work of the linguist W. Labov. They asked 100 people to recall 11 narratives of various lengths (ranging from 20 to 200 sentences) and later analyzed the recorded recalls to test their theory.
“We chose a collection of spoken narratives recorded by the famous linguist W. Labov in the 1960s,” explained Tsodyks. “We quickly realized that analyzing this amount of data requires modern AI tools in the form of recently developed large language models (LLMs).
“We then discovered that people don’t simply recall various events from the narratives but often summarize relatively large parts of a narrative (such as episodes) in single sentences. This gave rise to an idea that a narrative is represented in memory as a tree where nodes that are closer to the root represent an abstract summary of larger episodes.”
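The tree structure described above can be sketched in code. The following is an illustrative model, not the authors' actual implementation: `EpisodeNode`, the `p_summarize` parameter, and the sample story are all hypothetical, chosen only to show how leaves can hold individual clauses while internal nodes hold one-sentence summaries of whole episodes, and how recall can be viewed as a random pruning of such a tree.

```python
from dataclasses import dataclass, field
import random

@dataclass
class EpisodeNode:
    """A node in a hypothetical narrative tree: leaves hold single
    clauses, internal nodes hold a one-sentence summary of the
    episode spanned by their children."""
    summary: str
    children: list = field(default_factory=list)

def recall(node, p_summarize=0.5, rng=random):
    """Recall as a random traversal: at each internal node, either
    emit its one-sentence summary (compressing the whole episode)
    or descend into the children. Leaves are always emitted."""
    if not node.children or rng.random() < p_summarize:
        return [node.summary]
    out = []
    for child in node.children:
        out.extend(recall(child, p_summarize, rng))
    return out

# A toy two-sentence episode with a one-sentence summary at the root.
story = EpisodeNode(
    "a fight broke out at the bar",
    [EpisodeNode("a man insulted the narrator"),
     EpisodeNode("the narrator hit him")])

print(recall(story, p_summarize=1.0))  # ['a fight broke out at the bar']
```

With `p_summarize=1.0` the whole episode collapses into the root's summary sentence; with `p_summarize=0.0` the traversal reaches the leaves and reproduces the individual clauses, mirroring the observation that subjects sometimes compress an episode into a single sentence and sometimes recount it in detail.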
Tsodyks and his colleagues hypothesized that the tree representing a narrative is constructed when an individual first hears or reads a story and understands it. Since past studies suggest that different individuals comprehend the same narrative differently, the resulting trees would have unique structures.
“We formulated a model as an ensemble of random trees of a particular structure,” said Tsodyks. “The beauty of this model is that it can be solved mathematically, and its predictions can be directly tested with the data, which we did. The main novelty of our random tree model of memory and recall is the assumption that any meaningful material is generically represented in the same fashion.
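The "ensemble of random trees" idea can also be sketched numerically. The simulation below is a hedged illustration, not the paper's actual model: the recursive partition scheme, the `max_branch` limit, and the summarization probability are all assumptions made here for demonstration. It shows the general workflow the quote describes: sample many random trees over the same narrative (one per hypothetical subject), simulate recall on each, and compute an ensemble statistic that could be compared against data.

```python
import random
from statistics import mean

def random_tree(n, max_branch=4, rng=random):
    """One random tree over n sentences: recursively split a
    contiguous block into 2..max_branch sub-episodes (an
    illustrative branching scheme). A leaf is a single sentence."""
    if n == 1:
        return 1
    k = rng.randint(2, min(max_branch, n))
    cuts = sorted(rng.sample(range(1, n), k - 1))
    sizes = [b - a for a, b in zip([0] + cuts, cuts + [n])]
    return [random_tree(s, max_branch, rng) for s in sizes]

def recall_length(tree, p_summarize=0.3, rng=random):
    """Number of sentences produced if each internal node is either
    compressed into one summary sentence or expanded recursively."""
    if tree == 1 or rng.random() < p_summarize:
        return 1
    return sum(recall_length(c, p_summarize, rng) for c in tree)

# Ensemble: 1,000 simulated subjects, each with their own random
# tree for the same 100-sentence narrative.
rng = random.Random(0)
lengths = [recall_length(random_tree(100, rng=rng), rng=rng)
           for _ in range(1000)]
print(round(mean(lengths), 1))  # ensemble-average recall length
```

The point of such an ensemble is that, even though each simulated subject's tree is different, statistics like the average recall length are well defined and, in the authors' actual model, can be derived analytically and checked against the experimental recalls.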
“Our study could have broader implications for human cognition, because narratives seem to be a general way we reason about a wide range of phenomena in our individual lives and in social and historical processes.”
The recent work by this team of researchers highlights the promise of mathematical approaches and AI-based techniques for studying how humans store and represent meaningful information in their memories. In their next studies, Tsodyks and his colleagues plan to assess the extent to which their theory and random tree modeling approach could apply to other types of narratives, such as fiction stories.
“A more ambitious direction for future research will be to find more direct proofs for the tree model,” added Tsodyks. “This would require designing other experimental protocols beyond simple recall. Brain imaging with people engaged in narrative comprehension and recall could be another interesting direction.”
Written by Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.
More information:
Weishun Zhong et al, Random Tree Model of Meaningful Memory, Physical Review Letters (2025). DOI: 10.1103/g1cz-wk1l
© 2025 Science X Network
Citation:
Mathematical model reveals how humans store narrative memories using ‘random trees’ (2025, July 11)
retrieved 11 July 2025
from https://medicalxpress.com/news/2025-07-mathematical-reveals-humans-narrative-memories.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
