Bees use flight movements and body wiggles to help their brains learn and recognize visual patterns with remarkable accuracy, according to University of Sheffield research that could reshape how next-generation artificial intelligence is developed.
The discovery reveals how even tiny insect brains can solve complex visual tasks using surprisingly few brain cells, challenging assumptions about intelligence and computing power.
The research team built a computational model of a bee’s brain to understand how flight movements create clear neural signals that allow bees to identify features in their environment efficiently. This biologically inspired approach suggests that future robots could become smarter by using movement to gather information, rather than relying on massive computing resources.
Movement Drives Neural Precision
Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield, emphasizes the study’s implications: “In this study we’ve successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system – albeit the result of millions of years of evolution – can perform computations vastly more complex than we previously thought possible.”
The model demonstrates how bees generate distinctive electrical patterns in their brains through scanning movements during flight. These movements help create sparse, decorrelated neural responses, in which only a small set of neurons activates for any particular visual feature: a highly efficient coding strategy that conserves both energy and processing power.
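To make “sparse and decorrelated” concrete, here is a minimal Python sketch of the general idea. It is not the authors’ model: the stimulus, the random weights, the threshold, and the crude lateral inhibition are all illustrative assumptions, with the population size of 16 chosen only to echo the study’s 16 lobula neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: a diagonal edge in a 10x10 image.
image = np.zeros((10, 10))
image[np.arange(10), np.arange(10)] = 1.0

# Scanning: a one-column receptive field sweeps across the pattern,
# turning the static 2-D image into a sequence of 1-D snapshots over time.
scan = image.T                               # (time steps, receptor rows)

# A small model population (16 units, echoing the study's 16 lobula
# neurons) with random input weights, crude lateral inhibition
# (subtracting the population mean), and a firing threshold.
n_neurons = 16
weights = rng.normal(size=(n_neurons, scan.shape[1]))
drive = weights @ scan.T                     # (neurons, time steps)
inhibited = drive - drive.mean(axis=0)       # decorrelates the population
response = np.maximum(inhibited - 0.5, 0.0)  # sparse: most units stay silent

print(f"silent neuron-time bins: {(response == 0).mean():.0%}")
```

The point of the sketch is the shape of the result, not the numbers: after inhibition and thresholding, most neuron-time bins are silent, so each visual feature is carried by just a few active cells.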
Key findings from the research include:
- Active vision superiority: Bees scanning only the lower half of patterns achieved 96-98% accuracy versus 60% for stationary observation
- Minimal neural requirements: Just 16 lobula neurons proved sufficient for complex pattern discrimination tasks
- Speed optimization: Normal scanning speeds outperformed faster movements, suggesting evolved timing precision
- Face recognition capability: The model successfully identified human faces, matching real bee performance
Brain Networks Adapt Through Experience
The study reveals how exposure to natural images during flight automatically shapes neural connectivity in bee visual systems. Dr. HaDi MaBouDi, lead researcher, explains the learning process: “Our model of a bee’s brain demonstrates that its neural circuits are optimised to process visual information not in isolation, but through active interaction with its flight movements in the natural environment.”
Through non-associative learning, a form of neural adaptation that requires no reward or reinforcement, the model’s brain networks gradually tuned themselves to specific directions and movements. This created orientation-selective neurons that respond maximally to particular visual features while remaining largely silent for irrelevant stimuli.
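A toy version of this kind of tuning fits in a few lines. The sketch below uses Oja’s rule, a classic non-associative Hebbian learning rule that stands in for whatever adaptation the paper’s circuit actually implements, to show how exposure alone, with no reward signal, can leave a single model neuron preferring the orientation it saw most often. Every parameter here is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def oriented_patch(theta, size=8):
    """A unit-norm 8x8 luminance grating at orientation theta (radians)."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.sin(x * np.cos(theta) + y * np.sin(theta)).ravel()
    return g / np.linalg.norm(g)

# Non-associative exposure: mostly near-vertical gratings, the way a
# repeated scanning trajectory would bias what the eye actually sees.
patches = [oriented_patch(rng.normal(0.0, 0.2)) for _ in range(2000)]

w = rng.normal(scale=0.1, size=64)
eta = 0.05
for x in patches:
    y = w @ x                      # the neuron's response
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian growth, self-normalising

# The neuron now responds most strongly to the over-represented orientation.
for deg in (0, 45, 90):
    print(f"{deg:3d} deg: response {abs(w @ oriented_patch(np.deg2rad(deg))):.2f}")
```

No teacher ever labels the gratings; the statistics of what the neuron is exposed to are enough to carve out an orientation preference, which is the essence of the non-associative tuning described above.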
The researchers validated their computational model on the same visual challenges faced by real bees. In experiments distinguishing a plus sign from a multiplication sign, the model performed significantly better when it mimicked real bees’ strategy of scanning specific pattern regions, such as the lower half of the pattern.
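The flavour of that test is easy to reproduce in miniature. In the toy below, which uses made-up 11x11 patterns rather than the study’s actual stimuli or model, sweeping a one-column receptive field across only the lower half of a “plus” and a “times” already yields one-dimensional scan signatures that tell the two shapes apart.

```python
import numpy as np

def pattern(kind, size=11):
    """Toy 11x11 'plus' or 'times' pattern (1 = bright bar)."""
    img = np.zeros((size, size))
    c = size // 2
    if kind == "plus":
        img[c, :] = 1.0
        img[:, c] = 1.0
    else:                                   # 'times': the two diagonals
        idx = np.arange(size)
        img[idx, idx] = 1.0
        img[idx, size - 1 - idx] = 1.0
    return img

def scan_lower_half(img):
    """Sweep a one-column receptive field across the lower half of the
    pattern, mimicking the bees' preferred scanning region, and return
    the summed activation at each scan position."""
    lower = img[img.shape[0] // 2:, :]
    return lower.sum(axis=0)

for kind in ("plus", "times"):
    print(f"{kind:5s}", scan_lower_half(pattern(kind)))
# The centre of the scan gives the game away: the plus piles six bright
# pixels into one column, while the times spreads them out evenly.
```

Even this crude signature separates the two classes, which is the intuition behind why a brief scan of the right region can substitute for processing the whole image at once.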
Implications for Robotics and AI
Professor Lars Chittka from Queen Mary University of London highlights the broader significance: “Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition.”
The findings suggest that intelligence emerges from how brains, bodies, and environments work together rather than from sheer computational power. This principle could enable more efficient robotic systems that actively shape their sensory input through movement rather than passively processing massive datasets.
Professor Mikko Juusola notes: “This work strengthens a growing body of evidence that animals don’t passively receive information – they actively shape it. Our new model extends this principle to higher-order visual processing in bees, revealing how behaviourally driven scanning creates compressed, learnable neural codes.”
The research offers a pathway for developing bio-inspired AI systems that could dramatically reduce computational requirements while improving performance in real-world applications like autonomous navigation, robotic vision, and adaptive learning systems. By harnessing evolutionary insights about efficient information processing, these findings could drive advances in self-driving vehicles and environmental robotics.