Courtship is complicated, even in fruit flies

New research from the Ben-Shahar lab illuminates the courtship behavior of male Drosophila melanogaster

A fruit fly, or vinegar fly (Drosophila melanogaster), rests on a banana. WashU researchers are using a computer vision and machine learning-based approach to study male fruit fly courtship behavior. (Photo: Shutterstock)

Love is in the air for the vinegar fly. Drosophila melanogaster has long been a model for understanding how brains translate sensory information into courtship behavior. Male flies perform a multitude of romantic actions — orienting, tapping, chasing and singing — directed toward eligible females. While researchers know that things like pheromones and sound play essential roles in these rituals, the influence of vision has been thought to be fairly simple in comparison: spot the female, track her and follow.

A study published in February in G3 from Yehuda Ben-Shahar, a professor of biology in Arts & Sciences at Washington University in St. Louis, challenges that view. Using a computer vision and machine learning-based approach, this research reveals that male flies rely on surprisingly specific visual cues at close range, particularly the female’s eyes, to determine her anteroposterior body axis. In turn, this visual recognition shapes when, where and how different elements of courtship are deployed.

“To date, most of what we understand about the role of vision in Drosophila courtship relates to motion detection and tracking at long distances (for a fly…),” Ben-Shahar said.

The research has implications for biomedical science, where fruit flies often serve as a model organism for understanding the human brain and sensory systems. Tools that make fruit fly observation easier and more accurate can therefore help expedite neuroscience and genetics research.

Because males will initiate courtship even in the dark and will also court anything that roughly resembles a potential partner, few researchers had considered whether vision might still contribute anatomical information once the ritual was set in motion.

That’s what Ben-Shahar, together with neuroscience graduate students Ross McKinney and Christian Monroy Hernandez, set out to investigate. By developing a simplified courtship paradigm paired with automated behavioral tracking, they were able to map male courtship behaviors relative to specific regions of the female’s body. What they discovered was that males consistently bias certain behaviors toward either the anterior or posterior half of the female, and that this bias depends heavily on visual input. “These visual inputs inform how often (and where) males perform certain courtship behaviors,” Ben-Shahar said.
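The measurement at the heart of that mapping, where a male’s actions fall along the female’s body axis, comes down to a small geometry calculation. The Python sketch below is not the authors’ code; it assumes hypothetical (x, y) frame coordinates for the female’s head and tail, such as a pose tracker might produce, and classifies the male’s position relative to her anteroposterior midline.

    import numpy as np

    def anteroposterior_side(female_head, female_tail, male_centroid):
        """Classify a male's position as 'anterior' or 'posterior'
        relative to the female's body axis.

        Arguments are (x, y) pixel coordinates, e.g. from a pose
        tracker. The sign of the projection of the male's offset
        from the female's midpoint onto her tail-to-head axis
        tells us which half of her body he is nearer.
        """
        head = np.asarray(female_head, dtype=float)
        tail = np.asarray(female_tail, dtype=float)
        male = np.asarray(male_centroid, dtype=float)

        axis = head - tail               # vector pointing from tail to head
        midpoint = (head + tail) / 2.0   # center of the female's body
        projection = np.dot(male - midpoint, axis)
        return "anterior" if projection > 0 else "posterior"

    # Example: a male positioned just past the female's head.
    print(anteroposterior_side((120, 80), (100, 80), (130, 85)))  # anterior

Applied to every video frame in which a courtship behavior is detected, a function along these lines yields the kind of anterior-versus-posterior bias the study reports.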

At the center of this finding are the female’s eyes, which mark the “front” of the female. This recognition increases the likelihood that certain courtship behaviors, for example the song, will be directed toward the female’s head. When that visual input was removed, the males lost their spatial precision.

“Our interpretation is that males use the visual recognition of specific anatomical features of the female as triggers for releasing specific behavior at the right location and distance from the female,” Ben-Shahar said. Rather than being a simple on-off switch, courtship appears to be a continuously modulated behavioral program, shaped from moment to moment by sensory cues.

Another key insight from the study concerns the underlying neural architecture. By analyzing the contributions of different visual projection neurons, the researchers found that spatial recognition of the female body does not depend on a single specialized neural pathway but rather emerges from multiple, independent neuron populations.

Beyond the biological insights, this study also marks the first published use of Ben-Shahar’s new computer vision-based framework for automatic spatial analysis of courtship behavior. Traditional analyses rely on manual scoring, which is time-consuming and susceptible to observer bias. And because males spend most of their time chasing females around, trying to figure out how they might be using vision to tell the heads from the tails of courted females is “almost impossible,” according to Ben-Shahar.

This new approach addresses these problems by combining high-resolution video tracking of male courtship toward a stationary female with trainable machine-learning classifiers. “Using a trainable computer algorithm for the analyses provided more robust data by reducing errors due to human observation biases,” Ben-Shahar said.
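To make the trainable-classifier step concrete, here is a minimal sketch of that workflow in Python with scikit-learn. It is an illustration under stated assumptions, not the study’s code: the four features, the three behavior labels and the random forest model are all hypothetical, and random numbers stand in for real tracking data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Stand-ins for per-frame features a video tracker might yield:
    # male-female distance, relative angle, male speed, wing angle.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 4))       # 2000 frames x 4 features
    y = rng.integers(0, 3, size=2000)    # hand-scored labels: 0=chase, 1=sing, 2=tap

    # A human scores a subset of frames once; the trained classifier
    # then labels the remaining frames automatically and consistently.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

The robustness Ben-Shahar describes comes from this structure: once trained on a small hand-scored set, the same model applies identical criteria to every frame, removing the frame-to-frame drift of a human observer.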

Looking ahead, this framework opens the door to a wide range of new questions. “We would like to expand it to studies of other behaviors that can be measured in two-dimensional spaces,” Ben-Shahar said. In the future, Ben-Shahar and his team also hope to develop a system capable of capturing behavior in three dimensions, allowing them to analyze more complex interactions, romantic and otherwise.


McKinney RM, Hernandez CM, Ben-Shahar Y. Visual recognition of the anteroposterior female body axis drives spatial elements of male courtship in Drosophila. G3 Genes|Genomes|Genetics. 2026. DOI: https://doi.org/10.1093/g3journal/jkag037

This work was supported by a Howard A. Schneiderman Graduate Fellowship from WashU to RMM, the Genome Analysis Training Program at WashU to CMH, NIH grants NS089834 and ES025991, and NSF grants 1545778 and 1707221 to YB-S.