Human whistled languages may offer model for how to study dolphin communication

By Peter Rejcek, science writer


More than 80 cultures still use whistled forms of their languages to communicate over long distances by simplifying words, syllable by syllable, into whistled melodies. Researchers trying to decode the communication of bottlenose dolphins, highly social mammals with the second largest brain relative to body size after humans, are leveraging insights from studies of how human whistled speech is structured and organized. This model may provide new algorithms for understanding how dolphin whistles encode information.

Whistling while you work isn’t just a distraction for some people. More than 80 cultures employ a whistled form of their native language to communicate over long distances. A multidisciplinary team of scientists believes that some of these whistled languages can serve as a model for elucidating how information may be encoded in dolphin whistle communication. They made their case in a new paper published in the journal Frontiers in Psychology.

Whistled human speech mostly evolved in places where people live in rugged terrain, such as mountains or dense forest, because the sounds carry much farther than ordinary speech or even shouting. While these whistled languages vary by region and culture, the basic principle is the same: People simplify words, syllable by syllable, into whistled melodies.

Trained whistlers can understand an amazing amount of information. In whistled Turkish, for example, common whistled sentences are understood up to 90 percent of the time. This ability to extract meaning from whistled speech has attracted linguists and other researchers interested in investigating the intricacies of how the human brain processes and even creates language.

The idea that human whistled speech could be a model for how mammals like bottlenose dolphins communicate first emerged in the 1960s with the work of René-Guy Busnel, a French researcher who pioneered the study of whistled languages. More recently, some of Busnel’s former colleagues have teamed up to explore the potential synergy between human whistled speech and the whistles of bottlenose dolphins, which have the second largest brain relative to body size after humans.

While humans and dolphins produce sounds and convey information differently, the structure and attributes found across human whistled languages may provide insights into how bottlenose dolphins encode complex information, according to coauthor Dr Diana Reiss, a professor of psychology at Hunter College in the United States whose research focuses on understanding cognition and communication in dolphins and other cetaceans.

Lead author Dr Julien Meyer, a linguist at the Gipsa Lab of the French national research center (CNRS), offered this example: a listener’s ability to decode spoken or whistled language depends on language competency, such as recognizing phonemes, the units of sound that can distinguish one word from another. However, sonograms (visual images of sound) of human whistled speech show that these units are not always separated by silences.


“By contrast, scientists trying to decode the whistled communication of dolphins and other whistling species often categorize whistles based on the silent intervals between whistles,” Reiss noted. In other words, what sonograms reveal about how information is structured in human whistled speech suggests researchers may need to rethink how they segment and categorize whistled animal communication.
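
To make that contrast concrete, here is a minimal sketch, in Python, of the kind of silence-based segmentation commonly applied to animal whistle recordings. It is an illustration of the general idea, not the authors’ actual pipeline, and the names and values (segment_by_silence, the threshold and gap settings) are hypothetical.

import numpy as np

def segment_by_silence(envelope, threshold, min_gap):
    # Split a 1-D amplitude envelope into whistle segments wherever the
    # signal stays below `threshold` for at least `min_gap` samples.
    # Returns a list of (start, end) sample-index pairs.
    active = envelope >= threshold          # True where sound is present
    segments, start, gap = [], None, 0
    for i, on in enumerate(active):
        if on:
            if start is None:
                start = i                   # a new whistle begins
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:              # silence long enough: close the segment
                segments.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:                   # a whistle running to the end of the recording
        segments.append((start, len(envelope)))
    return segments

# Toy example: two "whistles" separated by a 30-sample silence.
env = np.concatenate([np.ones(50), np.zeros(30), np.ones(40)])
print(segment_by_silence(env, threshold=0.5, min_gap=10))   # [(0, 50), (80, 120)]

The design choice worth noticing is that every boundary here comes from a silent gap. Under the paper’s argument, meaningful units in whistled speech can abut without any such gap, so an analysis that only cuts at silences could merge distinct units or miss them entirely.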

Meyer, Reiss and coauthor Dr Marcelo Magnasco, a biophysicist and professor at Rockefeller University, plan to apply this and other insights from their paper to develop new techniques for analyzing dolphin whistles. They will combine dolphin whistle data compiled by Reiss and Magnasco with a database of whistled speech that Meyer has been building since 2003 with the CNRS, the Collegium of Lyon, the Museu Paraense Emílio Goeldi in Brazil and several nonprofit research associations focused on whistled and instrumental speech (The World Whistles, Yo Silbo, Silbo herreño).

“On these data, for example, we will develop new algorithms and test some hypotheses about combinatorial structure,” Meyer said, referring to the building blocks of language like phonemes that can be combined to impart meaning. 
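
As a rough sketch of what a test for combinatorial structure can look like, the Python snippet below counts how often labeled units recur in ordered combinations (n-grams) across utterances. The function name ngram_counts and the toy labels are hypothetical, and this is one conventional approach, not the team’s actual algorithm. A genuinely combinatorial system reuses a small inventory of units in many ordered arrangements, which counts like these help quantify.

from collections import Counter
from itertools import islice

def ngram_counts(sequences, n):
    # Count n-grams over sequences of unit labels (e.g., whistle types),
    # measuring how often the same units recur in ordered combinations.
    counts = Counter()
    for seq in sequences:
        counts.update(zip(*(islice(seq, i, None) for i in range(n))))
    return counts

# Toy example: each string is one "utterance", each character one labeled unit.
utterances = ["abcab", "abac", "cabab"]
print(ngram_counts(utterances, 2).most_common(3))   # [(('a', 'b'), 5), ...]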

Magnasco noted that scientists already use machine learning and AI to help track dolphins in videos and even to identify dolphin calls. However, Reiss said, to have an AI algorithm capable of “deciphering” dolphin whistle communication, “we would need to know what the minimum unit of meaningful sound is, how they are organized, and how they function.”
