Human whistled languages may offer model for how to study dolphin communication

Whistling isn't just an idle habit. In more than 80 languages, people use a whistled form of their native tongue to communicate over long distances. Scientists from several disciplines argue that these whistled languages can serve as a model for studying how dolphin whistle communication encodes information. They present their case in a paper published in Frontiers in Psychology.

Because whistled speech carries much farther than shouting or ordinary speech, it developed most often in places where people live in mountains or dense forest. Although whistled languages vary by culture and region, the underlying principle is the same: people simplify words, syllable by syllable, into whistled melodies.

A trained whistler can recover a remarkable amount of information: whistled Turkish sentences, for example, can be understood with up to 90% accuracy. This ability to decipher meaning from whistled speech has long drawn the attention of linguists and other researchers.

The French researcher René-Guy Busnel pioneered the study of whistled languages, and in the 1960s the idea emerged that whistled speech could also serve as a model for communicating with other mammals, such as bottlenose dolphins. Some of Busnel's former colleagues have now joined forces to pursue that potential, drawn in part by the fact that bottlenose dolphins, like humans, have unusually large brains relative to their body size.

Dolphins and humans produce sound differently and convey information differently, but the researchers see useful parallels. Co-author Dr. Diana Reiss is a professor of psychology at Hunter College in New York whose research focuses on understanding cognition in dolphins and other cetaceans.

Lead author Dr. Julien Meyer, a linguist at the Gipsa Laboratory of the French national scientific center (CNRS), explained that a key component of a listener's ability to decode human speech is recognizing phonemes, the units of sound that distinguish one word from another. In sonograms, which are visual representations of sound, such units can appear separated by silences.

Reiss pointed out that scientists attempting to decode the whistles of dolphins and other whistling species typically use intervals of silence to delimit and classify individual whistles. But sonograms of human whistled speech reveal how information can be structured across such boundaries, so researchers may need to reconsider how they categorize whistled animal communication.
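The silence-based segmentation described above can be sketched in a few lines. This is an illustrative toy, not the researchers' actual method: it assumes a precomputed amplitude envelope and splits it into candidate whistle units wherever the envelope stays below a threshold for a minimum run of samples (both parameters are hypothetical).

```python
def segment_by_silence(envelope, threshold=0.1, min_gap=3):
    """Split a 1-D amplitude envelope into (start, end) index pairs,
    closing a unit whenever at least `min_gap` consecutive samples
    fall below `threshold`."""
    voiced = [abs(x) >= threshold for x in envelope]
    segments, start, gap = [], None, 0
    end = None
    for i, v in enumerate(voiced):
        if v:
            if start is None:
                start = i          # a new whistle unit begins
            end = i                # extend the current unit
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:     # a long enough silence: close the unit
                segments.append((start, end))
                start = None
    if start is not None:          # close a unit that runs to the end
        segments.append((start, end))
    return segments

# Two bursts of sound separated by a run of silence yield two units.
units = segment_by_silence(
    [0, 0, 0.5, 0.6, 0.4, 0, 0, 0, 0, 0.7, 0.8, 0, 0],
    threshold=0.1, min_gap=3)
```

Note that a shorter `min_gap` would split the signal more aggressively, which is exactly the kind of categorization choice Reiss suggests may need rethinking.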

Meyer, Reiss, and co-author Marcelo Magnasco, a biophysicist and professor at Rockefeller University, will use these and other insights to develop new methods for analyzing dolphin whistles. Reiss and Magnasco have compiled a database of dolphin whistles, which they will combine with a database of whistled speech that Meyer has been assembling since 2003 with the CNRS and the Collegium of Lyon.

"On these data we will develop new algorithms and test some hypotheses regarding combinatorial structure," Meyer said, referring to the building blocks of language, such as phonemes, which can be combined to convey meaning.
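A toy example of what "combinatorial structure" means here (the unit labels below are invented for illustration, not drawn from the paper): even a small inventory of discrete units can be recombined into a much larger set of candidate sequences.

```python
from itertools import product

# Hypothetical inventory of three whistle contour units (labels invented).
units = ["rise", "fall", "flat"]

# Every ordered pair of units gives 3**2 = 9 candidate two-unit sequences.
pairs = [f"{a}-{b}" for a, b in product(units, repeat=2)]
```

Testing whether dolphin whistles reuse units combinatorially in this way, rather than each whistle being an unanalyzable whole, is one form such hypotheses could take.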

Magnasco noted that scientists already use machine learning and AI to track dolphins in video and to identify their calls. Reiss added that an AI algorithm capable of "deciphering" dolphin communication would require knowledge of the minimal units of meaningful sound, how those units are organized, and how they function.


More information: The Relevance of Human Whistled Languages for the Analysis and Decoding of Dolphin Communication, Frontiers in Psychology (2021). DOI: 10.3389/fpsyg.2021.689501