Scientists at Duke University believe they may have found the answer to why musical perception is associated with various moods and emotions, and vice versa, something music producers have known for a long time. Any musician knows that minor chords sound sad and depressing, whereas major ones sound happy and joyful, and each band or singer shapes its sound based on this knowledge and the audience it is trying to reach. At the same time, though, music is also related to speech, and to the way we, as a species, are constructed biologically.
In a new study, the Duke investigators found that most scales – series of specific notes that sound good together – were built from tone combinations closely matching those that the physics of the human voice can produce. In other words, the two went hand in hand, influencing each other as the centuries passed. This is most obvious in Middle Eastern and Arabian music, where scales are constructed differently than in other parts of the world; the wealth of expression and nuance in those songs is enough to humble many a Western musician.
“There is a strong biological basis to the aesthetics of sound. Humans prefer tone combinations that are similar to those found in speech,” explains neurobiology professor Dale Purves, the Duke expert who led the team behind the paper. Details of the discovery appear in the current issue of the respected Journal of the Acoustical Society of America (JASA). A second paper, by Duke team member Kamraan Gill, was released in today's issue of the open-access scientific journal PLoS ONE; it expands on the correlations between musical scales and speech patterns. In Purves' study, the researchers also showed that sad or happy speech could be categorized into major and minor intervals, just as scales can, ScienceDaily reports.
“Our appreciation of music is a happy byproduct of the biological advantages of speech and our need to understand its emotional content,” Purves says. “Emotional communication in both speech and music is rooted in earlier non-lingual vocalizations that expressed emotion,” adds graduate student Dan Bowling, another member of the research team. The authors conclude that further support for their findings can be found in the tones of the most commonly used scales: although millions of combinations are possible, only a few, those closest to human-produced sounds, are favored.
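To make the major/minor distinction the researchers refer to concrete, here is a small sketch that is not from the study itself, only standard music theory: a major and a minor triad share their root and fifth and differ only in the third, which sits four semitones above the root in the major chord and three in the minor one. The example uses 12-tone equal temperament and the conventional A4 = 440 Hz reference pitch.

```python
# Illustrative only (not from the Purves study): compare the frequencies
# of the major and minor triads built on A4 = 440 Hz.
# In 12-tone equal temperament, each semitone multiplies frequency by 2**(1/12).

def note_freq(root_hz, semitones):
    """Frequency of the note `semitones` above a root, 12-tone equal temperament."""
    return root_hz * 2 ** (semitones / 12)

def triad(root_hz, quality):
    """Root, third, and fifth of a major or minor triad, rounded to 0.1 Hz."""
    # Major: root + 4 semitones (major third) + 7 (perfect fifth).
    # Minor: root + 3 semitones (minor third) + 7 (perfect fifth).
    intervals = [0, 4, 7] if quality == "major" else [0, 3, 7]
    return [round(note_freq(root_hz, s), 1) for s in intervals]

print(triad(440.0, "major"))  # [440.0, 554.4, 659.3]
print(triad(440.0, "minor"))  # [440.0, 523.3, 659.3]
```

The two chords differ by a single note, yet that one-semitone shift in the third is what listeners reliably hear as "happy" versus "sad" — the same interval contrast the study found in emotional speech.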