Sonification of Twitter hashtags using earcons based on vowel formants
Abstract: The number of notifications we receive from our digital devices is higher today than ever, often causing distress in users who constantly have to move their devices into the center of attention and digest the received information visually. By using earcons, a sonic abstraction of the underlying information, we can give users an understanding of what information has been received by allowing them to digest it auditorily. This can be seen as a potential part of the concept of calm technologies, allowing users to keep one thing in the center of attention while simultaneously monitoring others in the periphery of attention. Using hashtags from Twitter as the underlying data, a sonic abstraction is made by mapping the vowels contained in a hashtag to a melody and enhancing the formant frequencies of those vowels. This raises the question of whether data based on Twitter hashtags can be sonified using formant synthesis to help users identify different tweets, and whether such earcons have the potential to become a calm technology that lets users retake control of their attention. A methodology is described in which each of several phonetic vowels is mapped to a fundamental frequency, f0, and its first two formant frequencies, f1 and f2, together with a rhythmic mapping based on the hashtags' syllables and where the emphasis lies. The results show that participants clearly recognize the melody related to each hashtag, but that the effect of the formant synthesis cannot be confirmed at a high significance level. This prompts a discussion of how the formant frequencies could be made more distinguishable, and how formant synthesis could be mapped together with other musical parameters to create earcons that are intuitively understandable and can embody even more information.
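The vowel-to-frequency mapping described above can be sketched as follows. This is a minimal illustration, not the thesis's actual mapping: the vowel inventory, the f0 values, and the formant values (rough averages in the style of published measurements for male speakers) are all assumptions made for the example.

```python
# Hypothetical mapping from orthographic vowels to (f0, f1, f2) in Hz.
# f0 gives the melody note; f1 and f2 are the formants to be enhanced
# in the synthesis stage. All values here are illustrative placeholders.
FORMANTS = {
    "a": (220.0, 730.0, 1090.0),
    "e": (247.0, 530.0, 1840.0),
    "i": (262.0, 270.0, 2290.0),
    "o": (196.0, 570.0, 840.0),
    "u": (175.0, 300.0, 870.0),
}

def hashtag_to_earcon(hashtag):
    """Return the (f0, f1, f2) sequence for the vowels of a hashtag."""
    text = hashtag.lower().lstrip("#")
    return [FORMANTS[ch] for ch in text if ch in FORMANTS]

print(hashtag_to_earcon("#Hello"))  # two vowels -> two notes
```

A real implementation would work on phonetic vowels rather than letters, and would add the syllable- and stress-based rhythm the abstract describes; the sketch only shows how each vowel contributes one note plus its formant pair.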