Metrics sonification as a new way to convey bibliometric data
When it comes to making sense of data, visualizations rule. But what about translating data into sound? This blog post explores the origins of ‘data sonification’ and its many applications in science, and demonstrates how even bibliometric data can be turned into sound.
For centuries, the scientific community has relied on visual tools to communicate (complex) data. But what if we could listen to data as effortlessly as we read a graph? This question lies at the heart of data sonification, a practice that translates datasets into sound. While visualization adheres to the adage that “a picture is worth a thousand words”, sonification challenges us to consider how a melody or rhythm might convey a thousand data points. The concept of data sonification is not entirely new. Devices like the Geiger counter—with its iconic crackling clicks to signal radiation levels—or the steady beep of a hospital heart rate monitor demonstrated how sound could provide real-time, intuitive feedback. These early applications laid the groundwork for more complex data sonification, which gained momentum in the late 1980s with advancements in microprocessor technology.
So, what exactly is data sonification? At its core, it involves the systematic mapping of data variables—such as temperature, population size, or chemical concentrations—to sound parameters like pitch, volume, tempo, or timbre. Unlike spoken language, which conveys explicit meaning, sonification relies on non-speech audio to evoke patterns or relationships within datasets. This approach leverages the auditory system’s unique strengths. Humans, for instance, excel at detecting subtle changes over time. A car mechanic may diagnose engine trouble from an irregular hum long before a dashboard warning light appears. Clinicians monitor patients’ vital signs through continuous auditory feedback, freeing their eyes for other tasks. For visually impaired researchers, sonification is more than a novelty—it’s a gateway to scientific participation, as interviews with visually impaired researchers conducted by Jake Noel-Storr and Michelle Willebrands show. Astronomers like Garry Foran use sonification tools to analyze astrophysical data, though skepticism persists among peers who still view sound as a “gimmick” rather than a legitimate analytical method.
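In its simplest form, such a mapping is just a rescaling of a data series onto an audible parameter range. The sketch below illustrates the idea with invented temperature readings mapped to pitch; the value range, the 220–880 Hz target range, and the function name are illustrative assumptions, not a standard from the sonification literature.

```python
# Minimal illustration of sonification's core step: mapping data
# values onto a sound parameter. Here, a series is rescaled linearly
# so its minimum lands on 220 Hz and its maximum on 880 Hz (pitch).
# The data and ranges are invented for demonstration purposes.

def map_to_range(values, lo, hi):
    """Linearly rescale values so min(values) -> lo and max(values) -> hi."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1  # avoid division by zero for a constant series
    return [lo + (v - vmin) / span * (hi - lo) for v in values]

# Example: monthly temperatures (°C) mapped to audible frequencies (Hz)
temperatures = [2.1, 4.5, 9.0, 14.2, 18.7, 21.3]
frequencies = map_to_range(temperatures, 220.0, 880.0)
```

The same function could just as well drive volume, tempo, or any other continuous sound parameter; the choice of mapping is exactly the design decision sonification practitioners debate.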
One of the most compelling advantages of sonification lies in its ability to reveal temporal dynamics. Take ecology, for example. In one of Miriam Quick and Duncan Geere’s Loud Numbers projects “The end of the road”, the dramatic decline of Danish insect populations over two decades is translated into a sparse, haunting soundscape. Monthly insect counts are represented by fluttering synth tones, with pitch corresponding to insect size. As populations plummet, the audio grows eerily quiet, punctuated by a somber bell tolling each year—an auditory metaphor for biodiversity loss. This project, like many others, underscores sonification’s power to evoke emotional resonance while conveying scientific facts.
Yet for all its promise, sonification faces significant challenges. Unlike visual graphs, which are taught from childhood, interpreting sound requires specialized training. As Christian Dayé and Alberto de Campo note, “we learn how to read graphical displays … But we do not learn how to identify structures or patterns in a given sequence of sounds”. This skills gap limits broader adoption. Additionally, the subjective nature of sound design raises questions about clarity. Should a sonification of climate data use a cheerful melody to engage listeners, or a dissonant tone to signal urgency? Striking a balance between aesthetic appeal and scientific rigor is no small feat. Paul Vickers warns that overly musical sonifications risk obscuring the data’s message. Standardization is another hurdle. Without shared methods or frameworks, sonification projects vary widely in approach, making understanding and reproducibility difficult.
Despite these challenges, the fusion of science and art in data sonification offers unique opportunities for public engagement. Consider Bristol Burning, a project by Miriam Quick that transformed air quality data into a hip-hop track. Collaborating with artist T. Relly, Miriam Quick paired pulsating beats with pollution metrics, creating an immersive experience that educates as it entertains.
Such projects highlight sonification’s versatility, bridging disciplines and audiences. In astronomy, over 60% of sonification initiatives combine sound with visuals to enhance public understanding of cosmic phenomena.
Since bibliometrics typically draws on large datasets available in literature databases, the field is predestined for data sonification. In a new study, my co-author Rouven Lazlo Haegner and I introduce metrics sonification—a novel application of sonification using bibliometric data. Defined as the auditory translation of bibliometric data (e.g., measurements, results, or trends) into sound for analytical or communicative purposes, metrics sonification aims to expand how researchers engage with bibliometric information. To demonstrate metrics sonification, our study focuses on the scholarly output of Loet Leydesdorff (referred to hereafter as Loet), a seminal figure in scientometrics who passed away in 2023.
The accompanying track on Soundcloud is crafted in F minor, a key chosen to reflect the sadness of Loet’s passing. The track integrates both quantitative and qualitative elements. Three bibliometric properties of Loet’s publications are mapped to sound parameters: publication output, open access status, and citation impact. Spoken audio in the track provides context, explaining the sonification methodology and summarizing selected papers based on titles and abstracts. While our study highlights one specific application of metrics sonification, one can imagine many other possible applications in bibliometrics. Just as sonification has enriched fields like astronomy or ecology, bibliometrics could adopt diverse auditory strategies—from tracking collaboration networks to mapping global citation flows. By transforming data into sound, bibliometric researchers may uncover patterns, enhance accessibility, and communicate findings to the general public in innovative ways.
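To make the idea of metrics sonification concrete, here is a deliberately simplified sketch of one such mapping: yearly publication counts translated to scale degrees of F minor, so that more productive years sound as higher notes. This is not the actual pipeline used in our study; the yearly counts are invented placeholders, and the note-selection rule is a hypothetical choice for illustration only.

```python
# Hedged sketch (not the study's actual method): map yearly publication
# counts onto an F natural minor scale, expressed as MIDI note numbers.
# Higher output in a year -> higher scale degree. Counts are invented.

F_MINOR_MIDI = [53, 55, 56, 58, 60, 61, 63, 65]  # F3 G3 Ab3 Bb3 C4 Db4 Eb4 F4

def count_to_note(count, max_count):
    """Pick a scale degree proportional to this year's share of the peak output."""
    idx = round(count / max_count * (len(F_MINOR_MIDI) - 1))
    return F_MINOR_MIDI[idx]

# Placeholder data: publications per year (illustrative, not Loet's record)
yearly_counts = {2019: 12, 2020: 18, 2021: 9, 2022: 15}
peak = max(yearly_counts.values())
melody = {year: count_to_note(n, peak) for year, n in yearly_counts.items()}
```

In a fuller setup, the remaining two properties could drive other parameters along the same lines, e.g., open access status selecting a timbre and citation impact controlling volume, which is exactly the kind of multi-parameter mapping the study's track employs.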
The future of data sonification may hinge on collaboration and innovation. Researchers advocate for empirical studies to validate its effectiveness in data analysis and reception of empirical results. Training programs could equip scientists and the public to “listen” to data as fluently as they read charts. Collaboration between technologists, artists, and domain experts can refine tools and expand applications. Imagine a world in which scientific papers include audio supplements, or climate datasets are choreographed into symphonies performed in concert halls. As Christian Dayé and Alberto de Campo suggest, the most fruitful path forward may lie in combining sight and sound—harnessing the strengths of both senses to navigate our data-rich world. Data sonification is not a replacement for visualization but an expansion of our sensory toolkit. It may democratize access to science, offering new ways to analyze, communicate, and connect with information. The crackle of a Geiger counter, the melancholy of an insect requiem, or the rhythm of microbial jazz—all remind us that data is not silent. It has a voice, waiting to be heard.
Header image by Anthony Roberts on Unsplash.
DOI: 10.59350/h057p-evj38