Arthur T Knackerbracket has processed the following story:
We often think of astronomy as a visual science with beautiful images of the universe. However, astronomers use a wide range of analysis tools beyond images to understand nature at a deeper level.
Data sonification is the process of converting data into sound. It has powerful applications in research, education and outreach, and also enables blind and visually impaired communities to understand plots, images and other data.
[...] Imagine this scene: you're at a crowded party that's quite noisy. You don't know anyone and they're all speaking a language you can't understand—not good. Then you hear bits of a conversation in a far corner in your language. You focus on it and head over to introduce yourself.
While you may have never experienced such a party, the thought of hearing a recognizable voice or language in a noisy room is familiar. The ability of the human ear and brain to filter out undesired sounds and retrieve desired sounds is called the "cocktail party effect".
Similarly, science is always pushing the boundaries of what can be detected, which often requires extracting very faint signals from noisy data. In astronomy we often push to find the faintest, farthest or most fleeting of signals. Data sonification helps us to push these boundaries further.
[...] Data sonification is useful for interpreting science because humans interpret audio information faster than visual information. Also, the ear can discern more pitch levels than the eye can discern levels of color (and over a wider range).
Another direction we're exploring for data sonification is multi-dimensional data analysis—which involves understanding the relationships between many different features or properties in sound.
Plotting data in ten or more dimensions simultaneously is too complex, and interpreting it is too confusing. However, the same data can be comprehended much more easily through sonification.
As it turns out, the human ear can immediately tell the difference between the sound of a trumpet and a flute, even if they play the same note (frequency) at the same loudness and duration.
Why? Because each sound includes higher-order harmonics that help determine the sound quality, or timbre. The different strengths of the higher-order harmonics enable the listener to quickly identify the instrument.
Now imagine placing information—different properties of data—as different strengths of higher-order harmonics. Each object studied would have a unique tone, or belong to a class of tones, depending on its overall properties.
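The encoding described above can be sketched with simple additive synthesis: each data property sets the strength of one harmonic of a base tone, so objects with different property vectors get audibly different timbres. This is an illustrative mapping only (the function name, feature scaling, and parameters are assumptions, not the method from the paper):

```python
import numpy as np

def sonify_features(features, base_freq=220.0, sr=44100, dur=1.0):
    """Additive synthesis: map each feature (scaled to 0..1) to the
    amplitude of one harmonic of a base tone. Hypothetical mapping
    for illustration."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    wave = np.zeros_like(t)
    for k, strength in enumerate(features, start=1):
        # k-th harmonic at k * base_freq, weighted by the k-th feature
        wave += strength * np.sin(2 * np.pi * base_freq * k * t)
    peak = np.max(np.abs(wave))
    return wave / peak if peak > 0 else wave  # normalize to [-1, 1]

# Two objects with different property vectors share the same
# fundamental but produce distinct timbres:
object_a = sonify_features([1.0, 0.5, 0.1, 0.0])
object_b = sonify_features([1.0, 0.0, 0.7, 0.3])
```

Played back, both tones sit at 220 Hz, but the differing overtone mix gives each object its own "instrument", which is the sense in which timbre adds extra dimensions to the display.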
[...] Sonification also has great uses in education (Sonokids) and outreach (for example, SYSTEM Sounds and STRAUSS), and has widespread applications in areas including medicine, finance and more.
But perhaps its greatest power is to enable blind and visually impaired communities to understand images and plots to help with everyday life.
It can also enable meaningful scientific research, and do so quantitatively, as sonification research tools provide numerical values on command.
Journal Reference:
A. Zanella et al, Sonification and sound design for astronomy research, education and public engagement, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01721-z
Related Stories
We're all familiar with the elements of the periodic table, but have you ever wondered what hydrogen or zinc, for example, might sound like? W. Walker Smith, now a graduate student at Indiana University, combined his twin passions of chemistry and music to create what he calls a new audio-visual instrument to communicate the concepts of chemical spectroscopy.
Smith presented his data sonification project—which essentially transforms the visible spectra of the elements of the periodic table into sound—at a meeting of the American Chemical Society being held this week in Indianapolis, Indiana. Smith even featured audio clips of some of the elements, along with "compositions" featuring larger molecules, during a performance of his "The Sound of Molecules" show.
As an undergraduate, "I [earned] a dual degree in music composition and chemistry, so I was always looking for a way to turn my chemistry research into music," Smith said during a media briefing.
[...]
Data sonification is not a new concept. For instance, in 2018, scientists transformed NASA's image of Mars rover Opportunity on its 5,000th sunrise on Mars into music. The particle physics data used to discover the Higgs boson, the echoes of a black hole as it devoured a star, and magnetometer readings from the Voyager mission have also been transposed into music. And several years ago, a project called LHCSound built a library of the "sounds" of a top quark jet and the Higgs boson, among others. The project hoped to develop sonification as a technique for analyzing the data from particle collisions so that physicists could "detect" subatomic particles by ear.
Related:
Scientists Are Turning Data Into Sound to Listen to the Whispers of the Universe (and More) (Aug. 2022)
How one Astronomer Hears the Universe (Jan. 2020)
The Bird's Ear View of Space Physics: NASA Scientists Listen to Data (Sept. 2014)
(Score: 1) by khallow on Friday August 19 2022, @11:16AM (6 children)
So for activities that take years or decades to evolve, this small fraction of a second and slightly greater resolution will matter significantly? Not hearing that.
My take is also that a visual channel carries way more information. That will matter more than a slightly faster interpretation reaction speed.
(Score: 0) by Anonymous Coward on Friday August 19 2022, @12:03PM (4 children)
I don't know the relative amount of information one can extract between the two, but I think there is very good potential to leverage the brain's ability to recognize patterns in sound, or to pick out things that are "unnatural" and interesting to focus on, much like how we outperform computers at certain visual recognition tasks. I don't know if this work is saying auditory is necessarily better, but this approach certainly hasn't been explored much. One easy example I can think of is looking for a periodic signal in very low signal-to-noise data. I would bet this approach does pretty well there, based on my own early experiences scanning up and down the shortwave radio bands and my more recent experience using Fourier transforms in data analysis.
I've wondered about dabbling in this approach since I first heard about it several years ago. I hadn't thought about it from the angle that the ear might have better dynamic range than the eye (better A/D converters!). This is obviously going to depend on the kind of data one is looking at and how one assigns data variables to sound, but I'm also intrigued by the idea of using tone and timbre as an extra dimension in the analysis. I'm going to have to play around with their tools a bit. I hope they are fairly flexible and modifiable; otherwise I'll have to write something myself (and that has been the barrier that has kept this topic at the "I should look into this some day" level).
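The periodic-signal example mentioned above is easy to simulate: bury a faint tone in noise much louder than the signal, and the period still stands out, to the ear or to a Fourier transform, as a sharp spectral peak. A minimal sketch with illustrative parameters (the 50 Hz tone, amplitudes, and sample rate are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
sr = 1000                                       # samples per second
t = np.arange(0, 4, 1 / sr)                     # 4 seconds of data
signal = 0.2 * np.sin(2 * np.pi * 50 * t)       # faint 50 Hz tone
noisy = signal + rng.normal(0.0, 1.0, t.size)   # noise 5x the signal amplitude

# Integrating over time (as the ear does, or as an FFT does) makes the
# buried period emerge as a narrow spectral peak:
spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(noisy.size, 1 / sr)
peak = freqs[np.argmax(spectrum[1:]) + 1]       # skip the DC bin
```

Even though no single sample looks like anything but noise, the recovered `peak` lands on the injected 50 Hz tone, which is essentially what scanning a shortwave band by ear does in real time.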
(Score: 1) by khallow on Friday August 19 2022, @12:35PM (3 children)
And that brings me to my original point. Visual is acknowledged to be at least nearly as fast, and it can display a lot more data at a time.
I think the real value of audio would be as an added channel to visual output, possibly in the form of words. After all, we don't usually have lecturers just talk or just draw pictures. There's already proven modes of communication that use both.
(Score: 1) by khallow on Friday August 19 2022, @12:40PM
(Score: 3, Interesting) by Immerman on Friday August 19 2022, @05:13PM (1 child)
Okay - now add 10,000 more dots moving at random speeds and try to spot the "string" among them.
For communicating clear, concise information, visual is *really* hard to beat. You've got three more dimensions to work with than sound (x, y, r, g, b versus sound's frequency and intensity), and you can pack a LOT more information into a well-designed visual.
However, for noticing a faint flute playing in the distance through a cacophony of wind, or a friend calling you across noisy traffic... hearing has its advantages. A pale squiggle of shadow in a movie is a lot easier to overlook.
Teasing a faint signal from lots of louder noise is one of hearing's big challenges. From an evolutionary perspective, noticing anomalous sounds draws your attention to potential prey, predators, and other threats and opportunities long before you lay eyes on them, and the better you can pick out faint signals against the noise, the more opportunities they'll lead you to.
Vision excels when you already know what you're looking at. We don't really get a lot of visual noise normally; there's lots of stuff going on, but it's all pretty coherent. About the closest thing to noise that vision deals with is leaves or other patchy obstructions partially obscuring things in the distance, and persistence of vision solves that pretty easily. Instead of teasing information out of noise, vision provides so much information that the challenge is tuning most of it out to save our attention for the "important" parts. I assume you've seen demonstrations of selective attention [youtube.com].
Which is great if you know what's important (or can assume, as in your diagram, that only the important bits are shown). But not so much when you're trying to look for something you don't recognize. Like a Where's Waldo puzzle - except that instead of looking through crowds of strangers for the one person you recognize, you're looking through crowds of people you recognize trying to find a stranger. It can be done, but it's not something our visual system is innately good at.
(Score: 1) by khallow on Friday August 19 2022, @11:56PM
And there's a shifty history of what happens when we rely on human senses to find stuff: out of body/near death experiences, visions and dream interpretation, subliminal perception, UFOs, N-rays and similar pathological science that depends on extreme human sense measurements, extreme audiophile stuff like hearing the gold plating on your cables, and so on.
(Score: 2) by maxwell demon on Friday August 19 2022, @04:14PM
Actually I watched the first video in the article, which for several events showed an image and played the corresponding sound.
For those images where I did hear something other than noise, I could also immediately see the line in the image. And where I didn't see the line, I didn't hear anything but noise; I'm not sure if there was supposed to be something in those which I didn't see or hear. But the point is, whenever I heard something, I saw something. And I saw it immediately, while with sound I had to wait until it happened. Which means I was faster with the images than with the sound.
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by SomeGuy on Friday August 19 2022, @11:47AM
And they are hearing some really mind-warping perverted stuff. :P
(Score: 3, Interesting) by Gaaark on Friday August 19 2022, @12:06PM
Being autistic, i find i have the "forest full of trees effect": i can't hear individual trees... i hear the entire forest! And yes, bears DO shit in the woods... ALL THE FECKING TIME! :)
I just can't filter when there's a lot of noise: i hear it all but am unable to pick out individual sounds.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
(Score: 2, Informative) by Wally-o on Friday August 19 2022, @12:54PM
... featured a blind scientist who listened to an extraterrestrial signal and extracted info that the sighted scientists missed. Same idea.
(Score: 2) by inertnet on Friday August 19 2022, @10:41PM
APOD had an example of exoplanets turned into sounds [nasa.gov] just a couple of days ago.