https://www.nature.com/articles/d41586-019-03938-x
Astronomy is inextricably associated with spectacular images and visualizations of the cosmos. But Wanda Diaz Merced says that by neglecting senses other than sight, astronomers are missing out on discoveries.
For 15 years, Diaz Merced, an astronomer at the International Astronomical Union (IAU) Office for Astronomy Outreach in Mitaka, Japan, has pioneered a technique called sonification. The approach converts aspects of data, such as the brightness or frequency of electromagnetic radiation, into audible elements including pitch, volume and rhythm. It could help astronomers to avoid methodological biases that come with interpreting data only visually, argues Diaz Merced, who lost her sight in her twenties.
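The mapping the article describes, a data value driving an audible pitch, can be sketched in a few lines of Python. This is an illustrative toy, not Diaz Merced's actual tooling; the frequency range, note length, and function name are all arbitrary choices of this sketch:

```python
# Minimal sonification sketch: map each brightness measurement to a tone
# whose pitch tracks the value (linear map between two audible frequencies).
import numpy as np

def sonify(samples, f_lo=220.0, f_hi=880.0, rate=44100, note_sec=0.1):
    """Return an audio buffer with one short sine tone per data value."""
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    span = (hi - lo) or 1.0                      # avoid divide-by-zero on flat data
    t = np.linspace(0.0, note_sec, int(rate * note_sec), endpoint=False)
    tones = [np.sin(2 * np.pi * (f_lo + (x - lo) / span * (f_hi - f_lo)) * t)
             for x in samples]
    return np.concatenate(tones)

# A rising light curve becomes a rising glissando:
audio = sonify([1.0, 2.0, 4.0, 8.0])
```

Volume and rhythm can be layered on the same way, e.g. by scaling each tone's amplitude by a second data channel.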
Last month, she co-organized the IAU's first symposium dedicated to diversity and inclusion. The event, in Mitaka from 12 to 15 November, showcased, among other topics, efforts aimed at presenting cosmic data in formats that are accessible through senses other than vision.
Diaz Merced spoke to Nature about how bringing these efforts to mainstream science would boost accessibility — and discoveries.
How one astronomer hears the Universe (DOI: 10.1038/d41586-019-03938-x)
Related Stories
We're all familiar with the elements of the periodic table, but have you ever wondered what hydrogen or zinc, for example, might sound like? W. Walker Smith, now a graduate student at Indiana University, combined his twin passions of chemistry and music to create what he calls a new audio-visual instrument to communicate the concepts of chemical spectroscopy.
Smith presented his data sonification project—which essentially transforms the visible spectra of the elements of the periodic table into sound—at a meeting of the American Chemical Society held this week in Indianapolis, Indiana. Smith even featured audio clips of some of the elements, along with "compositions" featuring larger molecules, during a performance of his show "The Sound of Molecules."
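The core idea, visible emission lines scaled down into the audible band, can be sketched as follows. The scale factor is an arbitrary assumption of this sketch, not Smith's published mapping; only the Balmer wavelengths are real data:

```python
# Hedged sketch: convert an element's visible emission lines to audible
# tones by dividing each optical frequency by a fixed scale factor.
C = 2.998e8      # speed of light, m/s
SCALE = 1e12     # optical Hz -> audio Hz (arbitrary illustrative choice)

def lines_to_audio_freqs(wavelengths_nm):
    """Return audible frequencies (Hz) for a list of emission-line wavelengths."""
    return [C / (nm * 1e-9) / SCALE for nm in wavelengths_nm]

# Hydrogen's visible Balmer lines (nm) become a four-note "chord",
# roughly 457, 617, 691, and 731 Hz with this scale factor:
hydrogen = [656.3, 486.1, 434.0, 410.2]
audible = lines_to_audio_freqs(hydrogen)
```

Each element's distinct line pattern then maps to a distinct chord, which is what makes the spectra distinguishable by ear.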
As an undergraduate, "I [earned] a dual degree in music composition and chemistry, so I was always looking for a way to turn my chemistry research into music," Smith said during a media briefing.
[...]
Data sonification is not a new concept. For instance, in 2018, scientists transformed NASA's image of Mars rover Opportunity on its 5,000th sunrise on Mars into music. The particle physics data used to discover the Higgs boson, the echoes of a black hole as it devoured a star, and magnetometer readings from the Voyager mission have also been transposed into music. And several years ago, a project called LHCSound built a library of the "sounds" of a top quark jet and the Higgs boson, among others. The project hoped to develop sonification as a technique for analyzing the data from particle collisions so that physicists could "detect" subatomic particles by ear.
Related:
Scientists Are Turning Data Into Sound to Listen to the Whispers of the Universe (and More) (Aug. 2022)
How one Astronomer Hears the Universe (Jan. 2020)
The Bird's Ear View of Space Physics: NASA Scientists Listen to Data (Sept. 2014)
(Score: 3, Touché) by ikanreed on Thursday January 02 2020, @04:19PM (1 child)
She may have been doing it for 15 years, but it was featured prominently as a plot element in Contact (1997), so "pioneered" seems a bit excessive.
(Score: 2) by JoeMerchant on Thursday January 02 2020, @06:34PM
Watching static on TV screens was an overdramatized plot element... and they were listening to radiotelescope data, not "looking" at stars.
(Score: 4, Funny) by Unixnut on Thursday January 02 2020, @04:32PM
... just for this: https://www.youtube.com/watch?v=0czFnIvKOJY [youtube.com]
(Score: 3, Interesting) by Snotnose on Thursday January 02 2020, @04:42PM
Dunno if it's the same person but about 15 years ago I bought a CD where a musician had taken radio astronomy, um, images?, downshifted the frequencies to the audible range, and recorded it.
It was a pretty interesting CD, but boring as hell for music. I imagine it would work a lot better as a DVD, where you could see images that went with the music.
Of course I'm against DEI. Donald, Eric, and Ivanka.
(Score: 2, Insightful) by Anonymous Coward on Thursday January 02 2020, @06:05PM (8 children)
Sight seems to be, by many orders of magnitude, the highest-bandwidth of our sensory mechanisms. Think about all the information you can parse from an image in a few-second glance: a practically infinite number of concepts, absorbed nearly instantly. Compare that to something like hearing. Imagine trying to create an audio representation of an arbitrary image that could be understood at least as well as the optical one. There's just no way you could squeeze it into a similar bandwidth.
In theory I think this sounds like a very reasonable proposal, but in practice I think all of our other senses are near irrelevant compared to sight when it comes to transmitting information.
(Score: 3, Informative) by Coward, Anonymous on Thursday January 02 2020, @10:10PM (1 child)
I'm often surprised by the audible information from a dropped object. We can usually tell if it shattered or stayed intact; if it ended up on something soft or hard; if it skittered and bounced against the wall or not; and the general direction of its path.
Bandwidth isn't everything.
Gravitational wave detectors are a lot like ears, with only a few spatially distinct channels of input data. I wonder if sonification has been tried with them.
(Score: 1, Interesting) by Anonymous Coward on Friday January 03 2020, @04:07AM
Sure, but again you could also show this visually with far more detail in a comparable amount of time.
Another thing not mentioned here is that you can derive far more information than the human ear, even a trained one, is capable of discerning, by doing things like applying a Fourier transform to the waveform. And, once again, that representation is visual.
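The FFT point is easy to demonstrate: two tones one hertz apart, one ten times fainter, separate cleanly in a computed spectrum. A toy numpy sketch, with the sample rate and threshold chosen arbitrarily:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate                               # exactly 1 s of samples
sig = np.sin(2*np.pi*440*t) + 0.1*np.sin(2*np.pi*441*t)  # faint tone 1 Hz away

mag = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1.0 / rate)            # FFT bin -> Hz
peaks = freqs[mag > 0.05 * mag.max()]                    # both lines resolved
print(peaks)                                             # [440. 441.]
```

With one second of data the bin spacing is exactly 1 Hz, so both components land on their own bins; by ear the same pair would read as a single tone with a slow beat.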
(Score: 2, Interesting) by Ethanol-fueled on Thursday January 02 2020, @11:41PM (3 children)
This is a fucking awesome, but hardly revolutionary, idea. Most laymen can better interpret confusing data through good visualization, but as others have pointed out, audio-ization has long been a fixture in sci-fi movies. For people with both good auditory and visual ability, visualization is probably the quicker and easier way to parse data at a glance. People preferring auditory information are either visually impaired or grew up with a sound/music background and can tell the difference in timbre between one musician playing the same notes on two different clarinets. Converting one pattern to the other so it can be interpreted reasonably also probably involves some transformation (scaling/shifting) in frequency and duration.
The real shame is the "social justice bullshit" angle used to spin the techniques:
So she, being a bad stereotype of a modern woman, used a technique well known even in pop culture and cashed in her minority and disadvantage points to make it all about "diversity and inclusion." She would likely have had a lot less eye-rolling in the audience had she left the political bullshit and globalist buzzwords out of it. If I were a disabled minority (my Hispanic blood probably makes me both anyway) I would stick to discussing the techniques and leave it to the others to research my background and give me brownie points without me shoving it in their faces.
(Score: 0) by Anonymous Coward on Friday January 03 2020, @04:11AM (1 child)
I'd tend to agree with this. There may be some form of representation in which an audio format is more robust, though I certainly cannot think of any. Her deciding to immediately jump to social politics instead of focusing on and describing such a method makes me think she also probably cannot think of any, but wants to engage in 'parsing' in science nonetheless.
(Score: 1) by Ethanol-fueled on Sunday January 05 2020, @02:16AM
What's so difficult about imagining techniques? Shift high frequencies down into the audio domain, shift low frequencies up into the audio domain, possibly both at once. Take a given duration and shift the speed, with or without altering pitch, so that distinguishing and/or periodic events can be heard. Play a section of interest in a loop and have a few sliders to tweak all that in real time until something interesting is heard, then document the audio events and transformation parameters. There are people who would do this all day for free, for fun. If the IAU provided the raw data, some kid could bang out a Python app to do the above in like a week.
DAWs have had those abilities for decades, but if you needed to do any heavier lifting or conceptually-advanced filtering, get the DSP people to start writing some hardcore MATLAB shit. There are people who do this kind of shit as school assignments and as early as the undergrad and grad school levels.
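The simplest of those transformations, "shift high frequencies down to the audio domain," is the playback-rate trick: reinterpret samples recorded at a high rate as if taken at an audio rate, which divides every frequency (and stretches every duration) by the same factor. A hedged numpy sketch; the rates and the 150 kHz signal are made-up example numbers:

```python
import numpy as np

def playback_downshift(orig_rate, audio_rate, freq):
    """Replaying a buffer recorded at orig_rate at audio_rate divides every
    frequency by orig_rate/audio_rate; pitch and duration scale together."""
    return freq * audio_rate / orig_rate

def loop(segment, repeats):
    """Repeat a section of interest so transient events can be re-heard."""
    return np.tile(segment, repeats)

# A 150 kHz radio emission sampled at 2 MHz, replayed at CD rate:
heard = playback_downshift(2_000_000, 44100, 150_000)
print(heard)   # 3307.5 Hz, comfortably inside the audio band
```

Shifting pitch *without* altering duration is the heavier lifting the comment alludes to (phase vocoders and the like), which is exactly where DAWs or DSP code come in.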
(Score: 0) by Anonymous Coward on Sunday January 05 2020, @04:36PM
Uuuuh.
Are you aware that the human visual system is fundamentally flawed? In so many ways that we're still discovering and naming new "optical illusions" (aka cognitive shortcuts which can be tricked)?
An example of a cognitive bias in vision, as it impacts finding and communicating scientific results: humans interpret pie-chart ratios extremely poorly. Another: box-and-whisker plots are accurate to the centre of mass but not to shape (compare, e.g., violin plots). Fouriering might help, but it also might not. Audio processing is extremely good at autocorrelation-like tasks where there is a continuous but higher-order shift to the beat, so there is no fixed fundamental, only a sliding one. Think of a gradually speeding calypso rhythm: trivial to hear, but it comes out as mud post-FFT. Making such a signal visually clear requires an explicitly included and tuned second step, i.e. the researcher has to know it's there and apply not only the bog-standard FFT but a second decoding step.
All of this is probably abundantly clear to fyngerz, but I guess EF doesn't have a musically attuned mind in the same way.
So, I get that you have a hate-on for whatever political views. But visual interpretation of data *is* cognitively biased in ways unique to vision, and there *are* ways in which our brains' audio processing is clearer (and also many in which it's far worse!). Arguing otherwise is factually incorrect if you know the facts, or displays both a lack of imagination and a lack of understanding signal processing, digital or analogue, if you're just shooting the shit.
(Score: 3, Interesting) by darkfeline on Friday January 03 2020, @04:54AM (1 child)
In fact, sight can encode so much information that our brains are incapable of handling it all. The processing throughput of the conscious human mind has been estimated at around 50 bits per second. You don't actually "see" most of the information entering your eyes; your visual centres preprocess it down to a few bite-sized pieces. Even our audio throughput is too much for our conscious minds; processing speech without our dedicated spoken-language facilities would be overwhelming.
Thus, I don't think the bandwidth difference between audio and sight would be the key limitation here.
(Score: 0) by Anonymous Coward on Sunday January 05 2020, @04:40PM
Right.
Another poster said "well you can just FFT it visually" but they miss the point that the brain has incredible systems for pattern recognition, systems which work very differently for different sense modalities. It's not as trivial as transcoding. There's a ton of powerful signal processing going on, and not all of it even *can* be meaningfully transcoded.
E.g. the ability to hear a tiny 300 ms snippet of Canon in D or of Beethoven's 9th and identify which is which? There's no real visual transform that would let a human 'see' the waveform, or any transform of it, and from that say which it was. But if you *hear* such a snippet, the identification is trivial.