
SoylentNews is people

posted by martyb on Thursday January 02 2020, @03:24PM   Printer-friendly
from the do-you-see-what-I-hear? dept.

https://www.nature.com/articles/d41586-019-03938-x

Astronomy is inextricably associated with spectacular images and visualizations of the cosmos. But Wanda Diaz Merced says that by neglecting senses other than sight, astronomers are missing out on discoveries.

For 15 years, Diaz Merced, an astronomer at the International Astronomical Union (IAU) Office for Astronomy Outreach in Mitaka, Japan, has pioneered a technique called sonification. The approach converts aspects of data, such as the brightness or frequency of electromagnetic radiation, into audible elements including pitch, volume and rhythm. It could help astronomers to avoid methodological biases that come with interpreting data only visually, argues Diaz Merced, who lost her sight in her twenties.
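As a rough sketch of the kind of mapping the article describes, here is a minimal sonification in Python: each data value (say, a brightness measurement) is scaled to a pitch and rendered as a short sine tone, then written out as a WAV file. The function names, frequency range, and note duration below are illustrative assumptions, not details from the article.

```python
import math
import struct
import wave

def sonify(values, rate=8000, note_dur=0.25, f_lo=220.0, f_hi=880.0):
    """Map each data value to a pitch between f_lo and f_hi and
    synthesize one short sine tone per value (illustrative mapping)."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for flat data
    n = int(rate * note_dur)     # samples per note
    samples = []
    for v in values:
        f = f_lo + (v - vmin) / span * (f_hi - f_lo)
        samples += [math.sin(2 * math.pi * f * t / rate) for t in range(n)]
    return samples

def write_wav(path, samples, rate=8000):
    """Write mono 16-bit PCM using only the standard library."""
    with wave.open(path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))

# Example: a rising brightness curve becomes a rising melody.
brightness = [0.1, 0.3, 0.5, 0.8, 1.0]
write_wav("sonified.wav", sonify(brightness))
```

A real tool would of course offer more mappings (volume, rhythm, timbre), but the core idea is just this: a monotonic function from data values to audible parameters.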

Last month, she co-organized the IAU's first symposium dedicated to diversity and inclusion. The event, in Mitaka from 12 to 15 November, showcased, among other topics, efforts aimed at presenting cosmic data in formats that are accessible through senses other than vision.

Diaz Merced spoke to Nature about how bringing these efforts into mainstream science would boost accessibility — and discoveries.

How one astronomer hears the Universe (DOI: 10.1038/d41586-019-03938-x)


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by Ethanol-fueled on Thursday January 02 2020, @11:41PM (3 children)

    by Ethanol-fueled (2792) on Thursday January 02 2020, @11:41PM (#938871) Homepage

    This is a fucking awesome, but hardly revolutionary, idea. Most laymen can better interpret confusing data through good visualization, and as others have pointed out, audio-ization has long been a fixture in sci-fi movies. For people with both good auditory and visual ability, visualization is probably the quicker and easier way to parse data at a glance. People who prefer auditory information are either visually impaired or grew up with a sound/music background and can tell the difference in timbre between one musician playing the same notes on two different clarinets. Converting one pattern into the other so it can be interpreted reasonably also probably involves some transformation (scaling/shifting) in frequency and duration.

    The real shame is the "social justice bullshit" angle used to spin the techniques:

    " It could help astronomers to avoid methodological biases that come with interpreting data only visually, argues Diaz Merced, who lost her sight in her twenties.

    Last month, she co-organized the IAU's first symposium dedicated to diversity and inclusion. "

    So she, being a bad stereotype of a modern woman, took a technique well known even in pop culture and cashed in her minority and disadvantage points to make it all about "diversity and inclusion." She would likely have gotten a lot less eye-rolling from the audience had she left the political bullshit and globalist buzzwords out of it. If I were a disabled minority (my Hispanic blood probably makes me both anyway) I would stick to discussing the techniques and leave it to others to research my background and give me brownie points, without shoving it in their faces.

  • (Score: 0) by Anonymous Coward on Friday January 03 2020, @04:11AM (1 child)

    by Anonymous Coward on Friday January 03 2020, @04:11AM (#938953)

    I'd tend to agree with this. There may be some form of representation in which an audio format is more robust, though I certainly cannot think of any. That she decided to jump straight to social politics instead of focusing on and describing such a method makes me think she probably cannot think of any either, but wants to engage in 'parsing' in science nonetheless.

    • (Score: 1) by Ethanol-fueled on Sunday January 05 2020, @02:16AM

      by Ethanol-fueled (2792) on Sunday January 05 2020, @02:16AM (#939716) Homepage

      What's so difficult about imagining techniques? Shift high frequencies down into the audio domain, shift low frequencies up into it, possibly both at once. Take a given duration and shift the speed, with or without altering pitch, so that distinguishing and/or periodic events can be heard. Play a section of interest in a loop and have a few sliders to tweak all of that in real time until something interesting is heard, then document the audio events and transformation parameters. There are people who would do this all day for free, for fun. If the IAU provided the raw data, some kid could bang out a Python app to do the above in about a week.

      DAWs have had those abilities for decades; and if you need heavier lifting or conceptually advanced filtering, get the DSP people to start writing some hardcore MATLAB shit. There are people who do this kind of shit as school assignments, as early as the undergrad level.
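      The transformations described above really are a few lines of code. Here is a minimal Python sketch, assuming numpy: a heterodyne shift to bring a high-frequency signal into the audible band, and a resampling-based speed change. The function names and parameter choices are illustrative, and a real tool would add a low-pass filter after the heterodyne step to remove the mirrored high-frequency image.

```python
import numpy as np

def shift_into_audio_band(signal, rate, f_center, f_target=440.0):
    """Heterodyne: shift a narrowband signal centred at f_center so it
    lands at f_target in the audible range (illustrative sketch)."""
    t = np.arange(len(signal)) / rate
    # Multiplying by a complex exponential shifts every frequency
    # component by (f_target - f_center). Taking the real part leaves
    # a mirrored high-frequency image that a real tool would filter out.
    return np.real(signal * np.exp(2j * np.pi * (f_target - f_center) * t))

def change_speed(signal, factor):
    """Resample by `factor` (>1 = faster). This shifts pitch along with
    duration, like playing a tape at a different speed."""
    idx = np.arange(0, len(signal), factor)
    return np.interp(idx, np.arange(len(signal)), signal)

# Example: a 10 kHz tone brought down to 440 Hz, then played twice as fast.
rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 10000 * t)
audible = shift_into_audio_band(tone, rate, f_center=10000)
fast = change_speed(audible, 2.0)
```

      Looping a section and binding the shift/speed parameters to sliders on top of this is exactly the kind of thing a hobbyist could knock together quickly.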

  • (Score: 0) by Anonymous Coward on Sunday January 05 2020, @04:36PM

    by Anonymous Coward on Sunday January 05 2020, @04:36PM (#939852)

    Uuuuh.

    Are you aware that the human visual system is fundamentally flawed? Flawed in so many ways that we're still discovering and naming new "optical illusions" (a.k.a. cognitive shortcuts that can be tricked)?

    An example of a failing cognitive bias in vision, as it impacts finding and communicating scientific results, is how poorly humans interpret pie-chart ratios. Another is how box-and-whisker plots are accurate to the centre of mass of a distribution but not to its shape; compare, e.g., violin plots. Fouriering might help, but it also might not: audio processing is extremely good at autocorrelation-like tasks where there is a continuous but higher-order shift to the beat (so there's no fixed fundamental, but a sliding one; think of a gradually speeding calypso rhythm, which would be trivial to hear but would come out as mud post-FFT). Making such a signal visually clear requires an explicitly included and tuned second step: the researcher has to know it's there and apply not only the bog-standard FFT but a second decoding stage as well.
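    The "gradually speeding rhythm" point is easy to demonstrate with a quick sketch, assuming numpy. The click trains and the crude "peakiness" measure below are illustrative, not anyone's actual method: a steady beat concentrates its spectral energy in a sharp harmonic comb, while a beat whose interval shrinks a few percent per click smears that energy across many bins.

```python
import numpy as np

rate = 1000            # samples per second (coarse, for illustration)
dur = 10.0
n = int(rate * dur)

def click_train(times, n):
    """Unit impulses at the given times (in seconds)."""
    sig = np.zeros(n)
    idx = (np.array(times) * rate).astype(int)
    sig[idx[idx < n]] = 1.0
    return sig

# Steady rhythm: a click every 0.5 s (a 2 Hz beat).
steady = click_train(np.arange(0, dur, 0.5), n)

# Accelerating rhythm: each interval is 3% shorter than the last.
times, tt, iv = [], 0.0, 0.5
while tt < dur:
    times.append(tt)
    tt += iv
    iv *= 0.97
accel = click_train(times, n)

def peakiness(sig):
    """Ratio of the tallest FFT peak to total spectral energy: a crude
    measure of how concentrated the rhythm's spectrum is."""
    spec = np.abs(np.fft.rfft(sig))[1:]  # drop the DC bin
    return spec.max() / spec.sum()
```

    The accelerating train is trivial to follow by ear, but its FFT has no clean comb; recovering it visually needs that second, explicitly tuned decoding stage.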

    All of this is probably abundantly clear to fyngerz, but I guess EF doesn't have a musically attuned mind in the same way.

    So, I get that you have a hate-on for whatever political views. But visual interpretation of data *is* cognitively biased in ways unique to vision, and there *are* ways in which our brains' audio processing is clearer (and also many in which it's far worse!). Arguing otherwise is factually incorrect if you know the facts, or displays both a lack of imagination and a lack of understanding of signal processing, digital or analogue, if you're just shooting the shit.