
SoylentNews is people

posted by martyb on Saturday August 24 2019, @07:17PM   Printer-friendly
from the music-like-background-noises dept.

In a long interview, Neil Young mentions the effects the technological race to the bottom is having on music and our ability to appreciate it. From ear buds to compounded lossy compression algorithms, most people have lost access to anything resembling the traditional dynamic range and chromatic range that music requires. What to call the sounds that are left? Neil goes into a lot of detail on the problems and on some of the, so far unsuccessful, steps he has taken to try to fix them.

Neil Young is crankier than a hermit being stung by bees. He hates Spotify. He hates Facebook. He hates Apple. He hates Steve Jobs. He hates what digital technology is doing to music. "I'm only one person standing there going, 'Hey, this is [expletive] up!' " he shouted, ranting away on the porch of his longtime manager Elliot Roberts's house overlooking Malibu Canyon in the sunblasted desert north of Los Angeles.

[...] Producers and engineers often responded to the smaller size and lower quality of these packages by using cheap engineering tricks, like making the softest parts of the song as loud as the loudest parts. This flattened out the sound of recordings and fooled listeners' brains into ignoring the stuff that wasn't there anymore, i.e., the resonant combinations of specific human beings producing different notes and sounds in specific spaces at sometimes ultraweird angles that the era of magnetic tape and vinyl had so successfully captured.

It's a long read, but quite interesting and he has thought about both the problem and solutions. More importantly he has been working to solve the problem, even if it may be an uphill fight.


Original Submission

 
  • (Score: 2, Informative) by Anonymous Coward on Sunday August 25 2019, @02:49PM (1 child)

    by Anonymous Coward on Sunday August 25 2019, @02:49PM (#885189)

    You've got that backwards. You have to squash the dynamic range on vinyl or the needle will skip (the relationship between volume and skipping is actually somewhat more complicated than that). The loudness on CDs comes from compression, not a volume increase as such: the quiet bits are brought up, but the drum peaks are also clipped.
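    To illustrate what that kind of processing does to dynamic range, here is a rough sketch in Python. The `loudness_squash` helper and all the numbers are made up for illustration; real mastering chains use far more sophisticated compressors and limiters.

    ```python
    import numpy as np

    def loudness_squash(samples, makeup_gain=4.0, ceiling=0.9):
        """Crude 'loudness war' style processing (illustrative only):
        bring the quiet parts up with makeup gain, then hard-clip
        the peaks (e.g. drum transients) at a fixed ceiling."""
        boosted = samples * makeup_gain             # raise the quiet bits
        return np.clip(boosted, -ceiling, ceiling)  # clip the loud bits

    # A signal with a quiet passage followed by a loud one:
    quiet = 0.05 * np.sin(np.linspace(0, 20, 1000))
    loud = 0.8 * np.sin(np.linspace(0, 20, 1000))
    track = np.concatenate([quiet, loud])

    squashed = loudness_squash(track)
    # Peak ratio between loud and quiet sections shrinks after processing:
    before = loud.max() / quiet.max()                         # ~16x
    after = squashed[1000:].max() / squashed[:1000].max()     # ~4.5x
    ```

    The track is "louder" everywhere afterwards, but the contrast between the quiet and loud passages is largely gone.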

    CDs do have less dynamic range now, but only because the quieter bits aren't being used.

  • (Score: 3, Interesting) by RS3 on Monday August 26 2019, @12:39AM

    by RS3 (6367) on Monday August 26 2019, @12:39AM (#885447)

    Thank you thank you. There is SO much misinformation here it's too painful to respond to most of it. People are getting modded to +5 for complete nonsense and mental exercises. This is mostly a theory / philosophical discussion. I actually DO this work (part-time) and worked for years under a Grammy-winning recording engineer.

    Part of my reason for not wanting to comment too much more is that I know some of the secrets of the recording / mixing / mastering world. And I don't want to brag about my degree.

    There are comments above talking about recording in 24 bits for processing. WTF? DAWs have processed the internal mixing and math in the highest bit-count possible in the machine for 30 years. To anyone who might care: if you have a 64-bit CPU, you can do 64-bit math directly, even if you're running in a 16-bit OS. I have done it (in assembler).

    Since you might be sane, I'll write this one more time: if I have a 16-bit ADC but my pre-amp level (gain / trim) is set so low that I only use 8 of the bits, I'm recording in 8 bits. It's that simple. If I'm recording something with a large dynamic range (like most things), then some of the quieter parts will only use, wait for it, 8 bits of the ADC. So when I then add compression, I squash the loud parts but also gain up the quiet parts (the low bit-count ones), making the quantization distortion much more audible than it would be with little or no compression. This is _well known_ in the actual audio engineering world. We record with 24-bit ADCs (or 32) so that the quiet parts still get 16-20 bits of quantization (A-to-D conversion).
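    The arithmetic behind that point is simple: each ADC bit buys roughly 6 dB of dynamic range, so any headroom the signal never reaches directly costs bits. A back-of-envelope helper (the `effective_bits` function is a made-up name for illustration):

    ```python
    def effective_bits(adc_bits, peak_level_dbfs):
        """Approximate bits of the ADC actually exercised by a signal
        peaking at peak_level_dbfs (dB below full scale). Each bit is
        worth ~6.02 dB, so unused headroom throws bits away."""
        wasted = abs(peak_level_dbfs) / 6.02
        return max(adc_bits - wasted, 0)

    # 16-bit ADC, pre-amp trimmed so peaks only reach -48 dBFS:
    low_trim = effective_bits(16, -48)    # ~8 bits actually used
    # 24-bit ADC: a passage 48 dB down still keeps ~16 bits:
    quiet_24 = effective_bits(24, -48)
    ```

    This is exactly why a 24-bit front end lets quiet passages keep CD-quality resolution even before any gain staging.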

    To dovetail with a few comments above: 16-bit playback is okay if you record in 24 or 32 bits, process carefully, and "dither" (add a small amount of noise while reducing from 24 to 16 bits) during the render.
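    A minimal sketch of that render step, assuming a float mix bus and plain TPDF dither (real mastering tools typically add noise shaping on top of this):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dither_to_16bit(samples_float):
        """Reduce float samples (-1..1, e.g. from a 24/32-bit mix) to
        16-bit integers with TPDF dither: add triangular noise of about
        +/- 1 LSB before rounding, so quantization error turns into
        benign noise instead of distortion correlated with the signal."""
        lsb = 1.0 / 32768.0                       # one 16-bit step
        tpdf = (rng.random(samples_float.shape) -
                rng.random(samples_float.shape)) * lsb
        dithered = samples_float + tpdf
        return np.clip(np.round(dithered * 32767),
                       -32768, 32767).astype(np.int16)

    # A very quiet tone -- exactly the case where truncation would
    # otherwise produce audible quantization distortion:
    x = 0.001 * np.sin(np.linspace(0, 100, 48000))
    y = dither_to_16bit(x)
    ```

    Without the dither, a signal this quiet would toggle between only a handful of 16-bit steps and the error would track the waveform; with it, the error decorrelates into low-level noise.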

    We also record at 96 kHz (or 192 if we're feeling masochistic about HD space) to get far, far away from any possibility of aliasing.
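    The margin being described falls out of the Nyquist limit: any component above half the sample rate folds back down into the band below it. A toy sketch of that folding (pure arithmetic, ignoring the anti-aliasing filters a real converter would use):

    ```python
    def aliased_frequency(f_signal, f_sample):
        """Frequency (Hz) at which a tone lands after sampling:
        fold into one period, then reflect about Nyquist (f_sample/2)."""
        f = f_signal % f_sample
        return min(f, f_sample - f)

    # At 44.1 kHz, a 30 kHz ultrasonic component folds down to an
    # audible 14.1 kHz:
    folded_44k = aliased_frequency(30_000, 44_100)   # 14100
    # At 96 kHz the same component sits safely below Nyquist (48 kHz)
    # and doesn't alias at all:
    folded_96k = aliased_frequency(30_000, 96_000)   # 30000
    ```

    Higher sample rates also let the anti-aliasing filter roll off gently well above the audible band instead of slamming shut right at 20 kHz.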