In a long interview, Neil Young discusses the effects the technological race to the bottom is having on music and our ability to appreciate it. From earbuds to compounded lossy compression algorithms, most people have lost access to anything resembling the traditional dynamic range and chromatic range that music requires. What to call the sounds that are left? Neil goes into a lot of detail on the problems and on some of the so-far-unsuccessful steps he has taken to try to fix them.
Neil Young is crankier than a hermit being stung by bees. He hates Spotify. He hates Facebook. He hates Apple. He hates Steve Jobs. He hates what digital technology is doing to music. "I'm only one person standing there going, 'Hey, this is [expletive] up!' " he shouted, ranting away on the porch of his longtime manager Elliot Roberts's house overlooking Malibu Canyon in the sunblasted desert north of Los Angeles.
[...] Producers and engineers often responded to the smaller size and lower quality of these packages by using cheap engineering tricks, like making the softest parts of the song as loud as the loudest parts. This flattened out the sound of recordings and fooled listeners' brains into ignoring the stuff that wasn't there anymore, i.e., the resonant combinations of specific human beings producing different notes and sounds in specific spaces at sometimes ultraweird angles that the era of magnetic tape and vinyl had so successfully captured.
It's a long read, but quite interesting, and he has thought about both the problem and possible solutions. More importantly, he has been working to solve the problem, even if it may be an uphill fight.
(Score: 2) by darkfeline on Sunday August 25 2019, @11:12AM (1 child)
Range != granularity

Yes, humans can hear a wide dynamic range. Humans cannot distinguish between infinite gradations of volume.
You can represent an arbitrarily wide dynamic range with just a single bit: let 0 be the lowest value of the range and 1 be the highest. Oh, you wanted more granularity? Let's add more bits then.
16 bits is more than enough. Humans cannot meaningfully distinguish between 2^16 different gradations of volume. As you say, our hearing is logarithmic. We don't need to represent 32767 differently from 32768 or 32769; no one can tell the difference anyway.
24 bits is used during mixing so one has more leeway to be sloppy. If you want to keep it in the final master, fine. But claiming that we need even more bits suggests ignorance about digital audio.
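The arithmetic behind this comment is easy to check for yourself. A quick sketch (not from the comment itself, just the standard 20·log10 ratio formula applied to linear PCM) shows the theoretical dynamic range of 16- and 24-bit audio, and how tiny one quantization step is near full scale:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: the ratio of full
    scale to one quantization step, expressed in decibels."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96.3 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144.5 dB

# Relative size of a single step near full scale in 16-bit audio,
# i.e. the "32767 vs 32768" distinction from the comment above:
step_db = 20 * math.log10(32768 / 32767)
print(f"32767 -> 32768: {step_db:.5f} dB")        # ~0.00027 dB
```

At roughly a quarter of a thousandth of a decibel, the step between adjacent full-scale sample values is far below any published threshold of audibility, which is the commenter's point.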
Join the SDF Public Access UNIX System today!
(Score: 2) by hendrikboom on Sunday August 25 2019, @12:57PM
But with 2^16 gradations of volume we might still have to distinguish 5 from 6.
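This reply is pointing at the quiet end of the scale, where adjacent sample values are proportionally much farther apart. A small sketch (same dB ratio formula as before, not from either comment) makes the asymmetry concrete:

```python
import math

# On a linear 16-bit scale, the relative (dB) gap between adjacent
# sample values depends on where you are in the range:
for low, high in [(5, 6), (32767, 32768)]:
    gap_db = 20 * math.log10(high / low)
    print(f"{low} -> {high}: {gap_db:.5f} dB")

# 5 -> 6 is about 1.58 dB, a proportionally large jump in a very
# quiet passage; 32767 -> 32768 is about 0.00027 dB, far below
# audibility. Linear quantization spends its precision unevenly
# relative to our logarithmic hearing.
```

This is also why quiet passages are where dither and noise shaping earn their keep in 16-bit masters.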