In a long interview, Neil Young discusses the effects the technological race to the bottom is having on music and our ability to appreciate it. From earbuds to compounded lossy compression algorithms, most people have lost access to anything resembling the traditional dynamic range and chromatic range that music requires. What to call the sounds that are left? Neil goes into a lot of detail on the problems and on some of the so-far-unsuccessful steps he has taken to try to fix them.
Neil Young is crankier than a hermit being stung by bees. He hates Spotify. He hates Facebook. He hates Apple. He hates Steve Jobs. He hates what digital technology is doing to music. "I'm only one person standing there going, 'Hey, this is [expletive] up!' " he shouted, ranting away on the porch of his longtime manager Elliot Roberts's house overlooking Malibu Canyon in the sunblasted desert north of Los Angeles.
[...] Producers and engineers often responded to the smaller size and lower quality of these packages by using cheap engineering tricks, like making the softest parts of the song as loud as the loudest parts. This flattened out the sound of recordings and fooled listeners' brains into ignoring the stuff that wasn't there anymore, i.e., the resonant combinations of specific human beings producing different notes and sounds in specific spaces at sometimes ultraweird angles that the era of magnetic tape and vinyl had so successfully captured.
It's a long read, but quite an interesting one, and he has thought about both the problem and possible solutions. More importantly, he has been working to solve the problem, even if it may be an uphill fight.
(Score: 5, Interesting) by Anonymous Coward on Saturday August 24 2019, @10:34PM (3 children)
That is not why they sound distorted. The best way to understand the limitations of CDs, and how hard they are to mix for, is that PCM is linear but our hearing is logarithmic. A sample value that is twice as large in the PCM stream (e.g. moving from 128 to 256, or from 16384 to 32767) is not perceived as twice as loud. You can move the same linear distance, but depending on where you are on the scale it will sound very different. It is the same reason people complained about the volume knob in early versions of Windows and Mac OS (but not BeOS). It is difficult to map logarithmic values to linear and back over a large range, so compression artificially limits the dynamic range to try to preserve the differences that people will notice most. It also helps to even out the sound of the ensemble and to reduce unwanted sounds like heavy hits out of nowhere and sibilant noises. While adding more and more bits gives you more room to do that mapping, there is a good argument to be made that the real solution is a floating-point representation.
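A quick back-of-the-envelope illustration of the linear-vs-logarithmic point, in Python (assuming plain 16-bit signed PCM with full scale at 32767, and the usual 20·log10 dB formula):

```python
import math

def dbfs(x, full_scale=32767):
    """Level of a 16-bit PCM sample value, in dB relative to full scale."""
    return 20 * math.log10(x / full_scale)

step = 128  # the same linear distance in both cases
low = dbfs(128 + step) - dbfs(128)        # near the quiet end of the scale
high = dbfs(32000 + step) - dbfs(32000)   # near full scale
print(round(low, 2))    # ~6.02 dB -- a large, clearly audible jump
print(round(high, 3))   # ~0.035 dB -- far below audibility
```

The same 128-code step is a doubling of amplitude at the bottom of the scale but a negligible nudge at the top, which is why uniformly spaced linear codes spend their precision where hearing needs it least.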
(Score: 0) by Anonymous Coward on Sunday August 25 2019, @01:15AM
Huh?
(Score: 2) by darkfeline on Sunday August 25 2019, @11:12AM (1 child)
Range != granularity. Yes, humans can hear a wide dynamic range. Humans cannot distinguish between infinitely many gradations of volume.
You can represent an arbitrarily wide dynamic range with just a single bit: let 0 be the lowest value of the range and 1 be the highest. Oh, you wanted more granularity? Let's add more bits then.
16 bits is more than enough. Humans cannot meaningfully distinguish between 2^16 different gradations of volume. As you say, our hearing is logarithmic. We don't need to represent 32767 differently than 32768 or 32769. No one can tell the difference anyway.
24 bits is used during mixing so one has more leeway to be sloppy. If you want to keep it in the final master, fine. But claiming that we need even more bits suggests ignorance about digital audio.
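To put numbers on this claim (a sketch, again assuming 16-bit signed PCM and the standard 20·log10 conversion):

```python
import math

# Smallest level change 16-bit PCM can express near full scale,
# i.e. the difference between adjacent codes 32767 and 32768:
tiny = 20 * math.log10(32768 / 32767)
print(round(tiny, 6))   # ~0.000265 dB -- orders of magnitude below audibility

# Total dynamic range of 16-bit PCM (roughly 6.02 dB per bit):
full_range = 20 * math.log10(2 ** 16)
print(round(full_range, 1))   # ~96.3 dB
```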
(Score: 2) by hendrikboom on Sunday August 25 2019, @12:57PM
But with 2^16 gradations of volume we might still have to distinguish 5 from 6.
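The point checks out numerically (a sketch, ignoring dither and the analog noise floor, which in practice mask quantization at these levels):

```python
import math

# Near the bottom of the 16-bit scale, adjacent codes are far apart
# perceptually: the step from sample value 5 to 6 is a sizeable jump.
low_step = 20 * math.log10(6 / 5)
print(round(low_step, 2))   # ~1.58 dB

# Compare the same one-code step near full scale:
top_step = 20 * math.log10(32768 / 32767)
print(round(top_step, 6))   # ~0.000265 dB
```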