posted by martyb on Saturday August 24 2019, @07:17PM
from the music-like-background-noises dept.

In a long interview, Neil Young mentions the effects the technological race to the bottom is having on music and our ability to appreciate it. From ear buds to compounded lossy compression algorithms, most people have lost access to anything resembling the traditional dynamic range and chromatic range that music requires. What to call the sounds that are left? Neil goes into a lot of detail on the problems and some of the, so far unsuccessful, steps he has taken to try to fix the problem.

Neil Young is crankier than a hermit being stung by bees. He hates Spotify. He hates Facebook. He hates Apple. He hates Steve Jobs. He hates what digital technology is doing to music. "I'm only one person standing there going, 'Hey, this is [expletive] up!' " he shouted, ranting away on the porch of his longtime manager Elliot Roberts's house overlooking Malibu Canyon in the sunblasted desert north of Los Angeles.

[...] Producers and engineers often responded to the smaller size and lower quality of these packages by using cheap engineering tricks, like making the softest parts of the song as loud as the loudest parts. This flattened out the sound of recordings and fooled listeners' brains into ignoring the stuff that wasn't there anymore, i.e., the resonant combinations of specific human beings producing different notes and sounds in specific spaces at sometimes ultraweird angles that the era of magnetic tape and vinyl had so successfully captured.

It's a long read, but quite interesting, and he has thought about both the problem and possible solutions. More importantly, he has been working to solve it, even if it may be an uphill fight.


Original Submission

 
  • (Score: 4, Interesting) by RS3 on Saturday August 24 2019, @07:47PM (6 children)

    by RS3 (6367) on Saturday August 24 2019, @07:47PM (#884882)

    You're absolutely correct. In fact, long ago they started using DVDs for "HD" audio: 24-bit, 96 kHz. They exist, but never really took off. And I knew people who used VHS Hi-Fi to record audio; the specs were that good or better.

    The main reason for all the mess is the CD format. 16 bits is actually not great, because in PCM (Pulse Code Modulation, https://en.wikipedia.org/wiki/Pulse-code_modulation [wikipedia.org]) the quieter sounds don't use all of the bits, so they sound distorted. Sound engineers / mixers / mastering engineers had to use all of the 16 bits, and it became fairly standard practice to compress: individual tracks (the raw multi-track recordings), then the overall mix, then a mastering multi-band leveler (more compression), and finally a limiter that does magic but still compresses (dynamically, and maybe a LOT where it needs to).
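    The "quieter sounds don't use all of the bits" point can be made concrete with a small Python sketch (purely illustrative; the function names and test signal are made up here, not from any real tool). A full-scale sine quantized to 16 bits keeps a very high signal-to-quantization-noise ratio, but a sine 40 dB down has 40 dB less:

    ```python
    import math

    BITS = 16
    FULL_SCALE = 2 ** (BITS - 1) - 1  # 32767 for signed 16-bit PCM

    def quantize(x):
        """Round a sample in [-1.0, 1.0] to the nearest 16-bit PCM level."""
        return round(x * FULL_SCALE) / FULL_SCALE

    def snr_db(amplitude, freq=997.0, rate=48000, n=48000):
        """Signal-to-quantization-noise ratio for a sine at a given amplitude."""
        sig_power = noise_power = 0.0
        for i in range(n):
            s = amplitude * math.sin(2 * math.pi * freq * i / rate)
            err = quantize(s) - s
            sig_power += s * s
            noise_power += err * err
        return 10 * math.log10(sig_power / noise_power)

    print(f"full scale: {snr_db(1.0):5.1f} dB")   # ~98 dB
    print(f"-40 dB:     {snr_db(0.01):5.1f} dB")  # ~58 dB: quiet passages get fewer effective bits
    ```

    This is exactly the pressure that pushed engineers toward keeping everything near full scale, i.e. toward compression.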

    And then you have YouTube, Spotify, etc., and you don't know what reprocessing they do besides obviously encode to .mp3, and even then there are many options besides bit rate.

    I've never done multi-track to magnetic tape, but I've read about the magic that some engineers used- basically you had to know the tape head and tape magnetic saturation, and what pre-amps and levels would give you the best sound, intentionally using a little head/tape saturation as a nice soft limiter. You'd think someone could emulate that in electronics or software, but I'm not sure if there are any really good emulators / plugins because I only dabble in that world.
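    Software emulations of that tape behavior do exist as plugins; the usual starting point is a soft-clipping transfer curve. A minimal sketch (a plain tanh curve with a made-up `drive` parameter, not any particular product's model):

    ```python
    import math

    def tape_saturate(x, drive=2.0):
        """Soft-clipping curve (tanh) as a crude stand-in for tape/head saturation.

        Small signals pass through almost linearly; peaks are rounded off
        gently instead of hard-clipping, which is why it works as a soft limiter.
        """
        return math.tanh(drive * x) / math.tanh(drive)

    for x in (0.1, 0.5, 1.0):
        print(f"in {x:.1f} -> out {tape_saturate(x):.3f}")
    ```

    Real emulations add frequency-dependent behavior and hysteresis on top of a curve like this, which is where the hard modeling work is.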

  • (Score: 5, Interesting) by Anonymous Coward on Saturday August 24 2019, @10:34PM (3 children)

    by Anonymous Coward on Saturday August 24 2019, @10:34PM (#884937)

    That is not why they sound distorted. The best way to understand the limitation of CDs, and why they are hard to mix for, is that PCM is linear but our hearing is logarithmic. A sound that is twice as large in the PCM stream (e.g. moving from the value 128 to 256, or from 32767 to 65534) is not perceived as twice as loud to us. You can move the same distance linearly, but depending on where you are on the scale, it will sound different. It is the same reason people complained about the volume knob on early versions of Windows and Mac OS (but not BeOS). It is difficult to map logarithmic values to linear and back over a large range, so compression artificially limits the dynamic range in order to try to preserve the differences that people will notice the most. It also helps to even out the sound of the ensemble and reduce unwanted sounds like heavy hits out of nowhere and sibilant noises. While adding more and more bits gives you more room to do said mapping, there is a good argument to be made that the real solution is to use a floating-point representation.
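    The linear-vs-logarithmic mismatch is easy to show numerically (a small Python sketch; the `db` helper is just 20·log10 of the ratio to full scale, not any standard API). The same linear step of 128 values is a big jump near the bottom of the scale and a vanishingly small one near the top:

    ```python
    import math

    def db(linear, full_scale=32767):
        """Level of a linear PCM magnitude relative to full scale, in dB."""
        return 20 * math.log10(linear / full_scale)

    # Identical linear distance, wildly different perceptual distance:
    print(db(256) - db(128))      # +6.02 dB: a clearly audible doubling
    print(db(32767) - db(32639))  # ~0.03 dB: imperceptible
    ```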

    • (Score: 0) by Anonymous Coward on Sunday August 25 2019, @01:15AM

      by Anonymous Coward on Sunday August 25 2019, @01:15AM (#885007)

      Huh?

    • (Score: 2) by darkfeline on Sunday August 25 2019, @11:12AM (1 child)

      by darkfeline (1030) on Sunday August 25 2019, @11:12AM (#885135) Homepage

      Range != granularity. Yes, humans can hear a wide dynamic range, but humans cannot distinguish between infinite gradations of volume.

      You can represent an arbitrarily wide dynamic range with just a single bit: let 0 be the lowest value of the range and 1 be the highest. Oh, you wanted more granularity? Let's add more bits then.

      16 bits is more than enough. Humans cannot meaningfully distinguish between 2^16 different gradations of volume. As you say, our hearing is logarithmic. We don't need to represent 32767 differently than 32768 or 32769. No one can tell the difference anyway.

      24 bits is used during mixing so one has more leeway to be sloppy. If you want to keep it in the final master, fine. But claiming that we need even more bits suggests ignorance about digital audio.
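      The "32767 vs 32768" point works out like this (a one-line back-of-the-envelope calculation, nothing more):

      ```python
      import math

      # One 16-bit step near full scale, expressed as a level change:
      step_db = 20 * math.log10(32768 / 32767)
      print(step_db)  # ~0.00027 dB, far below any audible difference
      ```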

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 4, Insightful) by Anonymous Coward on Sunday August 25 2019, @12:14AM

    by Anonymous Coward on Sunday August 25 2019, @12:14AM (#884989)

    People who want the quality can get it

    You're absolutely correct.

    You two are way off the mark. If you read TFA you'll notice, e.g.:
    -recording studios don't record or keep the same quality of recording
    -production tools and standards are discarding meaningful data
    -production tools produce tracks that don't correspond to what a live performance could ever sound like

    Now, these aren't inherently bad. "Pull the cymbals up a bit they're too faint" might make sense. But Neil Young is arguing that, like the 70s pile carpets and mustard colours fad, the aesthetic delivered is bad, and that the monoculture due to major labels and publication streams is ensuring that if you want 50s baby blue or 90s beige, you're NOT able to get it - it's simply not being made.

    TFA is literally claiming (and others in this thread discuss the technical realities of those claims) that no you cannot get arbitrarily high quality - not without going to a live performance - not because we don't have the tech, but because we've settled on bad technical and social standards.

    Someone else pointed to the Loudness Wars, which alone refute the idea that arbitrary quality up to the limits of human perception is available. Oh, I have the perfect metaphor! It's like watching a movie through a fish-eye lens! The data is transformed and recognizable, but some details are expanded and some are reduced past human discernment, and the experience is distorted. Watch through it long enough and seeing a movie without that lens would seem weird, and bad!

  • (Score: 5, Insightful) by shortscreen on Sunday August 25 2019, @01:22AM

    by shortscreen (2252) on Sunday August 25 2019, @01:22AM (#885009) Journal

    I disagree entirely. 16 bits is more than adequate for playback. The purpose of greater bit depths (i.e. 24-bit) is to prevent data loss during the mixing process, because when you amplify, attenuate, or combine two 16-bit samples you need additional bits to represent the result precisely.
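    A toy illustration of that data loss (integer bit-shifts standing in for a fader; the function and numbers are made up for the example): attenuate a 16-bit sample, round, then turn it back up, and the low bits are gone for good. Keeping extra bits during mixing avoids exactly this.

    ```python
    def attenuate_and_restore(sample, shift):
        """Drop `shift` low bits (attenuation with 16-bit rounding), then re-amplify."""
        quiet = sample >> shift   # e.g. pulling a fader down ~24 dB for shift=4
        return quiet << shift     # turning it back up cannot recover the lost bits

    print(attenuate_and_restore(12345, 4))  # 12336, not 12345
    ```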

    16 bits gives you a signal to noise ratio of 96dB. Can you hear white noise at -96dB? Only in audiophile fantasy land. It doesn't really matter if one part of a recording is quieter than the rest, the noise floor is still at -96dB (in theory... but if your equipment is rubbish and adds a massive amount of noise itself, that is a separate issue which also can't be solved with more bits) which means it's not audible. Unless the listener cranks the volume WAY up for that section... and then turns it down again before the next track starts so they don't go deaf.
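    The 96 dB figure is just the theoretical dynamic range of N-bit linear PCM, 20·log10(2^N) (a quick arithmetic check, nothing implementation-specific):

    ```python
    import math

    print(20 * math.log10(2 ** 16))  # ~96.33 dB, the "96 dB" figure above
    print(20 * math.log10(2 ** 24))  # ~144.49 dB for 24-bit
    ```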

    Dynamic range compression causes distortion. At low bit depths it could mitigate noise problems... but at 16 bits there is no noise problem to begin with. Instead it's being abused for the sake of the loudness war, so that CDs from the '00s are heavily distorted compared to CDs from the '80s.
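    What "dynamic range compression" means here can be sketched in a few lines (a simplified hard-knee compressor on instantaneous sample magnitude; real compressors work on signal envelopes with attack/release times, and the threshold/ratio values are arbitrary):

    ```python
    import math

    def compress(sample, threshold=0.5, ratio=4.0):
        """Hard-knee compressor: above the threshold, level grows 1/ratio as fast."""
        mag = abs(sample)
        if mag <= threshold:
            return sample
        return math.copysign(threshold + (mag - threshold) / ratio, sample)

    # A full-scale peak comes out at 0.625; after "makeup gain" the whole
    # track sits louder, but the peaks are permanently flattened.
    print(compress(1.0))   # 0.625
    print(compress(0.3))   # 0.3 (below threshold, untouched)
    ```

    Pushed hard enough, as in loudness-war masters, that flattening is the distortion the parent comment is describing.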