The Fine print: The following are owned by whoever posted them. We are not responsible for them in any way.

Journal by mcgrew

Every now and then I see something on the internet, usually at Soylent News, about pre-digital music recording. It's almost always incorrect; something someone just thought up or heard from someone else. Some of these people are actually pretty knowledgeable about most technologies.
        First, one needs to know the difference between analog and digital. One is numbers in a computer; the other is an analogy: with analog music, the more money you spent on equipment, especially speakers but all of it, the closer it sounded to real instruments rather than an analogy of them. This was called High Fidelity when it was actually accurate enough that you couldn't tell a recorded timpani from a real drum. With digital equipment it doesn't matter as much. There are tricks developed in the last few decades to fool your ears; no, actually, to fool your brain.
        So I thought I'd start at the beginning, with the birth of recorded sound and dispel all the falsehoods while I'm at it; or at least, the ones I’ve heard.
        I have personally lived through the last seventy years of innovation and change. When I took a physics class on sound and its recording, digital sound recording had yet to be invented.
        Ever since the 1940s or possibly earlier, all albums were copies. One difference between analog and digital is with every child copy, an analog signal degrades, but a copied digital signal is identical to its parent, because it is no longer a signal. It’s a series of numbers, measured voltages. In analog, as the signal from the microphone gets stronger, the voltage feeding the tape head gets stronger.
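        The difference is easy to demonstrate with a toy Python sketch (the signal values and noise level here are made up for illustration): copying a list of numbers ten times changes nothing, while an analog copy picks up a little noise every generation.

```python
import random

def analog_copy(signal, noise=0.01):
    # Each analog generation adds a little random noise to the voltages.
    return [v + random.uniform(-noise, noise) for v in signal]

def digital_copy(samples):
    # A digital copy is just the same numbers written down again.
    return list(samples)

master = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]

a = d = master
for _ in range(10):          # ten generations of copies of copies
    a = analog_copy(a)
    d = digital_copy(d)

print(d == master)   # True: the tenth digital copy is identical to the master
print(a == master)   # False: the analog copy has drifted
```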
                In 1877, a century before I attended that class, Thomas Edison invented the phonograph, named with the Latin for “sound writing”, writing with sounds. The first recordings were on tin foil. In 1896 and 1897 he mass produced his phonograph players and wax cylinders. You can hear some of them here at the National Park Service website.
        At one point he developed a talking doll, with a phonograph inside. It was a commercial flop; women had to scream into the recorder, as electronics wouldn't exist until 1904, when Fleming developed the vacuum tube (called the "valve" in Britain; both names are accurate). They had one of the dolls on the TV show Innovation Nation. I imagine they would have scared the hell out of little girls.
        In 1900 he patented a method of making his cylinders out of celluloid, one of the first hard plastics. Cylinders had been produced in France since 1893, but were not mass produced as Edison’s 1900 cylinders were. Dictaphones used wax cylinders until 1947.
        Alexander Graham Bell is often credited with inventing the gramophone, probably because of its name, but it was patented in 1887 by Emile Berliner, who named it. He manufactured the disks in 1889. He came up with the lateral cut, where the needle moved side to side rather than up and down as with Edison’s phonograph.
        The first records were 12.5 cm (about five inches) across; records have since been made for 8 1⁄3, 16 2⁄3, 33 1⁄3, 45, and 78 RPM. Berliner's associate, Eldridge R. Johnson, improved the gramophone with a spring-driven motor and a speed governor, making the sound as good as Edison's cylinders. However, it would be a few decades before high fidelity.
        The first records spun at "about 70 RPM"; starting around 1912, the companies converged on 78 RPM, and by 1925 all of them had standardized on it. Modern turntables still play them.
        I have seen comments saying you can’t do deep bass in vinyl because the needle would jump out of the groove, which is one of those things that’s partly right while still being completely wrong.
        This was solved by a "rollover frequency." Records were cut with the bass attenuated, then returned to full volume on playback. However, it created another problem: the records you produced, when played on a record player you produced, sounded pretty good. But played on anybody else's record player, you would have to adjust the tone control to make it sound any good at all.
        This is why the Recording Industry Association of America (RIAA) was formed; to standardize the “rollover frequency”. It’s described well in Wikipedia. Since then, anyone’s record will play on anyone else’s player, and the quality depended on the quality of the disk and the equipment it was played on.
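        The standardized curve is just arithmetic. Here is a hedged sketch using the well-known RIAA playback time constants (3180, 318, and 75 microseconds); it computes roughly how much the player boosts the bass the lathe attenuated and cuts the treble the lathe boosted:

```python
import math

# Standard RIAA playback time constants, in seconds.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f):
    """Playback (de-emphasis) gain of the RIAA curve, normalized to 0 dB at 1 kHz."""
    def mag(freq):
        w = 2 * math.pi * freq
        num = math.hypot(1, w * T2)
        den = math.hypot(1, w * T1) * math.hypot(1, w * T3)
        return num / den
    return 20 * math.log10(mag(f) / mag(1000))

print(round(riaa_playback_db(20), 1))     # deep bass boosted on playback (about +19 dB)
print(round(riaa_playback_db(20000), 1))  # treble cut on playback (about -20 dB)
```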
        However, the curve wasn’t standardized until the middle 1950s, when I was a child and high fidelity, usually called “hi-fi”, came about. Its aim was to reproduce the sound as accurately as possible, so good that a blind person couldn’t tell the difference between a recording and a live performance. They never quite got there, but they got really close. They gave up on fidelity when they invented the Compact Disk.
        An old, pre-digital myth presented itself when I was a teenager. My dad’s friend was an audiophile, and once asked me if I thought he should buy a more powerful amplifier.
        “What’s the loudest you turn it up?” I asked.
        “About three.”
        "Nope," I answered. "More watts doesn't make it sound any better, only louder."
        Some folks think the more watts, the better it will sound. It's a myth. Or that you need a lot of watts for deep bass. Also a myth; a 1974 Kenwood 777 speaker with its fifteen inch woofer had plenty of deep bass, low enough to feel, even driven by a portable monophonic cassette recorder powered by C batteries. Hardly high fidelity, but deep bass, and treble as good as cheap stereos. With a high fidelity receiver they would fool most into thinking it was live.
        Today’s “sub woofers” are magic; magic as in David Copperfield magic. They fool the brain into thinking there’s deep bass, because they transmit subsonics you can feel, making it seem like the bass is good, but play it with real high fidelity speakers on the same equipment and you’ll hear what a real woofer can do as opposed to a subwoofer. If you need a subwoofer, you don’t really have much bass at all. It’s a trick. There’s a lot of sound on that record that simply doesn’t come out of those cheap speakers that you can hear clearly with a pair of quality speakers with real woofers.
        By the 1950s the sound was good enough, if you could afford the high fidelity speakers, that the only way adult ears could tell the difference was noise; tape hiss and dust on the final record. Tape hiss was minimized and even eliminated by speed; the faster the tape passed the heads, the higher the frequency of the hiss. At about 16 IPS (inches per second) the hiss was inaudible, as it was above the range of human hearing.
        The best high fidelity home tape decks ran at 16 IPS and were very expensive. Studio recordings were made at 32 IPS, twice the speed needed for hiss removal. Fidelity can't get much higher than that unless they vastly increase the sample rate of digital recording, or get the ferrite grains on the tape smaller.
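        The speed trick is wavelength arithmetic. In this sketch the hiss "grain" wavelength on the tape is a hypothetical number chosen purely for illustration; the point is only that doubling the tape speed doubles the frequency the same grain plays back at, eventually pushing it past 20 kHz:

```python
def playback_hz(tape_speed_ips, wavelength_inches):
    # A fixed pattern on the tape plays back at speed divided by wavelength.
    return tape_speed_ips / wavelength_inches

grain = 1 / 1250   # hypothetical hiss wavelength on the tape, in inches

for ips in (4, 8, 16, 32):
    print(ips, "IPS ->", round(playback_hz(ips, grain)), "Hz")
```

With that made-up grain size, 16 IPS lands the hiss right at 20 kHz, the edge of hearing.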
        It was about this time that stereo was invented. Stereo tape was easy: simply have two separate coils in the tape head, each sending a signal when recording and receiving it when playing back. These would play both channels mixed together on monophonic tape machines. However, playback is slightly different from recording, so all but the cheapest recorders have separate heads for recording and playback.
        But how can you have two signals in a single groove of a vinyl record? How do you maintain the backwards compatibility that had existed since the Gramophone was invented? I found out in a physics class in the late 1970s.
        As mentioned earlier, the needle wiggles side to side in the same shape as the sound waves. For stereo, this motion carried both channels in the side to side motion, and a single channel in the up and down motions. These two channels are combined out of phase to remove one of the two channels from the side to side motion.
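        In other words, it's a sum and difference matrix. A minimal Python sketch (sample values invented): the lateral motion carries L+R, which is exactly what a mono needle picks up, the vertical motion carries L-R, and adding or subtracting the two recovers each channel:

```python
def encode(left, right):
    lateral = [l + r for l, r in zip(left, right)]    # what a mono player hears
    vertical = [l - r for l, r in zip(left, right)]   # the difference channel
    return lateral, vertical

def decode(lateral, vertical):
    left = [(s + d) / 2 for s, d in zip(lateral, vertical)]
    right = [(s - d) / 2 for s, d in zip(lateral, vertical)]
    return left, right

left_in = [0.25, 0.5, -0.25]
right_in = [0.5, -0.25, 0.125]
lat, vert = encode(left_in, right_in)
left_out, right_out = decode(lat, vert)
print(left_out == left_in and right_out == right_in)   # True: both channels recovered
```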
        I couldn’t remember which channel was which, so I googled, and wow! The internet is certainly full of nonsense. One site with “labs” in its name gave an explanation that was very complicated, was believable, and completely wrong, with images that could fool you.
        Even if it’s published in a bound book it may be bullshit. I have a half century old paperback titled Chariots of the Gods that “proves” that the earthen lines in Peru are evidence of extraterrestrial visitation, but it’s obvious to me from looking at them that they were ART. We artists do things like that, even though normal people don’t understand. The book was nonsense, the type of nonsense we call “conspiracy theory” in the 21st century. Way too many people think if a thing could be, that it must be. Occam’s Razor and my college professors’ teachings say they’re artworks.
        I've seen comments that claimed that in the fifties and sixties they made records with attenuated bass and treble so they would sound okay in car radios, which is patent nonsense. They weren't recorded that way; you simply can't get bass from a three inch speaker, and radios were AM only back then. AM radio and its tiny speaker were the limitation, not the music they played.
        They always strove for the highest fidelity possible in the uber-expensive stereo systems that cost thousands of dollars; if you bought a record that made your expensive stereo sound like a Fisher-Price toy, would you buy another record produced by that company?
        Car radio sucked because cars then had abysmal acoustics, and AM has never been remotely capable of high fidelity. Even analog FM falls short, due to bandwidth constraints. Car radios were all amplitude modulation (AM); frequency modulation (FM) was new and little used until the 1960s, and car radios were AM only until after 1970. AM radio has a very limited frequency response and unlimited noise; hisses and crackles from things like lightning in Tierra del Fuego that frequency modulation lacks.
        I’m not going into detail about radio broadcasting here, perhaps in a later article. But if you had a copy of an early record from Jerry Lee Lewis or Chuck Berry, or even something silly like “My Boomerang Won’t Come Back” (it’s on YouTube, I’m sure), on a high end stereo it will sound like Mr. Lewis or Mr. Berry are in the room with you, except that the dust on the record will sound like it’s raining, with an occasional hailstone.
        Now, my dad bought a furniture hi-fi stereo that he paid hundreds of dollars for after his friend introduced him to high fidelity stereo classical music back in the early 1960s. He worked over his vacation to pay for it. This was when a McDonald’s hamburger was fifteen cents and the minimum wage was a dollar (note that the burger’s price stayed the same after the minimum wage went up to a buck fifty, despite politicians’ lies that raising the minimum wage causes inflation, a non-music, non-tech debunking).
        Even Dad's expensive stereo wasn't high enough fidelity to fool you, but I bought a stereo system when I was stationed in Thailand that would; sound equipment was expensive in America because of crazily high tariffs. I would have spent ten times as much on that stereo in America, but GIs could import duty-free. A Chuck Berry record played on that stereo sounded like Chuck Berry was in the room with you, with rain from the dust and scratches.
        I don’t remember exactly when Dad bought that furniture stereo, which now sits in my garage, but it was probably a couple of years before the cassette was invented in 1963 by the Dutch. Originally for dictation, the earliest ones were far from high fidelity. The eight track was invented a year later by a consortium of companies, wanting to bring stereo music to the automobile; no car had FM or stereo then.
        The cassette used eighth inch tape; the eight track used quarter inch, which should have made the eight track superior, as should its 3 3⁄4 IPS speed, twice as fast as a cassette's 1 7⁄8 IPS.
        I never had an eight track, unless you count the player in the stereo my wife owned when I married her. I'd had cheap reel to reel portables since I was twelve, and bought a portable monophonic cassette recorder when I started working in 1968.
        One myth wasn’t a myth to begin with. In 1964, the eight track was indeed superior to the cassette, due to its size and speed, as I mentioned. But eight tracks have disadvantages, and their possible advantage wasn’t followed.
        Cassettes got better and better fidelity until factory recorded cassettes surpassed factory eight tracks; they had invented eight tracks for cars and cassettes for dictation. But cars had abysmal acoustics back then, far worse than even today. Plus, nobody but the very, very richest had air conditioning in cars, so the stereo had to compete with wind and road noise, so producers didn’t bother with fidelity.
        By 1970 the studios had started producing pre-recorded cassettes, which sounded better than pre-recorded eight tracks because eight tracks were designed for cars, but people still thought eight tracks were superior despite their terrible habit of cutting off songs in the middle. Relatively few had cassettes; most folks had eight tracks, because of the myth. I busted that myth for a buddy in the Air Force in 1971 by simply playing a cassette.
        I always thought that designating eight tracks for cars and cassettes for homes was incredibly stupid, completely backwards. You could fit a cassette in a shirt pocket, but a cartridge was four times as big as a cassette while holding exactly the same amount of music.
        The eight track got its name because it held four stereo programs (eight tracks in all) instead of one or two, taking the tape width advantage away. This allowed more tape to fit in the cartridge, but made four changes as opposed to cassette and vinyl's two. And if the tape was "eaten" (pulled from its cartridge and wrapped around inside the player), it was almost impossible to repair, unlike a cassette, which was relatively easy.
        Dolby noise reduction was developed for recording studios' master tapes in 1965, and introduced to cassettes in 1968. It worked in a similar fashion to the RIAA cutoff for vinyl: when recording, higher frequencies are greatly boosted, then reduced on playback. As the treble is attenuated, so is the hiss.
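        The principle fits in a few lines. This is only a cartoon of the idea, with made-up numbers (real Dolby worked on frequency bands with level-dependent gain), but it shows why boosting before the tape and cutting after shrinks the hiss:

```python
BOOST = 10.0   # high-frequency boost applied while recording (illustrative)

def record(hf_level):
    return hf_level * BOOST        # pre-emphasis before the tape

def tape(level, hiss=0.05):
    return level + hiss            # the tape itself adds hiss

def play(level):
    return level / BOOST           # matching de-emphasis on playback

quiet_hf = 0.2                     # a quiet high-frequency passage
with_dolby = play(tape(record(quiet_hf)))
without = tape(quiet_hf)

print(abs(with_dolby - quiet_hf))  # hiss contribution shrunk by the boost factor
print(abs(without - quiet_hf))     # full hiss lands right on the signal
```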
        A twenty year old high end cassette deck is cheap. With the best, high priced equipment, a cassette can sound as good and have almost as good a frequency response as a CD, (up to 18 kHz compared to CD’s 20 kHz), although not CD’s dynamic range, which is even better than vinyl. But a CD can’t match vinyl’s frequency response, being capped at 20 kHz because of the Nyquist limit, which I’ll discuss shortly.
        “Quadraphonics” was introduced in the early 1970s, what we call “surround sound” today. There were four separate channels, two in the front and two in the rear, and the movie studios and theaters got it entirely wrong. Those two rear channels often detract from the movie, removing the magic and bringing you back to reality when the moronic director stupidly makes everyone’s head twist around to see what made that sound behind them. The four speakers should be positioned at the four corners of the screen, so sound can move up and down as well as side to side.
        Quadraphonic stereo was easy to make with eight tracks and cassettes. You simply added two coils to the tape head, each coil feeding a separate channel. This actually improved eight tracks, since there was only one track change. Cassettes had none, because they could be recorded on one side only, since a cassette only has four tracks. That’s all that could fit on a tape that narrow, so quadraphonic cassettes had to be rewound.
        An album was a different matter. I remember that once I had a stereo album that I had to replace; I don’t remember why, but its replacement was quadraphonic and didn’t sound as good as the stereo version on my turntable. Something was missing, and I couldn’t tell what. It sounded the same, but it didn’t. At the time, I had never heard a song in quadraphonic stereo. I didn’t know why it was different until I found out in that physics class later.
        They solved the problem of how to get four channels out of a single groove with electronics. They modulated the rear channels with a 40 kHz tone and mixed it with the front channels, and on playback the front channels were limited to 20 kHz and the rear channels demodulated.
        What was missing was the supersonic harmonics, over the 20 kHz cutoff. Very few speakers back then, and none today, could reproduce them, but the pair I had went all the way to 30 kHz. You can't hear tones above 20 kHz; for most people the limit is closer to fifteen, especially older people. However, those high frequency harmonics affect the audible tones. Sound engineers can't seem to understand that, insisting that sounds higher than you can hear can't affect sounds you can, but I heard the difference with my own twenty five year old ears, and learned what was missing the next year, after the professor explained how quadraphonics worked.
        I say test it. Get thirty or forty children and teenagers and high quality headphones capable of faithfully reproducing super high frequency tones, and feed a 17 kHz sine wave to the headphones, with instructions to the kids to push a button when the sound changes. A short time after the trial starts, change the tone from a sine to a sawtooth. I say the majority will press the button right after the tones change; the engineers say I'm full of shit. Stop making assumptions like a conspiracy theorist and TEST it scientifically! Science, bitches! Aristotle was a long time ago.
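        Generating the stimulus for that experiment is trivial today. A sketch (the sample rate and duration are arbitrary choices of mine): it switches from a sine to a sawtooth of the same frequency halfway through the trial.

```python
import math

RATE = 96000       # samples per second; high enough to draw a 17 kHz wave's shape
FREQ = 17000       # the test tone
SECONDS = 2

def sine(t):
    return math.sin(2 * math.pi * FREQ * t)

def sawtooth(t):
    # Ramps from -1 to 1 over each period, then snaps back.
    return 2 * ((t * FREQ) % 1.0) - 1

samples = []
for n in range(RATE * SECONDS):
    t = n / RATE
    # Sine for the first half of the trial, sawtooth for the second.
    samples.append(sine(t) if t < SECONDS / 2 else sawtooth(t))
```

Feed that to the headphones and log the timestamps of the button presses.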
        This, I say, is what's wrong with "digital sound", which is actually a misnomer. All sound is analog; an analog of the original sound comes out of the speakers regardless of whether the recording is stored in analog or digital. Digital recording was invented when the biggest, most expensive computers on the planet were finally fast enough to record sounds up to 20 kHz, past the limits of human hearing, and cheap computers were capable of playing them back.
        The way digital sound (invented by the Dutch again) works is by periodically measuring the voltage coming out of the microphone. With a CD, the voltage is sampled 44,100 times a second and those numbers are stored on the disk. The sample rate is dictated by the Nyquist limit, named for Harry Nyquist, who described it decades earlier.
        Back to our teenager test with the sine and sawtooth wave: a 17 kHz sine wave sampled at 44,100 samples per second and cut off at 20 kHz looks the same as a sawtooth wave, as there are fewer than three samples per wave, far too few to discern between a sine and a sawtooth. The untested theory says that if you can't hear it, it can't color what you do hear. But a 17 kHz tone will audibly affect a 1000 Hz tone, even if your old ears can no longer hear a 17 kHz tone.
        Double the sample rate and that 17 kHz tone has five or six samples per wave. Multiply it by five and the differences should be striking, and digital should beat analog. But not at its present sample rate.
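        The arithmetic is simple division: samples per second over cycles per second gives samples per wave.

```python
def samples_per_cycle(sample_rate_hz, tone_hz):
    return sample_rate_hz / tone_hz

print(round(samples_per_cycle(44100, 17000), 2))    # CD rate: about 2.59 samples per wave
print(round(samples_per_cycle(88200, 17000), 2))    # doubled: about 5.19
print(round(samples_per_cycle(220500, 17000), 2))   # five times: about 12.97
```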
        The reason for the cutoff is that without the cutoff, ugly noise is introduced in a digital recording. It’s the computer’s bane, the rounding error. One too many samples in a wave changes its shape completely.
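        Quantization is the other half of the rounding story: each measured voltage gets rounded to the nearest step, and fewer bits per sample means coarser steps and bigger rounding errors. A small sketch (the sample value is arbitrary):

```python
def quantize(x, bits):
    # Snap a sample in [-1, 1] to the nearest of 2**bits evenly spaced levels.
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

x = 0.123456789
for bits in (8, 16, 24):
    err = abs(quantize(x, bits) - x)
    print(bits, "bits: rounding error", err)
```

Each extra bit halves the step size, so the error shrinks fast as the bit depth rises.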
        That is why I said earlier that digital, with enough samples per second and bits per sample, could be high fidelity and even surpass vinyl if they vastly raised the sample rate. They couldn't when the CD was invented; they certainly can now that CPUs are thousands of times faster.

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by mcgrew on Friday March 24, @01:41PM

    by mcgrew (701) <publish@mcgrewbooks.com> on Friday March 24, @01:41PM (#1297950) Homepage Journal

    S/N's code obviously changed, so that in "plain old text" it mangles a link. Here's the link to the audio:
    https://www.nps.gov/edis/learn/photosmultimedia/the-recording-archives.htm [nps.gov]

    --
    Carbon, The only element in the known universe to ever gain sentience
  • (Score: 2) by JoeMerchant on Friday March 24, @02:38PM (6 children)

    by JoeMerchant (3937) on Friday March 24, @02:38PM (#1297966)

    >the more money you spent on equipment

    I get the occasional audiophile "story" (aka advertisement press release) in my news feed. Well over half of these "ground breaking new advances in sonic quality" are clearly high price tags hung on snake oil. The other half, it's not so clear, but I'd be willing to bet it's true of most of them as well.

    Even in live orchestra performances, the sound you hear(d) is(was) highly dependent on the acoustics of the room, your seating position, whether or not there were people seated next to you - even silently holding their breath they modify the available audio paths between the instruments and your ears. So, ideally, two recording microphones are placed at an ideal location within the room (on either side of a simulated head) if you want "true reproduction" of a sonic experience. As you say, what really happens is an analogy of that, with microphone placements that give nicer sounds, without the rustling of your neighbor's clothing or program. They're going to be level adjusted, maybe delay modified, probably shaped by equalizer (which also introduces phase shifts), etc.

    Of course, "pop" music will also pitch-correct at least singer(s), and make all kinds of other un-natural modifications to the sound that makes it more appealing to the audience.

    Point being: nobody really tries for an exact replication, the "best" sound reproduction has modifications built in and what audiophiles are seeking isn't really what the original sounded like, but some analogy that they like the best. The EQ applied to most vinyl records being one of the most common examples of audiophile preferences for less than perfect reproduction of the original audio.

    Higher sample rates cost more, but actually, today, you could easily oversample to 384KHz without much increase in costs if you wanted to. For me, the main motivation of such absurdity would be to place a gentle 1st order input filter with a -3dB corner frequency somewhere around 50Khz, before a more aggressive (maybe 10th order) cutoff filter hitting -80dB closer to 180KHz. Of course, that's for younger me who could hear things that less than 1/10,000 people could hear, over 24KHz at one time. Today it seems my perceptual rolloff is closer to 12KHz. And, even with such "faithful reproduction" of the artists' output, what's the point if the artists themselves can't hear anything over 18KHz and that higher frequency "color" on the sound is lost on them when they are making the original recordings? I mean, it's cool that you the listener might hear things that the artists can't (especially artists who have been playing onstage with loud amplifiers for decades), but... is that really high frequency information really "essential" to the appreciation of the art?

    If you listen to the popular music of the 1960s and early 1970s, you'll hear a certain tonal quality to it - there's a simple explanation for that: AM radio. The songs that got popular sounded good on low bandwidth AM. There was other music made at the time that didn't get as popular, and some of that probably missed the chance to get popular because AM listeners couldn't appreciate it. FM changed that in the early 1970s and pop music quickly expanded its bandwidth. Then they tried to sell quadrophonic and the market told them that there aren't enough four eared people ready to spend double again for the extra speakers - although Dolby eventually did start successfully marketing 5.1 / 7.2 etc. some decades later. Anyway, even FM bandwidth is only around 12KHz in practice, so it sounds about the same to me today as it ever did.

    So, yeah, kinda like buying a 1080p screen back in 2008 when there was no content available at that resolution, I could build my 384KHz recording and playback system, but what content could I get on it that actually uses that full bandwidth? When I used to collect vinyl records, I would buy the record and a protective plastic sleeve, because the paper sleeves they were shipped in would leave more dust in the grooves. I would play it once, recording that playback to cassette tape, then put it away until the tape playback quality started to degrade, then I'd pull it out for a 2nd playback to make a new cassette copy. Yes, there was some loss of quality in the vinyl to cassette transfer, but not as much as the loss in quality from multiple passes of the playback needle. Then there was the periodic replacement of the playback needles, and the fact that you can't play an LP in a moving car...

    Some of the "CD harshness" bemoaned by audiophiles was just additional bandwidth that they weren't used to hearing. Yes, 44,100 is pretty tight on the 20KHz theoretical bandwidth of human hearing, especially after the application of the (necessary) anti-aliasing filters. But let's be real: almost nobody actually wants to hear music 'like I was there' - they want something idealized that can't really be experienced live - particularly with "perfectionist artists" who play and replay and replay small segments of their compositions until they get it "just right" for the release.

    --
    Україна досі не є частиною Росії Слава Україні🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
    • (Score: 0) by Anonymous Coward on Friday March 24, @05:30PM

      by Anonymous Coward on Friday March 24, @05:30PM (#1298011)

      Some of the "CD harshness" bemoaned by audiophiles was just additional bandwidth that they weren't used to hearing.

      The majority of vinyl cutting lathes from the mid 70s to 90s fed the direct analog signal to the computer, the signal fed to the cutting stylus was delayed... via a 12bit digital delay line. I would have more friends if I'd kept that factoid to myself over the years.

      Yes, 44,100 is pretty tight on the 20KHz theoretical bandwidth of human hearing, especially after the application of the (necessary) anti-aliasing filters.

      Decca had 48kHz, 3M / Soundstream 50kHz. The original Phillips proposal for CD was 43kHz, it was revised to 44.1kHz because that was the sampling frequency of the Sony PCM1610/1630 video decks which would be repurposed for CD mastering. 44.1 kHz is a stupid sampling frequency for audio.

      The difficulty of designing analog filters steep enough to remove out of band frequencies while keeping artifacts (ringing) out of the audible spectrum was huge. This is the source of the harshness complaints, and it could have been avoided by selecting a higher sample rate. Nonetheless, oversampling converters were quick to render the complaint moot for those of us in the real world. Audiophools still repeat the "harsh" myth to this day; in reality it was solved by the mid 80s.

    • (Score: 2) by mcgrew on Friday March 24, @08:29PM (3 children)

      by mcgrew (701) <publish@mcgrewbooks.com> on Friday March 24, @08:29PM (#1298045) Homepage Journal

      I get the occasional audiophile "story" (aka advertisement press release) in my news feed. Well over half of these "ground breaking new advances in sonic quality" are clearly high price tags hung on snake oil.

      With digital, they are snake oil. Of course, there have always been snake oil salesmen, like the one who told my dad's friend that a higher power amplifier would sound better.

      Point being: nobody really tries for an exact replication

      Not any more, no. That died with the birth of the CD.

      If you listen to the popular music of the 1960s and early 1970s, you'll hear a certain tonal quality to it

      If you're talking about music that was only made for the money, you may be right. That's the junk my younger sister listened to. The Beatles or the Stones? Their earliest albums sound no different than their latest albums. However, the radio stations, out of necessity, had to adjust the sound for the limitations of their media, so yes, listening to it on an AM radio won't sound much like a high fidelity system.

      Then they tried to sell quadrophonic and the market told them that there aren't enough four eared people ready to spend double again for the extra speakers - although Dolby eventually did start successfully marketing 5.1 / 7.2 etc. some decades later.

      That was because you were basically buying two $500 stereos to replace a single $1,000 stereo. The $1,000 stereo will sound almost twice as faithful as the $1,000 quad. They fooled my physics teacher with it; I showed him my Kenwoods and he changed his opinion. Surround sound was only accepted because of two things, the dropping of tariffs and the subwoofer; speakers are the most important piece of any stereo.

      almost nobody actually wants to hear music 'like I was there' - they want something idealized that can't really be experienced live

      Yes, that's what the music industry has brainwashed most into wanting. Personally, I prefer live music performed by real human beings. That's in the novel I'm working on: hundreds of years from now in this fiction, art, music, and literature are all produced by computers, and nobody knows how to program them any more (Asimov had that as a central theme of one of his short stories, where calculators had made basic arithmetic obsolete). The only famous live musician goes on the trip.

      --
      Carbon, The only element in the known universe to ever gain sentience
      • (Score: 0) by Anonymous Coward on Friday March 24, @10:24PM

        by Anonymous Coward on Friday March 24, @10:24PM (#1298062)

        However, the radio stations, out of necessity, had to adjust the sound for the limitations of their media, so yes, listening to it on an AM radio won't sound much like a high fidelity system.

        It was always mastered for the media, so early CD releases used recordings mastered for vinyl and sounded terrible. Engineers learned to tease every last drop of performance out of each format. The radio formats used pre-emphasis to improve signal to noise ratio. For tape machines, it was noise reduction - a combination of emphasis and companding. The original Dolby A would split, eq and compress to tape and reverse (expand and eq) for playback. The original 12 bit digital audio systems from the 70s all used de-emphasis. There was a standard proposal for 16bit digital audio but aside from Japanese broadcast manufacturers it was ignored. So we rapidly got 18 bit professional recordings and 16bit consumer. Now it's 24bit professional and 16bit consumer and that is fine.

        almost nobody actually wants to hear music 'like I was there' - they want something idealized that can't really be experienced live

        Yes, that's what the music industry has brainwashed most into wanting

        Yeah, 16bit audio is fine. The dynamic range between a pin dropping and a jet engine is painful. People demanding 24bit audio for home playback are quite mad. In the late 80s Public Enemy discovered they could limit their record before it hit FM broadcast stations' end-of-line limiters. Their records had smaller dynamic range and sounded bigger and badder than anything else. That trick was abused, and people forgot how good audio used to be. People complaining about adverts being louder than their movie? Inaudible dialog in movies, which we never had in 4ch surround days? Oh yeah, the medium is outpacing the message. A naturally presented hyperreality is what every audio experience should be. Standing next to a cranked Fender Twin is approaching jet engine volume; it's painful and nobody (or their neighbors) wants that in their home.

      • (Score: 2) by JoeMerchant on Friday March 24, @11:14PM (1 child)

        by JoeMerchant (3937) on Friday March 24, @11:14PM (#1298069)

        >like the one who told my dad's friend that a higher power amplifier would sound better.

        All else being equal, they will sound the same until you reach the limits of the lower powered amp. All else is rarely equal and the amps with the higher power ratings often lower their other spec limits to get that power number.

        That being said, my first receiver was a 25W/ch Akai and it was just too puny for an environment with noise (like a dorm room); the 40W/ch Onkyo that replaced it was much better and served me well for many years. Later on I started paying attention to speaker sensitivity ratings, which also make a big difference in the overall capacity of a system to put out loud, clean sound.
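        For context on why a wattage jump helps less than the numbers suggest, the decibel arithmetic is simple (a quick sketch; speaker sensitivity, rated in dB per watt, typically moves the needle far more per dollar):

```python
import math

def power_gain_db(p_new, p_old):
    """Loudness gain in dB from a change in amplifier power."""
    return 10 * math.log10(p_new / p_old)

# Going from 25 W/ch to 40 W/ch buys barely 2 dB of headroom,
# and even doubling the power is only 3 dB:
print(round(power_gain_db(40, 25), 1))  # 2.0
print(round(power_gain_db(50, 25), 1))  # 3.0
```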

        >exact replication ... died with the birth of the CD.

        Oh, I think it was always a small niche interest and probably has as many acolytes today as ever

        >I prefer live music performed by real human beings.

        I would argue that the two experiences are as different as theater and movies. Both are enjoyable, but filmed theater and recorded live music, with rare exceptions, aren't enjoyable the same way that produced recordings are.

        While I like live performances, the opportunity to enjoy playback of produced recordings is much more frequent, and I can mix that experience with many other activities, from work to driving to sailing. Yes, I also like to sail, drive, and work without music many times, but they mix well on many occasions, and music mixed with other activities isn't nearly as demanding of audio quality.

        However, I have been doing day-job work on the sailboat lately (tied up to the dock) and the little waterproof Bluetooth speakers were getting to me, so I splurged on this: https://www.marshallheadphones.com/us/en/stanmore-ii-bluetooth.html [marshallheadphones.com]

        At home we have a (hand-me-down) Bose soundbar that we play jukebox MP3s through a lot. It sounds great by itself, but back-to-back comparison with the Stanmore makes the Bose sound thin and very lacking in midrange (as most Bose are...). The Stanmore II fits nicely on the boat and is just powerful enough to fill the cabin with any volume you like. It puts the old Bluetooth mini speakers to shame at any volume level, even though they can get almost as loud as it does.

        --
        Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
        • (Score: 2) by mcgrew on Monday March 27, @07:26PM

          by mcgrew (701) <publish@mcgrewbooks.com> on Monday March 27, @07:26PM (#1298375) Homepage Journal

          All else being equal, they will sound the same until you reach the limits of the lower powered amp.

          Indeed, and a well-engineered amplifier won't get loud enough to distort; the distortion happens when it's overdriven. A quality amp will sound as smooth at 10 as at 3, just a hell of a lot louder. Of course, you can overdrive the input of any amp, but a turntable isn't going to overdrive anything.

          Oh, I think it was always a small niche interest

          Not a niche interest, niche affordability. Most people can't afford to spend the cash necessary for high fidelity, even if it still existed.

          I would argue that the two experiences are as different as theater and movies.

          I doubt many would argue with that.

          --
          Carbon, The only element in the known universe to ever gain sentience
    • (Score: 0) by Anonymous Coward on Monday March 27, @10:26AM

      by Anonymous Coward on Monday March 27, @10:26AM (#1298318)

      But let's be real: almost nobody actually wants to hear music 'like I was there' - they want something idealized that can't really be experienced live - particularly with "perfectionist artists" who play and replay and replay small segments of their compositions until they get it "just right" for the release.

      I'm fine with "like I was there" in a virtual session where they played it perfectly and the sound people got it perfect too.

      A good set of headphones can give you better stereo and "3D I was there feel" than most speakers: https://youtu.be/IUDTlvagjJA [youtu.be]

      Whether they record and mix the music or whatever to produce that effect is a different matter - they may mix it for two speakers instead of two headphones.

  • (Score: 2) by JoeMerchant on Friday March 24, @02:51PM (2 children)

    by JoeMerchant (3937) on Friday March 24, @02:51PM (#1297969)

    >They solved the problem of how to get four channels out of a single groove with electronics. They modulated the rear channels with a 40 kHz tone and mixed it with the front channels, and on playback the front channels were limited to 20 kHz and the rear channels demodulated.

    That could work... my EE professor explained it as a form of phase modulation; maybe his explanation was how they did FM broadcast quad. Anyway: stereo transmits L+R and then L-R, so if you can't receive the wider bandwidth needed for the L-R, you still have the L+R available. Now, the explanation of how that was extended to quadraphonic using phase modulation was something I grasped 35 years ago and never used in practice, so I'm unlikely to do it justice here. Again, it started with something like LF+RF+LR+RR, then LF-RF+LR-RR, and then there was a third channel that somehow allowed teasing out the four individual channels from those first two... It seemed surprising to me that a fourth channel wasn't required. Maybe one was; 35 years is a lot of forgetting time.
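    The sum/difference half of that scheme is easy to sketch. This is only the L+R / L-R matrixing described above, the part that makes stereo FM backward compatible with mono receivers; it does not attempt the phase-modulated quad extension:

```python
# FM-stereo-style sum/difference matrixing (illustrative sketch).
# A mono receiver simply plays the sum channel and ignores the rest.

def encode_stereo(left, right):
    """Matrix L/R into (sum, difference) pairs."""
    return [(l + r, l - r) for l, r in zip(left, right)]

def decode_stereo(pairs):
    """Recover L and R from the (sum, difference) pairs."""
    return ([(s + d) / 2 for s, d in pairs],
            [(s - d) / 2 for s, d in pairs])

L = [0.1, 0.5, -0.3]
R = [0.2, -0.4, 0.6]
L2, R2 = decode_stereo(encode_stereo(L, R))
assert all(abs(a - b) < 1e-9 for a, b in zip(L, L2))
assert all(abs(a - b) < 1e-9 for a, b in zip(R, R2))
```

    Two equations, two unknowns, two channels recovered; extending the same idea to four discrete channels is exactly where the matrix quad formats had to get clever (and lossy).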

    --
    Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
    • (Score: 2) by VLM on Monday March 27, @04:44PM (1 child)

      by VLM (445) on Monday March 27, @04:44PM (#1298353)

      the explanation of how that was extended to quadrophonic using phase modulation was something I grasped 35 years ago, then never used in practice, so I'm unlikely to do it justice here,

      Unfortunately there were about 4 more or less mostly incompatible systems, at least per wikipedia, and that's what probably killed the format.

      I had a heathkit amp from that era that decoded at least two quad formats, or tried to, and I don't have any idea which system it used even though I had the manual LOL (I did not assemble the kit). I had no internet access to media in the 80s so I had nothing quad to play, and the only quad vinyl I could find was my parents generation not mine...

      A large part of the trick of quad sound was that people REALLY like high separation ratios for left and right, but what if we tossed that to save money and bandwidth for quad and accepted perhaps as little as 3 dB of separation between front and back? Quad did not have high front/back separation ratios until digital surround sound around the turn of the century.

      I find that 'in the market' the idea of surround sound seems pretty dead; hi-fi audio in general seems pretty dead. I mean 'in the market' as in Joe Sixpack goes to Best Buy or shops on Amazon, not people super into hi-fi, who will always be into high technical quality music. I'd blame some of that on shitty MP3 encoding and some on the 'loudness wars', where there's no point mixing recordings or reproducing them to better than $3-earbud quality. I would propose that the highest SNR the general public was exposed to was in the 80s, and it's been in fast decline since.

      • (Score: 2) by mcgrew on Monday March 27, @07:16PM

        by mcgrew (701) <publish@mcgrewbooks.com> on Monday March 27, @07:16PM (#1298371) Homepage Journal

        Unfortunately there were about 4 more or less mostly incompatible systems, at least per wikipedia, and that's what probably killed the format.

        What strangled the format in its infancy was cost. A thousand dollar stereo sounded almost twice as good as a thousand dollar quad from the same manufacturer.

        Unfortunately there were about 4 more or less mostly incompatible systems

        Had to look at wikipedia to see what you were referring to. Derived (2–2–4) formats aren't really quad, as they really don't have four channels. It's trickery, used to convert Elvis Presley to quad from a stereo master. Matrix (4–2–4) formats were recorded originally as four discrete channels on tape, mixed down from a 16 or 32 channel master.

        I had a heathkit amp from that era that decoded at least two quad formats, or tried to...

        I built two Heathkits, a ham radio receiver and a guitar amplifier. Both the instructions, illustrated with drawings, and the schematic diagram were wrong for the amplifier. This was before polarized wall plugs, and its power switch had "off" as the center position, with up and down opposite polarity. The way the instructions had it, you still had to plug it in right or it would hum. Only 16, I was pretty proud of myself for the hack (or repair, or whatever you want to call it).

        Like I said in the article, high fidelity died with the birth of the CD, which simply doesn't have a good enough frequency response, being cut at 20 kHz. You can faithfully record a dog whistle on a fully analog LP.

        The loudness wars weren't possible with analog because LPs and cassettes don't have enough dynamic range for loudness "normalization". And you couldn't have the commercials twice as loud as the program in analog TV for the same reason.

        I would propose that the highest SNR the general public would be exposed to was in the 80s and its been in fast decline since.

        That's been mostly as I've seen it through the years.

        --
        Carbon, The only element in the known universe to ever gain sentience
  • (Score: 3, Informative) by DannyB on Friday March 24, @03:01PM (2 children)

    by DannyB (5839) Subscriber Badge on Friday March 24, @03:01PM (#1297972) Journal

    Have you ever noticed how some songs have hidden messages if you play them backwards? Well, some people believe that these hidden messages are actually intentional, and are part of a sinister plot by the government to control our minds.

    The government has been secretly working with major record labels for decades to insert subliminal messages into popular songs. These messages are designed to be undetectable by the conscious mind, but are picked up by the subconscious when the song is played backwards.

    These messages are being used to brainwash the public into being more compliant and obedient to authority. With the advances in digital audio technology the government was able to encrypt its subliminal messages into the digital signal so that it was no longer necessary to play the song backwards. The recording industry was compliant with this because they could also embed secret alien DRM technology into the digital signals along with the government messages. These secret messages using digital audio technology are being transmitted through radio stations and streaming services around the world.

    Some people insist that vacuum tube analog equipment sounds better. There is in fact, truth to this, because there is no digital signal to encrypt secret messages. Thus it is necessary to propagate falsehoods that digital audio sounds just as good as analog, if not better. Furthermore, copies of digital audio do not degrade the government's secret messages. Only teenagers and young children can distinguish the difference if you can get them away from their phones. That is why the phones are so addictive. TikTok must not be allowed to interfere as a matter of national security.

    Most people are completely unaware of what is happening in secret and would dismiss this as nothing more than a bizarre urban legend. But for those who can program Java, every song is a potential tool of mind control, and the government's influence permeates the world of music.

    --
    How often should I have my memory checked? I used to know but...
    • (Score: 2) by mcgrew on Friday March 24, @08:43PM (1 child)

      by mcgrew (701) <publish@mcgrewbooks.com> on Friday March 24, @08:43PM (#1298046) Homepage Journal

      Have you ever noticed how some songs have hidden messages if you play them backwards?

      Yes, that was a big deal a few decades ago. Some were obviously accidental, like the spooky Satanic verse in Stairway To Heaven. Aleister Crowley's "autohagiography" (Crowley being the subject of Ozzy Osbourne's Mr. Crowley and Diary of a Madman) states that a prayer to God said backwards is a prayer to Satan. I read that book; it was creepy.

      Then there are ones that were obviously on purpose, backwards when the album plays forwards, like ELO's Fire On High, where it starts with someone saying "the music is reversible, but time is not. Turn back! Turn back!" backwards.

      And possibly the Beatles' Helter Skelter, that sings "I like smack" when played backwards.

      The government has been secretly working with major record labels for decades to insert subliminal messages into popular songs.

      If it's been done secretly, how do you know about it? Sounds like bullshit to me.

      --
      Carbon, The only element in the known universe to ever gain sentience
      • (Score: 3, Funny) by DannyB on Sunday March 26, @03:16PM

        by DannyB (5839) Subscriber Badge on Sunday March 26, @03:16PM (#1298234) Journal

        If bit's been done secretly, how do you know about it?

        One time in the back of a record store my friend saw them pouring a 5 gallon plastic bucket full of subliminal messages right into the records.

        --
        How often should I have my memory checked? I used to know but...
  • (Score: 3, Informative) by Tork on Friday March 24, @04:20PM

    by Tork (3914) on Friday March 24, @04:20PM (#1297988)
    This was a fun read, thank you. :)
    --
    Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
  • (Score: 2, Informative) by pTamok on Friday March 24, @04:56PM

    by pTamok (3042) on Friday March 24, @04:56PM (#1298000)
  • (Score: 2, Informative) by Anonymous Coward on Saturday March 25, @10:19PM (2 children)

    by Anonymous Coward on Saturday March 25, @10:19PM (#1298152)

    > change the tone from a sine to a sawtooth

    If anyone does this experiment, please control for the RMS level--sorting out how to account for the harmonics above the audible range may take some thinking. At any rate, the same peak amplitude sine and sawtooth contain different amounts of energy.
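    That peak-vs-RMS point is easy to verify numerically: a sawtooth with the same peak amplitude as a sine carries about 1.76 dB less RMS level (1/√3 vs 1/√2). A quick sketch (sampling one cycle of each; band-limiting of the sawtooth's upper harmonics is ignored here):

```python
import math

N = 100_000
# One cycle of each waveform, both with peak amplitude 1.0
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]
saw  = [2 * (n / N) - 1 for n in range(N)]

def rms(x):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(v * v for v in x) / len(x))

print(rms(sine))  # ~0.7071 (1/sqrt(2))
print(rms(saw))   # ~0.5774 (1/sqrt(3))
print(20 * math.log10(rms(sine) / rms(saw)))  # ~1.76 dB difference
```

    So if you match the two test tones by peak on a scope, the sine is already playing almost 2 dB hotter, which per the story above is more than enough to rig the listening test.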

    Here's my story -- a friend set up a comparison for power amps, from a 15 w/channel Technics receiver to a Phase Linear 700 with some nice tube amps (Mac, Dynaco) in the mix as well. A motor-starting contactor was rigged to switch the various amps to the same set of studio monitor speakers (UREI, well known at the time) with a push button. For source material there was a high quality turntable and a few of the excellent Sheffield Labs direct-to-disk LPs (this was before CDs were available).

    tl;dr -- Below clipping (modest volumes) all the amps sounded the same...but only if the outputs were matched to about 0.1 dB from one amp to the next. A lab grade RMS voltmeter was used with a pink/white (forgot which) noise source for the level matching.

    Yes, I know 1 dB is generally defined as the minimum audible change in level. Based on what we concluded that day, even if you couldn't hear that one amplifier was louder than another, one that was slightly louder (per the RMS voltmeter) would sound "better" in some hard-to-describe way. The jury (several people that worked in professional sound reinforcement) was consistent--the amplifier that was playing slightly louder always won. The only way to get a fair comparison was to level match to better than 0.1 dB.
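    As a sanity check on how tight that matching tolerance is, the dB-to-voltage arithmetic is straightforward: 0.1 dB is only about a 1.2% voltage difference, while the "just noticeable" 1 dB is about 12%.

```python
def db_to_voltage_ratio(db):
    """Convert a level difference in dB to a voltage (amplitude) ratio."""
    return 10 ** (db / 20)

print(db_to_voltage_ratio(0.1))  # ~1.0116 -> about 1.2% difference
print(db_to_voltage_ratio(1.0))  # ~1.122  -> about 12% difference
```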

    Of course, as soon as the volume went up, the little amps started clipping and sounded terrible...

    • (Score: 1) by pTamok on Wednesday March 29, @06:07PM (1 child)

      by pTamok (3042) on Wednesday March 29, @06:07PM (#1298681)

      "Greater volume sounds better" is a trick known to 'Hi Fi' salesmen everywhere. It is amazing how people subconsciously pick up the difference. Emphasizing the even-order harmonic distortion to sound 'warmer' is another good one, which is why many people like the distortions introduced by valve/tube amplifiers.
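      The even-order "warmth" effect is easy to demonstrate numerically: an asymmetric transfer curve applied to a pure tone creates a second harmonic but no third. A rough sketch using a naive DFT (the curve y = x + 0.2x² is arbitrary, chosen only for illustration; a real tube stage's curve is more complicated):

```python
import math, cmath

N = 1024
f = 8  # test-tone frequency in cycles per analysis window
sine = [math.sin(2 * math.pi * f * n / N) for n in range(N)]

def harmonic_level(x, k):
    """Magnitude of the k-th DFT bin (naive DFT, fine for a demo)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * n / N)
                   for n, v in enumerate(x))) / N

# Asymmetric ("tube-like") transfer curve adds even-order harmonics:
# sin^2 contains only DC and the second harmonic, nothing odd-order.
warm = [v + 0.2 * v * v for v in sine]
print(harmonic_level(warm, f))      # fundamental, ~0.5
print(harmonic_level(warm, 2 * f))  # second harmonic, ~0.05
print(harmonic_level(warm, 3 * f))  # third harmonic, ~0
```

      A symmetric curve like y = x - kx³ does the opposite: it generates only odd harmonics, which is the harsher-sounding clipping typical of solid state stages driven hard.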

      • (Score: 3, Informative) by mcgrew on Wednesday March 29, @11:29PM

        by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday March 29, @11:29PM (#1298738) Homepage Journal

        Indeed, the difference in how tube amps and transistor amps overdrive is why local bands will have a small ten-watt tube amp cranked to distortion, with a good microphone feeding a high-powered solid state amp. You can hear the difference as well as see it on an oscilloscope trace.

        --
        Carbon, The only element in the known universe to ever gain sentience
(1)