Analog Myths

Edison's phonograph

Every now and then I see something on the internet, usually at Soylent News, about pre-digital music recording. It's almost always incorrect; something someone just thought up or heard from someone else. Some of these people are actually pretty knowledgeable about most technologies.
First, one needs to know the difference between analog and digital. Of course, one is computer code and the other is an analogy, but when it comes to analog music, the more money you spent on equipment, especially speakers but all of it, the more it sounded like real instruments rather than an analogy of them. This was called High Fidelity when it was actually accurate enough that you couldn’t tell a recorded timpani from a real drum. With digital equipment it doesn’t matter as much. There are tricks that have been developed in the last few decades to fool your ears; no, actually, to fool your brain.
So I thought I'd start at the beginning, with the birth of recorded sound and dispel all the falsehoods while I'm at it; or at least, the ones I’ve heard.
I have personally lived through the last seventy years of innovation and change. When I took a physics class on sound and its recording, digital sound recording had yet to be invented.
Ever since the 1940s, and possibly earlier, all albums were copies. One difference between analog and digital is that an analog signal degrades with every child copy, but a copied digital signal is identical to its parent, because it is no longer a signal; it’s a series of numbers, measured voltages. In analog, as the signal from the microphone gets stronger, the voltage feeding the tape head gets stronger.
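The difference is easy to demonstrate. Here’s a toy sketch in Python (the sample values and noise level are made up for illustration, not real measurements): each analog dub adds a little noise, while a digital copy just duplicates the numbers.

```python
import random

def analog_copy(signal, noise_level=0.01):
    """Each analog dub adds a little random noise to every sample."""
    return [s + random.uniform(-noise_level, noise_level) for s in signal]

def digital_copy(samples):
    """A digital copy just duplicates the numbers, bit for bit."""
    return list(samples)

master = [0.5, -0.25, 0.75, 0.0]

# Ten generations of analog dubbing drift away from the master...
analog = master
for _ in range(10):
    analog = analog_copy(analog)

# ...while ten generations of digital copying stay identical.
digital = master
for _ in range(10):
    digital = digital_copy(digital)
```

After ten generations the digital list still equals the master exactly; the analog one never does.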
In 1877, a century before I attended that class, Thomas Edison invented the phonograph, named with the Greek for “sound writing”, writing with sounds. The first recordings were on tin foil. In 1896 and 1897 he mass produced his phonograph players and wax cylinders. You can hear some of them here at the National Park Service website:
At one point he developed a talking doll, with a phonograph inside. It was a commercial flop; women had to scream into the recorder, as electronics wouldn't exist until 1904, when Fleming developed the vacuum tube (called the “valve” in Britain; both names are accurate), building on the “Edison effect” Edison’s labs had observed. They had one of the dolls on the TV show Innovation Nation. I imagine they would have scared the hell out of little girls.
In 1900 he patented a method of making his cylinders out of celluloid, one of the first hard plastics. Cylinders had been produced in France since 1893, but were not mass produced as Edison’s 1900 cylinders were. Dictaphones used wax cylinders until 1947.
Alexander Graham Bell is often credited with inventing the gramophone, probably because of its name, but it was patented in 1887 by Emile Berliner, who named it. He manufactured the disks in 1889. He came up with the lateral cut, where the needle moved side to side rather than up and down as with Edison’s phonograph.
The first records were 12.5 cm (about five inches) across; records have since been recorded at 8 1⁄3, 16 2⁄3, 33 1⁄3, 45, and 78 RPM. Berliner's associate, Eldridge R. Johnson, improved it with a spring driven motor and a speed governor, making the sound as good as Edison's cylinders. However, it would be a few decades before high fidelity.
The first records ran at “about 70 RPM”; 78 RPM became the de facto speed around 1912, and in 1925 all the companies standardized on it. Modern turntables still play them.
I have seen comments saying you can’t do deep bass in vinyl because the needle would jump out of the groove, which is one of those things that’s partly right while still being completely wrong.
This was solved by a “rollover frequency”: below it, the bass was attenuated when the record was cut, then restored to full volume on playback. However, it created another problem: the records you produced, played on a record player you produced, sounded pretty good. But played on anybody else’s record player, you would have to fiddle with the tone control to make them sound any good at all.
This is why the Recording Industry Association of America (RIAA) was formed: to standardize the “rollover frequency”. It’s described well in Wikipedia. Since then, anyone’s record will play on anyone else’s player, and the quality depends only on the quality of the disk and the equipment it’s played on.
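The idea behind the rollover, and the later RIAA curve, can be sketched as a toy two-band model in Python (the attenuation factor here is a made-up number, not the real RIAA specification, which defines a full frequency response curve):

```python
ROLLOVER_CUT = 0.1  # hypothetical bass attenuation; not the real RIAA figure

def record_eq(bass, treble):
    # Attenuate the bass before cutting the disc, so big low-frequency
    # swings don't throw the cutting stylus out of the groove.
    return bass * ROLLOVER_CUT, treble

def playback_eq(bass, treble):
    # The phono preamp applies the exact inverse curve, restoring the bass.
    return bass / ROLLOVER_CUT, treble

restored = playback_eq(*record_eq(1.0, 1.0))
```

As long as everyone uses the same curve, the round trip restores the original. The pre-RIAA chaos was every company picking its own value of `ROLLOVER_CUT`.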
However, the curve wasn’t standardized until the mid-1950s, when I was a child and high fidelity, usually called “hi-fi”, came about. Its aim was to reproduce sound as accurately as possible, so good that a blind person couldn’t tell the difference between a recording and a live performance. They never quite got there, but they got really close. They gave up on fidelity when they invented the Compact Disk.
An old, pre-digital myth presented itself when I was a teenager. My dad’s friend was an audiophile, and once asked me if I thought he should buy a more powerful amplifier.
“What’s the loudest you turn it up?” I asked.
“About three.”
“Nope,” I answered. “More watts doesn’t make it sound any better, only louder.”
Some folks think the more watts, the better it will sound. It’s a myth. So is the idea that you need a lot of watts for deep bass: a 1974 Kenwood 777 speaker with its fifteen inch woofer had plenty of deep bass, low enough to feel, even driven by a portable monophonic cassette recorder powered by C batteries. Hardly high fidelity, but deep bass, and treble as good as cheap stereos. With a high fidelity receiver they would fool some into thinking it was live.
Today’s “subwoofers” are magic; magic as in David Copperfield magic. They fool the brain into thinking there’s deep bass because they transmit subsonics you can feel, making it seem like the bass is good. But play the same equipment through real high fidelity speakers and you’ll hear what a real woofer can do as opposed to a subwoofer. If you need a subwoofer, you don’t really have much bass at all. It’s a trick. There’s a lot of sound on that record that simply doesn’t come out of cheap speakers, sound you can hear clearly through a pair of quality speakers with real woofers.
By the 1950s the sound was good enough, if you could afford the high fidelity speakers, that the only way adult ears could tell the difference was noise: tape hiss and dust on the final record. Tape hiss was minimized and even eliminated by speed; the faster the tape passed the heads, the higher the frequency of the hiss. At about 15 IPS (inches per second) the hiss was inaudible, as it was above the range of human hearing.
The best high fidelity home tape decks ran at 15 IPS, and were very expensive. Studio recordings were made at 30 IPS, twice the speed needed to remove hiss. Fidelity can’t get much higher than that unless they vastly increase the sample rate of digital recording, or make the ferrite grains on the tape smaller.
It was about this time that stereo was invented. Stereo tape was easy: simply have two separate coils in the tape head, each sending a signal when recording and receiving one when playing back. These tapes would play both channels mixed together on monophonic machines. However, playback is slightly different from recording, so all but the cheapest recorders have separate heads for recording and playback.
But how can you have two signals in a single groove of a vinyl record? How do you maintain the backwards compatibility that had existed since the Gramophone was invented? I found out in a physics class in the late 1970s.
As mentioned earlier, the needle wiggles side to side in the same shape as the sound waves. For stereo, the side to side motion carries the sum of the two channels, and the up and down motion carries their difference. Adding and subtracting the two recovered signals separates them back into left and right, while a mono player, which reads only the side to side motion, simply gets both channels mixed together.
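The arithmetic is simple sum-and-difference coding. A minimal Python sketch (the function names are mine, invented for illustration):

```python
def groove_encode(left, right):
    # Side to side (lateral) motion carries the sum: exactly what a
    # mono player expects, preserving backward compatibility.
    lateral = left + right
    # Up and down (vertical) motion carries the difference.
    vertical = left - right
    return lateral, vertical

def groove_decode(lateral, vertical):
    # Adding and subtracting the two motions recovers the channels.
    left = (lateral + vertical) / 2
    right = (lateral - vertical) / 2
    return left, right

left, right = groove_decode(*groove_encode(0.5, 0.25))
```

A mono cartridge simply reads `lateral`, which is both channels mixed; a stereo cartridge reads both motions and un-mixes them.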
I couldn’t remember which channel was which, so I googled, and wow! The internet is certainly full of nonsense. One site with “labs” in its name gave an explanation that was very complicated, was believable, and completely wrong, with images that could fool you.
Even if it’s published in a bound book it may be bullshit. I have a half century old paperback titled Chariots of the Gods that “proves” that the earthen lines in Peru are evidence of extraterrestrial visitation, but it’s obvious to me from looking at them that they were ART. We artists do things like that, even though normal people don’t understand. The book was nonsense, the type of nonsense we call “conspiracy theory” in the 21st century. Way too many people think if a thing could be, that it must be. Occam’s Razor and my college professors’ teachings say they’re artworks.
I’ve seen comments claiming that in the fifties and sixties they made records with attenuated bass and treble so they would sound okay in car radios, which is patent nonsense. They weren’t recorded with attenuated bass and treble; you simply can’t get bass from a three inch speaker, and radios were AM only back then. AM radio and its tiny speaker were the limitation, not the music they played.
They always strove for the highest fidelity possible in the uber-expensive stereo systems that cost thousands of dollars; if you bought a record that made your expensive stereo sound like a Fisher-Price toy, would you buy another record produced by that company?
Car radio sucked because cars then had abysmal acoustics, and AM has never been remotely capable of high fidelity. Even FM falls short, due to bandwidth constraints. Car radios were all amplitude modulation (AM); frequency modulation (FM) was new and little used until the 1960s, and car radios remained AM only until after 1970. AM radio has a very limited frequency response and unlimited noise: hisses and crackles from things like lightning in Tierra del Fuego that frequency modulation lacks.
I’m not going into detail about radio broadcasting here; perhaps in a later article. But if you had a copy of an early record from Jerry Lee Lewis or Chuck Berry, or even something silly like “My Boomerang Won’t Come Back” (it’s on YouTube, I’m sure), on a high end stereo it would sound like Mr. Lewis or Mr. Berry were in the room with you, except that the dust on the record would sound like it’s raining, with an occasional hailstone.
Now, my dad bought a furniture hi-fi stereo that he paid hundreds of dollars for after his friend introduced him to high fidelity stereo classical music back in the early 1960s. He worked over his vacation to pay for it. This was when a McDonald’s hamburger was fifteen cents and the minimum wage was a dollar (note that the burger’s price stayed the same after the minimum wage went up to a buck fifty, despite politicians’ lies that raising the minimum wage causes inflation, a non-music, non-tech debunking).
Even Dad’s expensive stereo wasn’t high enough fidelity to fool you, but I bought a stereo system when I was stationed in Thailand that would; sound equipment was expensive in America because of crazily high tariffs. I would have spent ten times as much on that stereo in America, but GIs could import duty-free. A Chuck Berry record played on that stereo sounded like Chuck Berry was in the room with you, with rain from the dust and scratches.
I don’t remember exactly when Dad bought that furniture stereo, which now sits in my garage, but it was probably a couple of years before the cassette was invented in 1963 by the Dutch. Originally for dictation, the earliest ones were far from high fidelity. The eight track was invented a year later by a consortium of companies, wanting to bring stereo music to the automobile; no car had FM or stereo then.
The cassette used eighth inch tape, the eight track quarter inch, which should have made the eight track superior, as should its 3 3⁄4 IPS speed, twice as fast as a cassette’s 1 7⁄8.
I never had an eight track, unless you count the player in the stereo my wife owned when I married her. I’d had cheap reel to reel portables since I was twelve, and bought a portable monophonic cassette recorder when I started working in 1968.
One myth wasn’t a myth to begin with. In 1964, the eight track was indeed superior to the cassette, due to its size and speed, as I mentioned. But eight tracks have disadvantages, and their possible advantage wasn’t followed.
Cassettes got better and better fidelity until factory recorded cassettes surpassed factory eight tracks; they had invented eight tracks for cars and cassettes for dictation. But cars had abysmal acoustics back then, far worse than even today. Plus, nobody but the very, very richest had air conditioning in cars, so the stereo had to compete with wind and road noise; producers didn’t bother with fidelity.
By 1970 the studios had started producing pre-recorded cassettes, which sounded better than pre-recorded eight tracks because eight tracks were designed for cars, but people still thought eight tracks were superior despite their terrible habit of cutting off songs in the middle. Relatively few had cassettes; most folks had eight tracks, because of the myth. I busted that myth for a buddy in the Air Force in 1971 by simply playing a cassette.
I always thought that designating eight tracks for cars and cassettes for homes was incredibly stupid, completely backwards. You could fit a cassette in a shirt pocket, but a cartridge was four times as big while holding exactly the same amount of music.
The eight track was so called because it carried four stereo tracks (eight tracks in all) instead of one or two, which took away its wider tape’s advantage by making each track narrower. This allowed more tape to fit in the cartridge, but made four program changes as opposed to cassette and vinyl’s two. And if the tape was “eaten”, pulled from its cartridge and wrapped around inside the player, it was almost impossible to repair, unlike a cassette, which was relatively easy.
Dolby noise reduction was developed for recording studios’ master tapes in 1965, and introduced to cassettes in 1968. It worked in a fashion similar to the RIAA curve for vinyl: when recording, the quieter high frequencies are greatly boosted, then reduced on playback. When the treble is cut back down, the hiss is cut with it.
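A crude one-number model of the idea in Python (real Dolby is a level-dependent filter acting on a frequency band; the figures here are invented for illustration):

```python
BOOST = 10.0  # hypothetical high-frequency boost applied while recording

def encode(quiet_treble):
    return quiet_treble * BOOST   # boost quiet treble before it hits tape

def tape(signal, hiss=0.05):
    return signal + hiss          # the tape adds hiss at a roughly fixed level

def decode(signal):
    return signal / BOOST         # playback cut restores the music level
                                  # and shrinks the hiss along with it

music = 0.01
with_nr = decode(tape(encode(music)))   # boost, record, cut
without_nr = tape(music)                # record with no noise reduction
```

With the boost-and-cut round trip, the hiss the tape adds is divided by the same factor as the treble, so it comes out ten times quieter than it would without noise reduction.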
A twenty year old high end cassette deck is cheap today. With the best, high priced equipment, a cassette can sound as good as a CD and have almost as good a frequency response (up to 18 kHz compared to CD’s 20 kHz), although not CD’s dynamic range, which is even better than vinyl’s. But a CD can’t match vinyl’s frequency response, being capped at 20 kHz by the Nyquist limit, which I’ll discuss shortly.
“Quadraphonics” was introduced in the early 1970s, what we call “surround sound” today. There were four separate channels, two in the front and two in the rear, and the movie studios and theaters got it entirely wrong. Those two rear channels often detract from the movie, removing the magic and bringing you back to reality when the moronic director stupidly makes everyone’s head twist around to see what made that sound behind them. The four speakers should be positioned at the four corners of the screen, so sound can move up and down as well as side to side.
Quadraphonic sound was easy to make with eight tracks and cassettes: you simply added two coils to the tape head, each coil feeding a separate channel. This actually improved eight tracks, since there was only one program change. Quadraphonic cassettes had none, because they could be recorded in one direction only; a cassette has just four tracks, all that could fit on tape that narrow, so the cassette had to be rewound instead of flipped.
An album was a different matter. I remember that once I had a stereo album that I had to replace; I don’t remember why, but its replacement was quadraphonic and didn’t sound as good as the stereo version on my turntable. Something was missing, and I couldn’t tell what. It sounded the same, but it didn’t. At the time, I had never heard a song in quadraphonic stereo. I didn’t know why it was different until I found out in that physics class later.
They solved the problem of getting four channels out of a single groove with electronics. The rear channels were modulated onto an ultrasonic carrier (30 kHz, in the CD-4 system) and mixed with the front channels; on playback, the front channels were low-passed at 20 kHz and the rear channels demodulated.
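You can sketch the trick as frequency bookkeeping in Python. Here a “signal” is just a map of frequency to amplitude; the real CD-4 system frequency-modulated difference signals onto the carrier, so this only shows the broad idea of hiding the rear channels above audibility:

```python
CARRIER = 30_000  # ultrasonic carrier, well above the 20 kHz audible band

def cut_groove(front, rear):
    # Mix the rear material into the groove, shifted up past audibility.
    groove = dict(front)
    for freq, amp in rear.items():
        groove[CARRIER + freq] = amp
    return groove

def play_groove(groove):
    # Front channels: everything that survives the 20 kHz low-pass filter.
    front = {f: a for f, a in groove.items() if f <= 20_000}
    # Rear channels: demodulate everything above it back down to baseband.
    rear = {f - CARRIER: a for f, a in groove.items() if f > 20_000}
    return front, rear

front, rear = play_groove(cut_groove({440: 1.0, 1000: 0.5}, {300: 0.8}))
```

An ordinary stereo player just never notices the ultrasonic content; a quadraphonic player splits the groove back into front and rear.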
What was missing was the ultrasonic harmonics, above the 20 kHz cutoff. Very few speakers back then, and none today, are good enough to reproduce the difference, but the pair I had went all the way to 30 kHz. You can’t hear tones above 20 kHz; for most people, especially older people, the limit is closer to fifteen. However, those high frequency harmonics affect the audible tones, and sound engineers can’t seem to understand that, insisting that sounds higher than you can hear can’t affect sounds you can. But I heard the difference with my own twenty five year old ears, and learned what was missing the next year, after the professor explained how quadraphonics worked.
I say test it. Get thirty or forty children and teenagers, and high quality headphones capable of faithfully reproducing very high frequency tones, and feed a 17 kHz sine wave to the headphones, with instructions to the kids to push a button when the sound changes. A short while into the trial, change the tone from a sine to a sawtooth. I say the majority will press the button right after the tone changes; the engineers say I’m full of shit. Stop making assumptions like a conspiracy theorist and TEST it scientifically! Science, bitches! Aristotle was a long time ago.
This, I say, is what’s wrong with “digital sound”, which is actually a misnomer. All sound is analog; an analog of the original sound comes out of the speakers regardless of whether the recording is stored in analog or digital form. Digital recording was invented when the biggest, most expensive computers on the planet were finally fast enough to record sounds up to 20 kHz, at the limits of human hearing, and cheap computers were capable of playing them back.
The way digital sound, invented by the Dutch again, works is by periodically measuring the voltage coming out of the microphone. With a CD, the voltage is sampled 44,100 times a second and those numbers are stored on the disk. That sample rate was dictated by the Nyquist limit, named for Harry Nyquist, who discovered it: you need more than two samples per cycle of the highest frequency you want to capture.
Back to our teenager test with the sine and the sawtooth: a 17 kHz sine wave sampled 44,100 times a second and cut off at 20 kHz is the same as a 17 kHz sawtooth, as there are fewer than three samples per wave, far too few to discern between a sine and a sawtooth. The untested theory says that if you can’t hear it, it can’t color what you do hear. But I say a 17 kHz tone will audibly affect a 1,000 Hz tone, even if your old ears can no longer hear 17 kHz by itself.
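The arithmetic backs this up. A sawtooth is a sine at the fundamental plus harmonics at two, three, four times that frequency; at 17 kHz, every harmonic lands above the 20 kHz cutoff, so what survives the filter is just the sine. A quick check in Python:

```python
F0 = 17_000        # fundamental of our test tone
CUTOFF = 20_000    # CD-era anti-alias / reconstruction filter

# A sawtooth wave contains harmonics at 2x, 3x, 4x... the fundamental.
harmonics = [F0 * n for n in range(1, 10)]
survivors = [h for h in harmonics if h <= CUTOFF]
# Only the 17 kHz fundamental survives the filter, so the band-limited
# "sawtooth" that reaches your ears is indistinguishable from a sine.

samples_per_cycle = 44_100 / F0   # about 2.6 samples per wave
```

With barely two and a half samples per wave, the CD cannot store any of the shape that makes a sawtooth a sawtooth.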
Double the sample rate and that 17 kHz tone gets more than five samples per wave. Multiply it by five and the differences should be striking, and digital should beat analog. But not at its present sample rate.
The reason for the cutoff is that without it, ugly noise is introduced into a digital recording: any frequency above half the sample rate folds back down into the audible band as a false lower tone, changing the sound completely.
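That folding is called aliasing, and it’s easy to show with a few lines of Python: a tone above half the sample rate produces exactly the same samples as a lower tone reflected back below it.

```python
import math

FS = 44_100  # CD sample rate; the Nyquist limit is FS / 2 = 22,050 Hz

def sample(freq, n):
    """The nth sample of a sine wave at freq, taken FS times per second."""
    return math.sin(2 * math.pi * freq * n / FS)

# A 30 kHz tone is above the Nyquist limit. Sampled at 44.1 kHz it yields
# the same samples as a phase-inverted 14.1 kHz tone: a false audible note.
alias = FS - 30_000   # 14,100 Hz

for n in range(100):
    assert abs(sample(30_000, n) + sample(alias, n)) < 1e-6
```

Without the low-pass filter ahead of the converter, that 30 kHz content would land in the recording as a spurious 14.1 kHz tone, which is why the cutoff is not optional.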
That is why I said earlier that digital sample rates and bits per sample could be high fidelity, and even surpass vinyl, if they vastly raised the sample rate. They couldn’t when the CD was invented; they certainly can now that CPUs are thousands of times faster.


You can read or download my books for free here. No ads, no login, just free books.