Before 1834, pitch was more of a suggestion than a rule. The invention of the tuning fork in 1711 by John Shore was arguably the first step toward the idea of nailing down exactly what we mean when we say C#, but because it was invented, rather selfishly, for the purpose of keeping the royal British trumpeters in order, a rigid standardization was probably not at the forefront of its creator's mind (Pinch 202). Pitch is expressed in hertz, a measure of the number of vibration cycles a wave goes through per second. Human hearing ranges, at its extremes, from 20Hz to 20,000Hz (or 20kHz, though hearing that high is rare; tests of dubious accuracy are available on YouTube, and my limit was 16kHz). Western musical pitch is expressed using the hertz measurement for "middle A," sometimes rendered a', or the fifth A on a standard 88-key piano.

During the 123-year period in which there existed a simple tool to ascertain pitch but no consistency in its application, pitches varied wildly. At the beginning of the 19th century, tuning was not even constant within cities: in the mid-1820s, the Dresden Opera tuned to 435Hz, while the Dresden Catholic Church organ bumped along at 415Hz, making for rather dull services, no doubt (Pinch 207). It wasn't until 1834 that a German musician and scientist named Heinrich Scheibler decided upon 440, because this was "the middle of the extremes between which the pitch of Viennese pianos rises and falls due to change in temperature," which is, of course, a completely arbitrary decision (Pinch 208). Naturally, this pitch stuck, and A440 is now an immutable truth. Of course, one need not be perfectly on pitch to be in tune, but the introduction of a standardized system through machines, even those as simple as tuning forks, has a definite and increasing hold on music, as Theodor Adorno predicted, both in "musical reproduction [and] the actual production and composition of music itself" (Adorno, in Braun's words, 9). Did music really become "machine-ridden" from "the moment that man ceased to make music with his voice alone," as Jacques Barzun would have it (Braun 9)?
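
A brief aside on the arithmetic, since it shows just how much hangs on that one arbitrary number: in twelve-tone equal temperament, every other pitch is derived from the reference by the formula f = 440 × 2^(n/12), where n is the number of semitones above or below middle A. The little Python sketch below is mine, not anything from the sources, and it assumes equal temperament rather than any historical tuning; it simply shows a few familiar notes falling out of the A440 anchor.

```python
# Equal-temperament frequencies relative to the A440 standard.
# Each semitone step multiplies the frequency by the twelfth root of two.

A4 = 440.0  # the standardized "middle A," in Hz


def frequency(semitones_from_a4: int) -> float:
    """Frequency of the note n semitones above (or below) middle A."""
    return A4 * 2 ** (semitones_from_a4 / 12)


if __name__ == "__main__":
    for name, n in [("A3", -12), ("A4 (middle A)", 0), ("C5", 3), ("C#5", 4), ("A5", 12)]:
        print(f"{name:>14}: {frequency(n):7.2f} Hz")
```

Retune the anchor to the Dresden Opera's 435 and every number on that list slides down by roughly 20 cents, a fifth of a semitone, which is precisely the sort of drift Scheibler's 440 was meant to end.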

Without a doubt, the most controversial "machine" in modern recording technology, as Greg Milner agrees, is Autotune, the latest in a long line of magic tricks used to carve out vocal perfection. A tuning program first created by Antares and later bought out by Digidesign, Autotune began "steadily infiltrating the recording world" in the late '90s, gradually becoming industry standard, used, as producer Tom Lord-Alge points out, on "'pretty much every fuckin' record out there'" and "'obviously used to make singers out of people who cannot actually sing'" (Milner 343). Autotune does not merely tune vocals, however; it can also essentially create them.
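
To make the mechanics concrete: at its simplest, a pitch corrector measures the frequency a singer actually produced and nudges it toward the nearest note of the scale. The sketch below is my own toy illustration of that idea, not Antares' actual algorithm (which also has to detect the pitch in the first place, preserve formants, and control how quickly the correction kicks in); it just snaps a detected frequency to the closest equal-tempered pitch relative to A440.

```python
import math

A4 = 440.0  # reference pitch in Hz


def snap_to_nearest_semitone(detected_hz: float, strength: float = 1.0) -> float:
    """Pull a detected frequency toward the nearest equal-tempered note.

    strength=1.0 is full, hard correction; smaller values move the
    pitch only part of the way, which sounds far more natural.
    """
    # How many semitones above or below A440 the singer actually was.
    semitones = 12 * math.log2(detected_hz / A4)
    # The note the singer was presumably aiming for.
    target = round(semitones)
    # Move some fraction of the way toward that target, then back to Hz.
    corrected = semitones + strength * (target - semitones)
    return A4 * 2 ** (corrected / 12)


# A slightly sharp middle A (449 Hz) gets pulled back to 440 Hz.
print(round(snap_to_nearest_semitone(449.0), 2))  # 440.0
```

That strength setting is the whole controversy in miniature: dialed low, the correction is inaudible; pushed to maximum with an instant response, it produces the deliberately robotic effect the Lord-Alge brothers complain about below.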

To illustrate both of these ideas, it is necessary to witness the program in action. To this end, the following sound clips are, unfortunately, mine, recorded on the latest version of Pro Tools HD for an intentionally fairly unpolished-sounding record. I have a naturally high voice, but the part required was at the absolute top of my "squeaking" range: I cannot really sing this part, as you can hear. I'm taking massive breaths between phrases to try to hit these unhittable pitches, audibly running out of air at the end of each phrase and trailing off; the tone is tortured, and the pitch is all over the place. In short, it's unlistenably horrible, but listen (at low volume) anyway.

[Audio clip: the raw, untuned vocal take]

Now, just to highlight the way Autotune colors the sound, making it seem "'so fake'" that "'it sounds like a car horn,'" as the Lord-Alge brothers kindly point out (not that they don't use it, of course), here is a track that has been tuned but not comped. This is useless for the purpose of sounding accurate, but it does allow a direct A-B comparison between autotuned and unautotuned sound quality.

[Audio clip: the same take, tuned but not comped]

Even without comping, this track is vastly improved in terms of pitch, but the tone has also been purified, or rarefied, coloring my voice a good deal. To be fair, a lot of this effect is achieved through the endlessly forgiving addition of reverb, which was added live onto my voice when we originally cut the track, both to be sung "to" and so I wouldn't throw the microphone across the room in self-disgust.

Just for context, here is the finished chorus, with everything tuned, comped, quantized, beat-detected, and tricked out in a million other ways to make it sound nice.

[Audio clip: the finished chorus, full band]

Immediately noticeable, or, rather, perceivable, is the apparent discrepancy in volume between the solo vocal tracks and the full band track. Supposedly, this is some sort of audio illusion, caused by the addition of many more frequencies inhabiting the same "space," and, of course, by compression. I say supposedly because I fail to comprehend this, but, in terms of decibel levels, it's undeniably true, if fairly irrelevant. My vocal track has been lowered in the mix to the point at which all tone is fairly indistinguishable, and yet it sounds a good deal less than natural.

Is Milner right, then, when he says that all this time we've "been listening to distinctly inhuman voices and thinking of them as human" (13)? Is it that, as the critic Edward Rothstein claims, "we all learn a 'language of recording,' so that when a recording is played, 'the listener translates, mapping the heard sounds into a real world, placing the distorted frequencies into a mouth or instrument, and reconstructing sound and its intention'" (via Milner, 14)? And yet, the final track does sound, well, very cool. I know that that improbable voice isn't strictly mine, but I yowl along with the machine anyway, which is, I suppose, what everyone else does too.
