Fix-It-In-The-Mix: Cheats and All…
The DAW has widened the goal posts, but that hardly tells the whole story.
One thing is certain: DAW mixing and editing is more relevant (and culpable) than ever.
Truth be told, when I first set out to conduct this interview with Tim Jessup, my primary objective was to expose the technical dos and don'ts: the tools used to enhance a performance without crossing the line into musical fraudulence. As it turned out, that was a pretty low bar. The wider relevance is something often disregarded, something a techno-artist (and yes, that's what a DAW-based mix engineer often is) may wrongly assume immunity to: namely, the politics and personalities of it all.
If you don't feel this is relevant because you're not a big-time producer, mixer, or engineer, bear in mind that there is virtually no "fame curve" to assuage such perils. Your band, friends, and smaller clients (even freebies) will always carry an inherent creative bias, and that can easily place risky terrain in your path, if not a hidden minefield.
The best thing to do is to be viewed as part of the creative team, a virtual band member. And no, you don't need to be Sir George Martin. Just showcase your talent and the effective options you can bring to the mixing challenges that will invariably present themselves. Like everyone else involved, voice your ideas. Do the mix they expect to hear, and you can always offer your own version as an option the artist may not have considered. Then you can find compromises that work for everyone. If you heed this process, you should never get fired. And who knows? Maybe some channeling of Sir George will make it into the final mix.
As you will discover, my summation is but one dimension of a wider Q&A to which no article abstract can do justice.
Here it is…
Jessup: This is some very provocative and potentially dangerous territory, John. It has many different perspectives, but the most important aspect to consider, as an engineer or a producer, is "what does the artist prefer?"… and what will best serve the artist's "established" sound. It can be a very fine and precarious line to walk. To illustrate, I knew an engineer (who shall remain nameless) that I have long had the utmost respect for, one of the very best in the business. While mixing in an entirely analog environment, long ago, he was working with a lead vocal track for a major artist, which had some "issues". The engineer took it upon himself to fix the problems by doing some minor comping of vocals from other takes, rather than using the single "flawed" take printed on the multitrack tape. In spite of the obvious problems with the original performance, the artist, for whatever reason, decided she really loved THAT performance, and was highly insulted that the engineer had "fixed" what she had created. In spite of his stellar reputation, his impeccable skills as a mixer, and all of his multi-platinum-selling albums, this engineer was fired on the spot and would never work with that artist again. Ouch! (And no, it wasn't me, but it could easily have been.)

Today, in the DAW environment, it is possible to make every performance perfect. But perfection is often overrated. It can actually suck the life and the passion out of a performance when it is manipulated to a great degree. It can sound as if machines are making the music, and the human element has been "homogenized" right out of it. Just turn on your car radio and you'll hear it: homogenized music with over-compressed, pitch-corrected, vocoder-laden vocals that merely emulate human passion but seem plastic, lifeless, and meaningless. They don't connect to the human heart of the listener. Is this what we are becoming?
I'll take Aretha Franklin or Sheryl Crow any day over the lifeless, highly manipulated performances I so often hear in the public landscape of contemporary music. The use of digital technology as a "crutch", a replacement for actual talent, is taking the human soul, "the juice" as Joe Walsh calls it, out of the music.
I've actually heard young kids singing while imitating the affectations of auto-tuners! They are learning to sound like machines, and their real, authentic voice is being intentionally subverted to imitate the crutch, rather than being set free and nurtured to express itself naturally. I think the whole phenomenon is the result of music corporations manufacturing artists for their marketability rather than their actual talent. Anyone can be made to sound like a pro with enough editing and manipulation, but no one can sound like Christina Aguilera unless they actually have tremendous talent.

In certain ways, it's sometimes better to have limitations on what capabilities we have access to in the studio. Look at what the Beatles managed to create on two 4-track tape recorders: Sgt. Pepper's Lonely Hearts Club Band. Having too many choices and too much control can be detrimental in the wrong hands.
On the other hand, I wonder what George Martin and the Beatles would have created with a modern Pro Tools rig. An interesting answer to this question came in the form of Julian Lennon's new album "Everything Changes", which often harks back to his father's musical sensibilities, at times appearing to channel John from the great beyond. Lennon spent a full ten years producing this album, with a full spectrum of digital tools at his disposal, and yet managed to create what often sounds like a reincarnation of the sounds and studio tricks that the Beatles invented. He was predisposed to use the tools in a deliberate, tasteful, musically appropriate way, and it shows.

In my own work, I like to keep in mind the bounds of what is possible in the analog environment, and defer to the sonic richness that those limitations instill, with regard to both their impact on the performance and the behavior of the plug-ins that I choose. While working entirely in-the-box, my intention is to create results which are indistinguishable from those of an entirely analog signal path and a classic mixing desk. Though I may substantially edit a track, or use ten plug-ins on a particular instrument, I use them in a way that maintains the integrity and the feeling of the original performance, perhaps making it just a little larger-than-life, so that it sits in the mix nicely. In the case of mixing for Chicago, I have an obvious template to reference, in the form of 36 albums they have recorded over nearly 50 years.
I'll take a live recording of the band and give the tracks the tonal signature of the original studio recordings. In this scenario, it is a perfectly effective and appropriate use of the technology. Often, I'm using digital replicas of the analog processors used on the original albums: Pultec EQs, 1176 compressors, LA-3A compressors, Neve console channels, an Ampex tape machine emulation… For the '80s power ballads, I'll use SSL emulations and the UA Lexicon 224 reverb to capture the authentic sound of the era.

But working with other artists, or exploring my own creative muse, I may take an entirely different approach. The music will tell you what it wants to be, and what processors will get you there. Essentially, there are no limitations any longer in how we can manipulate sound. So as a producer, I must actually choose which basket of limitations I want to impose on myself, to steer a production in a certain direction. This can apply to a whole mix, a specific song on the album, or just a particular track.
If you were editing a movie scene and you wanted it to look like a Clint Eastwood film from the early 1970s, you might intentionally choose a digital effect that looks like 16mm film, grainy and washed out. In a similar way, choosing what tools to use on a mix depends entirely on the artist's intention and conception for the final product. I choose the tools that will get me there, with all of their limitations and sonic glory.
Question: Are there still purists out there who want to do everything the old-fashioned way? That may be an oversimplification. I know that John Mellencamp insisted on 100% analog "recording". Even KISS' last two albums were recorded on nothing but analog tape machines. But do these techniques foreclose post-production digital wizardry vis-à-vis fix-it-in-the-mix? I mean, could these purist attestations be mere marketing fodder to disguise or even hide digital gymnastics in post production?
Jessup: Creating music in a pure analog environment and producing in the DAW environment are really very different approaches to making music. Recording in an analog studio is a very "organic" process. It is real-time, and it depends largely on the talent and the skill of the musicians to achieve professional results. (What a concept!) In a way, recording in an analog studio is like making love. It can be very spontaneous, in the moment, and full of delicious aural delights, as you slam the signal into the sweet spot of harmonic distortion on the console and saturate the tape at +9. You can be mixing right from the start, or as you add new parts, because there is no plug-in latency to deal with, latency which can cause severe coitus interruptus both when tracking and when overdubbing.
So recording in the analog environment generally offers more instant gratification, as your monitor mix can easily become the basis of your final mix. In terms of editing, musicians such as John Mellencamp and KISS, as you mentioned, are used to "punching in" on their parts as they record. They are consummate musicians and can easily pick up a part they are playing, virtually anywhere in the track, while a cigarette smolders in the headstock of their guitar. For them, it's a natural part of the process, and they don't need a billion takes to go back through later and decide which parts of which takes to comp; that's just over-thinking the whole process, and it takes the vibe right out of the performance. They just make decisions on the spot about the part, commit to it, and move on, no looking back.

If you are tracking through a console to a DAW, the process is really much the same, but generally without the extra juice of tape saturation. Tracking and overdubbing entirely in-the-box can be a more frustrating experience on most DAWs, because the signal is NOT passing in real time to the disk and back through the monitors. You have to adjust your recording buffer between the tracking and mixing stages to eliminate basic processor latency, and often disable any plug-ins on your tracks to eliminate latency, which also diminishes the sound of the monitor mix. Not a very inspiring way for someone like John Mellencamp to work. For him, analog just suits his creative process better. It has a free flow in the exchange of ideas and takes, and it always sounds great at any stage of the recording process.

When I must work entirely in-the-box, I find myself printing monitor mixes, with full plug-in instantiation and a high buffer setting, to a stereo file, which I then import into a new "overdub session" without any plug-ins and with a low buffer setting, so that we're hearing the current state of the mix without having to deal with latency while tracking vocals, solos, etc.
It is a work-around to provide the same experience you would have in an analog environment. Then I import the new tracks back into the main session for the mix and "spot" them on the timeline, so they automatically sync right up.

Thankfully, there are now companies such as Universal Audio and Waves that are designing interfaces and processors to eliminate digital latency issues. The new UA Apollo interfaces run their plug-ins on onboard DSP with near-zero latency, so that one can work freely with plug-ins on the input side, as though recording in an analog environment. Unfortunately, the plug-ins themselves are so hungry for DSP that you can only use one or two per channel before you've maxed out the interface's capabilities. Still, it is a step in the right direction: John Mellencamp and KISS will soon be able to work in a digital studio with the same comfort, ease, and instant gratification that they get in an analog studio. Analog Devices has recently announced a new SHARC chip with double the power in the same footprint as the original SHARC chips used by UA, so I imagine we will soon see Apollo interfaces with at least double the DSP power, and external Thunderbolt DSP satellites with the power of 16 SHARC chips or more. We're growing ever closer to a seamless transition between the methods of working in the analog environment and the digital one.

Of course, working in a pure analog environment today does not preclude accessing "digital wizardry" for your mix, as the entire multitrack tape can simply be imported into a DAW.
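The buffer juggling described above comes down to simple arithmetic: one-way monitoring latency is the I/O buffer length divided by the sample rate, roughly doubled for the round trip through the converters (driver and converter overhead add a little more). A minimal sketch, with illustrative buffer sizes:

```python
def io_latency_ms(buffer_samples: int, sample_rate_hz: int,
                  round_trip: bool = True) -> float:
    """Approximate monitoring latency from DAW buffer settings.

    Ignores converter and driver overhead, which add a few ms more.
    """
    one_way = buffer_samples / sample_rate_hz * 1000.0
    return one_way * 2 if round_trip else one_way

# A typical "mixing" buffer vs. a "tracking" buffer at 48 kHz:
print(round(io_latency_ms(1024, 48000), 1))  # 42.7 ms -- audible and distracting
print(round(io_latency_ms(64, 48000), 1))    # 2.7 ms -- comfortable for overdubs
```

This is why the printed-monitor-mix work-around exists: track at a small buffer with no plug-ins, then return to a large buffer for the plug-in-heavy mix session.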
Or, specific analog tracks needing the surgical enhancements of a DAW can be transferred individually, modified, and then moved back to the analog tape for the mix. It's not either/or anymore.

I don't think that traditionally analog artists such as John Mellencamp, KISS, or Neil Young, for that matter, engage in, as you put it, "purist attestations as mere marketing fodder to disguise or even hide digital gymnastics in post production". They just don't care about the public perception of analog vs. digital when it comes to making their music. They are simply passionate about the way their method of recording sounds, and they insist on working within their comfort zone. It's just part of their traditional creative process. It's how they've always done it. It's really just that simple.

The raw, unabashed music of such "analog" recording artists is their art form, warts and all. Surgical perfection of their performance, as is easily achievable on a DAW, is not necessarily the goal, but rather the feeling their music evokes in the listener and in themselves. A high degree of perfection could actually diminish that goal by dehumanizing the song. Whatever degree of perfection they require is easily achievable through "punching in" on the tape and through their own actual musical skills. For them, nothing more is needed. Neil Young himself is so passionate about the sound of his analog recordings that he has created his own HD digital player, called Pono, to better translate that experience for his listeners in a portable format. PCM digital audio at 192 kHz/24-bit comes as close to the character of analog as anything I've ever heard. There are a lot of arguments for and against this; I just trust what my ears tell me.
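The 192 kHz/24-bit figure cited above translates into concrete numbers: each bit of word length buys roughly 6.02 dB of theoretical dynamic range, and the raw data rate is simply sample rate × bit depth × channel count. A quick worked check:

```python
def pcm_stats(rate_hz: int, bits: int, channels: int = 2):
    """Raw data rate and theoretical dynamic range of linear PCM audio."""
    bits_per_sec = rate_hz * bits * channels
    mb_per_min = bits_per_sec * 60 / 8 / 1_000_000  # decimal megabytes
    dyn_range_db = bits * 6.02                      # ~6.02 dB per bit
    return bits_per_sec, mb_per_min, dyn_range_db

bps, mbm, dr = pcm_stats(192_000, 24)
print(bps)            # 9216000 bits/s for 192 kHz/24-bit stereo
print(round(mbm, 1))  # 69.1 MB per minute, uncompressed
print(round(dr, 1))   # 144.5 dB theoretical dynamic range
```

Those ~69 MB per minute (versus about 10 MB for CD-quality 44.1 kHz/16-bit stereo) are the storage price of the hi-res format Pono was built around.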
Jessup: There have been occasions where I have had to go to extreme measures to resolve technical issues with recordings. For instance, a bass guitar part had so much mechanical noise that it could not be edited out or cleaned up: fret buzzes, pops and clicks, transients of strings hitting the pick-ups. Basically, an unusable track. So much of this noise would cut right through the mix, and of course, mastering engineers have to notate these problems in their QC reports when they make the final DDP for pressing. The solution was to import the bass part into Melodyne and use the software to generate a MIDI track that replicated the physical bass performance exactly, without all of the noise. Then I triggered a bass sample from the Trillian library with the new MIDI track.
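The audio-to-MIDI conversion described here rests on pitch detection: once a note's fundamental frequency has been found, mapping it to a MIDI note number is a standard equal-temperament formula (A4 = 440 Hz = MIDI note 69). A minimal sketch of just that mapping step; the detection itself, which Melodyne performs, is assumed to have happened already:

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Map a detected fundamental frequency to the nearest MIDI note number,
    assuming equal temperament with A4 = 440 Hz = MIDI note 69."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# Open-string fundamentals of a standard 4-string bass: E1, A1, D2, G2
print([freq_to_midi(f) for f in (41.2, 55.0, 73.4, 98.0)])  # [28, 33, 38, 43]
```

The resulting note numbers, plus the detected onset times and velocities, are what a sampler such as Trilian then uses to trigger clean bass samples in place of the noisy original.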
Technically, I used the bass player's actual performance, I used a sample that was very close to the sound of his own bass, and no one could tell that it was not his actual bass in the final mix. As long as I am not departing from the actual physical performance of an artist, I'll use whatever tools I have at my disposal to enhance it sonically, so that it does what the performer intended it to do. It is not my job, as an engineer, to manipulate a performance in a way that is beyond what the artist intended, whether it's simple timing changes or pitch corrections. The artist must first be on board with any changes, and I must ultimately respect what is "their expression". When I change hats, and I am the producer or the composer, the rules may change. I may give myself a lot more latitude when it is my own expression I am manipulating. So it comes down to what is acceptable to a given artist and situation. It is different for everyone, and it's important not to assume that you have more latitude than what the artist is comfortable giving. Communication is essential, with a clear understanding of just how far you can go in terms of perfecting parts.
Jessup: I've always said that you have to use the right tool for the job. Technology is becoming an integral part of live performance. But there is a caveat for me. Using technology as a crutch to make a performer seem more accomplished or talented than they actually are is just lame. But technology also has the ability to bring some amazing phenomena to live performance. It was science fiction author Arthur C. Clarke who wrote that "any sufficiently advanced technology is indistinguishable from magic". We are making some pretty amazing "eye candy" and "ear candy" happen on stage these days. But as far as performances go, I personally prefer to hear the un-manipulated raw talent of the performer, because it makes more of a connection for me. It is inspiring to hear someone who is genuinely gifted, authentically excelling at what they do. It is a high to hear that.
Even though age has shortened his vocal range and I have to listen to Elton John modify the high notes on the chorus of "Tiny Dancer", I still want to hear his own interpretation of the song. After all, I am older too. If someone is using Auto-Tune live on stage, in my opinion, they should not be on that stage. It is not an authentic performance. It is part of the corporate manipulation and manufacturing of an artist for marketing purposes, not for the sake of making great music. But that is just my personal preference, because I grew up listening to authentically talented musicians who could genuinely perform amazing feats, live on stage.
The younger generation has been fed a steady stream of manufactured "talent" that can't authentically reproduce on stage what was manufactured in the studio.
To me, it's a sad state of affairs, and it reflects the contemporary corporate model for recording artists who are created, not necessarily born, for making music.

While Queen was interjecting sections of the studio recording of "Bohemian Rhapsody" into their live performances, I'm sure the audience didn't mind, as they were singing along feverishly. Likely, it was done in a very tasteful way. It has always been amazing to me that Freddie Mercury layered all of those vocal parts on the original recording of "Bohemian Rhapsody" with only 24 tracks to work with! Even in the studio, it was a Herculean feat to produce that song. It would be quite impossible on stage without a 30-voice choir. If Freddie were alive today, they would simply sample all of the parts from the original multitrack tape to a solid-state drive on a keyboard.
Jessup: When I was a young man, I found it incredibly inspiring to hear performances that were so masterful that I couldn't ever hope to be that good myself, even if I practiced my instrument every day for the rest of my life. Some of those bands were just mind-boggling to see live: from the progressive rock of YES and Emerson, Lake and Palmer, to the rich vocal harmonies of Crosby, Stills and Nash, to the jazz-infused esoteric arrangements of Steely Dan, the delicate piano musings of Dave Grusin, the full-on badass funk grooves of Spyro Gyra, and the soaring vocal arrangements of Dan Fogelberg, who often played all of the instruments on his songs in the studio, without quantizing his performances. Every one of these artists represented authentic talent that could reproduce on stage what they created in the studio. No amount of non-linear editing, Auto-Tune, or stringing loops together in Ableton can ever begin to approach the actual genius that these artists displayed, in the studio or live on stage.

And no amount of fixing-it-in-the-mix can replace authentic talent and the hard work and long hours that it takes to develop it. In the end, these artists will be remembered for their unique and masterful contributions to our world of music, long after they are gone. Those who rely on Auto-Tune will not. Long live Prince; may he forever burn his guitar solos in the hereafter.
Stay tuned for the next segment in the Tim Jessup exclusive interview series: "In Search of the 5-Tool Artist Amidst the Chaos of Democratization… And Why 'Talent' Itself Is Experiencing a Most Strange Metamorphosis".
About Tim Jessup
Tim Jessup is the mix engineer for Chicago, wearing many hats which include: co-producer, film sound designer, dubbing mixer, recording engineer, studio designer, and studio manager. He is a 40+ year veteran of the recording industry, having worked as a staff engineer for a number of iconic studios, such as Kendun Recorders in Burbank, Artisan Sound Recorders in Hollywood, Wally Heider Studios in Hollywood, Glaser Sound Studios in Nashville, Bearsville Sound Studios in New York, and Olympia Studios in Munich, Germany.
Tim has come full circle in his career: from recording hits with Quincy Jones, James Ingram, Christopher Cross, DeBarge, The Gap Band, the Isley Brothers, Gladys Knight, Devo, The Doobie Brothers, and Chicago; to ADR for movie soundtracks from Disney, Fox Animation, Buena Vista, and Paramount; to sound design and original music composition for major advertising campaigns from BBDO, McCann-Erickson, Leo Burnett, and Saatchi & Saatchi; to video game sound design with animators Don Bluth and Gary Goldman for Nintendo and Sony PlayStation; to mixing front-of-house sound for Bobby Womack, Ashford and Simpson, and Dominic Miller (Sting).
In addition to his long studio career, Jessup is also a multi-instrumentalist and arranger and has produced original music scores for film, advertising, and many recording artists. He is the recipient of more than 100 national awards, including the CLIO, numerous Telly and Addy Awards, the London International Award, and a Grammy nomination for Stanley Jordan's album "State of Nature". Jessup and director Peter Pardini recently won Best Picture for Chicago's new documentary film, "Now More Than Ever: The History of Chicago". For more information, go to Tim Jessup's website.