Hi! In reference to this thread ...
/forum/recording-live-or-studio/talents-headphone-bleed-during-tracking
... I'd like to start a thread on what constitutes a good headphone/monitor mix. Let's have some discussion on the practicalities of putting things into the cans, as well as a collection of tips from practices you've tried that worked. Here are some examples:
- How close should the instruments in the cans match the final product? This is esp. important when re-amping/modeling guitars.
- Should the drums have reverb/compression on them?
... and that just scratches the surface a little. There is so much more to this, as I've been finding out lately.
Discuss and enjoy! Thanks!
-Johntodd
- How much bass guitar?
- What about reverb on a singer's voice while he is tracking the vocals?
- What about rhythmic/syncopated instruments? How much complex rhythm is needed to get "the feel" of the song vs so much rhythm that it gets confusing?
- What about layering vocals? How much of the pre-recorded stuff should go back into the cans?
- What about people who hate cans or can't stand to use cans?
- What about click tracks bleeding when tracking a quiet instrument?
- One earcup? Both?
Comments
I never know what the final mix will be, so I can't put it in their mix even if I wanted to. I give them what they need to play or sing well. For singers it will be chordal stuff to help their pitching, with some rhythm; for bass it will be kick and snare to start with, then whatever else helps. I've never had a rule, just a feel for the song. Some like a click, others hate it. Other times I might give them a guide track to play to.
Good thread, John, and what Paul just said is exactly my thinking too.
I'll punch a bunch of thoughts in this to also help get it started.
Once we start digging into this subject, I bet it will go on for some time.
Cue and monitoring is one of the most important areas of my recording approach. To my way of thinking, it doesn't matter whether you're in a home studio or a large commercial facility: the cue is the creative connection.
When I think of cue, I think of consistency, bleed, inspiration, cause and effect, communication, gear. It's a big part of the studio.
For my workflow, there is also a correlation between cue levels, what you put in the cue, and how consistent both stay, and how this influences whether the final mix sounds smooth instead of all chopped together. If the cue is changing, so is your talent's creative approach.
Evolving vocal takes through overdubs will change sonically if I move levels too far off the print.
I also find people sing better when the cue isn't louder than their audible acoustic environment. Meaning, I work hard to keep the balance between the headphone level(s) and what the talent hears naturally in the room. In my experience, if all you hear is the headphone level, the level is too loud. When there is a balance, this seems to give me the most accurate overdubs and pitch.
The importance of gear is also key. I mean, if you don't have good sound and the ability to send a good signal to the talent, how can this be inspiring?
The list goes on!
Learning how to capture and inspire the talent via what they need to hear is a game changer. Our gear and approach influence everyone throughout the creative process. Cue quality, its needs and its execution are unique to each studio and engineer, so this subject is also quite subjective.
My cue mixes sound like gold for the talent. I do use reverb and a touch of delay. My settings for vocals are pretty close to the first print, then, I follow those settings (or keep a log) throughout the session.
The cue during the tracking process has a direct correlation to how a mix glues at final mix.
What I hear is what the talent hears or what I hear is what I need to hear.
Where this gets interesting is, once again, overdubbing.
A lot of us make great songs one track at a time. This is when the cue mix becomes more complicated. Changing faders and matching overdubs can change how tracks sound over the course of a session. The easy part is to just turn up the levels, but level changes (performance changes, holding back when volumes are too loud, and so on, including dynamics) can influence your overdubs sonically.
Vocal proximity, headphone bleed, tracking-level consistency and cue levels all affect the capture.
I track into a mix and mix into a master. My cue sits between these two stages, while keeping latency low (and sometimes not so low).
But I never EQ or normalize a thing until the song is done. If you change a level, that level change influences something else. A low cue level on vocals will produce a more natural capture. High cue levels on vocals always add upper-mid problems, as I mentioned in the Headphone Bleed Concerns thread.
I cue whatever is needed to inspire the talent, and I avoid changing the tracking levels once they are printed. I suppose this is another reason why I use a console with faders, but there are also many ways to skin this cat.
The key for me has always been to keep levels consistent, and that includes cue levels. I NEVER normalize tracks in the mix, because you don't want the cue levels getting louder and louder and forcing you to change things, thus affecting overdub consistency.
Cue levels and headphone bleed are a really big deal for me. I don't obsess over it, but I will say I put as much attention into this as I do into tuning an instrument.
(Related to the cue:) Headphones emit sound that accumulates in a very, very bad way and destroys the upper-frequency sweetness. It not only adds a metallic sound, it also creates phasing that you can never fix afterward. I hear this on the best recordings, so none of us are exempt from it. Being aware of all this keeps vocals sounding sweet and full.
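That "phasing you can never fix" is classic comb filtering: the bleed re-enters the mic a fraction of a millisecond behind the direct voice, and summing a signal with a delayed copy of itself carves notches at odd multiples of 1/(2·delay). A minimal sketch of that arithmetic (the function name and the 0.5 ms figure are illustrative assumptions, not measurements):

```python
def comb_notches(delay_ms, count=4):
    """First few notch frequencies (Hz) of the comb filter created when a
    signal is summed with a copy of itself delayed by delay_ms.
    Notches fall at odd multiples of 1/(2*tau): f_n = (2n + 1) / (2*tau)."""
    tau = delay_ms / 1000.0
    return [(2 * n + 1) / (2 * tau) for n in range(count)]

# Bleed arriving ~0.5 ms behind the direct vocal notches the upper mids --
# right where vocal "sweetness" lives.
print([round(f) for f in comb_notches(0.5)])  # [1000, 3000, 5000, 7000]
```

Halve the delay and the notches move up an octave, which is why tiny changes in headphone position audibly change the metallic coloration.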
I'm certain people blame their mic, pres and converters for this when it's actually headphone bleed. :)
The art of overdubbing is a topic on its own. It's a respected art that takes know-how.
Some people pull one phone off and only listen to half the cue. Sometimes you know the song will never be high quality because the talent is clueless about what the cue is doing to their vocal track. I guess this might be where a nulling process would be helpful, though I'm thinking that would also create an ugly dynamic phasing of its own.
Everyone has their way, and not one of us will have the best answer. It's a good topic.
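The reason a nulling process struggles here can be shown with a toy null test: polarity-invert one signal and sum. Identical signals cancel to silence, but bleed that arrives even one sample late leaves a large residue, so a static null can't remove it. A minimal sketch (the function name and sample values are made up for illustration):

```python
def null_residue(a, b):
    """Sum signal a with the polarity-inverted signal b, sample by sample.
    All zeros means the two signals are identical and would null perfectly."""
    return [x - y for x, y in zip(a, b)]

direct = [0.0, 1.0, 0.5, -0.5, 0.0]
identical_copy = list(direct)
shifted_copy = [0.0] + direct[:-1]    # the same signal, one sample late

print(null_residue(direct, identical_copy))  # [0.0, 0.0, 0.0, 0.0, 0.0] -- perfect null
print(null_residue(direct, shifted_copy))    # non-zero residue -- delayed bleed won't null
```

Real bleed is also level- and movement-dependent (the singer's head moves), which is the "dynamic phasing" worry: any fixed cancellation signal is wrong a moment later.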
Good read!
Hey, do any of you ever make "fake tracks" to help singers?
There was this one song I was working on where the lead vox was in unison with the bass guitar. I was having pitch problems, so I took the MIDI bass track and transposed it up 3 octaves (or so) so I could hear it better and sing with it. Any thoughts about that?
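That transposition trick is easy to reason about in MIDI note numbers: 3 octaves is +36 semitones. A minimal sketch, assuming you have the bass line as a list of note numbers (the function name and the example pitches are illustrative, not from any particular DAW's API):

```python
def transpose_guide_notes(notes, semitones=36):
    """Shift MIDI note numbers (e.g. a bass line) up by `semitones`
    (36 = 3 octaves) to make a guide/cue track, clamping to MIDI's 0-127 range."""
    return [min(127, max(0, n + semitones)) for n in notes]

bass_line = [28, 31, 33, 35]              # E1, G1, A1, B1 -- typical bass register
print(transpose_guide_notes(bass_line))   # [64, 67, 69, 71] -- up into vocal range
```

Pitch class is unchanged by whole-octave shifts, so the guide stays in unison with the vocal, just in a register the ear resolves more easily over a dense low end.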
Thanks!
-Johntodd
Sounds good to me. Whatever works!
Assuming there is low latency too. Singing with a lower headphone level is where I look when this is an issue. Usually people have increased pitch problems when the cue levels overpower their ability to hear their natural voice. A lower cue level also helps them tolerate a tiny amount of latency.
Bottom line for me: whatever gets the best performance is what you do.
I see this issue of latency coming up. I don't understand.
I can track a mic with a DAW/interface latency of 40 ms, and the mic tracks always end up lined up just right. I use Cubase, and it does automatic latency compensation. Do other DAWs lack that feature? Lordie, IDK how I'd track at 2 ms latency with all the MIDI and FX running.
EDIT: Of course, 40 ms latency means I can't put a slapback echo in the cans that's shorter than 40 ms!
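To put numbers on why monitoring latency (as opposed to recorded-track alignment, which the DAW compensates after the fact) depends so heavily on buffer size, here's a rough back-of-envelope sketch. It assumes one buffer each for input and output and ignores converter/driver overhead, so real figures run somewhat higher; the function name is made up for illustration:

```python
def monitoring_latency_ms(buffer_samples, sample_rate, stages=2):
    """Rough software-monitoring round trip: one buffer each for input and
    output. Real interfaces add converter and driver overhead on top, so
    treat this as an optimistic floor, not a measurement."""
    return stages * buffer_samples / sample_rate * 1000.0

# A 1024-sample buffer at 48 kHz is already over 40 ms round trip...
print(round(monitoring_latency_ms(1024, 48000), 1))  # 42.7
# ...while a 64-sample buffer lands in the "feels instant" range.
print(round(monitoring_latency_ms(64, 48000), 2))    # 2.67
```

This is why hardware/DSP monitoring paths (and very small buffers) matter for the cue even when the DAW's plugin delay compensation keeps the printed tracks perfectly aligned.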
I think everyone probably has a different approach; I think it also depends greatly on the song. I've worked with producers who give the artists bare bones cue mixes, I suppose the mindset with that is that they believe the artist can hear their own parts better, but I tend to let the performer tell me what they want to hear.
Personally, I like to give the artist whatever they feel is necessary for them to give the best performance possible. If they want to hear reverb or delay on their voice, I'll give it to them. If a guitar player wants to hear delay, well then, that's fine too.
Who am I to dictate what they should or shouldn't hear while they are performing? Everyone is different, some like to listen to just a fundamental accompanying instrument like piano or acoustic guitar, others want something that resembles a full mix in their cans, because for them, it inspires them to give a more passionate performance. I don't believe there is any right or wrong way... it's whatever works best for the person you are tracking at the time.
When I'm recording a project that I know I will also be mixing, I don't wait until all the parts are recorded before I begin the mixing process. I start getting the basic semblance of a mix from the very first take recorded. Of course, it will morph and change along the way, but I'm always mixing, at least getting it pointed in the right direction of where it will end up, so by the time the final embellishment tracks for the song are being recorded - guitar solos, vocal overdubs, etc. - they are monitoring fairly closely - or at least in the ballpark - to what the final mix will be.
JohnTodd, post: 422627, member: 39208 wrote: I see this issue of latency coming up. I don't understand. I can track a mic with a DAW/interface latency of 40 ms, and the mic tracks always end up lined up just right. I use Cubase, and it does automatic latency comp. Do other DAWs lack that feature? Lordie, IDK how I'd track at 2 ms latency with all the MIDI and FX running.
I'm pretty sure that most DAWs will do this - as I recall, PT and Sonar had that feature, although it was up to the user to set the increment. I'm fairly sure that many of the new I/Os and their included mixing software also have a similar feature, which allows you to adjust for latency delay.
d.
There is a fine line with too much of a good thing. I don't usually process a mix (broad strokes: EQ, compression, normalizing, effects, etc.) while I'm tracking, for a bunch of reasons, which become more evident through the two-DAW approach.
Through my experience, broad-stroke processing while you are tracking creates an unrealistic sense of already sounding like the mix, which throws us off.
This is also why I prefer a two-DAW system, or at least an independent monitoring system that connects your cue in multiple ways.
What you hear on the second DAW is how the recording is being finished. What you hear on DAW 1 is the basics, and where I keep things consistent (the real world, per se). It makes tracking a breeze and keeps each part of the project better contained and glued in steps, rather than the standard approach where you start out clean and end up trying to emulate overdubs against cooked tracks prematurely readied for mixing and summing.
To inspire the talent, I will switch the cue to the summing DAW to give them a taste of what's happening in my world and why a certain approach may work for the better of the whole. Hearing is believing.
This approach is much like producing a movie, with me as the producer/director. I direct the talent to walk this way because I know what's coming later in the movie. Having the ability to prove it to them, to educate the talent as the track is being created through two DAWs, becomes a serious asset and is incredibly inspiring. :) It's a win-win.
Thus why I prefer independent monitoring and the two-DAW system.
Mastering engineers have been the greatest education for me. Mastering backwards, or mixing into a mastering system, has helped me hear, in steps, what not to do.
The fine line is defined by the point at which it begins to affect performance. If your singer gets pitchy when listening with reverb, then either reduce the effect to where they don't sing pitchy, or get rid of it altogether.
Everyone is different. I've worked with singers who are pitch perfect when working with slight reverb, but who sing flat or sharp when the mix is completely dry, and it goes the other way too - singers who are pitchy with dry tracks (including their own) tighten up when they get some effect. I can't explain why this is, other than to say that if you do this gig long enough, you'll find different individuals do things better - or worse - under very different conditions than others; and you don't really know what their preferences are until you've had a chance to work with them a time or two.
It's also dependent on the song. If a track is very intimate and upfront-sounding on the whole, then it's probably best that they hear things in a way that is similar to how the song will end up sounding.
I've learned two very important things in the 35 years I've been doing this... the first is to expect nothing but the unexpected, and to never assume anything in regard to any one artist's performance style and nuances.
The second is that coffee is the most-often used tool in the studio, and it goes bitter and bad after sitting in the pot for more than an hour, and no amount of cream or sugar will fix it, so... brew often. ;)
I'll weigh in on this. The second best thing a small studio doing commercial work for clients can have is a great headphone system. Period.
It's all about how at ease the musicians/clients feel hearing what they are reproducing. I bought a Behringer P16 system. Yep, the "B" word. There are several others that do this, but I gotta tell you that the "B" system actually sounds a little better... once you get through working out all the defects. Three main distro units went back for repair... but this last one looks like it's going to work perfectly. I now have a spare just in case...
The modules allow 16 different sources to be adjusted to the artist's content: all levels, pans, even compression if they want to hear that (no one does, even if they say they do... listen to their request, pretend to turn a knob, tell them it sounds better, move on). And NO LATENCY from the system.
Latency is the KILLER of comfort in a studio - tracking, overdubbing - and THIS is where Pro Tools HD is the king of all DAWs. Zero latency, many many tracks with all sorts of plugs... the ONLY one you have to mute or deactivate while tracking is the one on the master.
I take the direct outs right off the converter into the main brain of the headphone system. Everything gets assigned through the I/O on the PT mixer. I have 5 mix stations. I can run 6 individuals, and then with the supplied wall wart I can cascade an endless number from one to another. When I'm not cascading, they all get their power through the Cat5 cable. The artist can then set it up any way they want. And there is no formula for anyone at this point. Everyone in my current project mixed their phones completely differently from everyone else.
As for tracking... when you work on PT HD you can send things to the phone mix for anyone's benefit. Some singers like that polished, finished sound of what their voice is going to sound like in a mix. You assign a DSP effect of some kind - verb, delay, comp, whatever - there's no latency and nothing prints... but I DO NOT recommend this. Time-based elements in a capture will always screw with the pitch. I always try to slip in a nice plate verb, and that's usually enough. As far as monitoring what's already down, they get it as the mix is going to get it, if that's what gives them the drive to perform the part at its best. I basically am assembling a mix as I go. When a part gets completed it gets whatever treatment it's going to have (sometimes) for the end product. And that's where it stays until it doesn't.
I don't know if the rest of you do this, but I like to take the assembled pieces and keep building, while at the same time I make another session of the raw tracks and mix a session I like to call the NFX MIX. This one gets all the basic tracks, including all edits and time corrections, pitch corrections, etc.; all faders go to unity and I use only pan and EQ to get the balances. If I've done my job at tracking, everything will sit well together with only small incremental changes to balance things. You'll be surprised how big everything sounds. It's this mix that you can slam the crap out of on the 2-bus to get "that sound". You also start to appreciate and understand which EQs have phase problems that get masked in a dense mix but always bug your ear.
For fun, recently, I went back to a few songs I had done and completed for a client. I started new sessions off the completed mixes and tore out every plug and every bit of automation I had done. I built another mix using only UAD plugs on one, only the Waves API bundle on another, and the SSL bundle on another. All I can say is it was interesting and really REALLY opened my ears up to what to listen for when using plugins. I have my favorites, but I'm only moving the dials a small bit at a time and then living with these changes for a while before moving on. You start to hear the interplay of these things, and you learn to avoid what would seem to be a no-brainer, "hell, I been doin' that for years" kind of mindset.
Ramble.
So, having a quality and versatile phone system is the second best thing you can put in your studio.
The first best is controlling the room. The rest is icing and frills.
I'll be interested in this. I have used a Samson with just headphones off a single mix output for years. With the system I have now, the potential to actually give separate mixes per output is finally available to me.