Hey guys, I've been reading this book on computer music, and from what I've read, if this author has more than 6 vocal tracks he normalizes his vox to -12 dB. I'm just wondering your opinions on this. I do this before I do any processing. Does this seem like the right thing to do to vocal tracks?
Comments
Why ruin your tracks by taking away the headroom? I guess I'd never really seen it this clearly before. To be honest, this may change the way I record forever.
You may want to take the time to read this post; IMO it's spot on. Be sure to read the article on Massive Mastering's website too. It brings up a valid point about how gain structure, a basic part of recording engineering, is often overlooked by "cooking" the levels way too hot while tracking. He is suggesting a level of -18 dBRMS on most tracks.
I love this forum!
http://recording.org/mastering-sound-forum/39798-questions-about-track-levels.html#post298080
" He is suggesting a level of -18DbRMS on most tracks. " not s
" He is suggesting a level of -18DbRMS on most tracks. "
not sure that makes sense (referring to the units)
but
if it means -18dbFS average level then thats cool
-20dbFS and -18dbFS are typical alignment levels in recording and broadcast (Sony setting for a DVW etc)
and it may still have peak levels at -12dbFS or even -6dbFS
so
if you normaled to -12dbFS you may well have a -18dbFS average
it's not that simple to take one spec and stick to it
natural said
" I can assume a couple of reasons for doing this, but would prefer to not clutter the topic it it's not needed. "
good call
to yougkuzz
more info please
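To put some numbers on that peak-versus-average point, here's a rough Python sketch (my own illustration, not from the book or from anyone in this thread): peak normalization only pins the loudest sample, so after normalizing to -12 dBFS peak the average (RMS) level sits below that by the track's crest factor, and material with roughly 6 dB of crest ends up near -18 dBFS average.

import numpy as np

def peak_dbfs(x):
    """Highest sample peak, in dB relative to digital full scale (1.0)."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    """Average (RMS) level, in dB relative to digital full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def peak_normalize(x, target_dbfs=-12.0):
    """Scale the track so its highest peak lands exactly at target_dbfs."""
    gain_db = target_dbfs - peak_dbfs(x)
    return x * 10 ** (gain_db / 20)

# Noise as a stand-in for a mono take: after peak-normalizing, the peak
# reads exactly -12 dBFS and the RMS reads crest-factor dB lower. A real
# vocal with ~6 dB of crest would read near -18 dBFS RMS; this noise has
# more crest, so its RMS reads lower still.
rng = np.random.default_rng(0)
take = rng.normal(0.0, 0.1, 48000)
take = peak_normalize(take, -12.0)
print(round(peak_dbfs(take), 1), round(rms_dbfs(take), 1))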
"it's not that simple to take one spec and stick to it" I agree
"it's not that simple to take one spec and stick to it"
I agree with you, Kev, it's not an easy thing to do. Levels jump all over the place during a performance, and that has always made this a difficult thing to judge.
I would imagine the best news is that working at 24 bits leaves us with quite a bit of room to work with.
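For what it's worth, the back-of-the-envelope arithmetic behind that (my own numbers, not from the thread): each bit is worth about 6 dB, so 24 bits gives roughly 144 dB of theoretical dynamic range, and an average level of -18 dBFS still leaves well over 120 dB between the signal and the quantization floor.

# Back-of-the-envelope headroom math for 24-bit recording, assuming
# ~6.02 dB of dynamic range per bit and an average level of -18 dBFS.
import math

bits = 24
db_per_bit = 20 * math.log10(2)                # ~6.02 dB per bit
dynamic_range_db = bits * db_per_bit           # ~144.5 dB theoretical
avg_level_dbfs = -18.0
room_below_signal_db = dynamic_range_db + avg_level_dbfs  # ~126.5 dB
print(round(dynamic_range_db, 1), round(room_below_signal_db, 1))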
I guess the "why" is so your faders give a good visual cue as to
I guess the "why" is so your faders give a good visual cue as to mix level. I can see where this could be helpful when you have a large number of vocal tracks to mix. (As long as you don't fall into the trap of mixing with your eyes rather than your ears.) Since normalization (at least if my understanding is correct) is based on peaks levels, -12dBFS seems OK. If the goal is to get equal apparent volumes, the better strategy would be to try to get everything to 18dBRMS (or -21 or something similar). But, that's not a simple task like using a normalization routine. (At least I'm not aware of a simple way to do this.) I guess as far as advice, I'd just try to mix the tracks without normalizing. (First do no harm.) If it gets to be confusing or you aren't satisfied with your mix, or you just want to experiment, duplicate the tracks, normalize, and remix. Let us know how it goes.
Perhaps there's a bit more info that we're missing? Is there a "why" factor? What's his reasoning for this?
(I can assume a couple of reasons for doing this, but would prefer to not clutter the topic if it's not needed.)