Pretty straightforward question... as I cut vocals, I often find that later phrases, or phrases that descend to lower notes, have a drop in relative volume and are hard to hear -- I've been creating volume envelopes to compensate, and I also know I can use limiters with threshold and ceiling settings, or compressors, to do something similar (just starting to do this)... but my question is -- is this how it's done? Or should I be developing an adaptive mic technique while singing? I'm just not sure what the standard practice is. Is the need to normalize "normal"?

Comments

anonymous Fri, 01/09/2015 - 04:01

It depends... on several things. Generally, most engineers steer clear of the normalize function (not all - some will use it), but there are many who prefer to use compression instead, using varying degrees of gain reduction to "level out" vocals, keeping the loud sections in check while allowing you to turn the track up so that you can then hear the softer passages.

That being said, you don't want to overdo it, either, because too much gain reduction can render a performance lacking in energy and dynamics. If a vocal track is exactly the same level all the time, then there's no charisma to the louder parts, and no poignancy to the softer parts. (Of course, this depends greatly on the style of the song you are working on).

Mic technique matters a lot too. If you find this issue to be a consistent pattern, then perhaps you need to re-approach your performance technique when using a mic... knowing when to back off a few inches, or come in, etc.

Volume envelope editing can be a great way to achieve this, too (as can automation). It can take some time to do, but it can be very effective - both at taming the louder sections (or keeping them level) and at bringing up the softer passages so that they are more audible.
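If it helps to see what a volume envelope is actually doing under the hood, here's a minimal sketch in Python/numpy - the breakpoint times and gain values are made up purely for the example, and a real DAW does exactly this kind of interpolation for you:

import numpy as np

def apply_envelope(x, sr, points):
    # points: list of (time_in_seconds, gain_in_dB) breakpoints,
    # linearly interpolated across the whole track
    times = np.array([t for t, _ in points])
    gains_db = np.array([g for _, g in points])
    t = np.arange(len(x)) / sr
    gain_db = np.interp(t, times, gains_db)
    return x * 10.0 ** (gain_db / 20.0)

# e.g. lift a soft descending phrase between 42 s and 48 s by 4 dB:
# vocal = apply_envelope(vocal, 44100, [(0, 0), (41, 0), (42, 4), (48, 4), (49, 0)])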

There is no "general practice", because every song and every performer is different. What works for one singer on one song may not work well for another singer on another song, or, for that matter, even the same singer on another song.

You need to take each performance in the context of the song, and use whichever method works best for you. I will say this... over-compression is a very common mistake made by new/novice engineers. While it can be intentionally used for effect, it's not an effect in the sense that we generally accept, like verb or delay.

Often, you don't have to hear a gigantic difference in order for the compression to be effective. This is one of the most common mistakes made by people who are new to the craft... they feel that in order for the compression to be working, they need to hear it working, and very often this is not the case. You'd be amazed at what subtle compression can do to a track, or an entire mix... it can "glue" a mix together and make it sound more cohesive... and you don't have to use very much of it to accomplish that, either.

As you continue to do this, you will refine your listening skills and hone your ears (be patient, this takes time...), and, assuming that you have decent monitors and a balanced listening environment (your room's acoustics), you will probably start to hear things in your mixes that you've never heard before - one of those things being lighter amounts of compression.

It may help you to understand - I mean really understand - what gain reduction is, what it does, and how it does what it does, using the adjustable parameters involved: ratios, thresholds, attack and release times, etc.

This might help:

http://www.soundonsound.com/sos/sep09/articles/compressionmadeeasy.htm
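If it helps to see how those parameters interact, here is a deliberately simplified sketch of a feed-forward compressor in Python/numpy. The envelope follower and gain curve are far cruder than anything in a real plugin, and the numbers are only placeholders:

import numpy as np

def simple_compressor(x, sr, threshold_db=-18.0, ratio=4.0,
                      attack_ms=10.0, release_ms=120.0):
    # One-pole smoothing coefficients derived from the attack/release times
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0
    out = np.zeros_like(x)
    for n, sample in enumerate(x):
        level = abs(sample)
        # Envelope follower: reacts quickly when the level rises (attack),
        # slowly when it falls (release)
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level

        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Above the threshold, only 1/ratio of the overshoot gets through
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out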

At this point, since you're newer to the craft, here's a good starting hint to follow:

If you can hear the compressor working, then you're probably using too much of it. ;)

FWIW
d.

pcrecord Fri, 01/09/2015 - 06:51

The normalization term came from the first DAWs; it was a function that analysed a wave file to identify the highest peak, and then pushed up the volume of the whole file so that the highest peak sits near 0 dB. It was quickly dismissed as a fairly futile function, because the file needs to be rewritten and a new file created, and therefore some quality could be lost. It might have been of service to those doing mastering at some point, but it is not a common practice today.
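To put that in concrete terms, peak normalization is nothing more than one constant gain applied to every sample; a minimal sketch in Python/numpy (the -0.3 dBFS target is just an arbitrary choice for the example):

import numpy as np

def peak_normalize(x, target_dbfs=-0.3):
    # Scale the whole file so its single highest peak lands at target_dbfs
    peak = np.max(np.abs(x))
    if peak == 0.0:
        return x  # silence: nothing to scale
    gain = 10.0 ** (target_dbfs / 20.0) / peak
    return x * gain  # the same gain for every sample, loud or soft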

The best thing to do is automation, because it does not undermine transients and keeps the voice in a more natural form. But for vocals, many engineers use compression to soften those transients and control volume, because it ends up with the 'in your face' result that most crave these days.

A more respectful technique is parallel compression. You can use automation alone on your vocal track, then send the signal to an aux bus and compress it there. Then you combine those signals to taste. You could also use two compressors in a row, or one on the track and one on the aux.
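To make the routing concrete, here is a rough sketch of the parallel idea in Python/numpy. The squash() curve is only a stand-in for whatever compressor sits on the aux (no attack or release here), and the 30% blend is just an arbitrary starting point:

import numpy as np

def squash(x, threshold_db=-30.0, ratio=10.0):
    # Crude static compression curve standing in for the aux-bus compressor
    level_db = 20.0 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)

def parallel_mix(dry_vocal, wet_amount=0.3):
    # Blend the untouched (automated) vocal with a heavily squashed copy,
    # instead of compressing the main track itself
    aux = squash(dry_vocal)
    return dry_vocal + wet_amount * aux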

Time to get creative: you can put a delay on an aux and compress the heck out of it, then bring that into the mix slowly.

If you have an idea of what sound you are looking for, it's easier to choose the technique accordingly. ;)

paulears Fri, 01/09/2015 - 09:27

Good singers use mic technique to reduce the dynamic range, which helps the engineer. Less good ones mimic what they see and cause huge problems for the engineer, because when they get it wrong, the dynamic range increases, needing more compression and all the artefacts it causes. If you work with people you can trust to do it well, then your finger on the fader during recording can prepare the way for the sudden belter that is coming, with a quick restore for the next quiet line. Knowing the song makes it easier, and as long as you don't go over, going under is quite simple to fix. Where I can, I prefer NOT to adjust the input level while recording, but sometimes it's the only way to tame a wild vocal.

audiokid Fri, 01/09/2015 - 10:08

To add to the already excellent suggestions,
I'll use the word cheap loosely.
Cheap gear, any way you look at it, adds excessive midrange, spikes or inconsistencies (or a lack thereof) on the top and bottom, which produces or contributes to the frustration of an uneven performance.

As a musician and performer, I recall the days when I had to sing through a cheap PA system, cheap mic pres, power amps starved for headroom, etc. I'm a baritone and need a good system, especially when I want something to be present and gentle down in the low frequencies... ;)
The guys singing bass always sound like mush through cheap gear and get lost in the mix. Add a punchy bass, and what a nightmare.
You can hardly hear the low frequencies in comparison to the other ranges. The people with midrange vocals are always cutting through and get by on cheaper everything. They may sound like crap, but you can always hear them, which is the main goal regardless. Recording isn't much different from live for me. I save up for quality gear because I have no choice; I am a full-blown survivor who depends on income from music one way or another. Gear matters and has a direct bearing on how we learn and improve, right or wrong. If someone was always hanging onto your leg in a race, you would get rid of the problem if you wanted to win.

Some people also use vocal riders, but they aren't something I would be happy about using either. They can sound phasey and unnatural.

I always use Samplitude > Object Editing, crossfade editing, or sidechain compression somewhere in a session to manually add or improve volumes.

DogsoverLava Fri, 01/09/2015 - 12:06

Thanks guys - I had a scratch vocal I was working with that had a descending arpeggio which basically disappeared in the mix as the notes got into my lower register... I used it as an exercise to see what I could or should do to help it in my mix - also wanting to prepare for cutting the keeper track and anticipate, mic-technique-wise, what I should be doing. I've tried different things - I'll recut the vocal and apply some of your specific recommendations to it and see how I do. More project-based learning.... thanks.

Matt Fri, 01/09/2015 - 14:24

There seems to be a lot of misconceptions about what normalizing is. All normalizing does is bring up the signal amplitude of your entire track by an equal amount. In a mix, it has the same effect as increasing the track fader. If you normalize to make quiet parts louder, then your loud parts will be even louder. I did not read all of the replies, but it seems like compression with a decently long release time is closer to what you are looking for.
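To put that in numbers - these levels are hypothetical, but the gap between them is the point:

# A belted chorus peaking at -6 dBFS and a soft low phrase peaking at -24 dBFS:
loud_peak_db = -6.0
quiet_peak_db = -24.0

gain_db = 0.0 - loud_peak_db       # normalizing to 0 dBFS adds +6 dB to everything
print(loud_peak_db + gain_db)      # 0.0   -> louder
print(quiet_peak_db + gain_db)     # -18.0 -> louder by the same 6 dB
# The 18 dB gap between the loud and quiet phrases is exactly the same as before.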

audiokid Fri, 01/09/2015 - 14:54

Matt, post: 423492, member: 48561 wrote: There seems to be a lot of misconceptions about what normalizing is. All normalizing does is bring up the signal amplitude of your entire track by an equal amount. In a mix, it has the same effect as increasing the track fader. If you normalize to make quiet parts louder, then your loud parts will be even louder. I did not read all of the replies, but it seems like compression with a decently long release time is closer to what you are looking for.

Indeed.

fwiw, I avoid normalizing tracks while tracking. This ensures a consistent sonic print during the gain stage on all counts - pres to headphones, overdubs - for an overall cohesiveness of the mix. I will often normalize a track (or tracks) to zoom in on specifics during the mixing stage, but rarely do I change the individual channel levels of the actual print. "It is what it is".

I'd much rather do level changes on groups and busses > OTB even better. Sessions mix better when I track in the green, around -20, and don't move too far off of where the tracking fader originated. This was more of an issue during the early days of Pro Tools, but I still think like this out of habit.

The first print of a session sets the level for all the others to follow. Going hybrid, with faders on a summing amp or console, helps avoid the 2-bus mash-up on a one-DAW system. But if I follow these steps, regardless of the DAW, I rarely end up having too much gain at finishing/mastering time, no matter what session I am on.
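If you want to sanity-check that "around -20" tracking level, a rough peak/RMS readout is easy to script - this is just a sketch, not a calibrated meter:

import numpy as np

def track_levels_dbfs(x):
    # Rough peak and RMS readings for a mono float track
    peak_db = 20.0 * np.log10(max(np.max(np.abs(x)), 1e-9))
    rms_db = 20.0 * np.log10(max(np.sqrt(np.mean(x ** 2)), 1e-9))
    return peak_db, rms_db

# Aim for an RMS somewhere around -20 dBFS while tracking, per the advice above.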

I'm not exactly sure what DogsoverLava was getting at with normalization, but I think he used it to bring up the soft prints.

Davedog Thu, 02/05/2015 - 09:05

Never normalize. Ride the fader or create volume envelopes. It's the only way to make the track sound the same. Some will use a comp, but what happens in this situation is... as the volume of the voice declines, so does the tone, simply because it's a voice. A compressor doesn't care about the declining tone and pushes up the volume, as well as increasing or decreasing the attack and the release depending on your settings. So you wind up with a section that is loud enough but sounds different from the tonality of the other parts. Think this sounds crazy? Try it on purpose and see. The human voice has so many harmonics, and unlike an instrument, when the energy fades on a voice, things change. The shape of the mouth and the angle of the head in relation to the mic... all of this makes a difference.
Or recut it, and give some serious thought to mic technique, simply because of the things I have pointed out.
