
hi - was wondering whether any of you nice guys could help me out; i want to know:

- what is the main purpose of normalizing audio

- when and how it is mainly used

- the pros and cons,

- the do's and don'ts :)

Comments

anonymous Tue, 02/17/2004 - 06:53

"- what is the main purpose of normalizing audio..."

to make my music better than yours! Normalizing makes it louder, and louder is better.... right??

"- when and how it is mainly used..."

too much and poorly?

"- the pros and cons,- the do's and don'ts.."

Nothing wrong with trying to bring a finished track up in level, but if you've properly recorded it, you should not need to do that. I prefer to do this at the mastering stage.

There is also little value in processing all your tracks to top level, then mixing them together to a final mix and having to turn the master levels down... remember, when you add the volumes from multiple tracks, you end up with more total volume with each track that you add; and you have a finite amount of headroom within which to work. One is better served to record good, clean tracks at a good level.

I still use the old analog zero standard, which is almost a requirement for film and is still common in well-made video, for tracking. That leaves plenty of headroom for working at 24 or 32 bit float (it IS an urban legend that you NEED to use ALL the bits...) for most mixes, and if I need to fill up any space I do it at the mastering stage.

The 'trick' to good digital recording (i.e., making the format invisible...) is to use good mics, pres, and converters, listen and mix on a quality monitoring system, and diddle the data only when needed. Let the TALENT create the push that makes a mix stand out... you just have to know how to stay out of the way and accurately capture that talent.

Bill

anonymous Tue, 02/17/2004 - 07:56

the function of "normalize", as it refers to a DAW:

to bring the peak value of the program material (track, mix, etc.) to a specific level (e.g., 0 dB).

Usually this isn't a problem, as you aren't altering the available dynamic range; you are simply raising all of the audio to a specific level. The way samples are taken in digital audio, however, can lead to some issues, and I rarely find a need to "normalize" any audio. For one thing, if you know your current peak value and the peak value you would like to be at, you can just raise a fader by that amount as a "virtual normalization".
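That "raise the fader by the difference" idea is really all peak normalization does under the hood. A minimal sketch in Python (the function name and the test signal are my own illustration, not any particular DAW's implementation):

```python
import numpy as np

def normalize_peak(samples, target_db=0.0):
    """Scale audio so its highest sample hits target_db (dBFS)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silence: nothing to scale
    gain = 10 ** (target_db / 20) / peak
    return samples * gain

# A quiet sine wave peaking around 0.25 (about -12 dBFS)...
audio = 0.25 * np.sin(np.linspace(0, 2 * np.pi, 1000))
louder = normalize_peak(audio, target_db=0.0)
# ...is scaled so its peak sits at 1.0 (0 dBFS);
# the relative dynamics within the track are untouched.
```

Note that the gain applied is just the difference (in dB) between the current peak and the target, applied uniformly, which is exactly the "virtual normalization" fader move described above.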

The way audio samples are taken, you might have values that exceed what the samples actually display. This might seem kinda weird, but samples are just that... samples of the waveform. The actual waveform, when reconstructed, can exceed the value that any single sample represents. So in those situations, if you normalize the audio to 0 dB, you may have peaks in between sample values that exceed 0 dB. This can especially be a problem when burning a cd. All of a sudden, even though your peak meters or analysis in your DAW read 0 dB, your cd is crapping out the d/a converters on playback, because the values between the samples were actually greater than you thought.
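A rough way to see those inter-sample peaks is to reconstruct the waveform at a higher sample rate and check the new peak. This sketch uses FFT zero-padding as a stand-in for proper sinc reconstruction; the function name and the test signal (a sine at a quarter of the sample rate, sampled 45 degrees off its peak) are made up for illustration:

```python
import numpy as np

def oversampled_peak(x, factor=8):
    """Estimate the reconstructed (inter-sample) peak by upsampling
    via FFT zero-padding, then taking the peak of the denser waveform."""
    n = len(x)
    X = np.fft.rfft(x)
    # Zero-pad the spectrum: same content, more output samples.
    X_pad = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_pad[: len(X)] = X
    y = np.fft.irfft(X_pad, n * factor) * factor  # rescale amplitude
    return float(np.max(np.abs(y)))

# Every stored sample sits at +/-0.707, but the reconstructed
# waveform between the samples actually reaches 1.0.
n = 64
x = np.sin(2 * np.pi * 0.25 * np.arange(n) + np.pi / 4)
print(np.max(np.abs(x)))    # sample peak: ~0.707
print(oversampled_peak(x))  # reconstructed peak: ~1.0
```

So a file "normalized to 0 dB" by its sample peaks could, in a case like this, reconstruct to roughly 3 dB over full scale between the samples.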

Strange stuff this digital audio...

stalefish Tue, 02/17/2004 - 09:14

alright this is the part where i get more specific (and make myself sound even more amateur).

i normally would work on composing my own music where i'd be mixing a number of different tracks and volume would probably best be dealt with by compression and limiting.

but recently i've had to work with someone on a pilot show for a local radio station, and one of the problems was that the levels of the voice-overs were not consistent.

now he went on to normalize that single <2 hour long> track, which was just the voice-over itself. what i want to know is whether that method was wrong, or if there's a standard way you'd do it - i was thinking of manually editing the gains across the 2 hour long thing, but then i contradicted myself by thinking that normalizing it would save us a tremendous amount of time.

for now (after normalizing), he said the levels sounded good and we're sticking with it. but we still want to know what the 'correct' way of doing it is....... (or at least the way you pros do it :D )

anonymous Tue, 02/17/2004 - 09:43

that *can* be correct...

It really depends on the source material. All it would take is one good pop on a "p" for it to be somewhat ineffective, because that plosive will normalize to the set level (as it would be the loudest point, the peak, of the material). This could still leave the resulting material low in volume.
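A toy numeric illustration of the plosive problem (the sample values here are invented, not measured from any real voiceover):

```python
import numpy as np

# Quiet speech peaking around 0.1 (about -20 dBFS),
# with one plosive spiking to 0.9.
speech = np.full(1000, 0.1)
speech[500] = 0.9  # the "p" pop

# Peak normalization to 0 dBFS can only apply 1/0.9 of gain
# (less than +1 dB), because the plosive IS the peak.
gain = 1.0 / np.max(np.abs(speech))
normalized = speech * gain
print(gain)           # ~1.11
print(normalized[0])  # the speech itself is still only ~0.11
```

Edit out (or clip-gain down) the plosive first and the same normalize pass could raise the speech nearly 20 dB instead.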

Radio uses so much compression/limiting that it probably won't make much difference anyway. Less is always best in my book when it comes to compressing and limiting, but for speech it can sometimes be necessary. My rule of thumb is not to do more than what is required. This will vary according to the material.

With two hours of voiceover that can be really troublesome, especially if it wasn't recorded in one take. All of a sudden you have sections that are recorded at different levels. The way I would handle this situation is to divide the recording into regions based upon recording level. I would then work on each of these regions to reach a compromise of necessary dynamics and peak level. Then maybe a final limiter across all regions in the final edit to catch any overs and to even things out a bit.
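The region-by-region idea (minus the final limiter) could be sketched roughly like this. The function name, the -1 dBFS-ish target, and the boundary positions are my own assumptions for illustration; in practice you'd pick the region boundaries by ear or by eye in the DAW:

```python
import numpy as np

def normalize_regions(samples, boundaries, target=0.89):
    """Peak-normalize each region (split at `boundaries`) to `target`
    (linear amplitude; 0.89 is roughly -1 dBFS), so sections recorded
    at different levels come out roughly matched."""
    out = samples.copy()
    edges = [0] + list(boundaries) + [len(samples)]
    for start, end in zip(edges[:-1], edges[1:]):
        peak = np.max(np.abs(out[start:end]))
        if peak > 0:
            out[start:end] *= target / peak
    return out

# Two voiceover sections recorded at very different levels:
quiet = 0.2 * np.sin(np.linspace(0, 50, 4000))
loud = 0.8 * np.sin(np.linspace(0, 50, 4000))
vo = np.concatenate([quiet, loud])
matched = normalize_regions(vo, boundaries=[4000])
# Both halves now peak at ~0.89 (about -1 dBFS).
```

Within each region the dynamics are untouched; only the region-to-region offsets are evened out, which is exactly why a whole-file normalize can't fix this by itself.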

As always there are a million different ways to reach the same result, but for me the simplest is usually the best. The less processing you can do, and the more you can do with the recording in the first place (mic, technique, etc.), the better.

Normalization can be a good tool to use, as it preserves the original dynamics. You have to be careful that you are not exceeding 0 dB even if your peak meters say you aren't (as stated above, your peaks may still go as much as 3 dB over, although this is usually not the case). If there are any extreme peaks (spikes way above any other peak in the material), normalization won't help much. Better to edit that section and try again...

Modest use of compression and limiting is my friend. Overuse or abuse can be bad... or good... depending upon effect.

Cheers,
Brock

anonymous Tue, 02/17/2004 - 16:55

" he went on to normalize that single <2 hour long> track ...now (after normalizing), he said the levels sounded good and we're sticking with it. "

Boy, I'm confused.

Compression might change the dynamics, the dreaded "Maximize" or "Finalize" might change the dynamics, but normalization shouldn't really do anything but make the whole track louder, which should not fix the wandering relative levels.

Bill

anonymous Tue, 02/17/2004 - 22:09

that's right...

but normalization can be quite useful in certain instances...

Instead of using a limiter perhaps you just need the program material louder so that it reacts in the same manner as other material that's played on the station. In other words, you just need the material louder... Normalization would be a quick process to perform on a two hour piece of material as long as average peaks are pretty consistent.

Normalization can certainly have its uses...

And if it makes the client happy learn all you can and move on...

stalefish Tue, 02/17/2004 - 22:58

Originally posted by Bill Park:
" he went on to normalize that single <2 hour long> track ...now (after normalizing), he said the levels sounded good and we're sticking with it. "

normalization shouldn't really do anything but make the whole track louder, which should not fix the wandering relative levels.

Bill

haha true - we had to edit the levels a bit more after that (right/wrong?)

anonymous Wed, 02/18/2004 - 00:11

Originally posted by Brock Stapper:
that's right...

but normalization can be quite useful in certain instances...

... In other words, you just need the material louder... ...as long as average peaks are pretty consistent.

Normalization can certainly have its uses...

But the original message is complaining of wandering levels as the reason for using Normalization in the first place.

Bill

