
Normalised Loudness

Discussion in 'Composing / Producing / Arranging' started by Sean G, Jul 29, 2015.

  1. Sean G

    Sean G Well-Known Member

    Here is an interesting video by Bob Katz discussing increased loudness, its cause and effects in music production.

    Interesting viewing for those who may not have seen it already. Are we coming to the end of the cycle of normalised loudness?

  2. TomLewis

    TomLewis Active Member

    As brilliant as this guy is, I have an issue with this.

    Bob talks about the accelerants that helped push overcompression, such as CD players in cars and the iPod (which itself has plenty of dynamic range), but where is the accelerant that he seems to think is causing things to return to a world without it? I don't see one. I see no reason why things would change, because producers are going to be just as stupid in 2020 as they were in 2000.

    Also, the terms are confusing. He seems to define "loudness normalization" as simply taking an uncompressed, unlimited recording and making the highest peak 0 dBFS. That seems to be the pipe-dream world he is predicting, but the upcompression of today is anything but that, and it is unclear how he is defining it.

    Not only does music after about 2004 seem unlistenable because the actual music is horrible, it is also fatiguing. You can't spend an evening listening to music any more, especially good older music that has been re-engineered for iTunes, because the original dynamic range has been ruined, clip distortion has been added, and everything sits at the same monotonous level all the time.

    But he's right about the fatigue. Even listening to his example was instantly fatiguing. When I hear music produced like that, even if it might be aesthetically really good, I can't wait for it to be over. This is why SiriusXM will die: no dynamic range, no stereo imaging, no actual high end without mushy cymbals, and no distinct bass in the low end. Where's the remote?
  3. bouldersound

    bouldersound Real guitars are for old people. Well-Known Member

    Loudness normalization matches the overall loudness of all the songs played, not peaks. Peaks are allowed to fall where they may up to a point, at which they are limited by the playback system. There are several different specifications providing different amounts of headroom over the normalized average level to accommodate different categories of program material.
  4. TomLewis

    TomLewis Active Member

    Well, that may be your definition, but you are actually conflating two different things.

    Normalization is technically exactly what I described above: raising the level of all samples in a file by applying one constant gain to every one of them, so that the highest peak, the largest sample, is now at full scale, or 0 dBFS.

    This has nothing at all to do with compression or dynamic range, which is untouched, other than that the digital noise floor is raised by the same amount (which actually preserves the dynamic range and unfortunately can't increase it). It is equivalent to simply turning up the volume. And as a matter of fact, applying a constant gain to each sample is exactly how a digital volume control works.

    Normalization is nothing more than math. Mastering is not math, it is an art form. It is editorial, in that it shapes the quality of the content to a particular end.
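    Under that definition, peak normalization really is just a few lines of arithmetic. Below is a minimal sketch, assuming NumPy float arrays as a stand-in for decoded PCM; `peak_normalize` and the sine test signal are hypothetical, not any particular DAW's implementation.

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = 0.0) -> np.ndarray:
    """Apply one constant gain so the largest peak lands exactly at target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples  # digital silence: nothing to scale
    gain = 10 ** (target_dbfs / 20.0) / peak
    return samples * gain  # same gain everywhere, so dynamics are untouched

# hypothetical quiet signal peaking around -12 dBFS
x = 0.251 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
y = peak_normalize(x)  # now peaks at 1.0, i.e. 0 dBFS
```

    Note the single multiplier: every sample moves by the same ratio, which is exactly why normalization and compression are different animals.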

    Matching the apparent or perceived loudness of multiple tracks (and typically increasing their perceived loudness) by using upward compression, parallel compression, multiband compression and limiting is a mastering process that sadly does tend to limit dynamic range, and does tend to make everything sound as loud as possible and just as loud as everything else. It is completely different from 'normalization' as defined above.

    Using that term to describe what has happened in the loudness wars is a complete misnomer, and only clouds the issue.
  5. bouldersound

    bouldersound Real guitars are for old people. Well-Known Member

    There are other uses of "normalization" besides peak normalization.

    Loudness normalization mostly attenuates highly compressed audio to match the loudness specification. The headroom provided means that only rarely, on a very dynamic mix, does the system have to apply limiting.
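    The behaviour described here (attenuate loud material down to a loudness target, limit only as a rare safety net) might be sketched like this. Real LUFS measurement per ITU-R BS.1770 uses K-weighting and gating; the plain-RMS proxy, the -16 target, and both test signals below are simplifying assumptions for illustration.

```python
import numpy as np

TARGET_LOUDNESS = -16.0  # illustrative streaming-style target, in dB

def rms_dbfs(samples: np.ndarray) -> float:
    """Crude loudness proxy: plain RMS in dBFS (real LUFS adds K-weighting and gating)."""
    return 20.0 * np.log10(np.sqrt(np.mean(samples ** 2)))

def loudness_normalize(samples: np.ndarray, target: float = TARGET_LOUDNESS) -> np.ndarray:
    """Gain the track so its average loudness hits the target; clip only as a last resort."""
    gain_db = target - rms_dbfs(samples)
    out = samples * 10 ** (gain_db / 20.0)
    return np.clip(out, -1.0, 1.0)  # stand-in for the playback system's safety limiter

t = np.linspace(0, 50 * np.pi, 48000)
loud = 0.9 * np.sign(np.sin(t))   # square-ish wave near full scale: high RMS
quiet = 0.1 * np.sin(t)           # low RMS, ordinary crest factor
```

    With the square wave sitting near full scale, the sketch turns it down by roughly 15 dB to reach the target, while the quiet sine simply comes up; the clip stage never engages for either.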
  6. TomLewis

    TomLewis Active Member

    Oh, I'm not trying to troll you on that. I agree, there can be more than one definition for a word. But to have clarity, there should never be. Normalization has a very clear and understandable definition. But yes, I guess there may be more, which only tends to muddy the waters.

    The only thing I was disheartened about in Bob's wonderful piece is that he did not clearly define his terms, and that made it confusing. Well, that and I still don't know why he thinks the pendulum is swinging back.
  7. bouldersound

    bouldersound Real guitars are for old people. Well-Known Member

    Well, if you follow what's going on with LUFS etc. it's not an unfamiliar use of the term.
  8. TomLewis

    TomLewis Active Member

    As a broadcast engineer for decades, directly responsible for designing, installing, and maintaining the systems that regulate commercial and program loudness daily, since before LUFS was even a gleam in anyone's eye, I am quite intimately familiar.

    And yet no one in the professional broadcast audio industry ever refers to it like that.
  9. bouldersound

    bouldersound Real guitars are for old people. Well-Known Member

    It's only recently that I've heard it used for loudness rather than peak level, but it's sort of always had a generic sense. It seems reasonable to "normalize" sources to a target LUFS.
  10. TomLewis

    TomLewis Active Member

    I'm out, but the point I was trying to make is that when a well-defined technical word, such as normalization, which has a very clearly-defined meaning in digital audio, is co-opted for a different meaning, which is what the gentleman in the video has done, it causes much confusion. And the confusion is pretty evident in all of the posts here.

    Part of the reason it has been so easily co-opted is probably because the actual meaning of normalization is not intuitive and the word implies something different than what the technical definition states. IOW, it is a confusing term to use for that definition in the first place. Raising all levels so the highest level is 0 dBFS is what 'normalization' means technically, but that does not 'normalize' loudness, or perceived loudness at all, unless merely by accident. It's a poor definition. The co-opted definitions actually make more sense and seem more logical. Except they aren't. That is not what the term actually means.

    There is no real advantage to normalizing audio in this way, either, because simply raising all levels raises the noise floor by the same amount, so the dynamic range stays the same. It may be helpful for gain staging during production, but not very. It also has the downside of intersample peak distortion.
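    The point that a constant gain leaves dynamic range untouched is easy to check numerically; the -12 dBFS peak and -72 dBFS noise floor below are made-up levels for illustration.

```python
import numpy as np

def dbfs(linear: float) -> float:
    """Convert a linear amplitude to dB relative to full scale."""
    return 20.0 * np.log10(linear)

# hypothetical track: peak at -12 dBFS, noise floor at -72 dBFS
peak = 10 ** (-12 / 20)
noise = 10 ** (-72 / 20)
dr_before = dbfs(peak) - dbfs(noise)          # 60 dB of dynamic range

gain = 10 ** (12 / 20)                        # "normalize": raise the peak to 0 dBFS
dr_after = dbfs(peak * gain) - dbfs(noise * gain)
# the noise floor rose by exactly as much as the peak, so dr_after == dr_before
```

    Both the peak and the noise floor move up by 12 dB, so their difference, i.e. the dynamic range, is unchanged at 60 dB.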

    Adjusting levels between songs or keeping dynamics reasonable by avoiding heavy compression and clipping are very useful goals, even more useful and effective than normalization itself, but actual normalization is something completely different than matching levels or setting a level loud but not overprocessed.

    If I record/print an instrument track that comes in low for whatever reason, I raise its level so the highest peak is at about -3 dBFS. The only real advantages of doing this are that removing DC offset afterwards is accurate (IOW, raise the level first), and that if there are blank areas between groups of notes I can flatten them to keep the noise floor back down where it should be, which improves dynamic range and noise handling just a little. But neither that nor actual normalization changes the actual character of the sound in any way, the way limiting and compression might.
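    That print-level workflow (raise the take so its peak sits around -3 dBFS, then flatten the near-silent gaps) could look roughly like this. The function name, the -60 dBFS gate threshold, and the synthetic "hiss plus note" track are all hypothetical, and the per-sample gate is far cruder than real region-based editing.

```python
import numpy as np

def raise_and_gate(samples: np.ndarray,
                   target_peak_dbfs: float = -3.0,
                   gate_thresh_dbfs: float = -60.0) -> np.ndarray:
    """Raise a low track so its peak sits at the target, then zero near-silent samples."""
    gain = 10 ** (target_peak_dbfs / 20.0) / np.max(np.abs(samples))
    out = samples * gain
    silent = np.abs(out) < 10 ** (gate_thresh_dbfs / 20.0)
    out[silent] = 0.0  # crude per-sample gate; real editing flattens whole regions
    return out

# hypothetical low take: a stretch of faint constant hiss, then a quiet note
track = np.concatenate([np.full(500, 1e-5),
                        0.1 * np.sin(np.linspace(0.0, np.pi, 500))])
fixed = raise_and_gate(track)
```

    Unlike limiting or compression, none of this touches the character of the sound; it only rescales it and silences what was already effectively silent.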

    Normalization is one thing, and the loudness wars are a separate thing. Normalization as it applies to a process used in digital audio has no relevance to the loudness wars, so it is confusing when someone tries to imply that it might.
  11. DonnyThompson

    DonnyThompson Distinguished Member

    And which is why I've never used it.
    I've always considered normalizing to be this definition, and no other. You can call a pig a cow if you want, but that doesn't mean it's gonna "moo" at you.

    In regard to the other part of this conversation, DR in music is making a comeback - albeit very gradually. But... it has to start with the mixing engineer; if you send mixes out to a mastering engineer with a total DR of 4 dB - beyond destroying the dynamics of the music - you are leaving the M.E. nothing to work with. There's simply no point.

    The last project I produced had an integrated loudness of around -21 LUFS (and a DR of around 15 dB) when sent to mastering. When I got it back, it was -12 LUFS with peaks at -0.5 dBFS. I was satisfied. I don't care if it doesn't crack glass or make people's ears bleed. That wasn't the intention of the album to begin with. If they wanna listen to it VERY LOUD, then let them turn it up on their end. That's what the volume control is for... but unless they add their own processing, it's still gonna have the same DR.

    There are still a few of us who remain proponents for dynamics in music. ;)
  12. DonnyThompson

    DonnyThompson Distinguished Member

    As an addition - more of an afterthought, really - I think it's valid that you do need to consider the market you are mixing and mastering for.

    Obviously, some genres are "hotter" than others. Madonna's last album clocked in at an eye-watering 4 dB DR, but I don't think that stopped it from selling to plenty of fans. OTOH, we all know about the whole Death Magnetic thing - and how there were fans of the band who, after listening, thought there was something actually physically wrong with the CD.

    Mixing and mastering for specific genres does come into play - whether we like it or not. You're not going to mix/master a current pop song the same way you would something closer to the genre where Steely Dan resides. And you're not going to mix or master an album like Aja or The Nightfly the same way you would a recording of The Cleveland Orchestra, either. At this point, it's genre dependent. Many of us may not like that - but it is what it is, and we have to work to the expectations of certain genres.

    I dunno... just thinking out loud I guess, and maybe I'm wrong about all of that. I don't believe that there's anyone here on RO who would think that I was anything other than a proponent for dynamics in music and mixes... perhaps I was just playing devil's advocate for a moment, because I also understand that while audio production is indeed an art form, it's also a business, and if your client is paying you, you need to do what your client wants. You don't have to put your name on it, but if you want to get paid...

    My personal opinion is that the popular trend of pushing volumes to extremely hot levels - while at the same time decreasing the dynamic range of the music - came about largely as a result of CD players in cars.
    Road noise, rolled-down windows, blasting air conditioners or heaters... all of these things interfered with the listening experience and forced people to turn their car audio systems up in an effort to hear the softer parts over the extraneous noise of a moving car...

    M.E.'s started upping the overall volume, and at the same time, decreasing the dynamic range.

    The problem is that when listening to this same music in a "normal" environment - somewhere like a living room or a bedroom, where you can listen without having to filter through all that extraneous noise - the dynamics are completely wiped out, everything sits at the same perceived volume level, and in extreme cases (but it has happened!) RMS and peak levels become indistinguishable... and that's where the beauty, the depth, the ethereal, all gets lost.

    IMHO of course.
  13. miyaru

    miyaru Active Member

    First of all, I don't want to pretend to know it all, but for the music I record and master, I leave plenty of headroom. I refuse to join the bandwagon of loud mastering. Maybe the things I produce in my little home studio won't make it to the radio stations, but I don't care - I don't do it for a living.

    I remain true to myself, and don't compromise what I am playing, composing, recording, mixing and mastering. That's the benefit of doing it all in your spare time, and not as an occupation.
