
Novice mix engineer here.

I want to separate instruments in the frequency spectrum, so that I have a 'good' EQ in the mix.

At the frequency boundaries of each instrument, should I leave a short frequency band that acts as 'headroom' or an empty zone? If I let one instrument's frequencies spill into another instrument's range, will this cause phase cancellation or some other side effect? For example, a kick drum might sound flat in the bass range, but I might want to add a little punch in the low-mid band, which is already occupied by another instrument.

Am I on the right track?

BTW, I have not pre-mixed or produced a final mixdown yet; I'm just building up a methodology, so I know what I am doing.

Comments

GZsound Tue, 06/14/2011 - 23:29

I think you are looking at it wrong. You don't want to put each instrument in its own frequency range, and in fact, as you said, you may not want to or be able to do that.

Use EQ, panning and effects as a way to place each instrument in its own sonic space. For example, two guitars might occupy similar frequency ranges, so you can pan them apart to make them two distinct instruments.

Your kick drum may occupy the same frequency range as your bass, but you can increase certain frequencies on the drum and cut them on the bass guitar to open up some space.

I do mostly acoustic music and frequently record big boomy Martin guitars that make the sound muddy because they cloud the bass.

I normally brick-wall the guitar at 150 Hz and start rolling off the acoustic bass above 250 Hz. The mud goes away. Do the same thing with the kick and bass. If you want the bass drum to have more bottom end, increase those frequencies on the drum while rolling off the bass guitar. But also cut around the bass drum so the fundamental frequency you want to hear is there, but not a lot of other frequencies that could cloud other instruments.
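If it helps to see that carving written out, here is a rough sketch in Python with SciPy. The 150 Hz and 250 Hz corners are the ones mentioned above; the filter order and sample rate are just placeholder assumptions, not a recommendation.

```python
# Rough sketch of the complementary filtering described above, using
# SciPy Butterworth filters. The 150 Hz / 250 Hz corners come from the
# post; the 4th-order slopes and 44.1 kHz rate are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100  # sample rate (assumed)

# High-pass the guitar at 150 Hz: remove its low end so it stops
# crowding the bass.
hp_sos = butter(4, 150, btype="highpass", fs=fs, output="sos")

# Low-pass ("roll off") the acoustic bass above 250 Hz so it keeps the
# bottom but stays out of the guitar's range.
lp_sos = butter(4, 250, btype="lowpass", fs=fs, output="sos")

def carve(guitar: np.ndarray, bass: np.ndarray):
    """Return the filtered guitar and bass tracks."""
    return sosfilt(hp_sos, guitar), sosfilt(lp_sos, bass)
```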

Experiment and spend a thousand hours trying.

Mo Facta Wed, 06/15/2011 - 03:48

GZsound is on the right track and has given you some good advice. To that I would like to add the following.

First, there are no blanket remedies for anything in audio, so the concept of "separating" instruments in the stereo mix is misinformed and mostly counterintuitive. Actually, if you think about it, a large part of mixing music is making all the parts of a mix fit together in some sort of musical cohesion.

The example GZsound gave above of the frequency conflict between the kick drum and bass is indeed a common problem, but it also happens to be one that is difficult to remedy with EQ and other studio trickery if the original sounds were recorded that way. In other words, you cannot add what was never there in the first place. Granted, EQ can sometimes help the situation, but whenever possible, the best remedy is to simply record the bass again.

To answer your question regarding phase cancellation, the answer is no, not exactly. What you are dealing with is the masking effect (see the Wikipedia article on auditory masking), and it has to do with how the ear and brain process sounds in similar frequency ranges and at similar levels.

I am a firm believer that all mix-related problems can be resolved with the right considerations in tracking. The ideal mix, after all, is one where you merely push up the faders and are already 99% there sonically. Although, if you're solely a mix engineer and have no control over what people send you, you're kind of stuck between a rock and a hard place and left with no choice but to pull as many tricks out of your hat as you've got. This is where the tools of the craft come in, and they serve two purposes:

To correct a problem
To supply an effect

That's it, with the former mostly transmuting to "To ease a problem" in real life.

If I may offer some more practical advice to help you make the best and most informed decisions, ones your mixes will thank you for:

1. Buy the best DA converter you can afford (like a Benchmark DAC-1).
2. Buy the best monitors you can afford and stick with them until you know them inside out.
3. Research acoustics and maybe even get professional advice on how to best utilize your workspace/control room/bedroom and acoustically treat it to be as spectrally flat as possible.
4. Mix. Mix. Mix. Ad infinitum.

Only once you know that what you're hearing from your speakers is completely true can you even remotely start to make decisions that will affect the mix in a positive way. I can also guarantee you that taking these steps will make your journey as a mix engineer as smooth as possible.

Hope that helps.

Cheers :)

audiokid Fri, 06/17/2011 - 22:40

Man, great topics these last few days. Such great advice.

The OP touches on something I'm very interested in. He's actually hitting on something that people are doing with summing amps: grouping/sending stems that share similar tonalities (or L/M/R) to an analog summing amp.
I've posted this a few times, sorry... however, every time we discuss this, I personally pick up more info.

Fab Dupont explains this better than I ever could, and I would like to know why it works in theory. After watching this video, what do you all think about this? What do they group?

I've wondered whether this would be a benefit without the analog summing amp as well. I think this is where the OP is going.

nolimore Sat, 06/18/2011 - 00:08

Thanks, everyone, for your responses; it's informative to hear how others EQ their mixes.

I just downloaded Blue Cat's FreqAnalyst (free, bundled with other plugins). It allows me to monitor the bandwidth of multiple tracks visually, so that I can see where they overlap each other.

Here is the info page:

[="http://www.bluecataudio.com/Products/Bundle_FreqAnalystPack/"]Blue Cat's FreqAnalyst Pack - Real Time Spectrum and Frequency Analysis Plug-ins Bundle (VST, DX, AU, RTAS)[/]="http://www.bluecata…"]Blue Cat's FreqAnalyst Pack - Real Time Spectrum and Frequency Analysis Plug-ins Bundle (VST, DX, AU, RTAS)[/]

Download bundle link (free):

[[url=http://="http://www.bluecata…"]Blue Cat's Freeware Plug-ins Pack - Download Freeware Audio Plugins (VST, RTAS, Audio Unit, DirectX) (Freeware)[/]="http://www.bluecata…"]Blue Cat's Freeware Plug-ins Pack - Download Freeware Audio Plugins (VST, RTAS, Audio Unit, DirectX) (Freeware)[/]

audiokid Sat, 06/18/2011 - 11:42

Re FreqAnalyst,

I have tried using a spectrum analyzer to help graph out live rooms back in the day, but I have never used this sort of thing for mixing tracks. I can see it for pinging out a room, for sure.

For mixing, I just hear it and do it. Maybe that's a downfall and something I'm missing?
There are so many tricks, I'm guessing, that are beyond a simple man's way of mixing.
I would like to know how you will use this, or whether others use a FreqAnalyst in mixing.

nolimore Sun, 06/19/2011 - 01:12

audiokid,

In short, this is a learning curve for me. I respect the experience of the posters here; this is why I asked.

I have little experience with EQ in the mix, but I try to approach it logically and reasonably.

For me, FreqAnalyst and EQ are coupled. FreqAnalyst's visual output is a frequency spectrum map, so I can navigate to areas that might be in contention while listening at the same time. Get the right sound on the pre-mix (overdubbing), and then on the final, get the global mix and adjust (boost/cut).

It's nice to have a frequency spectrum analyser to monitor this visually.
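Roughly, the idea I have in mind, sketched in Python (this is not FreqAnalyst itself, just the overlap-checking concept; load_track and the file names are hypothetical):

```python
# Illustration of spotting spectral overlap between two tracks with a
# plain FFT: compare how much energy each track puts into a given band.
import numpy as np

def band_energy(signal: np.ndarray, fs: int, f_lo: float, f_hi: float) -> float:
    """Sum of spectral magnitude between f_lo and f_hi (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(spectrum[mask].sum())

# e.g. compare how much energy a kick and a bass track each put into
# the 60-250 Hz "bass" band before reaching for the EQ.
# kick, bass = load_track("kick.wav"), load_track("bass.wav")  # hypothetical loader
# print(band_energy(kick, 44100, 60, 250), band_energy(bass, 44100, 60, 250))
```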

My monitor system (speakers and headphones) differs from the end user's in frequency response and dB loudness, so the master copies might not sound as intended on different systems. The frequency analyzer might be useful in mastering, e.g. adjusting for a standard, end-user-compatible product. I have not got that far yet!

I am using this as a general guide for a frequency map (see the sketch after the list):

  • Sub-Bass - The very low bass between 16 and 60 Hz, which encompasses sounds that are often felt more than heard, such as thunder in the distance. These frequencies give the music a sense of power even if they occur infrequently. Too much emphasis on this range makes the music sound muddy.
  • Bass - The bass between 60 and 250 Hz contains the fundamental notes of the rhythm section, so EQing this range can change the musical balance, making it fat or thin. Too much boost in this range can make the music sound boomy.
  • Low Mids - The midrange between 250 and 2000 Hz contains the low-order harmonics of most musical instruments and can introduce a telephone-like quality to the music if boosted too much. Boosting the 500 to 1000 Hz octave makes the instruments sound horn-like, while boosting the 1 to 2 kHz octave makes them sound tinny. Excess output in this range can cause listening fatigue.
  • High Mids - The upper midrange between 2 and 4 kHz can mask the important speech-recognition sounds if boosted, introducing a lisping quality into a voice and making sounds formed with the lips such as "m," "b," and "v" indistinguishable. Too much boost in this range, especially at 3 kHz, can also cause listening fatigue. Dipping the 3 kHz range on instrument backgrounds and slightly peaking 3 kHz on vocals can make the vocals audible without having to decrease the instrumental level in mixes where the voice would otherwise seem buried.
  • Presence - The presence range between 4 and 6 kHz is responsible for the clarity and definition of voices and instruments. Boosting this range can make the music seem closer to the listener. Reducing the 5 kHz content of a mix makes the sound more distant and transparent.
  • Brilliance - The 6 to 16 kHz range controls the brilliance and clarity of sounds. Too much emphasis in this range, however, can produce sibilance on the vocals.
If you get what I mean.

Guitarfreak Sun, 06/19/2011 - 09:21

It is good that you are approaching this with an open mind and some logical reasoning, but there is nothing logical about mixing audio. If you read my writeup, which I posted a few posts back, I touch on this idea because I used to do it myself. If you have a problem with a track and want to see where the problem is, then an analyzer can help you. If you simply call up the analyzer screen with the only intention of "hmm, let's see how I can make this better," then run, go home and don't look back: audio is not for you. :D

Joking, of course... but if you fall into category number two of what I just stated, then you are indeed approaching it wrong, and I will explain why. Instruments have certain harmonic structures, which is to say that certain frequencies/partials or harmonics are accented when a fundamental note is played. The difference gives each instrument its own unique sound and timbre. So what? If you go and boost the strong ones (like, let's make this piano sound more like a piano to make it jump out of the mix), then you will get an awkward sound, because you are boosting frequencies which are already strong to begin with. Now, if you go and cut those frequencies, you are reducing the punch of the instrument. Logical? I think not! The same thing goes for all of the other harmonics/partials. They all fall into place because some all-powerful instrument-creating being wants them there. Bottom line: since I started EQing VERY sparingly, my mixes have become much better. When I do have to EQ, I always do it while listening to the entire mix and not the track soloed.
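To put some rough numbers to that harmonic-structure idea (the 110 Hz fundamental and eight partials are just an example I picked, nothing special):

```python
# The partials of a note sit at integer multiples of the fundamental,
# and an instrument's timbre is largely which of those partials are
# strong. A2 = 110 Hz and eight partials are arbitrary example values.
fundamental = 110.0  # A2, in Hz

partials = [fundamental * n for n in range(1, 9)]
# -> [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]

# Boosting an EQ band that lands on an already-strong partial
# exaggerates what the instrument is already doing, which is the point
# above about awkward-sounding boosts.
print(partials)
```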

I will leave you with one final thought... Parallel EQ.

...that is all.

nolimore Sun, 06/19/2011 - 14:12

Thanks for the Parallel EQ suggestion. I was always thinking in series. The link on gain structure was interesting too.

So multiple copies of the signal are EQ-filtered in parallel, then joined back into one processed signal and put through a serial EQ filter at the output. Each band is contained in its own parallel filter, but does the harmonic series get cut? Each band is distinctly contained.
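Something like this, if I sketch it in Python (the band splits, gains and sample rate are placeholder assumptions on my part, not anything recommended above):

```python
# Minimal sketch of the parallel-EQ idea as described in this thread:
# split the signal into bands, filter each copy in parallel, scale each
# band, and sum them back together.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100                                       # sample rate (assumed)
bands = [(20, 250), (250, 2000), (2000, 8000)]   # example band splits
gains = [1.0, 0.8, 1.2]                          # example per-band gains

def parallel_eq(x: np.ndarray) -> np.ndarray:
    """Sum of band-passed copies of x, each with its own gain."""
    out = np.zeros_like(x, dtype=float)
    for (lo, hi), g in zip(bands, gains):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += g * sosfilt(sos, x)
    return out
```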

I think I've heard parallel EQ used on a track before; very dry, isolated.

