
I keep seeing recommendations on the net for using two mics on a guitar cab at once. One dynamic mic close to the speaker cab while an ambient room mic is 4-8 feet away. When I try this, I run into phasing issues as expected. I suspect there is no way around the phase issue aside from correcting it in the daw, perhaps that is the point?

I am using an SM7B + M-Audio Solaris on a Mesa cab.

Edit for more relevant information:

The ambient mic is about 6 ft from the cab, and 3 ft off the floor

The SM7B is about 7 inches from the cone and 3 ft off the floor as the cab is raised to reduce reflections from the floor

bouldersound Thu, 01/16/2014 - 21:24

audiokid, post: 409741 wrote: I had to look for a tutorial on this :

It takes little more time to do it by hand, eye and ear, and you get to be in control of how it sounds.

audiokid, post: 409741 wrote: This is fun :)

Could he have made it any more confusing? It doesn't help that he makes no distinction between phase and polarity.

kmetal Fri, 01/17/2014 - 01:46

Come on man, the whole idea of multi-micing is fullness, which means space. Space takes time and cancellation to develop when used with a close mic. Please take this as friendly disagreement, but on a kit it's mic'd up close too. Why is there a need to artificially introduce phase effects that aren't consistent with the original micing?

Just wondering, aren't the phase relationships themselves what create the sense of dimension and space?

Naturally occurring OH/room mics are behind (in time), if you dare to look at them.

What is wrong with this? Electronic stuff aside, I'm simply talking about drum kits and the rest.

I appreciate your thoughts.

rectifryer Fri, 01/17/2014 - 05:08

The sense of space isn't from comb filtering due to phasing effects between two mics. The sense of space comes from picking up the reverb (which does have its own comb filtering, but that is not the focus here) in the ambient mic.

Thus, I advance the signal on the ambient mic with a phase-alignment tool; there's no need to nudge it by hand every take, but go ahead if you want to waste the time, IMO. Maybe you have an easier DAW to use than mine (Reaper).
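
For anyone who wants to put a number on that advance, here's a minimal sketch, assuming the distances from the original post, a 48 kHz session, and roughly 1130 ft/s for the speed of sound (all of those are assumptions; the result is a starting point for the ears, not a rule):

```python
# Rough time/sample offset between a close mic and an ambient mic,
# using the distances from the original post. Numbers are illustrative only.

SPEED_OF_SOUND_FT_S = 1130.0   # approximate speed of sound in air at room temperature
SAMPLE_RATE = 48000            # assumed session sample rate

close_mic_ft = 7.0 / 12.0      # SM7B about 7 inches from the cone
ambient_mic_ft = 6.0           # room mic about 6 feet from the cab

path_difference_ft = ambient_mic_ft - close_mic_ft
delay_seconds = path_difference_ft / SPEED_OF_SOUND_FT_S
delay_samples = delay_seconds * SAMPLE_RATE

print(f"Path difference: {path_difference_ft:.2f} ft")
print(f"Ambient mic lags by ~{delay_seconds * 1000:.2f} ms (~{delay_samples:.0f} samples)")
# Advancing (sliding earlier) the ambient track by roughly this many samples
# lines its direct-sound arrival up with the close mic.
```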

anonymous Fri, 01/17/2014 - 05:09

rectifryer, post: 409729 wrote: I keep seeing recommendations on the net for using two mics on a guitar cab at once. One dynamic mic close to the speaker cab while an ambient room mic is 4-8 feet away. When I try this, I run into phasing issues as expected. I suspect there is no way around the phase issue aside from correcting it in the daw, perhaps that is the point?

I am using an sm7b + m-audio solaris on a mesa cab.

Edit for more relevant information:

The ambient mic is about 6 ft from the cab, and 3 ft off the floor

The sm7b is about 7 inches from the cone and 3 ft off the floor as the cab is raised to reduce reflections from the floor

A few questions:

Are you bussing the two mics to two separate/discrete tracks, or are you combining both to one track?

When you say "phase issues", are you talking about slight-to-noticeable, actually audible phasing... or are you experiencing outright cancellation? You've obviously employed the 3:1 rule, so I can't see where you'd have cancellation issues.

That being said, have you monitored the tracks in mono to listen for cancellation?

And I'm just asking here, I mean no offense nor am I insinuating that you don't know what you are doing... but are you sure you're just not hearing the natural (and expected) tonal differences between the up-close mic and the ambient/room mic? You'll certainly have some delay between the 2 mics, because you've got one direct and one ambient mic, so it's natural for the ambient mic to have some delay in time as well as ambient reflection(s) from the room in relation to the direct/close mic...

I guess what I'm asking is if this is a concern of what you are actually hearing? Or, is it more about what you are seeing on your track's waveform(s)?

rectifryer Fri, 01/17/2014 - 05:13

I would never employ, or attempt to employ, or care to pay attention to the 3:1 rule when ambient micing a single source. There is no ratio that is going to minimize comb filtering from phase effects. You can only shift the nodes of cancellation, which depend on the path-length difference between the mics relative to the speed of sound. That is not what the 3:1 ratio is for.
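
To make the point about shifting cancellation nodes concrete, here's a rough sketch of where the notches land when two copies of the same source are summed with a given path-length difference (an idealized equal-level model; the 5.4 ft figure just reflects the distances mentioned earlier):

```python
# Comb-filter notch frequencies for two summed mics, idealized model.
# With equal levels, notches fall at odd multiples of 1/(2*delay).

SPEED_OF_SOUND_FT_S = 1130.0

def notch_frequencies(path_difference_ft, max_hz=20000):
    """Return the cancellation frequencies (Hz) up to max_hz."""
    delay = path_difference_ft / SPEED_OF_SOUND_FT_S  # seconds
    notches = []
    n = 0
    while True:
        f = (2 * n + 1) / (2 * delay)   # odd multiples of half the inverse delay
        if f > max_hz:
            break
        notches.append(f)
        n += 1
    return notches

# Example: ~5.4 ft path difference (6 ft room mic vs 7 in close mic)
for f in notch_frequencies(5.4)[:5]:
    print(f"notch near {f:.0f} Hz")
# Moving either mic changes the delay and slides every notch, but some
# pattern of notches always remains -- no ratio of distances removes it.
```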

Also, I went with the Voxengo plugin. Sorry for the confusion. Thanks everyone for the help!

bouldersound Fri, 01/17/2014 - 10:52

rectifryer, post: 409758 wrote: Polarity is a specification of phase.

And yet with a complex musical signal you can't treat them as the same or even interchangeable. On a practical level they are drastically different. You can't truly fix a phase problem with a polarity inversion. Any improvement is subjective, not objective. Of course with music the subjective is pretty important.

bouldersound Fri, 01/17/2014 - 10:56

rectifryer, post: 409759 wrote: I would never employ, or attempt to employ, or care to pay attention to the 3:1 rule when ambient micing a single source. There is no ratio that is going to minimize comb filtering from phase effects. You can only shift the nodes of cancellation, which depend on the path-length difference between the mics relative to the speed of sound. That is not what the 3:1 ratio is for.

I'm in total agreement with this. The 3:1 rule of thumb is for multiple sources.

audiokid Fri, 01/17/2014 - 11:19

bouldersound, post: 409749 wrote: It takes little more time to do it by hand, eye and ear, and you get to be in control of how it sounds.

Could he have made it any more confusing? It doesn't help that he makes no distinction between phase and polarity.

lol, but the best part is he did say he isn't a sound engineer, hehe. No mention of checking mixes in mono either.

So many mixes I hear have mild to severe phase issues. I use my ears while moving or completely removing things until it sounds better. Drum tracks seem to be the worst.

Transient smear.

audiokid Fri, 01/17/2014 - 11:42

IMHO, other than MIDI and special effects, repair automation is way overrated to me, especially plug-ins that claim to ride or repair a wave but cannot analyze the entire mix before doing their thing. It's like a Pandora's box. When you put the whole thing together, did it work?

When it comes to phase and repairing, I'd much rather do as much as I can manually, because the same problematic bleed is often affecting more than just the suspect tracks.

But this is a cool plug-in just the same. Thanks for sharing.

rectifryer Fri, 01/17/2014 - 14:54

bouldersound, post: 409766 wrote: And yet with a complex musical signal you can't treat them as the same or even interchangeable. On a practical level they are drastically different. You can't truly fix a phase problem with a polarity inversion. Any improvement is subjective, not objective. Of course with music the subjective is pretty important.

Yes, I concur with your point. You're saying that distinction wasn't explained. Fair enough.

Thanks everyone for all the feedback!

Paul999 Sun, 01/19/2014 - 08:53

There is a lot of good info here on multiple mics. My thought on this was that the original intent was to make the guitar sound bigger. My general rule is to use as few mics as possible. One mic can get a guitar plenty big. As soon as you add the room mic, you are not only getting into phase issues, which make it sound smaller, you are also pushing it back in the sonic spectrum.

MadMax Sun, 01/19/2014 - 09:09

rectifryer, post: 409735 wrote: I am certainly not criticizing it; it's just tedious work, that's all :D

BINGO!

THIS is the whole gig of mixing... in a nutshell...

Any two signals will interact in some fashion... that's what we do...

When signals interact, it's a heterodyning operation. Sometimes its results are constructive, sometimes destructive.

Destructive is not always a bad thing. It CAN be used in your favor... e.g. you can invert and add two signals to eliminate a common portion of a complex signal.
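
As a toy illustration of that invert-and-add idea (made-up signals, nobody's actual workflow): two tracks share a common component, and flipping one before summing leaves only what differs between them.

```python
import numpy as np

# Two hypothetical signals that share a common 100 Hz component.
sr = 48000
t = np.arange(sr) / sr
common = np.sin(2 * np.pi * 100 * t)            # shared portion
only_in_a = 0.3 * np.sin(2 * np.pi * 440 * t)   # unique to signal A
only_in_b = 0.3 * np.sin(2 * np.pi * 660 * t)   # unique to signal B

sig_a = common + only_in_a
sig_b = common + only_in_b

# Invert B and add: the common portion cancels, the unique parts remain.
difference = sig_a + (-1.0 * sig_b)

print("RMS of plain sum:       ", np.sqrt(np.mean((sig_a + sig_b) ** 2)))
print("RMS after invert-and-add:", np.sqrt(np.mean(difference ** 2)))
# The second number is much smaller because the shared 100 Hz tone is gone.
```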

Constructive is not always a good thing... as in, you can get too much of a good thing.

It's your job, as an engineer, to know what signal (or part of one) will balance the other tonalities you have in that signal.

And don't forget that panning knobs are available to enhance, or to de-emphasize phase relationships.

Boswell Sun, 01/19/2014 - 10:14

Agreed.

However, you only get heterodyning when signals are multiplied or otherwise interact in a non-linear way, and not when the process is purely additive, as it is when mixing the signals from multiple microphones. Because of differing phases, the addition may result in a reduction of amplitude, so that can be thought of as subtractive.

Although heterodyning produces sum and difference frequencies, these are not to be confused with the beat frequencies that appear when two similar waveforms at slightly different frequencies are linearly added. I'm sure Jack could give us the detail of using beats when tuning piano strings.
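
For anyone who wants to see that distinction on paper, a small sketch: linearly adding two sines at slightly different frequencies creates no new sum or difference tones, just an amplitude envelope that beats at the difference frequency (the 440/444 Hz pair is an arbitrary example).

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f1, f2 = 440.0, 444.0                      # 4 Hz apart -> 4 beats per second

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Product-to-sum identity: the same mix equals a 442 Hz tone whose amplitude
# is modulated by a slow cosine at half the difference frequency. The ear
# hears the magnitude of that envelope, so beats occur at the full |f1 - f2|.
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f1 - f2) / 2 * t)

print("Max difference between the two forms:",
      np.max(np.abs(mix - envelope * carrier)))   # ~0 (floating point)
print("Audible beat rate:", abs(f1 - f2), "per second")
```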

audiokid Sun, 01/19/2014 - 11:51

Boswell, post: 409814 wrote: Agreed.

However, you only get heterodyning when signals are multiplied or otherwise interact in a non-linear way, and not when the process is purely additive, as it is when mixing the signals from multiple microphones. Because of differing phases, the addition may result in a reduction of amplitude, so that can be thought of as subtractive.

Although heterodyning produces sum and difference frequencies, these are not to be confused with the beat frequencies that appear when two similar waveforms at slightly different frequencies are linearly added. I'm sure Jack could give us the detail of using beats when tuning piano strings.

Bos, you always make me think! Thus the beauty of analog, in a more musical and interesting way. And the beauty of a well-tuned piano: each note so imperfect next to the next, yet a universe of harmonic movement, perfectly pleasing in comparison to a perfect sine wave. Did that hit home? hehe. Sorry!

There used to be a time when I thought digital imaging was a good comparison to digital audio, but I think human hearing is far more acute, or sensitive to change, than sight. Thoughts? I'm going off on a tangent...

TheJackAttack Sun, 01/19/2014 - 20:41

On my phone, but I could get into the beat explanation if needed. The human ear can reasonably decipher up to around 14 beats per second (bps). These beats are caused by the sounding partials nearly lining up. In equal temperament, a major third beats at around 7 bps in the third octave of the piano. To new tuners it is often described as a purr.
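
For a back-of-envelope check of that 7 bps figure (assuming A4 = 440 Hz, ideal harmonic partials, and ignoring real-world piano inharmonicity; F3-A3 is just a convenient example third), the beating comes from the 5th partial of the lower note nearly coinciding with the 4th partial of the upper note:

```python
# Beat rate of an equal-tempered major third (F3-A3) from the nearly
# coinciding partials: 5th of the lower note vs 4th of the upper note.

A4 = 440.0

def et_freq(semitones_from_a4):
    """Equal-tempered frequency a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

f_lower = et_freq(-16)   # F3, 16 semitones below A4
f_upper = et_freq(-12)   # A3, one octave below A4

partial_lower = 5 * f_lower   # 5th partial of F3
partial_upper = 4 * f_upper   # 4th partial of A3

print(f"F3 = {f_lower:.2f} Hz, A3 = {f_upper:.2f} Hz")
print(f"Coinciding partials: {partial_lower:.2f} Hz vs {partial_upper:.2f} Hz")
print(f"Beat rate ~= {abs(partial_lower - partial_upper):.1f} beats per second")
```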

rectifryer Mon, 01/20/2014 - 14:11

I understand beats as literally just being the difference in hertz between two sounds. Sometimes they are perceivable when the notes are close enough. I know you all probably realize this, because everyone here has tuned a guitar before or played two semitones together.

I am not sure how that plays into phase issues, but I am sure it could. If you have a standing wave, then yes, that could sound like a beat, but I am not sure that is really the right term for this?

Davedog Mon, 01/20/2014 - 14:21

Man you guys are cool. I wanna come mix at all your rooms.

The relationship of phase within a single source and two points of capture could fill a big book of numbers and formulae. THE device for this at capture is the Little Labs IBP. No, it won't completely eliminate all the anomalies, but I'm with Paul and Max on this. Sometimes the second mic meant to enhance and increase size simply doesn't. Like Max said, that's not always bad at mix. This is where doing the 'tedious footwork' becomes an ally in achieving the sound you're looking for. Like Chris said, 'fixing it in the DAW doesn't always present the best solution'.....(paraphrase)

Since your title indicates that this "Always" occurs, it seems to me the simple solution is to rethink your micing positions and your NEED for these. I don't always get the BIGGEST sound from a guitar by using multiple mics. I get a BIG sound from a big-sounding tone and relative volume from the source, and control of the environment around it. Sometimes, goboing off a loud amp (I am assuming it's loud because of your sig and the Mesa reference) can build in a room size larger than an actual large space can achieve. I find a LOT of clarity in NOT exciting various nodes and phase anomalies in a room. All of these additional reflections IN TIME with your source can tend to detract from the assumed size of the source. And they are shizitts to mix around. I also find that a clean and clear signal to any number of mics will increase the ability to size the sound at mix. If you add a bunch of harmonic content NOT ASSOCIATED with the original source, i.e. phase, comb filtering, room nodes, you are shooting yourself in the foot from the git-go. If you try and convince me that a loud and harmonically rich source like a Mesa guitar track doesn't create these situations, I can only say Bull-o-knee.

MadMax Mon, 01/20/2014 - 17:36

Davedog, post: 409842 wrote: Man you guys are cool. I wanna come mix at all your rooms.

Well... c'mon.. you know where to find me...

Davedog, post: 409842 wrote: I find a LOT of clarity in NOT exciting various nodes and phase anomalies in a room. All of these additional reflections IN TIME with your source can tend to detract from the assumed size of the source. And they are shizitts to mix around. I also find that a clean and clear signal to any number of mics will increase the ability to size the sound at mix.

DING! DING! DING!


We have a WINNER!!

Paul999 Mon, 01/20/2014 - 18:48

SOMETIMES, and I do mean sometimes, if I want a bigger guitar sound I'll split the signal and mic a couple of amps in different iso rooms, mic'd up close to the amps. You still get phase issues, but because you are using two independent systems it does get rid of SOME issues. Mic placement helps. Slipping it in the DAW helps further.

Alternatively I'll reamp the guitar in a couple different systems.

ALWAYS check that the tone is bigger. Take a single mic and then A/B it against the double mic'd set up and be sure it actually sounds bigger.

rectifryer Mon, 01/20/2014 - 19:45

Paul, that is exactly how I feel about this so far. If I use two dynamics in phase, the sound is really full. It's a much more appealing image in the mix than if I use just a single mic. However, if I use an "ambient" mic and bump the track in time with the dynamic mic, it still isn't that great.

I literally sold my condenser (M-Audio Solaris) this week and bought a Royer R-121. I simply have not found a use for a condenser so far beyond overheads on drums, as I don't record acoustic instruments often. We will see how this goes. I might have to buy a nicer preamp now, ha.

kmetal Sat, 07/05/2014 - 23:44

Well, sort of, but not really. In a DAW, if you zoom in really close to something, you will see whether or not the first mic is in phase: when it starts out, does it start with a positive excursion, i.e. going up, not down? Kick drum is a great example. If your close mic starts with a negative excursion, your speaker will suck in at first when the kick hits, instead of moving out like it should.

So now if you do this, then add another mic, and get them in phase with each other, you have two mics in phase with each other, but still out of phase with the signal. Add a bass that's phased properly to this, assuming it's in time, and you now have a kick telling the speaker to go in and a bass saying go out, at the same time. That is a recipe for a mess.
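
If you'd rather sanity-check that in software than by eyeballing the waveform, here's a rough sketch; it assumes the soundfile package is available and uses a placeholder filename and a crude onset threshold, so treat it as a starting point only.

```python
import numpy as np
import soundfile as sf   # assumption: the soundfile package is installed

# Rough polarity check on a kick close-mic track: find the first sample that
# clearly rises above the noise floor and see whether it goes positive or
# negative first. The file name and threshold are placeholders.

audio, sr = sf.read("kick_close.wav")
if audio.ndim > 1:
    audio = audio[:, 0]                       # use the first channel if stereo

threshold = 0.1 * np.max(np.abs(audio))       # crude "start of the hit" threshold
onset = np.argmax(np.abs(audio) > threshold)  # index of the first strong sample

if audio[onset] > 0:
    print("First strong excursion is positive: speaker pushes out on the hit.")
else:
    print("First strong excursion is negative: consider flipping polarity.")
```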

Another good reason to get phase correct right away: why would you want to track, or have someone track, a whole part listening to a weak, possibly in- or out-of-phase signal? Lol, I'm the type of dude to be like "oh, I gotta remember to put the trim plugin on and check phase," and then completely forget.

Also, phase flip is cool, I use it all the time, but what about when your signals aren't completely 180 degrees out of phase? The chances of that are about as good as two mics just randomly being 100% in phase. That's where moving a mic around a little bit can make significant differences.
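
If you want to measure how far apart the two tracks actually are in time before reaching for either button, here's a minimal cross-correlation sketch (the function name and the synthetic 5 ms example are illustrative, not anyone's actual session):

```python
import numpy as np

def best_alignment_offset(close_mic, room_mic, sr):
    """Estimate how many samples (and ms) the room mic lags the close mic,
    using simple cross-correlation. Positive = room mic arrives later."""
    n = min(len(close_mic), len(room_mic))
    a = close_mic[:n] - np.mean(close_mic[:n])
    b = room_mic[:n] - np.mean(room_mic[:n])
    corr = np.correlate(b, a, mode="full")    # full cross-correlation
    lag = np.argmax(corr) - (n - 1)           # lag of the best match
    return lag, 1000.0 * lag / sr

# Illustrative use with a synthetic 5 ms "room mic" delay:
sr = 48000
rng = np.random.default_rng(0)
src = rng.standard_normal(sr // 4)                   # quarter second of noise
delay = int(0.005 * sr)
room = np.concatenate([np.zeros(delay), src])[:len(src)]

lag, ms = best_alignment_offset(src, room, sr)
print(f"Room mic lags by {lag} samples (~{ms:.2f} ms)")
```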

I think the general consensus is to try to get it right at the source. If you're truly unsure, especially with drastic EQ or compression settings, then sure, tame it down and use additional processing. But something technical like phase coherency really is part of the micing stage of a recording; it's certainly something I find annoying, so I wanna get it done and forget about it. Just my opinion.

paulears Sun, 07/06/2014 - 04:09

Fads come and they go. Getting them right needs experience (or luck). The notion nowadays that there is a kind of prescription for doing things just doesn't work because nobody can say "move the mic two inches to the left and it will improve". Each circumstance is different.

I love the modern approach to that little button with a circle with a line through it! For years it was always described in the manuals as 'phase', now we talk polarity - because pressing the button simply swapped pins 2 and 3. It never changed phase, because to change phase involves a shift in time, not electrical polarity. Nowadays we can shift in time quite simply, but most people still prod a real or virtual button. I've always implemented processes when I need them, not as a matter of course. If I need to use two mics on a cab, and it sounds wrong, then I'll fix it if I can, but if it doesn't work, I scrap it and do something else - not spend ages faffing around trying to make it work. Sometimes, it will work, sometimes it won't. Evaluate, react, move on!
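
Paul's point that the button changes polarity, not phase, is easy to demonstrate; here's a toy sketch (the 3 ms shift and the two-harmonic test signal are arbitrary assumptions):

```python
import numpy as np

# Polarity inversion vs. an actual time shift on the same signal.
# A polarity flip negates every sample; a delay moves the whole waveform in
# time. They only coincide for a pure tone shifted by exactly half its period.

sr = 48000
t = np.arange(sr // 10) / sr
# A toy signal with a couple of harmonics (arbitrary choice).
sig = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)

flipped = -sig                                     # what the polarity button does
shift = int(0.003 * sr)                            # a 3 ms time shift, arbitrary
delayed = np.concatenate([np.zeros(shift), sig])[:len(sig)]

print("flip vs delay differ by (max):", np.max(np.abs(flipped - delayed)))
# Non-zero: flipping polarity did not recreate the time-shifted signal, so a
# polarity button cannot undo a genuine time offset between two mics.
```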

anonymous Sun, 07/06/2014 - 05:49

LOL.. I remember those days... While Pin 2 hot is the most widely accepted standard now (even the AES now dictates this as the "standard"), there was a time when the hot and cold pins on XLR cables varied.

And then, to add to the confusion, there were some cables that considered the actual location of the pins on the connector as being different. Cannon first came out with the XLR Pin 2 hot configuration, but when they sold the design to Switchcraft in the early 60's, Switchcraft decided to name the pins differently as far as the numbers went. So, pin 2 was still hot, but Switchcraft labeled pin 2 as pin 3. o_O LOL

Oh yeah, it was a special time to be a studio or live sound engineer. Up until the late 80's, we were still coming across Pin 3 hot XLR cables very often.

There's even some gear that was wired Pin 3 hot - and I'm not talking about some obscure Soviet-made limiter or EQ, either. For example, the original Eventide H3000 had pin 3 hot, and they made them that way right up to around 1994. While the newer H3000 reissue models did swap the pins to the standard Pin 2 hot, there are still more than a few original models floating around with Pin 3 hot.

My brain hurts. :confused:

d/

RemyRAD Sun, 07/06/2014 - 08:18

I am a big polarity freak. Always have been. Always will be.

When it comes to actual phase manipulation, ain't nothin' better than software. Where you zoom all the way in down to the sample and line up your peaks. And go for the positive modulation. Not the negative modulation.

Though I think I have discovered some things about observing the waveforms in software? While we all want to see the peaks first going positive, before they go negative, I'm actually starting to believe that the DAW software is actually inverting the waveform? I've heard my polarity go positive, with the peak going negative first in the software display. And I know what I'm hearing. And I'm not hearing that peak go negative. When inverted to go positive, I'm not getting the same punch. So the display is misleading. With all of the software. Yet I can get no one to speak intelligently about this? I know when my woofer is punching at me. And that's not what the display is showing.

Some of this centers around AM radio in the US. Back in the day, most folks thought that amplitude modulation could only accomplish ± 100% modulation. But it's not restricted to that. Originally, one was restricted to 100% negative modulation. While at the same time, they could go to 200% positive modulation. A few years later, this was reduced to 125% positive modulation and 100% negative modulation. Which is where it stands today. And even at 200% positive modulation with 100% negative modulation, it wasn't distorted sounding. So I believe we are seeing this backwards in our DAW software? Because I know what I'm hearing. I've been there. I've been around the block a few times. And I know when my monitors have the right polarity as I have been a specialist in that. I have corrected more than one half-dozen other control rooms that thought they were wired to their monitor speakers correctly. They weren't. I left them scratching their heads. And the polarity through the consoles isn't much different.

So what I'm hearing and what I'm seeing in the waveform in the software don't match. I know I'm not wrong. Which is another reason why I have serviced so many other studios. Which I've been doing in the Baltimore/Washington DC area for the better part of over 30 years. And these weren't your average basement studios, either. They were the real deals. Real commercial studios. Substantial studios. Top shelf studios. I've also been in the manufacturing business of pro audio. And not everything is right in Denmark. Other than the Danes themselves.

Bottom line is, you can't believe all that you see or all that you read. Sometimes... you've just got to listen.
Mx. Remy Ann David

paulears Sun, 07/06/2014 - 11:27

I've heard this before. Some people can detect polarity, which is a pretty impressive phenomenon! A friend of mine plays trumpet and he often moans that many trumpet recordings sound wrong. I guess that the first blast from the horn always lowers the pressure at the player's ears first, while from the front it rises first. Much as I've always been a non-believer, there is some physics behind it, it seems. Weird!

audiokid Sun, 07/06/2014 - 11:56

RemyRAD, post: 416753, member: 26269 wrote:

Bottom line is, you can't believe all that you see or all that you read. Sometimes... you've just got to listen.
Mx. Remy Ann David

I tend to agree on this one. I think plug-ins do something weird like this. I don't know what it is, but I hear an effect that accelerates smear and phase. I think it's the bit quantization of transients that fall between the stereo image, introduced by an effect on a bus or on the master bus. Something is shifting, and as track count increases, so do the cumulative aliasing distortions and smear. 20 years ago we had crosstalk, but it didn't shift the audio; today we have this.

I'm tending to discover mono, mono, mono, and somewhere towards the very end of the mix is where I step on the stereo gas. I recently discovered something in my mixing process that I am really excited about.

Boswell Sun, 07/06/2014 - 12:48

Some manufacturers are very lax about phase preservation through their gear. This can range from stupidity in simple audio interfaces (microphone and line inputs in-phase but DI input appears inverted at the output) to large-format mixers that invert if you use the insert loop. Inversion can usually be cured in a balanced system simply by specifying the output XLR connector pins 2 and 3 to be laid out the other way round.

In the contract design work I do, I make sure that phase preservation is specifically stated in the contract even if the equipment company had not thought to put it in there. One of the other equipment design consultants I know told me that one of his careful designs was butchered by the production department of the particular company he was working for in order to save an op-amp per channel. The result was not only that the sonic character of the device changed audibly over the range of gains, but it produced an overall phase inversion. He protested both about the sonics and the inversion, but they didn't correct it because they didn't understand the need for phase preservation. Their justification was that the frequency response amplitude curves didn't show a problem, so they left it out of phase.
