Okay, I apologize in advance for what is likely to be a very long-winded post, and one that will probably spark some replies. Recently, one of my engineers brought a project to me that was so riddled with phasing issues, it was virtually unlistenable. When I asked him what happened and questioned his mic placement technique, I ultimately determined that his phasing issues came down to a common misconception about phase correction in typical DAW editors.
His problem was that, when he decided to correct for phase issues in a multi-mic orchestral recording set-up, he zoomed in REAL close on the start of the recording in his editor window, found similar-appearing samples of the waves, and aligned them vertically. While this sounds like a completely logical approach to many people, there are a few problems with the concept behind it.
First problem: The picture of the wave being represented to you has nothing to do with frequency. It is an amplitude wave only. If you line up, to the best of your ability, the amplitude of the sound, the frequencies contained within can still be out of phase. By doing this, all you are doing is ensuring that sounds of similar volume reach the microphones at "pretty close" to the same time. However, because of the distance between mics, you could have an equally strong signal at both mics that is completely out of phase.
In an amplitude sine wave, if you have a positive peak and a negative peak occurring at the same time - you don't get cancellation - you get reinforcement. If you have two zero crossings at the same time, you have a null in sound. Whereas in frequency sine waves, if you have a positive and negative peak at the same time OR zero crossings at the same time, then you have cancellation.
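If you want to see that cancellation in numbers, here's a quick sketch (just my own illustration in Python with numpy, not anything out of a DAW): delay a sine wave by half its period and sum it with the original, and the mix nulls out almost completely.

```python
import numpy as np

fs = 44100                          # sample rate (Hz)
f = 152.0                           # test frequency (Hz)
t = np.arange(fs) / fs              # one second of time stamps

direct = np.sin(2 * np.pi * f * t)

# A delay of half the period puts the copy 180 degrees out of phase.
delay = 1.0 / (2.0 * f)
delayed = np.sin(2 * np.pi * f * (t - delay))

mixed = direct + delayed
print(f"direct peak: {np.abs(direct).max():.3f}")   # ~1.000
print(f"mixed peak:  {np.abs(mixed).max():.6f}")    # effectively zero
```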
Second problem: Most DAW systems aren't truly capable of zooming in with enough accuracy to make the precision *frequency* alignments that would be necessary.
The simplest solutions to phasing problems are:
Solution 1: Know your math -
Sound travels at about 1142 feet per second in a 78-degree open-air environment, or about .88 ms per foot. So, if you have two mics picking up one sound source (in an anechoic chamber *of course*) and one of the mics is 10 feet further away than the other, you can shift one of the tracks by about 9 ms and correct for the phase problems. However, because none of us work in anechoic chambers, you will have to take standing waves and reflections into consideration.
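If it helps, that arithmetic is easy to script. Here's a minimal sketch (Python; the constant and function names are my own, and the speed-of-sound figure is the one quoted above):

```python
SPEED_OF_SOUND_FT_PER_S = 1142.0    # ~78-degree open air, per the figure above

def path_delay(extra_feet, sample_rate=44100):
    """Delay caused by an extra path length, as (milliseconds, samples)."""
    seconds = extra_feet / SPEED_OF_SOUND_FT_PER_S
    return seconds * 1000.0, seconds * sample_rate

ms, samples = path_delay(10.0)
print(f"10 ft farther: {ms:.2f} ms, about {samples:.0f} samples at 44.1 kHz")
# -> 10 ft farther: 8.76 ms, about 386 samples at 44.1 kHz
```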
Of course, minimizing reflections is always the best bet, but standing waves are hard to combat. The best way to figure out the problem frequencies for standing waves is, again, simple math. Take the measurements of the room you will be recording in. For example, one of the concert/recital halls I just recorded in was approximately 60 feet long. The prime frequency that will resonate at 60 feet is 19 Hz. (You can determine wavelength by dividing the velocity of sound - 1142 feet per second - by your frequency.) Obviously, 19 Hz won't be a big problem since it won't be reproduced, but 38 Hz, 76 Hz, 152 Hz, 304 Hz, etc. will all be affected (less so the further you move away from your prime frequency).

By measuring the distance between the performer and the two or more mics recording this source, you should try to avoid placing mics on portions of this wavelength that will cause problems. An example: for 152 Hz - an affected, and very noticeable, frequency - your wavelength will be about 7.5 feet. Place one mic at 7.5 feet from your performer and the other at 3.75 feet, and the phase at that frequency will be completely opposite. Moreover, when the frequency is reflected back to the mics, it will again influence anything registering at 152 Hz and all of its multiples.
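To save yourself the repeated division, a little script can list the problem multiples for any room dimension and tell you the mic spacing that lands exactly half a wavelength apart (again Python; both helpers are my own naming, using the same 1142 ft/s figure):

```python
SPEED_OF_SOUND_FT_PER_S = 1142.0

def problem_frequencies(dimension_ft, up_to_hz=400.0):
    """Multiples of the room's prime frequency (wavelength = dimension)."""
    prime = SPEED_OF_SOUND_FT_PER_S / dimension_ft
    freqs = []
    f = prime
    while f <= up_to_hz:
        freqs.append(round(f, 1))
        f += prime
    return freqs

def half_wavelength_ft(freq_hz):
    """Mic spacing that puts two mics completely out of phase at freq_hz."""
    return SPEED_OF_SOUND_FT_PER_S / freq_hz / 2.0

print(problem_frequencies(60.0))   # [19.0, 38.1, 57.1, 76.1, ...]
print(half_wavelength_ft(152.0))   # ~3.76 ft - the spacing to avoid
```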
So, I guess that brings me to solution 2: Mic placement. Following the guidelines above as much as possible, find the best spots for your microphones. Take your time and do it right - some of these issues cannot be fixed once they're recorded.
By now, I'm sure a lot of you are realizing that there are a lot of frequencies and that there is no way to protect against cancellation in all of them. That's true, but if you are able to follow the rules stated above for all of your room's dimensions (which should be easy for those of you who record in the same studio space all the time), you will seriously minimize the possibility of nasty phase problems.
Now, that brings me to solution 3: Use your ears! Very few of us own the machinery and tools required to analyze phase across the entire spectrum, and fewer still know how to use it correctly. So...listen. If you hear a severe lack of bass, or muddy/cupped or scooped mids, or it sounds like your high frequencies are being filtered through fan blades, there are phasing issues. Try to isolate what region they are occurring in and take appropriate measures, whether with mic placement or by adjusting your waveforms in your favorite editor. Don't get carried away, though, and begin adjusting your waves too much. Remember - 1 foot = .88 milliseconds, or about 39 samples in a 44.1 kHz track (a single sample is only about .02 ms).
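And if you do end up nudging a track by hand, the shift itself is simple once you know the sample count. A rough sketch (Python/numpy; it assumes you've already loaded the track into an array, which is my assumption, not any particular editor's feature):

```python
import numpy as np

def shift_track(track, offset_samples):
    """Delay (positive offset) or advance (negative offset) a track by whole
    samples, zero-padding so the overall length stays the same."""
    if offset_samples > 0:
        return np.concatenate([np.zeros(offset_samples), track[:-offset_samples]])
    if offset_samples < 0:
        return np.concatenate([track[-offset_samples:], np.zeros(-offset_samples)])
    return track

# One foot of extra distance at 44.1 kHz is roughly 39 samples:
offset = round(1.0 / 1142.0 * 44100)     # -> 39
demo = np.arange(6.0)
print(shift_track(demo, 2))              # [0. 0. 0. 1. 2. 3.]
```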
A quick note - try not to use headphones to detect phasing issues. In a discrete stereo recording, phasing issues can be inter-channel. If your right ear is not hearing what your left ear is hearing, then you won't hear the phasing issues. Use monitors whenever possible for adjusting phase.
Sorry for the lengthy post, but phasing is important and relatively easy to understand.
Thanks,
Jeremy 8-)
Comments
I'm totally at a loss as to why people destroy time alignment by manually adjusting tracks like this. My reasoning is simple: in 100% isolated recordings - with no spill or contamination - the actual shifting makes no real difference, but as soon as the thing you are moving contains another source, even at low level, or the actual room sound, you destroy things. We talk about the difference between, say, X/Y and ORTF, and it's the time element that matters. What's the reason for trying to align everything? I'm lost.
Lee and J-3:
Ultimately, by visually lining up the waves represented in your DAW window, you are going in the right direction; my original point is that it can't be the only means taken. However, since the kick has a very strong fundamental pitch, its waveform is easily represented by the DAW software. Whether or not the DAW draws the overtones within the wave, you will get a cleaner attack if these samples are aligned. When you are dealing with overtone-rich sounds, where the fundamental and the harmonics are much closer in amplitude, that is when you will run into more problems. In the case of many percussion instruments, the fundamental is far louder than the overtones (particularly at its transient peak).
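For what it's worth, if you'd rather not eyeball the transient at all, cross-correlation - not something from my original post, just another option - will estimate the sample offset between two mics' takes of the same hit. A rough numpy sketch:

```python
import numpy as np

def best_offset(reference, other):
    """Estimate the sample offset of `other` relative to `reference` by
    cross-correlation. Positive means `other` lags the reference."""
    corr = np.correlate(other, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy check: the second "take" is the first delayed by 5 samples.
ref = np.sin(2 * np.pi * 60.0 * np.arange(2000) / 44100.0)
lagged = np.concatenate([np.zeros(5), ref[:-5]])
print(best_offset(ref, lagged))   # -> 5
```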
Thanks,
J...