Calculating Delay in Spot Microphones

Discussion in 'Location Recording' started by JimboJ, Oct 3, 2005.

  1. JimboJ

    JimboJ Active Member

    Is there some formula to calculate the adjustment of delay necessary between the main pair and spot microphones, i.e. x milliseconds of delay for y distance?

    I believe the other technique is to clap hands in front of the spot microphone and use the wave editor to line up the spike between the mains and the spot. Is this correct?

    Of course, the third method is to do what sounds good, but I’d like to know that there is some method to the madness.

    Thanks for your help.

    -- James
     
  2. zemlin

    zemlin Well-Known Member

    I use method A, B, & C exclusively.

    Time is roughly 1ms/ft.
     
  3. David French

    David French Well-Known Member

    D=RT is the formula. Distance (spot to main) = Rate (about 1130 feet/second) * Time.
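    The D = RT relation is easy to put into a few lines. A minimal sketch in Python (the function name and the 1130 ft/s figure are my own choices, the latter being roughly the speed of sound at room temperature):

```python
# Delay for a spot mic given its distance from the main pair.
# Assumes sound travels ~1130 ft/s (roughly room temperature).

SPEED_OF_SOUND_FT_PER_S = 1130.0

def spot_delay_ms(distance_ft):
    """Delay in milliseconds for a spot mic distance_ft from the mains."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

# A spot 20 ft behind the mains needs roughly 17.7 ms of delay.
print(round(spot_delay_ms(20), 1))
```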

    Yes, some people do use the clap method, but if you choose to use it, make sure you're wearing protection.
     
  4. Cucco

    Cucco Distinguished Member

    Oh Lord!! :roll:
    Either that's an awful joke, or I have an even worse sense of humor... :twisted: (or both)

    The equation is definitely true, but assuming most other things are equal (in other words, you're not atop Pikes Peak in Colorado or down in Death Valley, with extremes of temperature or of elevation relative to sea level), it's typically approximately .88 ms per foot.

    This is mainly a good starting point; you may still have to find a more accurate method. Use this to start, but also use the clap method. This will help you avoid accidentally lining up strong early reflections as the primary sound.
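    For the curious, the temperature dependence behind that .88 figure can be sketched with the standard dry-air approximation c ≈ 331.3 + 0.606·T m/s (T in °C); the helper below is mine, not from any poster:

```python
# How the ms-per-foot figure drifts with temperature, using the common
# dry-air approximation c = 331.3 + 0.606*T m/s (T in degrees Celsius).
# Humidity and altitude shift these values slightly.

FT_PER_M = 3.28084

def ms_per_foot(temp_c):
    c_m_per_s = 331.3 + 0.606 * temp_c   # speed of sound in m/s
    c_ft_per_s = c_m_per_s * FT_PER_M    # convert to ft/s
    return 1000.0 / c_ft_per_s           # milliseconds per foot

for t in (0, 20, 30):
    print(t, round(ms_per_foot(t), 3))
```

    At typical concert-hall temperatures this lands close to the .88-.89 ms/ft quoted above; in cold air it creeps toward .92.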

    --------

    Okay, here's a funny story - (at least to me).

    Back in '96, I took a part-time job at Sears selling computers. During the holiday season, we got ALL sorts of weird requests and questions - such as, "I need a TV for my husband while he's on the toilet..." or "I need one of those new-fangled computers that can play CDs" and so on.

    The most memorable one I ever got was when this DROP DEAD GORGEOUS girl came up to me and said "I need the clap." Stunned, I replied the only way I could -- "Huh????" She said, "I need the clap. Do you have the clap?" Starting to get quite worked up with enjoyment but then realizing her folly, I asked her if she meant the "Clapper."

    She turned about as bright red as anyone I've ever seen.

    Even though I was married, I still got her phone number - just cuz I could! It was totally worth it.

    :D

    J.
     
  5. David French

    David French Well-Known Member

    Well, did you give her the clap or not Jeremy? Don't spare the details! :lol:
     
  6. Sonarerec

    Sonarerec Guest

    The DPA website explains this very well. Go to http://www.dpamicrophones.com and choose Applications>classical orchestra, multimiking.

    Generally, allow 25% more delay than simple measurement would indicate and you'll be fine.

    Rich
     
  7. Cucco

    Cucco Distinguished Member

    I tried and I tried, but not having the clap definitely stood in the way. That didn't stop me from trying over and over and over and over... :twisted:

    I've seen DPA reference this quite a few times, but I have a real problem with the "add 25%" bit.

    First, they make no attempt to explain this scientifically. They state "to maintain instrument's timbre and proper time alignment."

    As for the 25% having anything to do with timbre, I'm clueless. (True, if there's phase cancellation, there will be a weirdness to the sound... but...) Also, science would tell us that this just doesn't make sense -

    Take two waveforms of equal pitch and only minor difference in intensity (per the inverse square law) and delay one 25% beyond the initial delay - you will have a waveform which is out of phase with itself. Not 180 degrees, but enough to cause weird comb filtering when multiple pitches are combined. Then factor in the other troublesome things such as early reflections, and chances are you'll have mush. In some cases, you may actually have early reflections arriving at the main array sooner than the time-aligned channels would respond. (True, not in an ideal scenario, but I could easily see an early reflection from the lip of the stage bouncing up to the main array in that amount of time.)

    This just creates bad imaging.

    The simple math seems to make the most sense.

    If you time align and the sound from the flute hits both microphone arrays (spot and main) at the same time, the illusion of a single array and a cohesive soundstage is retained.

    When, on the flip side of that coin, the sound from the flute hits the spot array and the main array at the same time yet hits the other spot array (clarinets for example) at a different time, we rely on attenuation of the signal from one mic to the other (much like an XY setup) to determine placement or even psychoacoustically "ignore" the errant signal.

    I wish DPA would read their passage and explain. For me, it makes no sense.

    J.
     
  8. ghellquist

    ghellquist Member

    Agreed that it does not make sense.

    Psychoacoustics says that, within a maximum limit, the sound that first reaches the ear will be used to set direction. So I simply add a little bit of delay to the spot mics to be sure that the main stereo pair comes first.

    Gunnar
     
  9. FifthCircle

    FifthCircle Well-Known Member

    I find that I usually take the rough estimate of 1 ms per foot (it really is about .9ms/foot) and then add a couple ms to the time and that seems to work for me pretty well. I think adding 25% could be rather problematic, but adding 5%-10% can work quite well.

    The only exception for that is with a soloist's spot mic. As it is usually right under the mains, I find that an exact delay is needed because you will otherwise get some comb filtering due to the couple ms difference in signals. Not a pretty sound....
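    The comb filtering Ben describes is easy to quantify: mixing a signal with a copy of itself delayed by τ puts cancellation notches at odd multiples of 1/(2τ). A small sketch (the function is hypothetical, and it assumes roughly equal levels in both channels):

```python
# First few comb-filter notch frequencies when a signal is mixed with a
# copy of itself delayed by delay_ms (assuming roughly equal levels).

def comb_notches_hz(delay_ms, count=4):
    tau = delay_ms / 1000.0                    # delay in seconds
    return [(2 * k + 1) / (2 * tau) for k in range(count)]

# A 2 ms misalignment notches the midrange: 250, 750, 1250, 1750 Hz...
print(comb_notches_hz(2.0))
```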

    --Ben
     
  10. Cucco

    Cucco Distinguished Member

    So I assume what you're talking about is taking into account the various early reflections and standing waves, therefore providing some randomness between the 2 signals (main array and spot)?

    This I can buy. 25% seems like an awfully big stretch.

    I'm still curious about the science behind their method.

    I wrote a rather controversial post a while back entitled "Misconceptions about phasing" or something like that, in which I try to dispel the myth that you can simply line up two waveforms to look similar in a DAW window and therefore have spot-on alignment.

    The concept behind my argument was simple - despite the waveform drawings in DAW windows, most are nowhere near accurate enough to actually match up waves at a fine level, and most are based more on a frequency-vs-time plot.

    Of course, there are FFT tools which *will* allow you to do this, but math is still one of the best tools.

    J.
     
  11. larsfarm

    larsfarm Active Member

    Isn't that what they are saying? They just quantify "a little bit" as at least 25% beyond not only the direct path but also the path of the first reflection, assuming the distance between main pair and spot is > 4 m. Within 25%, i.e. nearly aligned waveforms, you will risk comb filtering. ("there will be severe phasing problems if the musicians move while playing.")

    Example at 4 m (their minimum distance), with distance s, velocity v, time t:
    s = vt, so t = s/v = 4 m / 330 m/s ≈ 12 ms; add 25% to get about 15 ms.
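    Lars's arithmetic in runnable form (numbers straight from his example; the 330 m/s and the 25% pad are the values he used):

```python
# Straight-line delay for a 4 m spot-to-main distance at 330 m/s,
# then the DPA-style 25% padding on top.

def padded_delay_ms(distance_m, speed_m_per_s=330.0, pad=0.25):
    base = distance_m / speed_m_per_s * 1000.0   # simple s = v*t delay, in ms
    return base, base * (1.0 + pad)

base, padded = padded_delay_ms(4.0)
print(round(base, 1), round(padded, 1))   # ~12.1 ms, padded to ~15.2 ms
```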

    Lars
     
  12. Cucco

    Cucco Distinguished Member

    Well, 25% isn't "a little bit." It's a lot. A whole friggin lot.

    As for the issue with "moving while playing" - the most animated player I've EVER seen in a seated, orchestral situation was the principal oboe for Baltimore. That guy nearly falls out of his seat.

    In any case, his gyrations and movements are still limited to no more than .5 meters at most in any direction. Bear in mind, this is almost entirely lateral movement. (I've never seen him hop out of his seat yet.) Considering the height of both the primary and spot arrays, I don't see .5 meters of movement causing any problems with comb filtering.

    BTW - a 3 ms shift is a HUGE shift. Try it one time. Take any one of your channels which is correctly time-aligned and shift it 3 ms in either direction. That just sounds friggin weird.

    J.
     
  13. larsfarm

    larsfarm Active Member

    Almost 2', almost 2 ms... Maybe a lot. Maybe not. I just showed that Gunnar's method could well be identical to the DPA method that he claimed doesn't make sense. It's also close to...

    OK, if 3 ms sounds friggin weird, how does 2 ms from random movement sound (see above)?

    Lars
     
  14. Cucco

    Cucco Distinguished Member

    Well, first off, the equation is wrong.

    Again, the measurement is .88 ms per foot, not 1 ms per foot. It's a pretty big difference.

    Second - the movement of a player is lateral. If, by chance, you happened to mic the player at the player's eye level, then this is a problem; however, you are often 8 or more feet higher than the player. Do the math/geometric equation - it doesn't equal 2 feet of difference between the mics. And besides, we still have to think of this as a +/- equation: plus 1 foot from the mic, minus 1 foot from the mic. If an oboe player were to lurch forward 2 feet while playing, I would generally assume his ass would wind up on the floor or he'd poke a violist's eye out with his instrument.
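    The geometry argument is worth a quick check. With the spot mic well above the player, a lateral move changes the source-to-mic path far less than the move itself; the 8 ft height and 2 ft shift below are made-up illustrative values:

```python
import math

# Change in source-to-mic distance when a player directly below the mic
# shifts sideways. Height and offset here are made-up examples.

def path_change_ft(mic_height_ft, lateral_move_ft):
    before = mic_height_ft                              # player directly below
    after = math.hypot(mic_height_ft, lateral_move_ft)  # after the shift
    return after - before

delta_ft = path_change_ft(8.0, 2.0)
delta_ms = delta_ft * 0.88   # ~0.88 ms per foot

# A 2 ft lurch under an 8 ft mic changes the path by only ~0.25 ft (~0.2 ms).
print(round(delta_ft, 2), round(delta_ms, 2))
```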

    J.
     
  15. Sonarerec

    Sonarerec Guest

    I think we should remember a few things as we thrash this. First, the additional 25% of the typical mains-to-spot distance is not usually audible, since the audibility of all this also depends (in my experience) on the level of the spot. Spots are generally low in level. Ben's scenario is still valid because folks nearest the main array will be louder and will need more of the spot to make any difference, i.e. the concertmaster in Ein Heldenleben (or dozens of other examples).

    Let's not forget that the folks at DPA/B&K have technical credentials far beyond anyone in this forum and do not put scientifically invalid data up for the world to scrutinize. The engineers at Danish Radio aren't too dull either. At the very least consider they record orchestra on a weekly if not daily basis.

    The final arbiter is the ears -- simply adjust the delay until it sounds right. If you don't like the 25% guide, do something else!

    Rich
     
  16. Cucco

    Cucco Distinguished Member

    Well, that I can live with, but I still take issue with the 25%.

    That I don't accept. I couldn't care less who they are and what their credentials are; if they don't put some kind of science behind their statement, it could come from Mr. Einstein himself and I'd question it. As for the credentials, I wouldn't assume that no one here can match their scientific credentials. I certainly didn't become a senior scientist at the Pentagon by not having worthy credentials and a demonstrated command of technical knowledge.

    Perhaps. That is, if all else is equal - if the room is a good room, the monitoring chain is excellent, and so on. Otherwise, good solid math still can't be beat. Phasing issues are VERY difficult to spot even on a good setup. What appears as a phasing issue to one listener in a certain location may not be to another, or in a different location.

    In other words, yes, the ears are important, but they cannot overcome physical (as in laws of physics) shortcomings.

    J.
     
  17. FifthCircle

    FifthCircle Well-Known Member

    Another thing is the distances that are involved. 3 ms is a huge amount when you are dealing with spots that are only delayed 12-15 ms or less (i.e. soloists in an orchestral/choral situation, and chamber music), but if you are talking about a large orchestra with choirs, etc., where delays can reach 30 ms or more, 3 ms becomes much less of a big deal.

    I may add 1 or 2 ms to the delay for a mid-range spot (i.e. 12-15 ms), but 3 or 4 at the 30+ ms distances.
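    Ben's rule of thumb, encoded as a sketch (the thresholds and pad values just restate his numbers; the helper itself is hypothetical):

```python
# Measured spot delay plus a distance-dependent safety pad, per the
# rule of thumb above: ~1-2 ms for mid-range spots, ~3-4 ms at 30+ ms.

def padded_spot_delay_ms(measured_ms):
    pad = 3.5 if measured_ms >= 30.0 else 1.5
    return measured_ms + pad

print(padded_spot_delay_ms(13.0), padded_spot_delay_ms(32.0))
```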

    --Ben
     
  18. Adore

    Adore Guest

    In large concert halls, sound reinforcement may be used due to the large distances involved. Up to about 30 ms, a listener does not recognize two identical sounds as separate ones, so it makes sense to slightly delay the sound reinforcement speakers relative to the arrival of the direct sound from the stage.

    Now, regarding mics: they do not behave like our ears, and that 25% does not make sense to me either.
     