
Hi,

I have ETF 5 and I want to make a waterfall plot of my low-frequency response. I am pretty sure I have done it right, but I just want to know how you do it, as I am not convinced.

I set up the hardware as the help file says, then ran 'normal transfer function', selected 'Low Frequency' and 'MLS', and clicked 'Start Test'. That all works, but for one thing I thought MLS was meant to be lots of small sound blips? It sounds just the same as white noise to me! If the gaps are so small I can't even hear them, then surely that isn't enough time to measure the decay rate? I suppose it could measure the decay after the sound stops at the end; is this what it does?

I can then make it show a waterfall plot of the test, and it looks alright (very good, I think!), but I just want to know if I am doing it right.

Also, when I do a frequency response test it shows some large variations (you can see them on the waterfall plot). I set my system up using CoolEdit to generate white noise and analyse the results, and it does not show such large variations! It doesn't sound like it has the variations either, so I am unconvinced of ETF's frequency response as well.

Am I doing it wrong? Why does CoolEdit show such different results? ETF's results look more like CoolEdit's when I have not averaged the results over the whole white noise recording sample. Is the waterfall okay? Is it a good result?

Thanks!

Comments

Tenson Tue, 09/27/2005 - 15:52

Okay thanks.

Could you give a more detailed explanation of how it measures decay/RT60?

I have as much room treatment as possible coupled with some corrective EQ. Should I adjust my EQ so I have a flat response in ETF then, rather than in CoolEdit? It does sound very flat, I'm pretty sure I could hear if there were 10dB suck outs at those points.

How does that waterfall compare to others? I seem to remember reading that most pro studios aim for between 300ms and 500ms RT60 and you can expect a slightly longer decay rate in the LF. In this case I feel very good about most of my LF decay being below 400ms!

proudtower Wed, 09/28/2005 - 03:21

MLS is a pulse train. It is a series of impulses generated according to a mathematical sequence. As the impulses are distributed in time according to this sequence, the computer "knows" where to expect the impulses. So background noise will fall in between the expected pulses and is calculated out. This way MLS can achieve a very high signal to noise ratio.
And David is right as usual; a pulse train sounds like white noise.
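
A minimal sketch of this in Python (NumPy/SciPy assumed; illustrative only, not anything from ETF itself): it shows what an MLS is and why uncorrelated background noise drops out of the result, since the sequence's circular autocorrelation is essentially a single spike.

```python
import numpy as np
from scipy.signal import max_len_seq

nbits = 14                            # sequence length N = 2**nbits - 1 samples
seq, _ = max_len_seq(nbits)           # 0/1 maximum length sequence
mls = 2.0 * seq - 1.0                 # map to +/-1 so it plays back as "noise"

# Circular autocorrelation: N at lag 0 and -1 at every other lag, i.e.
# impulse-like, so energy correlated with the sequence stacks up while
# background noise (uncorrelated with it) averages toward zero.
acf = np.fft.ifft(np.abs(np.fft.fft(mls)) ** 2).real
print(acf[0], acf[1:5])               # ~16383 followed by values near -1
```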

Your waterfall looks fine to me. Maybe you could try to measure on a couple of positions around the listening spot, see if you can get rid of the dip between 80 - 120 Hz.

RT-60 is very hard, or even impossible, to measure in a small room with lots of treatment.
Acoustics consist of the direct sound from the speaker, early reflections from boundaries, and reverb, meaning a random sound field where the energy arrives without any directional information.
In a small room with treatment this reverberant soundfield is attenuated by the treatment, so it is hardly measurable.
Frequency characteristic and waterfall are more important, imo.

anonymous Wed, 09/28/2005 - 13:47

proudtower wrote: ...

RT-60 is very hard, or even impossible, to measure in a small room with lots of treatment.
Acoustics consist of the direct sound from the speaker, early reflections from boundaries, and reverb, meaning a random sound field where the energy arrives without any directional information.
In a small room with treatment this reverberant soundfield is attenuated by the treatment, so it is hardly measurable.
Frequency characteristic and waterfall are more important, imo.

Yup!

ETF has parameters to window the RT60 calcs from a portion of the Schroeder plot curve in the EnergyTimeCurve display. This allows extrapolation to make a guess on the RT60 time.

It works pretty well :)
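
For the curious, here is a rough sketch of that kind of Schroeder-curve windowing in Python (NumPy assumed). It is not ETF's code; the -5 dB to -25 dB fit window is just an illustrative choice standing in for ETF's adjustable parameters.

```python
import numpy as np

def rt60_from_ir(ir, fs, fit_range=(-5.0, -25.0)):
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]                # Schroeder backward integral
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-30)
    t = np.arange(len(ir)) / fs

    hi, lo = fit_range
    mask = (edc_db <= hi) & (edc_db >= lo)             # window on the decay curve
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second
    return -60.0 / slope                               # extrapolate to -60 dB

# Example with a synthetic exponential decay (RT60 of ~0.4 s):
fs = 48000
t = np.arange(int(0.8 * fs)) / fs
ir = np.random.randn(len(t)) * 10 ** (-3.0 * t / 0.4)  # energy hits -60 dB at 0.4 s
print(rt60_from_ir(ir, fs))                            # ~0.4
```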

Paul

Tenson Tue, 12/13/2005 - 07:30

Okay, so with MLS the computer knows when the next impulse is coming, but surely the gaps in between the impulses are not long enough to measure the reflections from walls? Yes, it can get the impulse and gate off the rest, but this isn't what ETF shows; it shows the room reflections after the impulse as well, and if the gaps between them are so small they are inaudible, I don't see how it does this.

For example, the TACT room correction unit for Hi-Fi uses impulses but leaves about 4 seconds between each one so it can hear the reflections separately.

Thanks

Rod Gervais Tue, 12/13/2005 - 12:48

Tenson wrote: Okay, so with MLS the computer knows when the next impulse is coming, but surely the gaps in between the impulses are not long enough to measure the reflections from walls? Yes, it can get the impulse and gate off the rest, but this isn't what ETF shows; it shows the room reflections after the impulse as well, and if the gaps between them are so small they are inaudible, I don't see how it does this.

For example, the TACT room correction unit for Hi-Fi uses impulses but leaves about 4 seconds between each one so it can hear the reflections separately.

Thanks

Tenson,

I don't know the size of your room - but I doubt it's large enough to ever develop a reverberant field.

When a space is excited (acoustically) through the use of a loudspeaker (as an example) there will be a localized sound field (L) from that device for those in close proximity to it – in other words – you can clearly identify the source directionally. If you have a steady sound source (such as a sine wave), as the listener moves farther away from the source, the direct sound level will drop, but the reverberant level will remain steady. The distance at which the two sound levels are equal is referred to as the "Critical Distance" (Dc).

At roughly 3 times the Critical Distance the sound from the original source is almost completely masked by the reverberant field, to the point that it is just about impossible to identify the origin of the source.

In small rooms (defined as less than 12,000 cu. ft. by the ITU) you never develop a reverberant field - so what you're measuring isn't the decay rate of reverb - but rather the decay rate of modal and non-modal activities taking place within your room. You can never escape from the direct sound to the point of masking - a requirement of a reverberant field.
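
To put a number on the critical-distance idea, a small illustrative calculation (the 0.057 constant is the commonly used metric-units estimate; the room volume and RT60 below are made-up values, not measurements):

```python
import math

Q = 1.0          # directivity factor of the source (1 = omnidirectional)
V = 60.0         # room volume in cubic metres (assumed)
RT60 = 0.4       # reverberation time in seconds (assumed)

# Common estimate: Dc ~= 0.057 * sqrt(Q * V / RT60)  (result in metres)
Dc = 0.057 * math.sqrt(Q * V / RT60)
print(f"critical distance ~ {Dc:.2f} m")   # ~0.70 m for these numbers
```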

Now - that having been said - the delay between the pulses is irrelevant - because this is not going to tell you anything about your room.

For example - in a room with 9' ceilings - that's 14'-6" wide and 21' long - with a single layer of 1/2" gyp-board for walls and ceilings - concrete floor - and no treatments, your reverb times would calculate as follows:

125 Hz - 0.49 seconds
250 Hz - 1.38 seconds
500 Hz - 2.52 seconds
1 kHz - 3.06 seconds
2 kHz - 1.86 seconds
4 kHz - 1.43 seconds

This is based on Sabine's calculation, with absorption coefficients of 0.01 for the concrete and 0.29 for the drywall.
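
A quick sketch of that Sabine arithmetic in Python, using the imperial form RT60 = 0.049 * V / A (V in cubic feet, A = total absorption in sabins). The surface breakdown is my own reading of the room described above; with the quoted coefficients it lands on the 0.49 s figure, which suggests those are the 125 Hz values.

```python
room_l, room_w, room_h = 21.0, 14.5, 9.0          # feet
V = room_l * room_w * room_h                      # ~2740 ft^3

floor   = room_l * room_w                         # concrete, alpha = 0.01
ceiling = room_l * room_w                         # drywall,  alpha = 0.29
walls   = 2 * room_h * (room_l + room_w)          # drywall,  alpha = 0.29

A = floor * 0.01 + (ceiling + walls) * 0.29       # total absorption, sabins
rt60 = 0.049 * V / A
print(f"RT60 at 125 Hz ~ {rt60:.2f} s")           # ~0.49 s, matching the table
```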

As you can see- if the program had to wait for the true decay in order to perform the calculations - then it would take a very VERY long space between pulses to get it done.

However - with the measurement time adjusted to long (5 seconds) the measurements continue for 5 seconds after the signal stops.

This (the longer record time) is important for a number of reasons (one is noted above) - the most important being that you can get a transient spike as the signal stops - and you don't want to include that as part of the analysis of your decay time. To meet ASTM standards, you do not use the first 100 to 300 ms after the transmission signal stops.

As far as MLS goes - just to take it a little further... MLS is an abbreviation for Maximum Length Sequence. It is basically a pseudo-random sequence of white noise pulses. It utilizes the measurement of white noise transmissions, and transforms this data into equivalent logarithmic sound levels equating to human hearing - in the case of ETF, through the use of the Fourier Transform.

So it sounds (to you) like white noise because it is white noise.
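
A tiny check of that claim (Python/NumPy/SciPy assumed, purely illustrative): the power spectrum of an MLS is essentially flat, which is exactly what white noise looks like.

```python
import numpy as np
from scipy.signal import max_len_seq

mls = 2.0 * max_len_seq(12)[0] - 1.0           # +/-1 sequence, N = 4095 samples
spectrum = np.abs(np.fft.rfft(mls)) ** 2       # power per frequency bin
print(spectrum[1:].min(), spectrum[1:].max())  # nearly identical across all bins
```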

The program simultaneously interprets the different frequencies through time slices....... which you can see in a Low Frequency Response Chart........ a very valuable tool with ETF........

BTW - I wouldn't average LF measurements if I was looking for accuracy - you want to see each frequency's activity - not an average of them........

I do not know how CoolEdit performs its tasks - so I cannot comment on any variation - other than the comment above.

A question for you - how many readings are you taking?

Sincerely,

Rod

Tenson Tue, 12/13/2005 - 15:35

Rod thanks for your reply.

One thing stands out in your reply...

with the measurement time adjusted to long (5 seconds) the measurements continue for 5 seconds after the signal stops

My copy of ETF doesn’t! As soon as the signal stops, so does the measurement. It will display the results as soon as the test signal stops playing.

That is why I am so confused by it, because despite this, it can produce an impulse response from what seems to be a steady-state tone with no measurements taken after the signal.

It will produce an impulse response something like this

At around the 2ms point on that graph there is an obvious reflection arriving, but in reality what it recorded is not an impulse; it was basically white noise (with a non-random sequence, but still..), and displayed in a graph of magnitude vs. time it would look just like white noise does in an audio editor. Now the boundary reflections may be buried in there somewhere, but they would be covered up by the predominant direct sound. Still, ETF seems able to display an impulse response like the one pictured here!?

Another thing that seemed odd in your reply is that you say you can’t use 100-300ms after the signal stops but surely this is the most important part of the data for an impulse response, as that is the time frame in which most direct reflections occur.

Rod Gervais Tue, 12/13/2005 - 17:26

Tenson wrote: My copy of ETF doesn’t! As soon as the signal stops, so does the measurement. It will display the results as soon as the test signal stops playing.

That is why I am so confused by it, because despite this, it can produce an impulse response from what seems to be a steady-state tone with no measurements taken after the signal.

Tenson,

First - I apologize - I never actually watched this take place - and made an incorrect logical assumption - thus - it does what you say it does (after watching it closely).

At around the 2ms point on that graph there is an obvious reflection arriving, but in reality what it recorded is not an impulse; it was basically white noise (with a non-random sequence, but still..), and displayed in a graph of magnitude vs. time it would look just like white noise does in an audio editor. Now the boundary reflections may be buried in there somewhere, but they would be covered up by the predominant direct sound. Still, ETF seems able to display an impulse response like the one pictured here!?

OK - I am going to verify this with Doug tomorrow - however - taking another leap of faith here for a moment - I see no difficulty in any of this.

Suppose for a moment that I write a program that uses an algorithm to produce pseudo-random bursts of sound.

How difficult would it be for me to read back the data gathered and remove that which I produced - leaving only background sound and gear sound?

Then - seeing as you must generate tests to account for your sound card/computer - how difficult is it to remove those qualities as well?

Not hard at all from my perspective - what remains is the room. If I begin tracking the 1st reflection from each tone generated, I can determine the exact time for that reflection - and I can then also mask out those remaining 1st reflections which would affect my results - same goes for the 2nd reflections. To simplify this, suppose I choose to only deal with the 1st pulse... seeing as I know exactly when I sent the same pulse the 2nd time, I can easily track all pulses to that point - which are room - and then mask out that exactly timed sequence for the 2nd pulse with its reflections, leaving only the room - etc, etc, etc.

It doesn't seem that deep to me.........

Now as far as my comment on the first 100-300 ms of recording after stopping a pulse... if you're concerned with RT60, it doesn't matter which 60 dB you view decaying. This is from ASTM Designation: C 423 - 02a, Standard Test Method for Sound Absorption and Sound Absorption Coefficients by the Reverberation Room Method

(I highlighted my concern in bold for your ease of review)

10. Procedure for Measuring Decay Rate

10.4.1 Turn on the test signal until the sound pressure level in each measurement band is steady (see 4.1).

10.4.2 Turn off the test signal and start measuring sound pressure level in each measurement band either immediately or after a delay in range of 100 to 300 ms (see Fig. 1). (Data collected before the first 100 to 300 ms have elapsed may be viewed or retained for informational purposes, but these data are not used in the calculation of decay curves.)

NOTE 5—The delay time period in the range of 100 to 300 ms ensures that data collected for decay rate calculation include no distortions or transients caused by turning off the test signal. Viewing the decays on an oscilloscope, computer screen or paper chart can help avoid a number of problems, such as those related to transients.

10.4.3 Measure and store the sound pressure level in each measurement band every Δt seconds (see 3.3.1) until the level is about 32 dB below the steady state level (see 7.5).

10.4.4 Store the measured levels and repeat this procedure the number of times required by 10.2.
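
A rough sketch of what that procedure amounts to in code (not from the standard or from ETF; `level_db` and `fs` are assumed inputs, a band level in dB sampled from the instant the signal is switched off):

```python
import numpy as np

def decay_rate_after_shutoff(level_db, fs, skip_ms=200, stop_db=-32.0):
    t = np.arange(len(level_db)) / fs
    steady = level_db[0]                               # steady-state level at shutoff
    rel = level_db - steady
    use = (t >= skip_ms / 1000.0) & (rel >= stop_db)   # skip 100-300 ms, stop near -32 dB
    slope, _ = np.polyfit(t[use], rel[use], 1)         # dB per second (negative)
    return -slope, -60.0 / slope                       # decay rate, implied RT60
```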

Tenson Wed, 12/14/2005 - 05:15

Rod Gervais wrote:
Suppose for a moment that I write a program that use an algorithm to produce psuedo-random bursts of sound.

How difficult would it be for me to read back the data gathered and remove that which I produced - leaving only background sound and gear sound?

Then - seeing as you must generate tests to account for your sound card/computer - how difficult is it to remove those qualities as well?

Not hard at all from my perspective - what remains is the room

Okay, I did consider this but I thought that as what comes out of the speakers is not the same as what the program outputted (frequency response, distortion and phase are all very different), I didn't think it would be able to simply remove it anymore. **Certainly I would think no pattern recognition system would work well enough to have a decent noise floor. This leaves simply removing the part of the recording at each pulse all together. As the majority of the recording is these pulses, I wouldn't have thought that leaves very much data about the room and its reflections... Does it?

I'm not saying you are wrong. The same thought occurred to me, but it just seems like it would be a bit unreliable. Still it obviously does it somehow!

What would happen if a reflection arrived at the same time as the impulse that had to be 'cut out'? It would be missed out all together.

** If I get a bit of music, record it by microphone from my speaker, and use it as a sample of 'noise' in an audio editor and try to remove it from the original signal... it doesn't remove the original very well!

Rod Gervais Wed, 12/14/2005 - 06:25

Tenson wrote: ** If I get a bit of music, record it by microphone from my speaker, and use it as a sample of 'noise' in an audio editor and try to remove it from the original signal... it doesn't remove the original very well!

Simon,

You're trying to do this manually - and my doing it as part of an algorithm in a computer program are two entirely different things.

I don't care what you work in - doing it manually is impossible.

Have you ever watched a cop show - and seen where they mask out all sorts of background noise to clearly hear things that are going on in a recording? The possibilities with computer programming are absolutely amazing.

As I said yesterday, without my releasing any information Doug might give me that's proprietary - I will relay to you later today what we discuss regarding this matter.

Rod

Tenson Wed, 12/14/2005 - 09:23

Rod Gervais wrote: [quote=Tenson] ** If I get a bit of music, record it by microphone from my speaker, and use it as a sample of 'noise' in an audio editor and try to remove it from the original signal... it doesn't remove the original very well!

Simon,

You're trying to do this manually - and my doing it as part of an algorithm in a computer program are two entirely different things.

I don't care what you work in - doing it manually is impossible.

Have you ever watched a cop show - and seen where they mask out all sorts of background noise to clearly hear things that are going on in a recording? The possibilities with computer programming are absolutely amazing.

As I said yesterday, without my releasing any information Doug might give me that's proprietary - I will relay to you later today what we discuss regarding this matter.

Rod

I don't really know a lot about computer programming, but just to clarify on that point... The method I was referring to is where you provide the program with a sample of the noise you want to remove; it then analyses it and removes patterns that match. In what way is this different? The computer knows what the original was, but it is not the same once it has gone through the speakers and been recorded back in. The only thing it knows is where the sound will be in time and that it will resemble the original signal it sent out.

I look forward to hearing what Doug has to say. I’m only asking out of pure interest, but I just hate not knowing how things work ;)

Thanks

Rod Gervais Tue, 12/20/2005 - 10:26

Simon,

I apologize for this taking so long - but I have been very busy.

I finally had a chance to speak with Doug today - and confirmed that he is doing this the way I described to you.

This is a very accurate method of performing the task at hand.

I hope that all this helps you to understand.

Sincerely,

Rod

Tenson Wed, 12/21/2005 - 11:19

Hi Rod,

Thanks for that. Can you confirm whether the program removes the 'pattern' of the pulse from the waveform or whether it completely cuts out the waveform at the time it knows the pulse will be? i.e. if a reflection arrived at the exact same time as a pulse was sent out, would it be removed from the recording along with the pulse, or would just the pulse be removed?

I did a little reading of the ETF website myself and noted that they use a Hadamard transform to convert the MLS recording into an impulse response.

Any chance you could explain how a Hadamard transform works with regard to getting an impulse response from MLS? I found a few sites that do, but they all just give the maths of it and I am not a mathematical person! I found a few sites that explained Fourier transforms in easy-enough-to-understand English, though.

I actually found quite an interesting article about how MLS can be used for sonar with this method. A clever idea I thought!

Thanks a bundle,
Simon

Rod Gervais Wed, 12/21/2005 - 12:22

Tenson wrote: Hi Rod,

Thanks for that. Can you confirm whether the program removes the 'pattern' of the pulse from the waveform or whether it completely cuts out the waveform at the time it knows the pulse will be? i.e. if a reflection arrived at the exact same time as a pulse was sent out, would it be removed from the recording along with the pulse, or would just the pulse be removed?

I did a little reading of the ETF website myself and noted that they use a Hadamard transform to convert the MLS recording into an impulse response.

Any chance you could explain how a Hadamard transform works with regard to getting an impulse response from MLS? I found a few sites that do, but they all just give the maths of it and I am not a mathematical person! I found a few sites that explained Fourier transforms in easy-enough-to-understand English, though.

Simon,

Let's see if this helps,

When the signal is MLS, the deconvolution of the Impulse Response can be made with the well-known Fast Hadamard Transform (FHT), as originally suggested by Alrutz [2], and clearly explained by Ando [3] and Chu [4].

The process is very fast, because the transformation happens "in place", and requires only additions and subtractions.

The computations are done in floating point math, and the computed IR is then converted to 16-bit integer format by scaling to the maximum value.

This means that the absolute amplitude information is lost.

Furthermore, since the sampling process is completely asynchronous with respect to the signal generation, no "absolute zero time" exists, and the computed IR is circularly folded along the time axis.

It is then unfolded, placing the maximum value always at the sample number 1500.

The sound board used in your computer is stereo, so the two channels can be used for working around these limits: one of the input channels is wired directly to the signal output, while the second channel captures the microphone signal.

In this way, the delta function (deconvolved from the first channel) is placed at sample no. 1500, with full-scale amplitude, and the system's response appears on the other channel, with lower amplitude and a correct delay.

In the case of different measurements to be compared, seeing as the excitation signal is always the same, the relative amplitude and delay between the system responses can be correctly maintained.

This workaround sacrifices one of the two channels, but in most cases the absolute amplitude and delay are meaningless, and thus both the input channels can be used for capturing microphone signals, making it possible, for example, to measure binaural impulse responses simultaneously.
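
For anyone wanting to see the shape of this, here is a hedged sketch in Python (NumPy/SciPy assumed) that reaches the same end result with a circular cross-correlation via the FFT instead of the Fast Hadamard Transform (the FHT is just a faster route to the same impulse response, up to scaling). The variable names and the commented-out recording inputs are illustrative, not ETF's code.

```python
import numpy as np
from scipy.signal import max_len_seq

def deconvolve_mls(recorded, mls):
    """Recover the impulse response from one period of the recorded MLS response."""
    R = np.fft.fft(recorded)
    M = np.fft.fft(mls)
    return np.fft.ifft(R * np.conj(M)).real / (len(mls) + 1)   # circular cross-correlation

def unfold(h_ref, h_mic, peak_at=1500):
    """Rotate both (circularly folded) IRs by the shift that puts the loopback delta
    at a fixed sample, so the mic-channel IR keeps its correct relative delay."""
    shift = peak_at - int(np.argmax(np.abs(h_ref)))
    return np.roll(h_ref, shift), np.roll(h_mic, shift)

mls = 2.0 * max_len_seq(15)[0] - 1.0
# recorded_loopback, recorded_mic = ...   # one MLS period from each input channel
# ir_ref, ir_room = unfold(deconvolve_mls(recorded_loopback, mls),
#                          deconvolve_mls(recorded_mic, mls))
```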

The Deconvolve MLS Signal module can also be used for a different task: generating an MLS-like signal, but one having a predefined spectral content instead of being white. This idea was first published by Mommertz [6], and it is based on the reversibility of the Hadamard transform.

To generate an excitation signal with a pre-defined spectrum, a Dirac delta function, followed by a large number of zeroes, is first generated. Then it is processed by applying a frequency filter. At this point, an artificial impulse response has been created, with the desired spectral behaviour. The steps necessary to transform it into an MLS-like signal are the following (see the sketch after the list):

1. Reverse it on the time axis

2. Invoke the Deconvolve MLS Signal module, with the required order and tap position

3. Reverse again the result on the time axis

4. The obtained signal can be continuously looped for exciting the system under test

5. When the response of the system is deconvolved again, the result obtained is the convolution of the original artificial impulse response with the system’s IR
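
A rough sketch of those five steps in Python (NumPy/SciPy assumed). The low-pass target and the `deconvolve_mls` helper are illustrative stand-ins, not the program's own routines.

```python
import numpy as np
from scipy.signal import max_len_seq, firwin

def circ_reverse(x):
    # reversal on the circular time axis: y[n] = x[-n mod N]
    return np.roll(x[::-1], 1)

def deconvolve_mls(y, mls):
    # circular cross-correlation against the MLS, normalised by N + 1
    return np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(mls))).real / (len(mls) + 1)

mls = 2.0 * max_len_seq(14)[0] - 1.0
N = len(mls)

# an artificial impulse response with the desired spectrum (a low-pass, as an example)
target = np.zeros(N)
target[:101] = firwin(101, 0.25)

# steps 1-3: reverse, deconvolve against the MLS, reverse again
excitation = (N + 1) * circ_reverse(deconvolve_mls(circ_reverse(target), mls))

# steps 4-5: looping `excitation` through a system and deconvolving its response gives
# the target convolved with the system's IR; with no system at all we get the target back
recovered = deconvolve_mls(excitation, mls)
print(np.max(np.abs(recovered[:101] - target[:101])))   # tiny residual
```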

Another application of this technique is the pre-equalization of the measurement chain, including the transducers.

Assume that the loudspeaker and microphone response was first measured in anechoic conditions.

An inverse filter of such a response can be created. This inverse filter can be used as the starting point, then creating an MLS-like signal which compensates for the uneven frequency response of the transducers.

When this excitation signal is employed for room acoustics measurements, the deconvolved IR is already free of the effect of the transducers, and contains only the room-related information, as if the transducers were "perfect".

Rod Gervais Wed, 12/21/2005 - 12:33

Tenson wrote: if a reflection arrived at the exact same time as a pulse was sent out, would it be removed from the recording along with the pulse, or would just the pulse be removed?

Simon -

I noticed I missed this completely,

If this occurred, that would mean you encountered a "room mode", which would cause constructive/destructive effects to the original amplitude of the signal - that would be taken into account by the software. Whether the effect was destructive or constructive would depend on the location of the mic in the room in relation to the signal. Each pass would add as much as 6 dB of amplitude change depending on the location of the original source at the absolute peak or null (antinode or node).

Sorry bout that (missing it that is)

Sincerely,

Rod

Tenson Wed, 12/21/2005 - 13:59

Why would a reflection arriving at the same time as the next pulse being sent out mean you have a room mode? All that would be needed for this to happen is for a pulse to be sent out, and then the reflection from a boundary to arrive back at the microphone at the same time as the next pulse from the speakers. As the pulses are so frequent, the boundary wouldn't need to be far away.

However, as you explained in the most recent post, the MLS signal is actually deconvolved from the recording, not just 'cut out' at the time each pulse occurred. So it is not an issue anyway!

I'd still love to know how a Hadamard transform actually works though?

Rod Gervais Wed, 12/21/2005 - 18:34

Tenson wrote: Why would a reflection arriving at the same time as the next pulse being sent out mean you have a room mode? All that would be needed for this to happen is for a pulse to be sent out, and then the reflection from a boundary to arrive back at the microphone at the same time as the next pulse from the speakers. As the pulses are so frequent, the boundary wouldn't need to be far away.

Sir,

1st of all - the signals are pseudo-random (as opposed to truly random) and will not be sent out in a perfect time hack to themselves.

Beyond that, think about what you said for a moment............ a microphone doesn't know anything about when a signal was sent.......... it just records the moment it receives the signal- right?

OK - so from the moment a mic receives a signal - if the time it takes the signal to complete all of its reflections and return to the mic is exactly the same as the signal length (which is the only way it could be synchronized), then it has to be a room mode - there are no other possibilities... that's why a room mode is a room mode... the distance of travel before a signal meets itself head on (in sync) corresponds to a room dimension that works in either an axial, tangential or oblique direction. So it would also work for harmonics of the original signal as well.
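
As an aside, the "axial, tangential or oblique" directions can be put into numbers with the standard rectangular-room mode formula; this little sketch (illustrative only, using the example room dimensions quoted earlier in the thread) lists the lowest modes.

```python
import itertools
import math

c = 343.0                                                # speed of sound, m/s
Lx, Ly, Lz = 21 * 0.3048, 14.5 * 0.3048, 9 * 0.3048      # the 21' x 14'-6" x 9' room

modes = []
for nx, ny, nz in itertools.product(range(3), repeat=3):
    if nx == ny == nz == 0:
        continue
    f = (c / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)
    kind = {1: "axial", 2: "tangential", 3: "oblique"}[sum(n > 0 for n in (nx, ny, nz))]
    modes.append((f, (nx, ny, nz), kind))

for f, idx, kind in sorted(modes)[:8]:
    print(f"{f:6.1f} Hz  {idx}  {kind}")                 # lowest mode ~26.8 Hz, axial
```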

It has nothing to do with the location of the mic - that only affects the amplitude of the mode - in either a destructive or constructive manner - and the speaker location only has to do with the degree to which the mode may or may not be excited.

But the mic still only captures the time hack of the reflected signal........... and that only has to do with room dimensions - not the time of transmission from the speaker.........

Also - understand - low frequency sound waves are not really directional in nature - but rather create a wavefront like you would expect with water. What happens inside a room when a room mode is excited is a build-up of the SPL in some areas of the room - with low pressure zones in other areas of the room - all in a wavefront of excited molecules that are not in movement, but are rather just sitting in an excited, agitated state... that's what a standing wave is... just that - standing.

Which is also why speaker location has nothing to do with a room mode - only the room dimensions do - you can just increase or decrease the extent of the mode by proper (or improper) speaker placement.

For example - place the speaker in a trihedral corner and you will excite the mode to its greatest - place it in a node (null) location and you will excite it the least.

Simple, yes? And no math involved 8-)

Sincerely,

Rod

Rod Gervais Thu, 12/22/2005 - 09:48

JohnPM wrote: There is a good paper on the various means of measuring frequency and impulse response here: "Transfer Function Measurement Using Sweeps" (PDF, http://www.anselmgo…) (discusses most methods, not just sweeps).

John,

1st - welcome to RO - I hope you find this site helpful.

Next - your input (on this subject especially) is more than welcome - you could obviously do a better job explaining this to Simon than I could ever dream of..........

I took the liberty of checking out the application you've put together, very impressive - I will be sure to keep you in mind when people inquire about share/freeware applications related to sound measurements.

Again - welcome - and happy holidays to you and yours.

Sincerely,

Rod

Tenson Thu, 12/22/2005 - 10:51

Hi Rod,

I think you misunderstood me. I will try to explain myself again, hopefully more clearly!

What I meant is that, as the MLS pulses are sent out one after another.. A pulse could be sent out and arrive at the microphone. A little while after this the reflections will start to arrive at the microphone as well. Now as it takes time for the reflections to arrive, at this time the next MLS pulse could well have been sent out and the direct sound of this could easily arrive at the microphone at the same time as the reflections of the pulse before it.

Thats all I meant.

If the program was simply removing the part of the recording where the direct sound from the MLS pulses was, then any reflections arriving as described above would also get removed.

They won’t though as it is deconvolved, not simply cut out. As you explained later on.

I think we find it hard to understand each other for some reason! Thanks for continuing to answer my questions though!

Thank you as well John I will take a look at the page in a bit.

Rod Gervais Thu, 12/22/2005 - 12:03

Tenson wrote: as the MLS pulses are sent out one after another.. A pulse could be sent out and arrive at the microphone. A little while after this the reflections will start to arrive at the microphone as well. Now as it takes time for the reflections to arrive, at this time the next MLS pulse could well have been sent out and the direct sound of this could easily arrive at the microphone at the same time as the reflections of the pulse before it.

Simon,

Even if I were doing this manually it wouldn't be that difficult a task.

For example - if we know the distance from the speaker to the mic - and we know the time hack of a signal we sent - assume we transmitted our initial signal at 80 dB... then if we overlaid a signal on this in perfect sync - we would have a signal amplitude of 86 dB - if this is the case, then it is our signal over a single reflection - a reading of 92 dB would indicate that we were in a room mode condition dealing with a 2nd reflection as well... strip out the original 80 dB signal and the remaining 86 dB is the modal condition of the room at that particular moment.

I don't think we misunderstand each other all that much - I just don't see it as that deep an issue regardless of how you go about removing the initial signal.

sincerely,

Rod


Tenson Thu, 12/22/2005 - 12:40

I think you are misunderstanding me, because I am not asking how you could tell if there was a reflection at the same time as a pulse. I am saying that if the program was completely cutting out the recording taken by the microphone at the point of each pulse, then you would be missing data that arrived at the mic at the same time as the pulse.

As this is not how it works, it really doesn't matter, but it was something that one of your early posts led me to think it might be doing.

philsaudio Fri, 12/30/2005 - 07:44

Tenson wrote: Okay so with MLS the computer knows when the next impulse is coming but surely the gaps in-between the impulses are not long enough to measure the reflections from walls? Yes it can get the impulse and gate off the rest but this isn’t what ETF shows, it shows the room reflections after the impulse as well and if the gaps between them are so small they are in-audible, I don’t see how it does this.

Definition: pulse train = the pseudorandom noise from the FFT system; it is what you hear.
impulse = the display of the deconvolved pulse train, plotted as if it were one short electrical signal, infinitely high and infinitely short, running through your loudspeaker.

```````````````````````````````````````````````````

If you look at the impulse in one of the subsequent pictures you will see it has no relation to the waterfall or the sound you hear. To change one (time-domain impulse) to the other (frequency plot) requires an FFT operation on the recorded pulse train of data.

To get the waterfall one would window the data from the entire pulse to make the first plot at the back of the waterfall. This would presumably be the same as the main freq plot as seen without a waterfall.

The next line forward (later, decayed info) would be FFT'd from a subset of the data used in the back plot. This subset would eliminate the earliest data in the impulse and only use the latest data, to the end of the data.

Each subsequent line in the waterfall would use less and less data always ending at the last point and eliminating more and more data from the front of the impulse.

The pulse train must be longer than the longest "MLS slice" plotted.
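
A short sketch of that windowing scheme in Python (NumPy assumed; illustrative, not ETF or LAUD code): each successive slice drops more of the front of the impulse response, keeps everything to the end, and is FFT'd.

```python
import numpy as np

def waterfall_slices(ir, fs, n_slices=20, step_ms=5.0):
    """Return (frequencies, list of magnitude spectra in dB), one spectrum per slice."""
    nfft = len(ir)                                   # zero-pad every slice to the full length
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    slices = []
    for i in range(n_slices):
        start = int(i * step_ms / 1000.0 * fs)       # lop more off the front each time
        if start >= len(ir):
            break
        spec = np.fft.rfft(ir[start:], n=nfft)       # always use the data to the very end
        slices.append(20 * np.log10(np.abs(spec) + 1e-12))
    return freqs, slices

# freqs, slices = waterfall_slices(measured_ir, 48000)   # measured_ir is an assumed input
```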

Regards
Phil Abbate

philsaudio Sat, 07/15/2006 - 10:42

Waterfall

The waterfall plots popular with ETF and other FFT analyzers such as the one I use (Liberty Audio Suite or LAUD) follow the methodology Rod outlines above.

The way the analysers generate the waterfall plot is by using the data from the impulse but eliminating time from the front of the impulse signal and deconvolving.

The first plot is all of the data in the impulse.

Wait so many t seconds and lop off the data from the front of the impulse and deconvolve. You have the second line in the waterfall, less the initial data occurring t seconds before you started.

Wait twice as many t seconds and lop off the data from the impulse and deconvolve. You now have the third line in the waterfall, less the initial data occurring 2 x t seconds before you started.

Continuing along, you wind up with no original data and only the room modes (your room's ringing spectrum).

Is this information useful? Perhaps.

What I found is that depending on where the mic is in the room with respect to the speakers producing the MLS signal I get the same spectrum but different amplitudes of the signal. So depending on where the listener is the same room modes show up but their relationship to one another is different. I sit my head in the spot where all the nodes are more or less equal when I want to do critical listening/mixing/mastering.

I have detailed a lot of my experience in this thread

(Dead Link Removed)