SPDIF vs. AES/EBU vs. ADAT Lightpipe

Discussion in 'Mastering' started by covenant66, Apr 25, 2006.

  1. covenant66

    covenant66 Guest

    Which do you prefer for your connections?

    Because of the Nyquist theorem and Lightpipe's restriction to 48 kHz (which really is not a restriction at all), I would think Lightpipe would be the best connection.

    Does anyone have definitive information on this?
  2. JerryTubb

    JerryTubb Guest

    Mostly AES here, haven't used light pipe in years.

    Light pipe for stereo or multitrack?

    There shouldn't be any difference in the sound of the 3 stereo formats, unless jitter is an issue.

  3. TVPostSound

    TVPostSound Guest

    All 3 carry the same word information; the difference is the preamble. SPDIF and Lightpipe (Toslink) have a self-clocking preamble; AES requires a reference clock.
    I personally would prefer AES, IF I had a clock reference going to the source.
  4. RemyRAD

    RemyRAD Guest

    Sep 26, 2005
    I think you are confused? The Nyquist theorem simply indicates that your sampling frequency should be a minimum of 2x the highest audio frequency you want to reproduce. In all actuality, with 16-bit, 44.1 kHz sampling, you are merely getting only 2 samples per cycle at 20 kHz. Now if you think about connecting the dots in a picture and you want it to look like a sinewave, 2 dots would look like a triangle wave! And that's not smooth sounding! One of the many reasons why so many people feel that digital sounds harsh. They're right!

    So now if you are talking about 24-bit, 192 kHz sampling, you're talking about much better resolution in the highest of the audible frequency range. DSD sounds even better because even at 20 kHz of audible frequency, your sample rate is about 2.8 MHz! Of course that is a 1-bit system, since the sampling rate is so high. The way I always thought digital should be done over 20 years ago? We just didn't have the technology back then as we do now.

    Counting my bits
    Ms. Remy Ann David
    I only have 2!
    And they still get in my way.
  5. TVPostSound

    TVPostSound Guest

    Well, if Bell Labs (AT&T) wasn't broken up by the government in 1984, you probably would have gotten your wish.
    Nyquist worked for Bell Labs, and so did many engineers I learned from.
    Digital audio would have developed sooner and quicker.
  6. dpd

    dpd Active Member

    Sep 29, 2004
    No, they are TOTALLY WRONG! Do not confuse TIME resolution with AMPLITUDE resolution.

    Nyquist theory shows that 2 samples are totally sufficient to describe a sinusoid at the Nyquist limit. Yeah, if you draw it out and just connect the dots, it will look totally distorted, but when the sampled waveform is properly reconstructed via a low-pass filter at the Nyquist frequency, the resultant analog waveform at Nyquist will be resolved at the bit depth of the sampled signal (e.g. 16 bits).

    It's magic, but it works like science.

    Higher-resolution audio comes in two flavors - more bits (higher dynamic range due to less noise & greater headroom) and higher sample rates. Higher sample rates let us use filters with milder phase and amplitude effects in the audible passband and converters that suffer fewer non-linearities in their designs.
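    The reconstruction claim can be checked numerically. A minimal Python sketch (a finite Whittaker-Shannon sum, so only approximate near the window edges; the 20 kHz tone at 44.1 kHz, about 2.2 samples per cycle, is just an example value): connecting the dots looks jagged, but ideal low-pass interpolation recovers the smooth sine to within a small error.

```python
import math

FS = 44_100   # sample rate, Hz
F = 20_000    # test tone just below Nyquist (22,050 Hz)
N = 256       # finite sample window

# "Connect the dots" on these looks jagged: only ~2.2 samples per cycle.
samples = [math.sin(2 * math.pi * F * n / FS) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Ideal low-pass (Whittaker-Shannon) interpolation at time t seconds."""
    return sum(s * sinc(t * FS - n) for n, s in enumerate(samples))

# Probe between sample instants, staying in the middle of the window
# (the finite sum is least accurate near the edges).
max_err = 0.0
for k in range(1000):
    t = (N // 4 + (k / 1000) * (N // 2)) / FS
    max_err = max(max_err, abs(reconstruct(t) - math.sin(2 * math.pi * F * t)))

print(f"max reconstruction error: {max_err:.4f}")  # small, despite ~2 dots per cycle
```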
  7. Cucco

    Cucco Distinguished Member

    Mar 8, 2004
    Fredericksburg, VA
    dpd - you're absolutely correct.

    When redrawing a waveform, the distance between 2 dots determines the frequency. You could draw that however you want as long as the zero crossings hit where the dots are. If drawn as a smooth sinusoidal wave (as it should be) and there are only 2 dots defined per cycle, then you will have one pitch (one and only one). However, with multiple points defined in a nested type pattern (think something like 1, 2, 3, 4, 4, 3, 2, 1), more tones fit: those 8 points can carry up to 4 independent frequency components. So, 4 pitches have been defined with 8 points. Hence the Nyquist theorem.

    So, sampled 44,100 times per second, frequencies up to just under 22,050 Hz may be represented at any given time within a second.

    Amplitude, however, as you state, is completely different, and this is where bits come in.

    Each value in binary represents a positive or negative voltage (err, as it translates to analog audio). The more bits, the more capability of reproducing multiple levels of amplitude, and furthermore, the more distance between max and min...

    24-bit: delta of 111111111111111111111111 to 000000000000000000000000
    16-bit: delta of 1111111111111111 to 0000000000000000
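    The step from 16 to 24 bits can be put in numbers. A minimal Python sketch, using the usual ~6.02 dB-per-bit rule of thumb for quantization dynamic range:

```python
import math

for bits in (16, 24):
    levels = 2 ** bits                 # distinct amplitude steps
    dr_db = 20 * math.log10(levels)    # ~6.02 dB of range per bit
    print(f"{bits}-bit: {levels:,} levels, ~{dr_db:.2f} dB dynamic range")
# 16-bit: 65,536 levels, ~96.33 dB dynamic range
# 24-bit: 16,777,216 levels, ~144.49 dB dynamic range
```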

    As for the cable preferences - AES/Lightpipe/SPDIF -

    Personally, I couldn't care less. I primarily use AES, as I've found the connectors are far more robust and the cables far less fragile. Plus, most pro-audio gear is capable of interfacing with AES. I also use lightpipe, though, as I love my HD24 but hate its converters. So, I have the Mackie 800R and the Lynx Aurora 8 (which I will soon add the lightpipe option to).

    Remy is quite correct though - Nyquist has nothing to do with the lightpipe specification. As a matter of fact, until MADI came out, lightpipe really did squeeze as many bits through that pipe as they (Alesis et al.) could decipher accurately. As for the MADI format - I have no idea how they cram that much data into one little optical cable.

    Of course, thanks to SMUX, you're not limited to 48 kHz over lightpipe. With reduced channel counts, you can actually get up to 96 kHz or even 192 kHz.
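    The S/MUX arithmetic is just a constant-bandwidth trade. A minimal Python sketch, assuming 24-bit words and the 8-channel/48 kHz base format of the ADAT spec:

```python
BASE_CHANNELS = 8    # ADAT lightpipe at the base rate
BASE_RATE = 48_000   # Hz
WORD_BITS = 24

capacity = {}
for mult, name in ((1, "48 kHz"), (2, "96 kHz (S/MUX2)"), (4, "192 kHz (S/MUX4)")):
    ch = BASE_CHANNELS // mult
    capacity[name] = ch
    # Audio payload stays constant -- S/MUX trades channels for sample rate.
    print(f"{name:18s} {ch} channels, {ch * BASE_RATE * mult * WORD_BITS:,} audio bits/s")
# Every line shows the same 9,216,000 audio bits/s payload.
```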


    There are 10 kinds of people in the world -
    Those who understand binary
    Those who don't

  8. TVPostSound

    TVPostSound Guest

    Not quite there yet!!!

    Nyquist stated that 2 samples per cycle are the MINIMUM at any given frequency, hence the low-pass analog "anti-aliasing filter" at the Nyquist frequency. At 48K, the filter would low-pass at 24K, since any frequency above 24K would not have the MINIMUM 2 samples.

    The filter is in place before the A/D conversion; it has nothing to do with the sampled waveform.
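    What happens without that filter is easy to show numerically. A minimal Python sketch (the 48 kHz rate and 30 kHz tone are just example values): a 30 kHz tone sampled at 48 kHz produces exactly the same samples as a phase-inverted 18 kHz tone, so once sampled, the two are indistinguishable.

```python
import math

FS = 48_000
N = 64

# 30 kHz is above the 24 kHz Nyquist limit; it folds back to 48 - 30 = 18 kHz.
above = [math.sin(2 * math.pi * 30_000 * n / FS) for n in range(N)]
alias = [-math.sin(2 * math.pi * 18_000 * n / FS) for n in range(N)]  # inverted 18 kHz image

# Without an anti-aliasing filter the converter cannot tell them apart:
worst = max(abs(a - b) for a, b in zip(above, alias))
print(f"largest sample difference: {worst:.2e}")  # ~0 (floating-point noise only)
```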
  9. dpd

    dpd Active Member

    Sep 29, 2004
    I was speaking of the necessary reconstruction filter after the D/A that removes the images from the output.

    You need the low pass filter on the front end to prevent aliasing.
  10. Boswell

    Boswell Distinguished Moderator Resource Member

    Apr 19, 2006
    Home Page:
    You need an anti-aliasing filter before the ADC and a reconstruction filter after the DAC. Both are independently subject to Nyquist restrictions. Strictly, the reconstruction filter should also compensate for the sinc [sin(x)/x] droop in the frequency response of a zero-order-hold system like a DAC. A perfect (brickwall) compensated filter would reconstruct the waveform accurately right up to, but not including, the Nyquist frequency, no matter how the dots appeared visually. No filter is perfect, however, and so a cut-off frequency at 1/2.205 or 1/2.4 of the sampling rate has become standard to allow for the slope of the filters in their transition bands. Even so, aliasing can occur for high-level signals near the Nyquist frequency. The problem is of course much reduced at 88.2/96 kHz and almost non-existent for audio at 192 kHz.
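    The sinc droop is easy to reproduce. A minimal Python sketch (the 48 kHz rate is just an example value) of the zero-order-hold amplitude response, which reaches 20*log10(2/pi), about -3.92 dB, at the Nyquist frequency:

```python
import math

FS = 48_000

def zoh_droop_db(f):
    """Zero-order-hold (staircase DAC) amplitude response at frequency f, in dB."""
    x = math.pi * f / FS
    return 20 * math.log10(math.sin(x) / x)

for f in (1_000, 10_000, 20_000, FS // 2):
    print(f"{f / 1000:6.1f} kHz: {zoh_droop_db(f):6.2f} dB")
# Droop grows with frequency, reaching -3.92 dB at 24 kHz (Nyquist).
```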

    To get back to the original question, the type of digital transmission has no effect on the quality of the sound as long as no bits are dropped. So S/PDIF, AES/EBU, and ADAT LightPipe are all equivalent for a 24-bit 48 kHz stereo pair (provided both ends can handle the wordlengths). The control bits carried along with the data payload differ for each of the formats. S/PDIF is essentially a domestic version of AES/EBU. The Alesis LightPipe has the capability of carrying 4 stereo pairs concurrently (8 mono tracks) at up to 48 kHz. Its capacity can also be utilised to carry 2 stereo pairs at 96 kHz or 1 stereo pair at 192 kHz. This flexibility allows, for example, a digital mixer such as the Yamaha 01V96 to record or mix down 12 channels of 24-bit 96 kHz digital audio to/from an Alesis HD24 hard disk recorder via 3 LightPipe connections in each direction, or it could work in conventional 24-channel 48 kHz mode.

    Except for short ADAT LightPipe runs, AES/EBU is generally preferred for professional use because of greater digital signal integrity over the sort of cable lengths encountered in studio and field situations, especially where master clocks have to be taken round to all the digital equipment in use.