
Why I started this thread!

Recently there have been several threads where the "sound" of clock jitter has been mentioned. These discussions seem to get "lost" in the diversity of the forum and points are often repeated. Hence I thought it would be worthwhile starting this thread to collate information into a single “reference” source and hopefully debunk some of the common misconceptions about clock jitter. Hopefully others will contribute.

OK that's why I've done this. I hope others agree and, if not, perhaps another method could be proposed.

Comments

MrEase Wed, 01/20/2016 - 11:37

As I have said elsewhere, I do not think that there is such a thing as a characteristic "sound" produced by clock jitter which makes it all the more difficult to identify. In fact I doubt, without specific efforts, anyone has really "heard" the effects of clock jitter.

Of course to back this up I need to produce some sort of evidence for these comments. First I will refer to the "What is clock jitter?" thread which is a "sticky" at the top of this "Converters & Interfaces" Sub-Forum. Here I spent some time explaining the sort of amplitudes we could expect from given amounts of clock jitter and also how I believe that the usual "audio" method of measuring clock jitter differs from other more conventional (and meaningful) measurements used in other fields. As seems “normal” in the audio world, the measurement method employed presents the best possible figure of merit whilst giving almost no useful information about the jitter characteristics and most certainly does not give the worst-case timing errors a given clock will produce.

First, let's make it clear what we mean by a “clock” in the electronic context. This is not a true “clock” in that it conveys time and date but is more accurately described as a reference frequency, which allows many different signals to be synchronised. This frequency usually takes the form of a simple sine or square wave. For digital use, we naturally use a square wave.

This frequency, like everything else electronic, is subject to a certain amount of noise and error, which, with careful design, can be minimised but never eliminated. In the context of recording or replaying audio this noise ONLY has an effect at the A-D or D-A interface and causes small timing errors from the “ideal” sampling time. Inevitably this has the potential to affect the quality of our recordings.

MrEase Wed, 01/20/2016 - 11:38

To evaluate what any clock jitter will “sound” like is not straightforward as the noise is not necessarily random and will certainly not be consistent from one sound-card model to another. It will also vary depending on whether one sound-card is slaved to another or a “master” clock or uses its own internal clock.

Let’s make a start by trying some sort of analogy. Clock jitter is actually a basic measure of what is called the “phase noise” of the reference clock and this noise has a noise spectrum – just like we can see the audio spectrum with a plug-in. This noise spectrum is what can (and does) vary between sound-cards.

Essentially what the jitter does is “phase modulate” the audio signal. Phase modulation is very similar to frequency modulation (as per FM broadcasts) and it is valid to use this as an analogy. With an FM broadcast, the audio is used to modulate (vary) the frequency of a radio “carrier” wave, which is then received and duly “demodulated” to reproduce the audio. Essentially, in the case of clock jitter, the clock phase noise (jitter) is the audio source and its noise will be modulated on to the audio signal – which in this analogy is the carrier wave. Whilst the analogy is not exact it is close enough and I hope it is sufficient to explain why the spectrum of the phase noise (jitter) is modulated on to the required signal. It should also be clear that different characteristics of phase noise would create different modulation “sounds”. Hence the characteristic of phase noise is just as important, if not more so, than the jitter spec’s we see in the specification sheets.
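To make the FM analogy a little more concrete, here is a small simulation sketch (my own addition, not from the original posts; the sample rate, tone frequencies and jitter amplitude are made up and deliberately exaggerated). A 5 kHz tone is sampled by a clock whose timing error is itself a 1 kHz tone, and the spectrum of the result shows sidebands at 5 kHz ± 1 kHz, just as FM theory predicts:

```python
import numpy as np

fs = 96_000        # sample rate (Hz) - illustrative
f_audio = 5_000    # the audio tone, playing the role of the FM "carrier"
f_jitter = 1_000   # a single-tone jitter component on the clock (Hz)
t_peak = 10e-9     # peak timing error: 10 ns, deliberately exaggerated

n = np.arange(2**15)
ideal_t = n / fs
# Each sample lands slightly early or late: the clock phase-modulates the audio
actual_t = ideal_t + t_peak * np.sin(2 * np.pi * f_jitter * ideal_t)
x = np.sin(2 * np.pi * f_audio * actual_t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak = spectrum.max()  # the tone itself

levels = []
for f in (f_audio - f_jitter, f_audio + f_jitter):
    b = np.argmin(np.abs(freqs - f))
    level = 20 * np.log10(spectrum[b - 2:b + 3].max() / peak)
    levels.append(level)
    print(f"sideband at {f} Hz: {level:.1f} dBc")
```

Narrow-band FM theory puts these sidebands at roughly 20·log10(π·f_audio·t_peak), about -76 dBc for these exaggerated numbers. A different jitter spectrum would paint a different pattern of sidebands, which is the whole point of the analogy.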

Also I know some refer to clock jitter effects as distortion. Personally I think this can be misleading as most are familiar with harmonic distortion or intermodulation distortion. Whilst it is indeed a distortion of the audio wave, I prefer to think of it as an unwanted “modulation”. Perhaps I should refer to it as clock modulation distortion!

I’ll try to explain more, later. In the meantime I would appreciate any input on whether you think this is a worthwhile exercise!

DonnyThompson Wed, 01/20/2016 - 21:05

It's a great thread - well described and explained; and as far as I'm concerned, providing very pertinent - and valuable - information and definitions, in today's world of $6000 clocking devices.

Other than that, I don't have anything else to add. I'm definitely not the guy who should be weighing in or giving advice on this subject.

I can guarantee you that I will be following it, though. ;)

-d

MrEase Thu, 01/21/2016 - 04:14

OK, so the site owner and two moderators seem to like the idea! Wow! (EDIT): Crikey, I see it's a sticky already! I hope that's a vote of confidence and also hope that I don't fall flat on my face! :unsure:

I have to warn Boswell that I'll be hoping for the usual corrections whenever I make my careless slips. It'll be good to know you have my back! I also hope you'll chip in if you find any analogies I may think up for illustration are wide of the mark...

As with the clock jitter thread, I'll do this in smaller chunks for two reasons. 1. Smaller chunks are easier to digest and 2. I don't have the time to do this all in one big "paper" which in any case would put many off.

I actually have "paying" work to do right now but hope to do another instalment later today.

MrEase Thu, 01/21/2016 - 05:46

As a side note, Boswell raised a point in the "Antelope Audio 10M" thread which is perhaps worth expanding on a little.

Boswell, post: 435575, member: 29034 wrote: The problem in identifying it in hearing tests is that signal distortion can be due to a number of causes, and a poor quality sampling clock is not usually one of the immediately obvious ones. One clue is that distortion caused by sampling clock jitter is not usually a function of signal amplitude, so if there's no change in the signal quality when reducing the input amplitude by 6dB and increasing it by the same amount on the monitoring amplifier, then distortion added to the signal passing through the equipment could at least have a component due to clock quality.

What this means is that distortion due to clock jitter will always, for a given signal, be a fixed amount (usually measured in dB) below the peak music signal. This is quite easy to show (as I did in the "clock jitter" thread). This is not the same as the usual distortion measurements of harmonic or intermodulation distortion. With these distortions, as the signal rises in amplitude, the relative amount of distortion increases at a greater rate. In the case of intermodulation distortion, each 1dB increase in peak amplitude results in a 2dB increase in intermodulation distortion. Harmonic distortion does not have the same "straight line" consistency between differing levels but it is true to say that as signal level increases the relative distortion will too but this may be very small until signals get quite large. Just a small point but what Boswell says (as usual) is true! As he carefully notes though, his test will only indicate the possibility of jitter induced modulation distortion and is not a definitive test. It will however discriminate quite well against intermodulation distortion.
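Boswell's ±6 dB test can be sketched numerically. This is my own illustration, not from the thread: the 2 ns rms jitter and the second-order coefficient are arbitrary assumed values. The relative error from clock jitter stays put when the signal is raised 6 dB, while the relative level of an ordinary second-order distortion grows with level:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 96_000, 1_000, 1 << 14
t = np.arange(n) / fs
jitter = rng.normal(0, 2e-9, n)   # assumed 2 ns rms random clock jitter
k2 = 0.001                        # assumed small 2nd-order nonlinearity

def error_levels(amp):
    clean = amp * np.sin(2 * np.pi * f0 * t)
    # jitter error: the signal was sampled at t+jitter instead of t
    jit_err = amp * np.sin(2 * np.pi * f0 * (t + jitter)) - clean
    # 2nd-order distortion of an analogue stage, for comparison
    dist_err = k2 * clean**2
    db = lambda e: 20 * np.log10(np.std(e) / np.std(clean))
    return db(jit_err), db(dist_err)

j_lo, d_lo = error_levels(0.5)
j_hi, d_hi = error_levels(1.0)    # signal raised 6 dB
print(f"jitter error rel. to signal: {j_lo:.1f} -> {j_hi:.1f} dB (fixed)")
print(f"2nd-order distortion rel.:   {d_lo:.1f} -> {d_hi:.1f} dB (grows)")
```

The jitter error tracks the signal exactly (the amplitude cancels out of the ratio), whereas the second-order product rises a further 6 dB relative to the signal, which is what makes the ±6 dB listening test a useful discriminator.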

DonnyThompson Sun, 01/24/2016 - 20:01

Okay... trying to understand here... bear with my ignorance...

So then, distortion due to clock jitter - let's call it a "symptom" of jitter ... will always be of a fixed amount below the peak... but does the amplitude of a signal play a factor in the amount of jitter itself?
Or am I bum-fuzzled on this topic as usual? ;)

audiokid Sun, 01/24/2016 - 20:57

kmetal, post: 435677, member: 37533 wrote: I'm lost on this one. I think I lack the fundamentals.

Maybe we need to back up a bit and start over with what the 10M does and how any of this relates to sound.

I see the 10M as a clock signal amplifier/ router/ junction box, stabilizer.

Constant vs inconsistencies

Let's ask (in layman's terms) if I am correct and then learn how this has anything to do with sound, distortion/shifting of bits that inevitably create phase shift with random peak bursts or drops as well.
Related: Why is a dedicated PCIe interface more stable? Then there is USB, FW, MADI, AES/EBU, TB...
Then there is a relation to the CPU (Mac, PC, desktop, laptop, iPad, iPhone).
How stable the PSU is and how that relates to/affects stability, which in turn affects sound.

Plus, I'm also interested in whether various cable lengths affect latency? To my understanding, all the cables should be the same length?

Am I close to where the others are lost?

MrEase Tue, 01/26/2016 - 09:13

Sorry folks, I've been busy making a living! I had written a fair amount and realised that I'd been getting far too technical, so I started on a re-write - and having seen Donny & Kmetal's posts, I guess I also need to take a little step back.

Chris, I'm not sure how relevant your questions are to the thread as I'm not really intending to evaluate the 10M or various interfaces. I'm trying to explain how any degradation of audio occurs due to clock jitter and why no particular sound can be associated with jitter.

@donny The errors due to clock jitter arise simply because of these facts - clock jitter will cause the audio to be sampled at slightly the wrong time. Let's take for instance a signal that is sampled 10us late (from "perfect" time). The sample that will be measured will change in that 10us by an amount determined by the signal; let's say that the signal changes by 10uV in 10us, then the error is simply 10uV. Now if you increase the signal by 6dB, the voltage doubles so the error will now be 20uV. That means that the error has changed by the same 6dB that the signal was increased. Therefore for this given example, the error remains at the same relative level to the signal itself. Note that the signal has no effect on the amount of jitter present.
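A quick sketch of that arithmetic (the numbers are taken straight from the example above):

```python
import math

t_err = 10e-6                  # clock samples 10 us late
slope = 10e-6 / 10e-6          # signal changes 10 uV in 10 us -> 1 V/s
err_1 = slope * t_err          # error at the original level: 10 uV
err_2 = (2 * slope) * t_err    # +6 dB doubles the signal AND its slope: 20 uV
rel_change_db = 20 * math.log10(err_2 / err_1)
print(err_1, err_2, rel_change_db)   # error doubled too, i.e. ~6.02 dB
```

The error rose by exactly the same 6 dB as the signal, so relative to the signal it has not moved at all.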

Does that example help?

As before I realise that a few pictures may help but I'll need to work on setting this up.

kmetal Sorry if this is confusing but please bear with me, as I would prefer that you understand what I'm talking about. What I've done so far is to try and present the analogy of an FM transmission that is similar to the way clock jitter can generate different "sounds", much like we recognise lots of different music or speech on an FM transmission. The phase noise (i.e. jitter) of a clock will cause a modulation on to our audio that carries the "signature" of that particular clock. That's about as far as I've got but what I want to explain is why different clocks (and slaved clocks) won't always sound the same. My first attempt is to really show what a good real world clock should do but also indicate what can go wrong in the design of these clocks. Maybe that's deeper than you want to go but I'll do my best to keep it understandable.

To all, thanks for the feedback! Please keep it coming so I can try to pitch this so it's worthwhile - to write and read! There's no point if everyone glazes over and rolls their eyes! :)

audiokid Tue, 01/26/2016 - 20:39

MrEase, post: 435744, member: 27842 wrote: Chris, I'm not sure how relevant your questions are to the thread as I'm not really intending to evaluate the 10M or various interfaces. I'm trying to explain how any degradation of audio occurs due to clock jitter and why no particular sound can be associated with jitter.

No worries, I actually think it's all relevant to the thread though. I only mention the 10M because those following this have also been following the 10M thread. The 10M isn't in any need to be judged. It's a beautiful product.

Basically I was just expanding, to demystify some of what people think the clock is doing when it could also be more about what it's not doing. Which is to do with why jitter happens, yes, no? Most likely my wording is not "worded" well.

DonnyThompson Wed, 01/27/2016 - 01:29

From what I've been able to understand, the 10M is a super-uber-precise clock. And to that end, it does what it's supposed to do in the best way possible (i.e. the most accurate)... so it's not that I'm disputing the quality of the piece.

The question, which I think has been on the minds of most here since it was first introduced for discussion, is if a system that is already properly clocked to begin with requires something like the 10M?

Am I correct in saying that those who are hearing a noticeable, audible improvement while using the 10M on their system are doing so because their clocking was off to begin with? At which point the 10M would indeed make a big difference; but at the same time, would it be possible that a less expensive clock could also provide the same level of improvement, too?

For the typical DAW user ( and I understand that "typical" is a loose description), let's say, someone like myself who works strictly with audio, of no more than 6 minutes in length, at 44/24 (or 32)... working with an average of 18 -24 tracks, and who isn't incorporating any sending of a signal to OB digital processing that needs to be returned to the DAW...
I simply connect mics or instruments to a pre/i-o, ( like the Presonus 1818VSL ) and then route that signal via USB to my DAW. When I'm doing a final mix, I either render through the stereo bus ITB, or will use the 2 DAW method (on the occasion that I have the proper equipment at hand to do that). With that scenario, is there ANY reason why I would need any form of external clocking in place? And if so, why?

I've also read (and heard) that having a highly accurate clock like the 10M might be valuable to certain specific workflows - video being one of the main types of production where it has been suggested as benefiting from an accurate external clocking device - where over a length of time, "sync drift" can occur if the project is over a certain length...
Is this true?

Then we get into audio interfaces that are connected directly to the motherboard of a system; PCIe cards with formats like ADAT/Optical, MADI, etc., which are, presumably, the best current choices because the routing is so "fast" and so "direct" to the DAW, without USB or FW cables, which can't carry the same sizes of data at the same lightning fast rates... is this a correct statement?

I'm not trying to hijack Ease's thread here... I'm simply trying to understand the need for an expensive piece like the 10M; I'm not suggesting that it's all balloon juice, either. I'm sure that it does what it does in a very accurate way... but who actually needs a piece like this? Perhaps we could start there...

kmetal Wed, 01/27/2016 - 11:17

My question is essentially sort of like Donny's. When talking about interface clocks, what makes for better clocking, from something like a basic M-Audio to something like an Apogee Symphony? Is it the actual clock itself? Or is it the overall design and incorporation of the clock?

And if you were running multiple digital pieces (converters/interfaces) like we are at the studio (Apogee Ensemble, Rosetta, MOTU), or a digital console with something like an SPX 90, is there a particular piece that would be better than the others to clock to? Or better than the internal clock from a DAW?

Also, would a different DAW program, or a more or less powerful computer, affect the clocking? My (mis)understanding of that would be that as long as everything was in sync, 'marching to the same beat' or pulse, the DAW and more so the computer wouldn't have an effect.

Again I'll mention, I know Dave mentioned when he clocked his Digi 002 or 003 to his Alesis HD hard disk recorder he noticed an improvement. Now that Digi was notoriously mediocre, and conversion was the usual culprit. I believe he was on PTLE at the time.

Which also brings up: does conversion affect jitter, or does jitter affect the actual conversion? Or is it again an overall design/integration situation?

Are there situations where the 10M would/should show a significant improvement?
Does it do anything relevant that a 'lesser' clock might, be it external or internal?

Lol, MrEase and Bos will have a whole book written by the time this thread tapers off. Thanks for your patience guys. I feel like this is one of those topics where you completely don't get it until you suddenly get it completely, for me at least.

MrEase Thu, 01/28/2016 - 15:03

kmetal & DonnyThompson

Hi guys. A lot of your questions have already been covered in the "what is clock jitter?" thread. I know it's over 5 years since I did that but the facts have not changed in the interim. I cover on board clocks and slaved clocks in that thread and also evaluate the effect of "worst case" jitter scenarios. I know it's several pages but I hope it'll be worth your while sifting through it.

What I'm trying to deal with here is going on from there and I don't really intend to re-hash the older material but move on to concentrate on why a jitter problem could be hard to diagnose due to there being no real characteristic sound. I do believe that some problems ascribed to clock jitter are erroneous and that when clocking problems exist, they are often put down to jitter but other greater problems exist that must be resolved first before the finger is pointed at jitter.

Boswell Thu, 01/28/2016 - 16:28

I keep coming back to the point that a conversion clock (also called sampling clock) is different from a synchronising clock. They can, of course, have the same source, but what they have to do is different.

The synchronising clock's job is to make sure the various digital gear or digital sections of gear have the same notion of what elapsed time is. If this is done, then audio sent as digital serial messages between the different pieces of equipment will be guaranteed to be sequential and unique, that is, it will not be lost or duplicated. Usually, each piece of gear has a time window somewhat less than a clock period in which new data should arrive in order to be treated correctly. It makes no difference to the audio quality of that signal exactly when in that time window the audio arrives. This makes it clear that some jitter can be tolerated on this clock with absolutely no difference to the audio result.

The conversion clock's job is to define as precisely as possible the instants at which the sampling (input) or reconstruction (output) happens at the boundary between analog and digital sections of equipment. It does not govern what happens to the digital samples after sampling nor how digital samples arrive at the output prior to reconstruction. For the lowest conversion errors (in either direction) the sampling clock must have well-defined edges that are as repeatable in time as possible, and hence lowest jitter. Note this has no bearing on clock accuracy, that is, whether there is always a given number of sampling clock edges in some specified external period.

Now this would all be relatively straightforward to think about if every piece of gear had a simple selection between its own internal clock source and an external clock input, and that whichever clock was selected acted as both the conversion clock and also the synchronising clock. Indeed, this was the way most digital gear operated up to a few years ago. Since then, the designers of much of the higher-end digital gear have adopted the policy that a properly-managed internal clock gives the best jitter figures, and that if external synchronisation is required, this is best done by long-term phase-locking the internal clock to the external source. This results in the internal clock taking on the exact frequency of the external clock, but keeping its superior short term jitter properties. Doing this neatly avoids jitter multiplication when cascading clocks through several pieces of gear. It has also largely removed the need for low-jitter external master clocks, frequency accuracy notwithstanding.

I should declare my interest here to say that I have been one of the design engineers involved in this movement to use managed phase-controlled internal clocking. It's been exciting.

audiokid Thu, 01/28/2016 - 16:58

MrEase, post: 435801, member: 27842 wrote: I do believe that some problems ascribed to clock jitter are erroneous and that when clocking problems exist, they are often put down to jitter but other greater problems exist that must be resolved first before the finger is pointed at jitter.

excellent.

Boswell, post: 435804, member: 29034 wrote: I should declare my interest here to say that I have been one of the design engineers involved in this movement to use managed phase-controlled internal clocking. It's been exciting.

And right on Bos!

MrEase Fri, 01/29/2016 - 05:46

kmetal, post: 435802, member: 37533 wrote: Ok heard loud and clear, I'll read the thread and get up to speed.

Please don't get me wrong though, I'm happy to answer questions. I just don't really want to repeat what's already been done...

In order to get some photos I need to set up various gear to illustrate some points I'm trying to make. Unfortunately my bench is full of paying work ATM so I'll have to finish that before setting up for photos.

audiokid Fri, 01/29/2016 - 06:55

MrEase, post: 435819, member: 27842 wrote: In order to get some photos I need to set up various gear to illustrate some points I'm trying to make. Unfortunately my bench is full of paying work ATM so I'll have to finish that before setting up for photos.

Excellent.
 

audiokid, post: 435756, member: 1 wrote: No worries, I actually think it's all relevant to the thread though. I only mention the 10M because those following this have also been following the 10M thread. The 10M isn't in any need to be judged. It's a beautiful product.

Basically I was just expanding, to demystify some of what people think the clock is doing when it could also be more about what it's not doing. Which is to do with why jitter happens, yes, no? Most likely my wording is not "worded" well.

For more references, here is another link from Bob Katz

http://www.digido.c…

MrEase Fri, 01/29/2016 - 10:09

Having introduced the FM analogy, I hope it explains why the noise characteristic of phase noise (jitter) will affect audio in particular ways and why the noise spectrum is important.

So how do the noise characteristics of various oscillator designs vary and produce varying sounds? I’ll break this into two basic categories. First, non-slaved internal clocks and, fairly obviously, secondly, clocks that are slaved to external sources.

So let’s start with non-slaved clocks. For any sort of reasonable long-term stability, these will be crystal oscillators. There is no need in audio for the long term stability of atomic standards and the like as they do not inherently possess lower jitter, and 50 ppm accuracy is way better than we could tune a guitar or piano etc. More on this later when we consider external clocks.
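To put a number on that claim (my own back-of-envelope arithmetic, not from the post): a 50 ppm frequency error shifts pitch by well under a tenth of a cent, while even attentive listeners typically need a shift of several cents before they notice anything:

```python
import math

ppm = 50
ratio = 1 + ppm * 1e-6           # pitch ratio caused by a 50 ppm clock error
cents = 1200 * math.log2(ratio)  # musical cents (100 cents = one semitone)
print(f"{cents:.4f} cents")      # roughly 0.087 cents
```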

So why do jitter specs seem to vary so much? The simple answer is design. Crystal oscillators need just as careful design as do our precious audio circuits. As we know, noise levels vary between differing audio designs and they also can be affected by external sources, hum being one we’ve all encountered at some stage. As oscillators run at much higher frequencies than audio, the range of external influences can also increase.

Let’s first deal with self-generated noise within an oscillator. This can vary due to the many different circuits that are used. Quite often oscillators are based on logic inverters and these give no method of improving the basic noise performance of the device. Also, they are often used in basic circuits that will drive the crystal too hard and this will degrade the noise performance from the optimum. Although this noise can vary in amplitude it will generally have the same characteristic in that it is random and will decrease with the frequency offset from the desired frequency. Thus its modulation on to sampled audio will have a pink(ish) characteristic. This would not be intrusive to the “sound” and would be indistinguishable from other audio noise sources.

Here’s a picture of what this type of noise looks like using a spectrum analyser. Remember that, in an ideal world, this should just be a vertical line. The smooth curve at the higher levels very close to the centre is determined just by the test equipment bandwidth. What we’re looking at here is the clearly noisy bit outside the middle two vertical divisions. I’ve purposely exaggerated the levels to make them clear.

Next is to discuss external noise influences. These can arise from many areas in a design. Soundcards will have some form of communication to the host. This could be USB, Firewire, Ethernet etc. Each of these uses its own clock standards and frequencies, separate from the sampling clock generator. We also have mains hum (which I presume we’re all familiar with!). Depending on how the clock is designed and if these external “influences” are not well isolated from the oscillator we could end up with what’s shown in the second picture. Here I’m just showing a phase modulated 5 kHz signal, but it is clear that the result is not just noise.

This type of clock problem can produce similar 5 kHz frequencies in our recorded audio and discrete signals are much more obvious than pure noise. A clock with this spectrum could cause audible degradation BUT only at the levels I calculated in the “other” thread. It’s not good design but it isn’t necessarily going to be intrusive. Of course there could be many sources of external noise affecting the jitter noise and these would only serve to make the situation worse. Of course the worse it is, the worse would be the jitter spec.

I hope what I have shown here explains a little bit of why jitter does not have a characteristic sound! Modern clocks should be nowhere near as noisy as the examples I’ve given which have been produced purely for illustration of the problem.

DonnyThompson Sat, 01/30/2016 - 03:26

Thanks for taking the time to do this, Ease.

I am grasping some of this... not all, but some. It's gonna take me a little time to wrap my head around all of this - but that's certainly not your fault. You're explaining things very well; my lack of understanding of certain things in certain areas is entirely my own responsibility. ;)

Getting through to those of us heathen who are ignorant to so much of this ( and of course when I say "us" I mean "me" LOL) cannot be all that easy, regardless of your screen name. You're a great teacher; you just happened to be cursed with a poor student. :)

I will say this... what a fantastic resource this is for those who are doing research on the subject! :)

-donny

MrEase Mon, 02/01/2016 - 10:21

Donny,

Thanks for the kind words. I can hardly believe I've been designing electronics for over 40 years so I now like to help if I can! While I am trying to get the concepts into an easily absorbed form there are two problems. First, I can easily get overly technical without realising it and secondly, to try and explain some of the more complex technical parts I try to introduce analogies to make things more "accessible". Working out good analogies is not always easy (and can even make things worse) and such things are never 100% accurate anyway.

To all, please let me know where you find difficulty with my explanations and I'll try again. The worst case is that I present all this information and it gets misunderstood. If that happens, rather than clearing things in people's minds, I could be guilty of (inadvertently) spawning even more misinformation on the web. I don't want to do that so please let me know if anything is not clear.

I'll be back when I get the chance!

BTW my screen name was always intended as a pun on "mysteries" rather than making things easy! Not such a good screen name if I have to explain it I guess....

MrEase Tue, 02/02/2016 - 10:14

I’ve decided to move a little “off topic” in this post because both here and in the clock jitter thread I’ve mentioned that I don’t approve of the way soundcard manufacturers present their jitter spec’s and I now have test equipment that allows me to illustrate my point. I think this is necessary as without some pictures it is hard to describe in simple terms.

So this first picture is of a clock that is relatively clean (as good as my test equipment will allow!)

Nothing really to mention here except that the clock is a sine wave and I’m deliberately overdriving the oscilloscope in order to give the best picture of the timing.

The next picture shows the same signal but with some deliberately added phase noise (i.e. Jitter). Looking closely at this you will see that each consecutive transition is slightly wider than the previous one.

This is how soundcard manufacturers derive their jitter specs. The second rising edge at the centre of the screen shows that the “width” of the trace is wider than the first (left hand) rising edge. The amount is about 2 ns (each horizontal division is 20 ns) so the specification would read “jitter = +/- 1 ns". I know this is much larger than many soundcard specs but this is just an illustration using the equipment I have.

Reading this spec. you could well expect that the maximum timing error would be the same +/- 1 ns. Not so!

The next picture introduces a second clock, which is locked to the first in just the same way that we sync. clocks across multiple soundcards.

This shows that both clocks look reasonable and that the second clock (the blue trace) is only slightly degraded from the first (yellow) trace. The total jitter is (coincidentally) about 2 ns. Note that this is the relative jitter between the two clocks so each source will contribute half of this amount (as the two generators I’m using here are identical). This means that the worst case timing error between two locked soundcards would be +/- 1 ns. That would be pretty good overall.

What happens though when we add the phase noise back to the yellow signal? Surely we’d naturally expect jitter to be still reasonable. Brace yourselves!

Wow! What this now means is that our clock specified as +/- 1 ns is in real time actually around +/- 35 ns! This is the spec. I prefer as it is much more meaningful and actually relates far more accurately to the real world. This is the standard way RF signal jitter is measured. Please bear in mind that, although I’ve used two signals to demonstrate this, a single soundcard with its own clock and a jitter spec. would show the same timing errors – the yellow trace is just a reference time in this example.

Note. I had to reduce the gain on the blue trace so the jitter could be seen! The whole screen turned blue at the original amplitude...

Also bear in mind that, although this demonstration shows the “real” jitter as 35 times the “Spec. sheet jitter” this is not a constant multiplier! The ratio varies depending on the spectrum of phase noise itself, which can be larger or smaller than my example. Now that is why I don’t like the way jitter is specified!

I realise these pictures and explanations might be a bit daunting so please let me know if you have any “grey areas” in my illustration and I can try to explain a bit more clearly…

I'll be back when I have time to write something about slaved (or sync'd) clocks.

MrEase Tue, 02/02/2016 - 10:44

Just as a side note regarding the way jitter is measured. Many years ago I was asked by a company manufacturing equipment for the satellite communication market to look at their products as they kept getting equipment rejected by customers because it didn't meet jitter specs. They couldn't understand why. I'll let you guess how this manufacturer was measuring jitter! Suffice it to say, their customers measured it properly and I got a nice little job redesigning their phase locked loops...

MrEase Fri, 02/26/2016 - 08:40

Thanks Adam for your expert appraisal! I obviously presume you eliminated all other possibilities of distortion before giving us the benefit of your wisdom and experience. More information on how you reached these conclusions would be appreciated as I may be approaching this from completely the wrong angle.

To others in general, my apologies for the delay in continuing this analysis. I am currently quite hectic with paying work as unexpected urgent projects have landed in my lap. I'm not complaining, just sorry I can't continue immediately.

DonnyThompson Fri, 02/26/2016 - 23:29

MrEase, post: 435927, member: 27842 wrote: Wow! What this now means is that our clock specified as +/- 1 ns is in real time actually around +/- 35 ns! This is the spec. I prefer as it is much more meaningful and actually relates far more accurately to the real world. This is the standard way RF signal jitter is measured. Please bear in mind that, although I’ve used two signals to demonstrate this, a single soundcard with its own clock and a jitter spec. would show the same timing errors – the yellow trace is just a reference time in this example.

oooookay... I think I may be starting to get this.....

MrEase, post: 435928, member: 27842 wrote: Many years ago I was asked by a company manufacturing equipment for the satellite communication market to look at their products as they kept getting equipment rejected by customers because it didn't meet jitter specs. They couldn't understand why. I'll let you guess how this manufacturer was measuring jitter!

Was it that they weren't measuring the jitter in the same way you found to be best, mentioned above?

MrEase Wed, 03/02/2016 - 09:18

Hi Donny,

I hope you're well! If you think about it another way, if we start with a perfectly timed clock and the next cycle is at the minimum time (-1ns) then the very next cycle cannot be shorter than perfect, otherwise the error would exceed the 1ns stated jitter. There is absolutely nothing that would ensure this, so the actual timing error can build up far beyond the single cycle specification. How much the timing error exceeds the single cycle error depends entirely on the characteristic of the jitter noise and, as this noise can vary, so can the multiplying factor (about 35X in the example above).
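The build-up described above can be shown with a toy simulation (my own illustration; real oscillators do not have white, uncorrelated period errors, so the numbers are only indicative of the mechanism, not of any real clock). Each clock period gets a random error bounded to ±1 ns, the single-cycle "spec sheet" figure, and the timing of each edge is then the running sum of those errors:

```python
import numpy as np

rng = np.random.default_rng(1)
cycles = 100_000
# Per-period error, bounded to +/-1 ns: the "spec sheet" jitter figure
period_err = rng.uniform(-1e-9, 1e-9, cycles)
single_cycle = np.abs(period_err).max()
# Nothing forces successive errors to cancel, so the *edge timing* error
# is a random walk that wanders far beyond the per-cycle spec
edge_err = np.cumsum(period_err)
accumulated = np.abs(edge_err).max()
print(f"per-cycle jitter: +/-{single_cycle * 1e9:.2f} ns, "
      f"worst accumulated edge error: +/-{accumulated * 1e9:.0f} ns")
```

How far the walk actually wanders depends on the phase-noise spectrum of the clock, which is exactly why the multiplier was about 35x in the scope example and will be different for another clock.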

On your second point, exactly! They were measuring a single cycle of jitter variation which was nothing like the real timing error their clock could produce. As I said, the "normal" (and IMHO correct) way of measuring jitter in the RF way would show the 35ns noise in my example. Why this standard (and informative) method has not been adopted by the audio world is, I believe, down to a cynical way of presenting data to look the most impressive in spec. sheets. Let's face it, the average user would look at a spec. sheet and compare 1ns jitter (apples) with 35ns jitter (oranges) and draw an incorrect conclusion. In this case there is no difference as I'm measuring exactly the same clock, but then compare a 1ns "apple" to a 20ns "orange" and the orange would actually be the superior performer!

MrEase Thu, 03/03/2016 - 04:29

No problem Donny. What I also should have said (although it's sort of implied) is that two different converters quoting "1ns" as the jitter spec could be quite different in the actual timing errors that could be incurred. Something to ponder about...

I will get back with the thread asap but currently I'm just squeezing this in while doing email catch up!

