Discussion in 'Converters / Interfaces' started by audiokid, Apr 18, 2010.
What is clock jitter in detail?
Are you looking for something more detailed or specific than the Wiki article?
Yep. Despite Wikipedia's notorious reputation for inaccuracies, that article pretty much delineates anything the novice would ever want to know (and a lot more) about clock jitter.
Thankfully, AD/DA has come a long, long way since the days when jitter was a serious enough issue that everyone was using outboard clocks.
The Wiki article is detailed but general. In the field of recording, all one usually needs to know is that clock jitter is the variation in time of a sampling instant about its ideal time, and that it applies to signal conversion between the analog and digital domains in both directions, i.e. both A-D and D-A conversion. Since the analog waveform is constantly changing, a variation in time translates to a variation in amplitude, and hence an error in the converted signal. This shows up as non-harmonic distortion.
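Boswell's point that a time error becomes an amplitude error can be put in rough numbers. For a sine wave A·sin(2πft), the steepest slope is 2πfA, so a timing error of Δt produces a worst-case amplitude error of about 2πfA·Δt. A minimal Python sketch, with purely illustrative frequency and jitter values:

```python
import math

def worst_case_error(freq_hz, jitter_s, amplitude=1.0):
    """Worst-case amplitude error caused by a timing error of jitter_s
    seconds when sampling a sine of the given frequency and amplitude.
    Derived from the maximum slope of A*sin(2*pi*f*t), which is 2*pi*f*A."""
    return 2 * math.pi * freq_hz * amplitude * jitter_s

# Full-scale 10 kHz sine, 1 ns of timing error:
err = worst_case_error(10_000, 1e-9)
err_db = 20 * math.log10(err)   # error level relative to full scale, in dB

print(f"error = {err:.2e} of full scale ({err_db:.1f} dBFS)")  # about -84 dBFS
```

So even a whole nanosecond of jitter puts the worst-case error on a 10 kHz tone down around -84 dB relative to full scale, which gives a feel for why modern picosecond-class clocks are rarely the limiting factor.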
I think that (well deserved) reputation comes from pop culture, history, politics, and the "soft" sciences. I find Wiki to be as reliable as any other general reference source that I know of in math and classical physics. Compare it to a modern Encyclopaedia Britannica. Wiki is also more comprehensive, and the price is right. As always, check it twice. In the words of Ronald Reagan, "Trust, but verify." (Which means, "Don't trust.")
As someone who moonlights in the medical field, I can assure you, Wiki blows it in that department regularly.
I certainly agree with Boswell's comments, but I do have something to add. In just about all spec sheets for audio interfaces, jitter is quoted as the error between consecutive clock edges rather than the more important aspect, as Boswell implies, of errors in absolute time. Nor have I seen any figures given for clock jitter when externally referenced. Phase noise (particularly in PLLs) when clocks are slaved can be very poor, and often does not follow the Wiki suggestion of Gaussian noise. This is one of the areas where I certainly think improvements can be made, and I think that will only happen if we can get better specification of jitter from the manufacturers. Indeed, what is mostly quoted as jitter is, to my mind at least, not a true measure of jitter.
OK this is the tech talk forum, so my question is how far do we want to go into this topic?
This forum could use a topic like the legendary 96K argument on George Massenburg's old (now defunct) forum on Musicplayer....
Go as deep as you like, and offer as provocative an opinion as you dare! *grin*
Maybe the moderators should decide that! :<) I have no intention of being provocative, though, on what is a "tech talk" forum, and my thoughts have nothing at all to do with anything like a sample-rate "sound" debate. Boswell (our revered mod!) has already indicated that the Wiki definition is very general, and I am quite sure he knows exactly what I have suggested.
Take all the fun out of it, then.
In all seriousness, I'd be intrigued to hear your dissertation and, more specifically, how you believe it affects us these days, post the rise of multi-bit error correction, inexpensive-to-produce mag shielding, oversampling, and other formerly renegade ideas that have become industry standard cures for the jitter bug (pardon the pun).
What sparked this thread was reading this the other day: I'm guessing, nothing has changed? Excerpt from: 5.7 - What is clock jitter?
As someone who experiences dropouts and other irregularities often enough to cause concern, I beg the both of you to go on.
I am a novice compared to many here, but a quick study.
My current setup is somewhat reliable - not good enough.
While this may or may not affect my problem or solution (I suspect a faulty unit), I still jump at the chance to learn something about my craft.
So go on, Scott, and Mr. Ease, please!
Oh boy, what have I let myself in for!
First of all, due to the technical nature of this, I think it is best, from the point of view of my available time and of aiding understanding, if I do this in bite-size chunks.
Let me be clear from the outset that I am viewing this purely from the engineering aspect, but I will do my best to present things in relatively layman's terms. Not an easy task, I assure you.
OK, so the first thing to bear in mind is why "jitter" is not (normally!) desirable. In the audio sampling world, it has already been stated (via Wiki and other links here) that variation from the absolute (perfect) timing will cause small errors in the actual sampled signal. This is simply because the analog signal is continuously varying, and if we change the time of the sample, the signal will also change. As far as the DAW is concerned, it HAS to assume that all samples were taken in perfect time (it has no other reference), so we can directly infer that any timing errors caused by jitter will cause some form of distortion to the sampled waveform.
Now the explanation in the link provided by audiokid suggests that it is the speed of the clock edges (which is where we sample) that causes the jitter. That is not strictly correct, as it is actually noise that causes the jitter. There is no mention on that page, at least, of the key factor, which is called "phase noise". Phase noise is present in all clock signals and is directly linked to jitter.
Various references correctly point out that (I believe) all interfaces use a crystal-controlled clock oscillator. While these present by far the lowest phase noise at reasonable cost, the phase noise of crystal oscillators will also vary depending on the particular crystal and oscillator circuit. However, it is not very difficult to produce a good design. What is far more critical is to ensure that the oscillator is not degraded by external effects such as power supply noise and various local digital signals. This is an important point: while the inherent noise of the oscillator should be Gaussian, as mentioned in the Wiki article, external noise sources could cause jitter that is not only non-Gaussian but also contains discrete signals, say from a local digital signal. This could certainly cause sampling errors with discrete-signal distortion, which is most likely going to be much more noticeable.
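The difference between random (Gaussian) jitter and jitter from a discrete interfering signal can be demonstrated numerically. The sketch below samples a 1 kHz sine at 48 kHz twice: once with Gaussian timing noise and once with sinusoidal timing modulation of similar size. The errors from the sinusoidal case repeat exactly every cycle (in a spectrum they would appear as discrete tones), while the Gaussian errors do not (they raise the noise floor). All the numbers are illustrative, not taken from any real interface:

```python
import math
import random

FS = 48_000        # sample rate (Hz)
F_SIG = 1_000      # signal frequency (Hz)
F_JIT = 1_000      # frequency of the interfering (discrete) jitter source (Hz)
J = 1e-9           # jitter magnitude: 1 ns
N = 480            # ten cycles of the 1 kHz signal

def sample_error(jitter_at):
    """Error between a sine sampled at jittered instants and at ideal ones."""
    errs = []
    for n in range(N):
        t = n / FS
        ideal = math.sin(2 * math.pi * F_SIG * t)
        actual = math.sin(2 * math.pi * F_SIG * (t + jitter_at(t)))
        errs.append(actual - ideal)
    return errs

random.seed(1)
gauss_err = sample_error(lambda t: random.gauss(0.0, J))
tone_err = sample_error(lambda t: J * math.sin(2 * math.pi * F_JIT * t))

period = FS // F_SIG   # 48 samples per signal (and jitter) cycle
# Deterministic jitter makes the error pattern repeat exactly each cycle;
# Gaussian jitter gives a fresh random error every sample.
tone_repeat = max(abs(tone_err[n] - tone_err[n + period]) for n in range(N - period))
gauss_repeat = max(abs(gauss_err[n] - gauss_err[n + period]) for n in range(N - period))
print(f"tone jitter repeats to within {tone_repeat:.1e}; Gaussian differs by {gauss_repeat:.1e}")
```

A perfectly repeating error is a tone riding on the signal, which the ear picks out far more readily than the same energy spread as noise.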
I think that's enough for the first bite of the apple!
P.S. I am writing this stuff "on the fly" so I may well make some silly errors. Hopefully Boswell will be able to spot these and help me out!
By all means go on discussing the topic of clock jitter - such discussions are a big reason for having this forum. It would be helpful to keep the contributions grounded in fact rather than in opinion or hearsay, but different people's differing views are what makes for a good discussion.
Clock jitter normally causes problems below the level of dropouts or bit errors, but if severe, it can lead to those as well. It's an inexactitude in time that is usually small enough not to exceed bit-error limits, but large enough to cause quantisation errors due to the temporal displacement of the sampling instant.
First of all, I would echo what Boswell said regarding Soapfloats' dropouts. One question does arise though: are you (Soapfloats) using interfaces synced to master or other clocks? If so, the jitter will increase significantly, but it still should not be enough to cause dropout problems.
OK, that said, now for a shorter byte. If we now assume we have a good, low-phase-noise (i.e. low-jitter) oscillator, it will be running at a very much higher frequency than the sample rate. The actual frequency will normally be the LCM (lowest common multiple) of all the sample frequencies the interface allows, although it may, for various reasons, be higher. This means the frequency will be divided down to suit the required sample frequency selected. Now, dividing high frequencies also lowers the phase noise: the bigger the divider, the lower the phase noise. This follows a simple formula, but there is a limit on the phase noise floor at the output which depends on the logic family of the divider. Logic families such as ECL (probably never used in this application) are the worst, followed by LSTTL, TTL, CMOS etc. If you are not familiar with these terms, it doesn't matter; I'm just making the point about the different options open to the designer. Normally the dividers in our interfaces will be either CMOS or perhaps implemented in what is called an FPGA. An FPGA is simply a programmable logic chip. These can implement all sorts of logic functions, including dividers. With FPGA designs, care must be taken to ensure that any dividers are as free from jitter as possible, but in general they perform well.
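The LCM and divider arithmetic above can be sketched concretely. The rate list is the usual set a multi-rate interface supports; the resulting master-clock figure is just what the LCM calculation yields, not any particular product's choice (real designs often run higher, e.g. a multiple of this). Requires Python 3.9+ for `math.lcm`:

```python
import math

# Sample rates a typical multi-rate interface must support (Hz)
rates = [44_100, 48_000, 88_200, 96_000, 176_400, 192_000]

# Lowest frequency a single master clock can run at while still
# dividing down exactly to every supported sample rate:
master = math.lcm(*rates)
print(f"master clock: {master} Hz ({master / 1e6:.3f} MHz)")  # 28.224 MHz

for rate in rates:
    n = master // rate
    # Dividing a clock by N lowers its phase noise by 20*log10(N) dB,
    # until the divider's own noise floor sets a limit.
    improvement_db = 20 * math.log10(n)
    print(f"  /{n:4d} -> {rate} Hz (phase noise lowered ~{improvement_db:.1f} dB)")
```

So, for example, dividing the 28.224 MHz clock by 588 to get a 48 kHz word clock lowers the oscillator's phase noise by roughly 55 dB, subject to the divider's floor.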
So now we should have a good low jitter clock source for our interface and in general, this is the case.
Next up is how we measure jitter and later, what happens when we start trying to slave various interfaces to a clock reference....this gets a lot more tricky!
P.S. I missed an important point! The jitter - in terms of absolute time - at the output of a divider will in essence be the same as the jitter from the oscillator, again in absolute time, with perhaps a tiny degradation due to the noise of the divider itself. Note that this is not the same as the cycle-to-cycle jitter specs often provided by manufacturers.
To Mr. Ease:
My setup is as follows: Presonus Firestudio, and two Digimax LTs as ADAT 1 & 2.
There are periodic, but consistent issues w/ clocking (FS as master, Digi won't sync sometimes; Digi as master, FS will sync, but dropouts occur. Sometimes total stoppage of tracking).
I've tried changing which unit is master, how it's clocked (FS v. ADAT v. BNC) and changing sample rates. These issues occur at random regardless.
This is why I have my finger on the trigger for an RME FF800.
Again, jitter probably isn't my issue. But, I saw the opportunity to spur a discussion I could learn from, if not solve my issues.
Or, I smelled a good ol' good one in the works, and thought I'd provide the spark.
Please keep it coming w/ your bites - a little of it is over my head, but I'm following!
As all your gear is from Presonus, I am surprised that there is an issue. The most curious thing is that the Digis sometimes won't sync to the Firestudio. Unfortunately, the problems you see do not seem to be related to jitter, and I would suspect the Firestudio drivers. That does not explain, though, why the Digis sometimes fail to sync - unless the Firestudio clock is not consistent. I have no experience with Presonus gear, but I have heard of quite a few people with driver trouble. This is primarily from the Cakewalk Sonar forums, where several people have given up and got rid of such problems by changing their interface! Apparently this is something to do with the DiceII chipsets. A common piece of advice with Firewire is to use TI-chipset ports on your PC or Mac.
If I were you I'd be trying to get some support from Presonus but I have no idea how good their support is....
OK, so how do "we" measure jitter, or rather, how do I measure jitter!
I have already mentioned that all the specs I have seen on this give a figure for cycle-to-cycle variations in timing. To me, as an engineer, this means virtually nothing, as it gives almost no information that is crucial to audio sampling. You might think that a lower figure for jitter must represent a better product, but that is not necessarily the case. If we knew for certain that the phase noise had a Gaussian distribution then it would, but as I have already said, that is not an assumption we can make.
In answering Soapfloats in my last post here, I looked up some specs and noticed this:
Jitter <20ps RMS, 20Hz-20kHz
Jitter Attenuation >60dB, 1ns in=>1ps out
This seems to be a more detailed spec than is normally given, but the first line still tells me very little. It is an RMS figure, so clearly the peak errors will be greater; it also mentions 20 Hz - 20 kHz. That gives me a very good idea of how this was measured, but still tells me very little about real-world performance. Are there any discrete signals in that 20 Hz - 20 kHz bandwidth? Is it white noise, pink noise, or what? The answer would give me some idea, but the bottom line is that I simply would not measure jitter in this way.
The second line gives us information about the performance when locked to an external source, although I would defy the layman to know that. Having said that, the >60dB figure does tell me something about the PLL performance, but is that a good, or indeed necessary, requirement? First of all, I do not believe for one second that using an external source could provide the apparently improved jitter performance of 1ps out (from 1ns in) over the internal clock spec of <20ps - all derived from a pretty awful clock in the first place. I'll mention more on this later (if I remember - please remind me if I forget!).
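One common way to turn an RMS jitter figure into something with audible meaning is the standard jitter-limited SNR estimate for a full-scale sine, SNR ≈ -20·log10(2π·f·t_j). A quick check of the two quoted numbers, taking the RMS figures at face value (which, as the original spec discussion notes, hides a lot):

```python
import math

def jitter_limited_snr_db(freq_hz, jitter_rms_s):
    """Approximate best-case SNR (dB) when sampling a full-scale sine of
    freq_hz with uncorrelated RMS clock jitter of jitter_rms_s seconds."""
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_rms_s)

# The quoted internal spec of 20 ps RMS, evaluated at 20 kHz:
print(f"{jitter_limited_snr_db(20_000, 20e-12):.0f} dB")   # ~112 dB

# The quoted 1 ns external input, if it were NOT attenuated by the PLL:
print(f"{jitter_limited_snr_db(20_000, 1e-9):.0f} dB")     # ~78 dB
```

That is, 20 ps RMS would limit a 20 kHz full-scale tone to roughly 112 dB SNR (comfortably beyond most converters), while an unattenuated 1 ns would drag it down to around 78 dB, which is why the >60dB attenuation figure matters at all.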
As I have already implied (I think!), if you are using a single interface and not slaving any other soundcards, jitter should really not be an issue, as a single clock determines the timing of all channels of the interface; unless the design is poor in respect of allowing PSU or digital-noise degradation, this will be the case. While it is important that the clock is good enough to all but eliminate distortion on a single channel, it is also important that all channels in an interface sample at the same time. It is not really sufficient to say that differences in distance between microphones already skew the timing of signals; from the engineering viewpoint this is a different problem, as it would not cause distortion in the sampling process itself. The biggest problems with jitter arise when we slave one interface to another. Using the spec given above, you might think that we should always get two samples within 20ps of each other - if that were the case, we really would not have any problem or debate about clock jitter. What the spec does not tell us is how many consecutive periods we might get of minus 20ps (for example), and that is important, as it would tell us what the maximum timing error between supposedly identically timed samples might be.
Testing for this figure is not quite so straightforward as cycle-to-cycle testing, but it reveals an awful lot more about what our real timing errors are going to be. The problem with absolute testing is that we have no perfect standard, so the best technique, which actually shows real-world results, is to measure one clock against another in time. If we have a reference clock which is the best available, and then use it to reference two identical interfaces, we can measure the maximum difference between the two interfaces. This is a real result, and we can also analyse the characteristics of this noise to identify any problems. Of course, a single slaved interface will show only part of this measured error (half, if the two peak errors simply add), as we are measuring the sum of two interfaces' errors. I have used this technique for over thirty years, and can also say that, in the satellite comms world, this is how jitter is really measured and specified, as cycle-to-cycle measurements just do not convey enough information.
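A small simulation illustrates why a cycle-to-cycle figure can hide the absolute timing error described above. Below, two simulated clocks have similar cycle-to-cycle jitter, but in one the per-cycle errors are independent, while in the other they accumulate (a random walk, of the kind a poorly slaved PLL can produce). The accumulated clock drifts far further from ideal time, which is what matters when comparing two supposedly identically timed samples. The numbers are entirely illustrative:

```python
import math
import random

random.seed(42)
N = 10_000          # clock cycles to simulate
SIGMA = 20e-12      # per-cycle timing noise, 20 ps RMS

white_edges = []    # edge-time error when each cycle's error is independent
walk_edges = []     # edge-time error when per-cycle errors accumulate
acc = 0.0
for _ in range(N):
    white_edges.append(random.gauss(0.0, SIGMA))
    acc += random.gauss(0.0, SIGMA)
    walk_edges.append(acc)

def cycle_to_cycle_rms(edges):
    """RMS of the edge-to-edge timing differences - what spec sheets quote."""
    diffs = [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Both clocks look similar on a cycle-to-cycle measurement...
print(f"white c2c: {cycle_to_cycle_rms(white_edges):.2e} s")
print(f"walk  c2c: {cycle_to_cycle_rms(walk_edges):.2e} s")

# ...but their worst absolute (sample-placement) errors are very different:
white_peak = max(abs(e) for e in white_edges)
walk_peak = max(abs(e) for e in walk_edges)
print(f"white peak: {white_peak:.2e} s, walk peak: {walk_peak:.2e} s")
```

The random-walk clock's peak deviation from ideal time comes out orders of magnitude worse than the independent-error clock's, even though a cycle-to-cycle spec would rate the two as comparable.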
I'm not sure if any of this is going to come over clearly. If not, please let me know and I'll try to clarify it all.
It occurs to me I should have made this my "Why I'm not sure I want to win Sonar 8.5" blog! :<)
As putting these thoughts on jitter into words is somewhat time consuming, could I ask for a bit of feedback on whether my ramblings are of interest?
So far the only response is from soapfloats and writing stuff like this on a forum is a bit like shouting into a black hole - you have no idea if anyone is interested. Also there don't seem to have been that many hits on the thread (since I started my ramblings) so currently I just don't know if this is worthwhile continuing. Of course I'll continue if you are interested and don't find it too boring!
To paraphrase a well known saying, in cyberspace, no one can hear you yawn!