

Posted

@kdoot and I had a bit of a discussion on the PS Audio DirectStream DAC thread.

 

I wanted to separate out a tangent that I could see forming which was unrelated to the topic of that thread. Below are a few comments on the technical issues and specifications. With today's technology the skill is, for the most part, in the circuit board layout rather than in the specs of the individual components. For clocks, impedance control and the power supply and its decoupling are the real issues that affect performance.

 

 

Talking about the design choice made by Ted, kdoot said:

 

And he really emphasised how far he went in his search for a clock which was ultra-consistent from sample to sample, as you mention below. But the reason for that is to move the jitter - ie the clock variation - down into frequencies which are much more like low levels of "wow and flutter" in the analog world. So low that he calls the result "wander" rather than "jitter", and I assume he's talking about final audio output measurements there. I might go ask for some typical jitter noise plots on the PSA forum or something.

 

 

The concern I had in reading that quote is the idea that jitter is 'moved' to a lower frequency. I *think* kdoot understands the issues but was a bit lazy in the wording of this. I wanted to make this post to clear it up for him and hopefully help others who are interested to start to understand some of the issues at play and how they affect performance.

 

A more correct way of thinking about this is that 'the jitter that is relevant to perception has been minimised'.

 

The confusion with phase noise and jitter comes in when we mix up a few of the following and don't understand them fully:

- phase noise values at low frequencies

- phase noise plots and what parts are important (10Hz - 100kHz is my current understanding) - these give us an idea of the statistical likelihood that the clock period is changing at a certain frequency. It's a complex concept to grapple with; it's probably more generally understood as a spectral density.

- jitter in ps RMS (the RMS value gives us no idea of how the spectrum looks, so it doesn't necessarily give us a clear idea of the value at low frequencies). This is simply an easy number to compare, but comparisons are only meaningful when the figures are calculated over the same frequency range.

- clock drift over time due to aging/temp changes etc is just not audible to any human.

- A FIFO buffer separating two clock domains does not 'move jitter' to a lower frequency either; it just means that the input jitter is no longer relevant to the output - what matters then is the output-side circuit noise, interface arrangement and clocks (there's a small sketch of this just below).
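To make that last point concrete, here is a minimal toy sketch in Python (purely illustrative, with made-up numbers, not modelled on any particular product): a FIFO sits between a deliberately very jittery write clock and a clean read clock, and the output sample timing depends only on the read-side clock.

```python
# Toy model only: exaggerated input jitter, idealised output clock.
import random
from collections import deque

fs = 44_100                 # nominal sample rate, Hz
t_nom = 1.0 / fs            # nominal sample period, s

fifo = deque(range(100))    # pre-fill so this toy example never underruns
t_in = t_out = 0.0
out_times = []

for n in range(100, 1100):
    # Write side: nominal period +/- 10% jitter (grossly exaggerated).
    t_in += t_nom * (1 + random.uniform(-0.1, 0.1))
    fifo.append(n)

    # Read side: a fixed, clean period from the local oscillator.
    t_out += t_nom
    fifo.popleft()
    out_times.append(t_out)

# Output intervals are all t_nom regardless of the input jitter:
intervals = [b - a for a, b in zip(out_times, out_times[1:])]
print(max(intervals) - min(intervals))   # ~0 (floating point rounding only)
```

Of course, in a real DAC the two domains still have to agree on the average rate over the long term, which is where the feedback schemes discussed further down the thread come in.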

 

 

As an example, phase noise plots can be found in the datasheet for the Crystek CCHD-957 clock, which is used by many DAC designs now for its excellent (really amazing) performance and reasonable cost - it's likely a long way better than anything audible. If there is still a jitter problem in an interface using one of these ... look at the implementation, not the oscillator.

 

They don't show phase noise performance below 10Hz, because it's not really relevant (with most ICs and designs of recent years, those frequencies are not likely to be an issue unless something is broken). i.e. sub-Hz phase noise is not of interest in an audio DAC designed by anyone half awake with a few modern, inexpensive parts.

 

Take the datasheet numbers and plug them into a jitter calculator to convert from dBc/Hz to ps RMS. For the 45.1584MHz Crystek CCHD-957 we can estimate jitter at ~0.11ps RMS. The decoupling capacitors and the circuit board layout that power this circuit are far more important than replacing the clock itself if you are chasing a measured improvement in performance.
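For anyone who wants to do that conversion themselves, here is a rough sketch of what a jitter calculator does. The phase noise points below are illustrative placeholders in roughly the right ballpark, not figures quoted from the Crystek datasheet - substitute the real datasheet values.

```python
# Rough dBc/Hz -> ps RMS conversion over a chosen offset range.
import numpy as np

f0 = 45.1584e6                                    # oscillator frequency, Hz

offsets = np.array([10.0, 100.0, 1e3, 10e3, 100e3])          # offset freq, Hz
L_dbc = np.array([-100.0, -130.0, -150.0, -160.0, -165.0])   # dBc/Hz, placeholders

# Interpolate the plot on a log-frequency grid (straight lines on the usual
# log-x datasheet plot), then integrate the single-sideband noise power.
f = np.logspace(np.log10(offsets[0]), np.log10(offsets[-1]), 2000)
L = np.interp(np.log10(f), np.log10(offsets), L_dbc)
S = 10 ** (L / 10)                                # linear power ratio per Hz

ssb_power = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f))   # trapezoid rule
phase_rms = np.sqrt(2 * ssb_power)                # rad RMS (x2 for both sidebands)
jitter_rms = phase_rms / (2 * np.pi * f0)         # seconds RMS

print(f"~{jitter_rms * 1e12:.3f} ps RMS over 10 Hz - 100 kHz")
```

The number you get depends heavily on the integration limits, which is exactly the point above about ps RMS figures only being comparable when they cover the same frequency range.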

 

 

Hopefully this post gives some information for those trying to understand what all this engineering lingo is about. If there is anything that could be clearer in the above, shout out and, as I get time, I'll try to comment / clarify. I certainly don't know everything about this topic; I'm young and there is lots to learn.

 

 

 

Chris


Posted

Nice topic Chris and quite timely for me because I have been trying to settle this stuff in my mind for a little while now.

 

I get that the FIFO should decouple incoming jitter from audio jitter; that is easy to understand and the entire purpose of the FIFO. About the only way that I can see the FIFO affecting the audio clocks is via 'ground bounce' between the common grounds or some airborne RFI. I am aware of some anecdotal evidence to suggest that tuning one end of the USB link so that the clocks are as close to in sync as possible may have a beneficial effect on the operation of the FIFO with regard to stability and the 'bounce' of the ground as previously discussed. As I said, this is anecdotal, or listening-based only, at this stage, but with any luck there will be some measurements to confirm or deny it in due course.

 

Phase noise is obviously affected by implementation as you say - things like the quality of power supply/grounding, decoupling et cetera. I do wonder how many audio clocks are optimally implemented and get near the actual specifications listed by the manufacturer. Another thing that worries me is that some oscillator manufacturers publish 'calculated' phase noise plots rather than measured plots (the Crystek plots are measured, the NDK plots are not), and that the power supply noise used to generate those plots is not published. For example, I have seen comments that one particular brand of clocks was more susceptible to power supply noise than other brands, but it was kindly explained to me that perhaps the quality of the power supply used during the manufacturer's testing was better than that being used in real-world implementations, thus leading to less than optimal results.

 

Anyway Chris, is there a level below which clock phase noise needs to remain? For example, below -100dB/Hz or -80dB/Hz? I ask this as this is the bit that I don't really understand: differences of a few dB/Hz in the manufacturer phase noise plots seem to result in readily audible changes. I know of one dac that has had three different clocks (Tent Labs, Dexa Neutron Star, Crystek) and they all sounded different, but basically the better the phase noise plot the 'better' the sound (the Crystek easily the preferred of those three). Where does it end?

 

Anthony

Posted

Hey Chris, thanks for this.

I *think* I understand this stuff to a first-order approximation but I would not claim to be an expert on the characterisation or measurement of jitter.

As Far As I Know...

Jitter is about instabilities in timing. A perfect clock would "tick" with exactly equal intervals between each tick. No clock is perfect.

If you measure the difference in duration between one tick and the next, you get a time interval such as a picosecond or whatever. For digital audio you want those differences to be minimised to a point where the variations in timing become inaudible. There is considerable debate over the threshold for jitter audibility.

Part of the debate relates to the pattern of variation exhibited by the jitter intervals of a particular clock/circuit, and especially how random it is or isn't. If you have repeated trends of the interval getting shorter for a while, then getting larger for a while, you have a measurable oscillation or "phase noise". (I hope.) Both the magnitude and the frequency of this oscillation are important.
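To put a toy example behind that idea (very much a sketch, with made-up and exaggerated numbers, not a real measurement method):

```python
# Toy clock: a slow, exaggerated periodic wobble plus some random jitter.
# The spectrum of the tick-to-tick deviations separates the two: the wobble
# shows up as a peak at its rate, the random part as a broad floor.
import numpy as np

fs = 44_100
t_nom = 1.0 / fs
n = 1 << 14

k = np.arange(n)
wobble = 1e-9 * np.sin(2 * np.pi * 100 * k * t_nom)   # 1 ns wobble at 100 Hz
noise = 5e-12 * np.random.randn(n)                    # 5 ps random jitter
ticks = k * t_nom + wobble + noise                    # imperfect tick times

deviation = np.diff(ticks) - t_nom                    # jitter of each interval
spectrum = np.abs(np.fft.rfft(deviation))
freqs = np.fft.rfftfreq(len(deviation), d=t_nom)

print(freqs[np.argmax(spectrum[1:]) + 1])             # ~100 Hz: the periodic part
```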

One of the challenges for DACs is that they often have to match their own pace to the pace of incoming data. If you don't want to just trust the incoming clock (and you shouldn't if you want to minimise jitter) you generally have to try and get a feel for the "average" rate of incoming data. You react. Depending on how long an average you can observe, and how precisely you can adjust your own pace, you might find yourself over- and under-shooting the target repeatedly and that means you're oscillating. You've created your own phase noise. I imagine there are also local electrical causes of phase noise too, such as instabilities in the power supply.

As well as minimising random jitter, phase noise needs to be minimised also. Clean power, a large buffer and a very consistent but finely adjustable clock are the key ingredients for doing this.

How's that?

Posted

One of the challenges for DACs is that they often have to match their own pace to the pace of incoming data. If you don't want to just trust the incoming clock (and you shouldn't if you want to minimise jitter) you generally have to try and get a feel for the "average" rate of incoming data. You react. Depending on how long an average you can observe, and how precisely you can adjust your own pace, you might find yourself over- and under-shooting the target repeatedly and that means you're oscillating. You've created your own phase noise. I imagine there are also local electrical causes of phase noise too, such as instabilities in the power supply.

As well as minimising random jitter, phase noise needs to be minimised also. Clean power, a large buffer and a very consistent but finely adjustable clock are the key ingredients for doing this.

How's that?

 

Hey Kdoot.  From my understanding, you have one thing here out of kilter, but I could be wrong myself, so don't take my word for it.  The audio clocks are used to retrieve data from the FIFO and control how the dac chips time their output.  These clocks work at a set and constant rate and they do not wait or adjust or do anything other than tick over at a constant rate.  If the data is not ready they push out an empty frame and you get a tick or a pop through your speakers.  They can do this because the FIFO is controlled by a different clock on the input side to the output side.

 

Asynchronous USB is a little different. Here the computer spits info out of its buffer using a fixed clock that cannot vary its pace; it is set in stone and cannot be altered by any means. The data transfer rate is set by the amount of information held in the "frame" for each individual clock interval... so on average 16/88.2 has twice the amount of data transmitted in each frame as Redbook, i.e. the frames are twice as big. Now these clocks at each end of the USB cable will always run out of sync with each other - there is just no way that they are ever at exactly the same 24MHz or 48MHz - so the receiver (the dac) has to be able to send some form of correction to the host (the computer), i.e. "whoa back" or "give it the berries". This is done by the FIFO in the dac sending a little signal back to the computer, which in turn adjusts how much data it packs into each frame. So there are likely to be constant "correction calls" along the USB cable to vary the amount of information that is put into each frame, thus slightly altering the data transfer rate so that the FIFO buffer in the dac neither under- nor over-runs.

 

Now, as far as I know this type of inter-communication is not possible once the audio clocks become involved because these clocks are used to pull the data out of the output side of the FIFO and to time the operations of the dac chips themselves thus timing the music that we hear...there is no "other end" of the link because it is the end of the road as far as the dac is concerned. 

 

I hope I have not led you astray here Kdoot!!  Chris will let us know if I am right.

 

Anthony

Posted

Hey Kdoot.  From my understanding, you have one thing here out of kilter, but I could be wrong myself, so don't take my word for it.  The audio clocks are used to retrieve data from the FIFO and control how the dac chips time their output.  These clocks work at a set and constant rate and they do not wait or adjust or do anything other than tick over at a constant rate.  If the data is not ready they push out an empty frame and you get a tick or a pop through your speakers.  They can do this because the FIFO is controlled by a different clock on the input side to the output side.

This may be true in specific instances but I disagree with it in a general sense.

I forget which maker it was but one DAC used a FIFO that was a bit like you describe. It had a clock tuned a tiny bit slower than normal so its rather large buffer would gradually fill up. It watched for moments of silence in the data and used that to skip forward and "catch up" with real time.

The Wolfson WM8805 SPDIF receiver chip has a small jitter-reducing FIFO component. The 8805 has a very high freq clock source (can't recall whether external or via internal multiplication) which allows it to keep an eye on the average buffer fill rate over time and pulse an I2S signal out at a fairly consistent rate. It works well at pulling really bad signals like low-rate Toslink into some semblance of order but has a pretty hard floor below which it can't reduce jitter any further.

The DIY FIFO design and the DirectStream FIFO we were discussing in the other thread both rely on voltage-controlled crystal oscillators as their master clocks for taking data out of the buffer and performing D-to-A conversion. These clocks can have their overall frequency adjusted by changing the voltage level applied to them. In these FIFOs, if the buffer is getting too full the clock speed gets turned up; conversely if the buffer is running out of data the clock speed gets turned down.
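Something like this, in toy form (my own simplification of the general idea, with made-up numbers, not anyone's actual control code):

```python
# Toy sketch: the buffer is drained by the local clock, and the VCXO gets a
# tiny frequency trim up or down depending on how full the buffer is.
def vcxo_trim_ppm(fill, target=0.5, max_trim=10.0):
    """Map buffer fill (0..1) to a frequency trim in ppm.

    fill > target -> data piling up -> run slightly fast to drain it
    fill < target -> running dry    -> run slightly slow to let it refill
    """
    error = fill - target
    return max(-max_trim, min(max_trim, 2 * max_trim * error))

f_nom = 22_579_200                        # nominal master clock, Hz
fill = 0.62                               # buffer a bit fuller than the midpoint
f_actual = f_nom * (1 + vcxo_trim_ppm(fill) * 1e-6)
print(f_actual - f_nom)                   # ~ +54 Hz: a hair fast, so the buffer drains back
```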

 

Asynchronous USB is a little different. Here the computer spits info out of its buffer using a fixed clock that cannot vary its pace; it is set in stone and cannot be altered by any means. The data transfer rate is set by the amount of information held in the "frame" for each individual clock interval... so on average 16/88.2 has twice the amount of data transmitted in each frame as Redbook, i.e. the frames are twice as big. Now these clocks at each end of the USB cable will always run out of sync with each other - there is just no way that they are ever at exactly the same 24MHz or 48MHz - so the receiver (the dac) has to be able to send some form of correction to the host (the computer), i.e. "whoa back" or "give it the berries". This is done by the FIFO in the dac sending a little signal back to the computer, which in turn adjusts how much data it packs into each frame. So there are likely to be constant "correction calls" along the USB cable to vary the amount of information that is put into each frame, thus slightly altering the data transfer rate so that the FIFO buffer in the dac neither under- nor over-runs.

That's different to my understanding. See how this sits with you:

Async USB is a LOT different to any other common form of digital audio interconnect for one fundamental reason: the DAC is in control of the overall rate of data flow. USB itself can send packets of data in bursts which run at a far higher data rate than is needed for the real-time audio playback. The USB receiver has a small-ish FIFO buffer. After an initial handshake which sets expectations for the overall rate of data flow (eg two channels of 16/44.1) the receiver signals to the USB host each time it's ready to receive another packet of data to add to its buffer. It's a bit like clay pigeon shooting where the DAC is calling "Pull! Pull! Pull!" at the pace which suits it best. And that pace is ideally under the overall control of the master clock used for D-to-A conversion.

 

Now, as far as I know this type of inter-communication is not possible once the audio clocks become involved because these clocks are used to pull the data out of the output side of the FIFO and to time the operations of the dac chips themselves thus timing the music that we hear...there is no "other end" of the link because it is the end of the road as far as the dac is concerned.

Have to admit I'm not quite sure what you mean by this :)

Posted

That's different to my understanding. See how this sits with you:

Async USB is a LOT different to any other common form of digital audio interconnect for one fundamental reason: the DAC is in control of the overall rate of data flow. USB itself can send packets of data in bursts which run at a far higher data rate than is needed for the real-time audio playback. The USB receiver has a small-ish FIFO buffer. After an initial handshake which sets expectations for the overall rate of data flow (eg two channels of 16/44.1) the receiver signals to the USB host each time it's ready to receive another packet of data to add to its buffer. It's a bit like clay pigeon shooting where the DAC is calling "Pull! Pull! Pull!" at the pace which suits it best. And that pace is ideally under the overall control of the master clock used for D-to-A conversion.

 

 

 

Hi Kdoot,

 

Yes, the dac is in control of the amount of data it receives, but not by the method you describe.  Have a read of the USB protocols some time.  I did have a group of links here when I did my research last year but they have disappeared from my browser so I can't give them to you just now.  The USB Host Controller chip that is on the computer mobo or PCIe/USB card has its own clock sitting right next to it which it uses to time the frames that it sends.  On the dac end is another USB Controller chip that also has its own clock (not the Audio Clock - it is either 24.0000MHz or 48.0000MHz).  If the dac uses its Audio Clock (say 22.5792MHz) to time the USB IC then it is a very poor design, because USB works at a rate that is just not divisible by the audio clock's frequency and there would be all kinds of problems.  So no... USB transmission is not controlled at all by the audio clocks in a dac... there is a separate clock for that.

 

 

Have to admit I'm not quite sure what you mean by this :)

 

What I meant is that once the signal leaves the dac chips, no more timing (or clock input) is required because the signal is now analogue.  Apart from going through the I/V stage, the dac's job is done.

 

 

Cheers,

 

Anthony

Posted (edited)

I'll get to the earlier questions a bit later on.

 

Just wanted to comment on the discussion about asynch USB. I think, Anthony, you've focused mostly on the physical aspect of the protocol and less on the logical (software) layers. kdoot's description is in line with my understanding of asynch USB. The timing of the DAC clock isn't used directly, but the volume of data the DAC consumes, and the average rate it consumes it at, is what determines the average data rate through the USB 'pipe'. The clocks will drift over time by different amounts to each other, so this protocol is necessary to ensure that the 'bucket' of data at the DAC end of the USB cable does not become empty. The rate of data out of the bucket is defined by the DAC clock. The rate of data into the bucket is determined by the computer, and the 24MHz or whatever USB clock is simply there to manage the physical transfer of data (in bursts, in packets), so the USB clock has no influence on the average data rate as such.
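A very stripped-down model of that 'bucket' picture, with made-up numbers and a made-up control law just for illustration (real async endpoints report a samples-per-frame style feedback value in a similar spirit):

```python
# Toy model of the average-rate feedback: the DAC drains its buffer at
# whatever rate its own clock dictates, and a feedback value tells the host
# to pack slightly more or fewer samples into upcoming frames.
nominal = 44.1            # samples per 1 ms USB frame at 44.1 kHz
dac_rate = 44_102.0       # DAC clock runs a touch "fast" (samples per second)
target = 400.0            # desired buffer fill, samples
fill = 400.0

feedback = nominal
for frame in range(5000):                 # 5 seconds of 1 ms frames
    fill += feedback                      # host fills per the last feedback value
    fill -= dac_rate * 0.001              # DAC drains at its own clock's rate
    feedback = nominal + 0.01 * (target - fill)   # nudge the requested rate

# (A real host sends whole samples and carries the fraction forward; ignored here.)
print(round(fill, 1), round(feedback, 3))  # settles near 400 samples and 44.102/frame
```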

 

Cheers,

Chris

Edited by hochopeper
Posted

Yeah, thanks Chris for clarifying. My comment about async USB neglected to confirm the actual USB transfers are still controlled by standard USB clocks at both the sending and receiving ends, which might have given the impression that I thought the audio clock was being used instead.

The USB transfer is all done in rapid bursts and they can be jittery as anything (provided the data is received correctly). But those bursts of data go into a small FIFO which gets drained at a rate defined by the audio clock. When the buffer gets too low, the async USB endpoint yells "Pull!" for another burst of data from the computer.

Posted

USB data is not sent in "bursts" though... it does not just start up and stop when it is asked to... a frame is sent _every_ time a frame is supposed to be sent, which is solely determined by the cycle rate of the USB clock in the computer... the computer just spews them out almost without regard for what is downstream.  Whether or not those frames actually contain any "packets" or data is up to the FIFO of the dac to manage, which it does by monitoring its buffer and then telling the USB Host Controller in the computer to adjust the number of packets contained in each frame.  The reason that the USB controllers at each end of the cable have their own clocks is so that they are not so beholden to other processes in the computer or dac (read latency and the timeliness of getting data to the USB buffer).

 

So my explanation is very similar to yours Chris in that the only way that the audio clocks have anything to do with the rate of data transfer over asynch USB is by the rate at which they empty the FIFO...they have zero direct effect on the asynch USB transfer at all.

 

Async USB is a LOT different to any other common form of digital audio interconnect for one fundamental reason: the DAC is in control of the overall rate of data flow. 

 

Yes, the dac is in control of the rate of data flow because it has to keep its FIFO buffers within range.

 

 

USB itself can send packets of data in bursts which run at a far higher data rate than is needed for the real-time audio playback. The USB receiver has a small-ish FIFO buffer.

 

 

Sort of true.  USB at full data rate has every frame full of packets and every rate under that has the same number of frames sent, because the clock in the computer determines that, but those frames are under-populated because we don't need them filled up.  USB usually does have smallish buffers, but as far as I know you can actually make them as big as you want.

 

 

 

 After an initial handshake which sets expectations for the overall rate of data flow (eg two channels of 16/44.1) the receiver signals to the USB host each time it's ready to receive another packet of data to add to its buffer. It's a bit like clay pigeon shooting where the DAC is calling "Pull! Pull! Pull!" at the pace which suits it best. And that pace is ideally under the overall control of the master clock used for D-to-A conversion.
 

 

Not true.  Asynch USB does not work by the computer only sending frames when it is asked, or like in the "Pull, Pull, Pull" analogy.  The computer sends out a constant stream that is only changed when the dac asks it to change... so if the clocks at each end of the link are perfectly in sync there is no change ever made... but no two USB clocks are ever of equal frequency, so there are messages "Up, Up, Up, Down, Down, Up" etc.

 

Hope this helps.

 

Anthony

Posted

Not true.  Asynch USB does not work by the computer only sending frames when it is asked, or like in the "Pull, Pull, Pull" analogy.  The computer sends out a constant stream that is only changed when the dac asks it to change... so if the clocks at each end of the link are perfectly in sync there is no change ever made... but no two USB clocks are ever of equal frequency, so there are messages "Up, Up, Up, Down, Down, Up" etc.

 

It's not the two USB clocks that influence the Up, Down rate feedback calls. IMO the USB clocks are a distraction. The average rate of data into the USB-to-I2S device fills a buffer (a small FIFO). This cannot be allowed to overflow or empty, and the rate feedback ensures that doesn't happen. The timing of data out of this buffer is controlled by the audio clock.

Posted

Anthony, I think we agree except we're talking at different layers. In my analogy I'm abstracting one level up from your explanation - although I am now persuaded that the clay shooting comparison doesn't work as well as I'd hoped.

How about this one...

The async USB audio connection is like a string of mining carts (a la the recent Hobbit movie) running at a constant speed, with a signalling system so that the receiving end can advise the sender of the state of its buffers. The sender has a loading mechanism which dumps a quantity of ore into each cart. The actual quantity can be adjusted in a series of discrete increments, all the way down to zero. Based on the signals from the receiver, the sender will adjust the quantity of material being sent in each cart.

Does that work better for you?

The basic contention is that the signalling feedback from the receiver is based on the current fullness of the FIFO which is being emptied at a constant rate under the control of the audio clock, independent of the speed of the USB transfers which are under the control of other clocks.

Posted

Anthony, I think we agree except we're talking at different layers. In my analogy I'm abstracting one level up from your explanation - although I am now persuaded that the clay shooting comparison doesn't work as well as I'd hoped.

How about this one...

The async USB audio connection is like a string of mining carts (a la the recent Hobbit movie) running at a constant speed, with a signalling system so that the receiving end can advise the sender of the state of its buffers. The sender has a loading mechanism which dumps a quantity of ore into each cart. The actual quantity can be adjusted in a series of discrete increments, all the way down to zero. Based on the signals from the receiver, the sender will adjust the quantity of material being sent in each cart.

Does that work better for you?

The basic contention is that the signalling feedback from the receiver is based on the current fullness of the FIFO which is being emptied at a constant rate under the control of the audio clock, independent of the speed of the USB transfers which are under the control of other clocks.

 

That sounds more like it.  Yes.  Now back to normal programming...sorry for the distraction Chris.

Posted

I've just re-read this thread and my brain hurts more the second time. But good on you guys for explaining all this.

 

Please though. What is a FIFO?

Posted (edited)

First In First Out buffer used for isolating the data transfer part of the dac from the part that actually makes the analogue waveform.  A FIFO is very good at greatly reducing the effects of source jitter from things like your computer or CD Player.

Edited by acg

Posted

Just re-reading this thread...

They don't show phase noise performance below 10Hz, because it's not really relevant (with most ICs and designs of recent years, those frequencies are not likely to be an issue unless something is broken). i.e. sub-Hz phase noise is not of interest in an audio DAC designed by anyone half awake with a few modern, inexpensive parts.

What if the *only* measurable phase noise is sub-Hz? (And it's at a suitably low value.)

That would be a good thing, right?

Posted

Just re-reading this thread...

What if the *only* measurable phase noise is sub-Hz? (And it's at a suitably low value.)

That would be a good thing, right?

 

That would be great, but I fear that it is not possible either at this point in time or in the foreseeable future. But you obviously asked this question for a reason, so maybe you know something that I don't.   :)

 

Sure, you could try to measure phase noise using some gear with a poor noise floor and you would not catch a ripple on some of the oscillators out there; all you would see is the noise floor of the measurement devices themselves.  The best clock phase noise plots that I have seen are about -150dBc/Hz @ 10Hz (they go down to about -170/-180 dBc/Hz from memory), and those are not for audio frequencies, and most likely not for audio budgets.

 

Anthony

Posted

That would be great, but I fear that it is not possible either at this point in time or in the foreseeable future. But you obviously asked this question for a reason, so maybe you know something that I don't.   :)

 

Sure, you could try to measure phase noise using some gear with a poor noise floor and you would not catch a ripple on some of the oscillators out there; all you would see is the noise floor of the measurement devices themselves.  The best clock phase noise plots that I have seen are about -150dBc/Hz @ 10Hz (they go down to about -170/-180 dBc/Hz from memory), and those are not for audio frequencies, and most likely not for audio budgets.

 

Anthony

Ta for that. Interesting. If you watch Ted Smith's presentation on the DirectStream DAC, you'd hear him make claims about the low jitter of his clock using words somewhat similar to what I've described. If he's done anything remotely like that, it stands a good chance of producing some interesting musical output.

Again... I'm dead keen to actually listen to the thing to learn whether they've delivered anything special. Until then it's just speculation.

Posted

Just re-reading this thread...

What if the *only* measurable phase noise is sub-Hz? (And it's at a suitably low value.)

That would be a good thing, right?

 

 

Correct, except that we are quite insensitive to perturbations in the clock rate over that sort of time period, so I think it is a red herring in general: it doesn't characterise anything about the performance of the device that is relevant.

 

It would be more correct to say that the phase noise of the clocks / signals was below the noise floor of the instrument used, state that value, and leave it at that.

Posted (edited)

Anyway Chris, is there a level below which clock phase noise needs to remain? For example, below -100dB/Hz or -80dB/Hz? I ask this as this is the bit that I don't really understand: differences of a few dB/Hz in the manufacturer phase noise plots seem to result in readily audible changes. I know of one dac that has had three different clocks (Tent Labs, Dexa Neutron Star, Crystek) and they all sounded different, but basically the better the phase noise plot the 'better' the sound (the Crystek easily the preferred of those three). Where does it end?

 

 

Hi Anthony,

 

That probably depends on the application. The clock is only part of the 'jitter' picture. 

 

Also, I noticed a bit of a mistake in the units you're using: the measurements we're looking at for clock performance are in dBc/Hz, which is a power spectral density, not a plain ratio in dB.

 

dBm and dBW are absolute power levels; dBc is power relative to the carrier. Plotted against offset frequency (dBc/Hz), it is the noise power in a 1Hz bandwidth at a given offset, relative to the carrier or fundamental frequency of the clock.
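A quick worked illustration of the unit, with a made-up reading:

```python
# -150 dBc/Hz at some offset means the noise power in a 1 Hz slice there is
# 10**(-15) of the carrier power (the "c" in dBc = relative to the carrier).
L_dbc_per_hz = -150.0
ratio_per_hz = 10 ** (L_dbc_per_hz / 10)
print(ratio_per_hz)       # 1e-15, a dimensionless power ratio per Hz of bandwidth
```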

 

Chris

Edited by hochopeper

Posted (edited)

Hey guys, interesting discussion.

I agree with your final analogy about asynch USB - the mining cart analogy is correct, AFAIK

 

One thing about jitter - I've seen it stated that close-in phase noise is the most important in audio - it determines things like imaging & soundstage.

Close-in meaning <10Hz (often characterised by Allan deviation, I think)

My intuitive understanding of this is that this close-in jitter would have a detrimental effect on the purity of a fundamental tone, so in other words there would be some frequency smear in the sound & hence some smearing of the location of instruments, voices etc. Now this is very abbreviated & tentative, but I hope you get what I mean?
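A crude sketch of what I mean, with wildly exaggerated numbers just to make the effect visible (illustrative only):

```python
# Phase-modulate a 1 kHz tone with a very slow 2 Hz wobble and look at the
# spectrum: the single line grows close-in sidebands either side of it,
# i.e. the tone is slightly smeared in frequency.
import numpy as np

fs = 48_000
t = np.arange(fs * 4) / fs                                  # 4 seconds
tone = np.sin(2 * np.pi * 1000 * t + 0.05 * np.sin(2 * np.pi * 2 * t))

spec = np.abs(np.fft.rfft(tone * np.hanning(len(t))))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
carrier = spec[np.argmin(np.abs(freqs - 1000))]

for f_probe in (998.0, 1000.0, 1002.0):
    b = np.argmin(np.abs(freqs - f_probe))
    print(f_probe, round(20 * np.log10(spec[b] / carrier), 1), "dB")
# the +/-2 Hz sidebands sit roughly 32 dB below the tone itself
```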

 

The problem is that this Allan deviation is mostly determined by the crystal itself, & also that it is not often supplied for oscillators.

 

BTW, where did you find out that the NDK phase plots were calculated rather than measured? 

Edited by jkeny
Posted

 

BTW, where did you find out that the NDK phase plots were calculated rather than measured? 

 

Intuition I guess.  The NDK graph is a series of straight line segments (it is smooth) whereas others such as the Crysteks show the plot from the measuring instruments.  This does not mean that the NDK stuff has not been measured, but I think that it is highly unlikely that the graph they show is related to anything that they have measured, otherwise they would show the instrument plot like everybody else does.

Posted

Intuition I guess.  The NDK graph is a series of straight line segments (it is smooth) whereas others such as the Crysteks show the plot from the measuring instruments.  This does not mean that the NDK stuff has not been measured, but I think that it is highly unlikely that the graph they show is related to anything that they have measured, otherwise they would show the instrument plot like everybody else does.

Oh, OK 

Posted

I'll take responsibility for this :

 

NDK is more a hoax than reality; Through PM I told Anthony about some sneaky questions I asked NDK. What Anthony doesn't know yet is about the answers I received - even talked to a guy on the phone who sorted out a few things for me - and when he came back with the answers concerned, the strict message was that they bailed out.

Read : I just know a tad too much about these things to let them see themselves they are fooling you/us.

 

I keep this vague on purpose and for me it is always enough to prove that things are fake where I see it as suspicious in the first place. Sort of hobby.

 

Peter

Posted

Btw, pity that this thread ran into nothing else but, in my view, unrelated USB interface stuff. Which, Anthony, won't even be recognized by others (just saying). So, a subject in itself alright, but how is it related to the OP?

 

But where it would be about "move the jitter into the lower frequencies and call it wander" ... mwah. I don't have anything to contribute there, other than that this is probably a derivation, and maybe a misinterpretation, of how PLLs work and how a very low rate (think a few Hz) "wander" tunes the VCXO.

That this easily turns into a USB-USB tuning (which is close to the same) is nice(ly picked up) but related to nothing I can see (very very indirectly yes).

But that's me of course. ;-)

 

Ok, one small contribution from my side I can think of now (and it's the first time I am spitting this out anywhere) :

I now know that jitter can not be measured at the output of D/A chips (or at the output of the DAC if you like) because there's one parameter more that nobody ever thought of. So jitter is measured this way alright, but the results say nothing much because first this parameter has to be out of the way and this can be done;

I have an analyzer under way and when it arrives I'll have to see how I can manage. Should hopefully be the most interesting.

 

Peter
