
Posted
10 minutes ago, rmpfyf said:

 

TIA-568-B and ISO 11801:2002 include a number of tests with only an indirect relation to what could make a difference in an audiophile context. The criteria are pass/fail. No one should be using a cable that fails anything.

Beyond that there are degrees of performance, and there is more than the cable that can affect performance.

 

I'm familiar with the standards. Proper testing of Ethernet cables and structured cabling systems goes well beyond pass/fail testing, as I'm sure you know.

 

Mostly, there is little in the feature set or functionality of enterprise- or carrier-grade equipment that has any applicability to the average home user, who couldn't configure or maintain it anyway.

Guest Eggcup the Dafter
Posted (edited)
37 minutes ago, Telecine said:

 

I'm struggling to understand how any of this has anything to do with Ethernet cables, which have standards and can be tested using industry-standard test equipment.

The case to be made is that, in the audio context, the Ethernet standards (or the construction of network components upstream of the streamer) allow noise and/or jitter to pass through and affect, at the other end of the streamer/DAC process, the audible signal.

 

I get into trouble with just about everyone on this topic, because I contend that a dedicated streamer should be built to work well with whatever passes through the Ethernet plug, as allowed by the Ethernet standards. If the streamer is a general-purpose device (your laptop, for example), I doubt being a perfect audio streamer is among the design requirements, though.

Edited by Eggcup the Dafter
corrected meaning
Posted
1 minute ago, Eggcup the Dafter said:

The case to be made is that, in the audio context, the Ethernet standards (or the construction of network components upstream of the streamer) allow noise and/or jitter to pass through and affect, at the other end of the streamer/DAC process, the audible signal.

 

I get into trouble with just about everyone on this topic, because I contend that a dedicated streamer should be built to work well with the signal allowed by the Ethernet standards. If the streamer is a general-purpose device (your laptop, for example), I doubt being a perfect audio streamer is among the design requirements, though.

 

They are stuck with the Ethernet standard. Buying audiophile Ethernet cables won't improve matters and will probably make them worse. Perhaps they don't understand what can be measured against the standard.

Posted
1 hour ago, allthumbs said:

This seems helpful for the likes of me to understand what jitter is.

 

 

A fairly clear and good explanation. One of his conclusions is pretty much what Dave was saying about the J-test picture he linked above: the jitter has raised the noise floor and fattened the bottom of the 12 kHz signal, but all at very low levels and nothing to worry about.
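The raised-noise-floor effect described here is easy to reproduce in a quick simulation (a sketch only: the sample rate and the 10 ns RMS timing error are illustrative assumptions, not figures from the video):

```python
import numpy as np

fs = 48_000          # sample rate (illustrative)
n = 8192             # FFT length
f0 = 12_000          # 12 kHz tone, as in the J-test plot discussed above
t = np.arange(n) / fs

# Sampling instants with a random clock (timing) error of 10 ns RMS.
rng = np.random.default_rng(0)
t_jittered = t + rng.normal(0.0, 10e-9, n)

clean = np.sin(2 * np.pi * f0 * t)
jittered = np.sin(2 * np.pi * f0 * t_jittered)

def noise_floor_db(x):
    """Median spectral level in dB relative to the tone peak, tone bins excluded."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    spec /= spec.max()                       # normalise to the tone peak
    k = round(f0 * len(x) / fs)              # bin index of the tone
    mask = np.ones(len(spec), dtype=bool)
    mask[max(0, k - 8):k + 9] = False        # drop the tone and its skirt
    return 20 * np.log10(np.median(spec[mask]) + 1e-300)

print(f"clean:    {noise_floor_db(clean):.0f} dB")
print(f"jittered: {noise_floor_db(jittered):.0f} dB")
```

Timing error modulates the sampled waveform, so energy that should sit in one bin is smeared across the spectrum; that smearing is the raised floor and fattened skirt visible in a J-test plot.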

Posted
2 hours ago, rmpfyf said:

It is a very long reach to apply a J-test result in an audibility context.

 

I'm not sure I understand why.

 

The J-test (purports to) induce the worst case data jitter.

We see that (in this case) the result of that is nothing substantially different to the already existing noise-floor.

 

So at any frequency... it is telling us that the worst-case (data) jitter product which might be mixed with our music signal is (let's say) -120 dBFS.

 

So... for a given component of music at any frequency, at full output (say 0 dBFS = 2 V), it equates to 0.0001% distortion (as -120 dBFS = 0.000002 V).

 

For a component in our music which is at -60 dBFS (0.002 V when 0 dBFS = 2 V), we are going to be adding to it a worst-case distortion component of 0.1%.

 

0.1% might sound like a lot, but that SPL is 60 dB below peak, which for most people is under 50 dB. At such low SPLs I don't believe you can hear 0.1% mixed in, and that is before we start masking its audibility with louder sounds.
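The arithmetic above can be checked in a few lines (a sketch, using the 0 dBFS = 2 V full-scale assumption from the post):

```python
def dbfs_to_volts(dbfs, full_scale=2.0):
    """Convert a dBFS level to volts, taking 0 dBFS = 2 V as in the post."""
    return full_scale * 10 ** (dbfs / 20)

jitter_product = dbfs_to_volts(-120)           # worst-case jitter product
print(jitter_product)                          # about 0.000002 V

# Relative to a full-scale (0 dBFS) component: 0.0001 %
print(100 * jitter_product / dbfs_to_volts(0))

# Relative to a music component at -60 dBFS (0.002 V): 0.1 %
print(100 * jitter_product / dbfs_to_volts(-60))
```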

 

 

What have I got wrong?

Posted
1 minute ago, rmpfyf said:

For starters, it's from one single tone. If you want to use an additive frequency-spectrum argument, then unless you're listening to test tones the net effect is going to be considerably more significant.

 

Actually, going back to that video of Amir's: he was saying that in fact it's jitter on test tones that is easiest to hear, and that listening to music makes jitter a lot harder to hear.

 

2 minutes ago, rmpfyf said:

What's the baseline? Where's the sensitivity test?

 

He shows parts of a study in which subjects report the levels at which they can hear jitter.

Posted
34 minutes ago, aussievintage said:

but all at very low levels and nothing to worry about. 

 

@aussievintage my first step in audiophilia was a Thorn three-in-one. I have come a long way since, and now have a (for me) adequate, entertaining and impressive setup (or two, or three) put together at a reasonable price. The last thing on my mind when listening to music, in fact it doesn't even show up on my radar as a concern, is jitter, cables or interconnects.

Posted
8 minutes ago, rmpfyf said:

For starters, it's from one single tone. If you want to use an additive frequency-spectrum argument, then unless you're listening to test tones the net effect is going to be considerably more significant.

 

The J-test purports to induce the worst-case data jitter... so I take that to mean no other combination of data will induce distortion whose components are larger.

 

So we'll have distortion components spanning many if not most frequencies, being intermodulated with the music... but at no frequency is any distortion component louder than -120 dBFS.

 

Is that not right?

 

8 minutes ago, rmpfyf said:

What's the baseline? Where the sensitivity test?

The J-test is not worst-case distortion; it's highest-visibility distortion in a digital system. Very different.

You're still interpreting spectral components as noise sources. They're not.

 

I don't understand.

 

 

Under what variation of this data-jitter scenario do I ever see distortion in the output signal where those distortion components are larger than -120 dBFS?

 

Posted
14 minutes ago, rmpfyf said:

That is not a 'jitter product mixed with our music signal'. There is no mixing. What you see is a discretised spectral decomposition of what was acquired. 

 

Yes, I understand that

 

... but what was acquired is representative of what would be mixed into my music, yes?

 

14 minutes ago, rmpfyf said:

Those additions from a randomised clock inaccuracy are not equal side to side either

 

Sure... but we're (I'm) just talking about data jitter right now, as that is what is in this example.

Posted
Just now, rmpfyf said:

 

Yes

 

 

Something you'd less enjoy, duly amended.

 

OK, so I'm looking where I should be looking. Can you explain in lay terms what it means?
