
Posted (edited)
31 minutes ago, Stereophilus said:

and that particularly low frequency phase noise is reportedly quite difficult to remove.

But is low frequency phase noise even important?  Does it need to be removed?  Would that not depend on its amplitude and frequency characteristics?  How serious is it for human hearing?  Is it audible at all?

 

For example a vinyl record turntable may be electronically speed controlled with a time constant of around 1 second, i.e. it may overshoot the speed slightly and take a second to come back to within a quarter of that deviation and then overshoot in the other direction (temporarily running below correct speed), constantly "hunting" for the correct average turntable speed. But provided the deviations are small they will be inaudible of themselves.
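As a rough illustration (a minimal sketch with made-up numbers, not a model of any real turntable), that kind of servo behaviour is just a damped oscillation whose amplitude falls to a quarter each second:

```python
import math

# Damped "hunting" of a hypothetical speed servo: the error overshoots,
# decays to a quarter of its size each second, and flips sign each
# second.  All numbers are illustrative only.
A0 = 0.3                      # initial speed error, in percent (assumed)
tau = 1.0 / math.log(4.0)     # decay constant: amplitude /4 per second
half_period = 1.0             # the error changes sign every second

for k in range(9):
    t = k * 0.5
    err = A0 * math.exp(-t / tau) * math.cos(math.pi * t / half_period)
    print(f"t = {t:3.1f} s   speed error = {err:+.4f} %")
```

The residual wow here is a fraction of a percent at roughly 0.5 Hz, which is consistent with the point that deviations this small are plausibly inaudible.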

 

Of greater importance may be any warp in the disc geometry in the vertical plane, creating low frequency speed variations in the tracking of the groove.  And any off-centredness of the spindle hole will introduce slight speed variations in the groove tracking, also at a low frequency.

 

Still, with a disc that is only mildly warped, and with a spindle hole just slightly off-centre, there may be no audible difference arising from those two imperfections.

Edited by MLXXX

Posted
24 minutes ago, Stereophilus said:

If, as you say, low frequency jitter is quite hard to remove

It depends on precisely how the signals are transmitted.... and which signals you are talking about.

 

Signals in the network that comprise packets .... are passed into a "computer" where they are broken up, reassembled and processed as information.   The jitter here is both irrelevant and completely eliminated.   When it comes out the other side of the computer, nothing (literally nothing) that was in the original remains.

 

24 minutes ago, Stereophilus said:

some quotes I’ve seen even say it is impossible to remove

It isn't impossible to remove.

 

For example, if the audio player copies the data to be played into memory.... once it is in memory, the "jitter" it had is totally gone.

 

BUT... when you read it back out, and then send it to the audio hardware, DAC, etc.....  you can easily introduce new low frequency jitter.
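To make that concrete, here is a minimal sketch (hypothetical player, illustrative numbers) of why buffering erases arrival jitter: once the packets are queued in memory, the playback timeline is generated solely by the local clock, so the irregular arrival times never appear in it.

```python
import random
from collections import deque

# Packets arrive with large, irregular timing jitter...
buffer = deque()
arrival_t = 0.0
for seq in range(10):
    arrival_t += 0.010 + random.uniform(-0.004, 0.004)  # jittery arrivals
    buffer.append((seq, arrival_t))

# ...but the read-out side is paced only by the local oscillator.
local_clock = 0.100   # playback starts once the buffer has filled
period = 0.010        # exact read-out period from the local clock
while buffer:
    seq, arrived = buffer.popleft()
    print(f"packet {seq}: arrived {arrived:.4f} s, played {local_clock:.4f} s")
    local_clock += period   # arrival jitter is absent from this timeline
```

The remaining risk is whatever modulates `local_clock` itself - which is exactly the new low frequency jitter referred to above.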

 

24 minutes ago, Stereophilus said:

I suspect this is what @Assisi has been trying to infer from his experiences.

Perhaps.

 

"Everything matters" is often, if not usually, not a very good tru-ism.....  either because it is (situationally) in fact not true ... or because it steers us away from the "why does something / everything matter" (and what does that mean).

  • Like 1
Posted
1 minute ago, MLXXX said:

But is low frequency phase noise even important?  Does it need to be removed?

Yes.

 

Certain types of jitter have been shown to be quite inaudible.... even with magnitudes that are quite high.

 

Low frequency jitter seems to be something that keeps showing up as demonstrably audible.... but it is both hard to eliminate and hard to measure/test, and so these tests are not as commonly done as the old-school ones ("we tested this with 1 ns jitter, and nobody could tell the difference, therefore jitter is a totally solved issue").

 

I think conceptually we usually put too much emphasis on being able to hear distortion components, whereas in reality they're easily ignored .... and not enough emphasis on being able to hear "dynamic range". So if we look at what LF jitter does to a single tone (spreading of the base), this extrapolates out to a rising "noise floor" for more typical program material like music.
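That spreading is easy to demonstrate numerically. A toy simulation (the wander is deliberately exaggerated so the effect is visible in a short FFT - treat the numbers as illustrative only): phase-modulate a 1 kHz tone with a very-low-frequency timing wander and compare its spectral skirt with the clean tone's.

```python
import numpy as np

fs = 48000
t = np.arange(fs * 4) / fs                     # 4 s of signal
f0 = 1000.0                                    # test tone
# 5 us of timing wander at 3 Hz -- grossly exaggerated, for visibility
wander = 5e-6 * np.sin(2 * np.pi * 3.0 * t)
clean = np.sin(2 * np.pi * f0 * t)
jittered = np.sin(2 * np.pi * f0 * (t + wander))

def skirt(x, lo=995.0, hi=1005.0):
    """Windowed spectrum near the tone, in dB relative to the peak."""
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    sel = (f >= lo) & (f <= hi)
    return f[sel], 20 * np.log10(s[sel] / s.max() + 1e-15)

fc, db_clean = skirt(clean)
_, db_jit = skirt(jittered)
for i in range(0, len(fc), 4):                 # print roughly every 1 Hz
    print(f"{fc[i]:8.2f} Hz   clean {db_clean[i]:7.1f} dB   "
          f"jittered {db_jit[i]:7.1f} dB")
```

The low-frequency sidebands show up only in the jittered column; with real program material that skirt behaves like a signal-correlated rise in the noise floor.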

 

1 minute ago, MLXXX said:

 Would that not depend on its amplitude and frequency characteristics?

Yes, on its frequency characteristics... i.e. low, e.g. << 10 Hz.

 

1 minute ago, MLXXX said:

For example a vinyl record turntable may be electronically speed controlled with a time constant of around 1 second, i.e. it may overshoot the speed slightly and take a second to come back to within a quarter of that deviation and then overshoot in the other direction, constantly "hunting" for the correct average turntable speed. But provided the deviations are small they will be inaudible of themselves.

It isn't quite analogous to jitter in digital audio.

  • Like 1
Posted
2 minutes ago, davewantsmoore said:

Certain types of jitter have been shown to be quite inaudible.... even with magnitudes that are quite high.

 

Yes, it is by no means guaranteed that the presence of measurable jitter, even at a high level, is actually audible.

 

The relevant aspect for this thread is whether the performance of streaming boxes or equivalent devices is in some cases at all likely to be audibly affected by low frequency jitter. 

 

If so, we would need to consider:

(1) how do we identify such cases; and

(2) how do we address the problem, so as to eliminate audible impairment of the analogue signal arising from this particular cause.

 

 

In terms of (1) above, measurements alone might not suffice, though they could be usefully indicative.

 

An obvious approach would be a controlled listening test, to see whether this impairment, together with any other accompanying suspected impairments, is demonstrably audible.

Posted (edited)

 

35 minutes ago, davewantsmoore said:

It isn't impossible to remove.

 

For example, if the audio player copies the data to be played into memory.... once it is in memory, the "jitter" it had is totally gone.

 

BUT... when you read it back out, and then send it to the audio hardware, DAC, etc.....  you can easily introduce new low frequency jitter.

Agreed.  But… can we assume (speaking universally) that phase noise from the upstream clock, linked by the PLL, is not influencing the downstream “read out” clock when it exits the buffer?

 

35 minutes ago, davewantsmoore said:

"Everything matters" is often, if not usually, not a very good tru-ism.....  either because it is (situationally) in fact not true ... or because it steers us away from the "why does something / everything matter" (and what does that mean).

That’s the essence of why we are here having this chat.

Edited by Stereophilus
Posted
3 minutes ago, Stereophilus said:

Agreed.  But… can we assume (speaking universally) that phase noise from the upstream clock, linked by the PLL, is not influencing the downstream “read out” clock when it exits the buffer?

I think we can.  

 

You seem to be worrying about variations in an upstream clock in a router or switch operating at say a nominal frequency of x GHz somehow interfering with a local buffering clock operating at a nominal frequency of say y MHz.  The slight pulling of the upstream clock phase to maintain the nominal x GHz (plus or minus a small fraction of one percent) is thought to possibly influence a downstream device operating at the nominal y MHz (plus or minus a small fraction of one percent).

 

I would point out that the genuine signal voltages travelling through the Ethernet cable wires are huge in amplitude compared with the small PLL control voltages present in the inner circuitry of a router or switch feeding the Ethernet cable. 

 

It would help if someone could actually record and upload an example of this sort of apparent audible impairment arising in an Ethernet streaming context. It would shed a practical light on the discussion.

Posted
30 minutes ago, MLXXX said:

I think we can.  

 

You seem to be worrying about variations in an upstream clock in a router or switch operating at say a nominal frequency of x GHz somehow interfering with a local buffering clock operating at a nominal frequency of say y MHz.  The slight pulling of the upstream clock phase to maintain the nominal x GHz (plus or minus a small fraction of one percent) is thought to possibly influence a downstream device operating at the nominal y MHz (plus or minus a small fraction of one percent).

It may indeed be small relatively speaking, but is it significant?  You seem to be dismissing it out of hand, but I think if we couple upstream phase noise to a sensitive downstream clock where the digital stream becomes synchronous (ie, where timing matters), it could be significant.

30 minutes ago, MLXXX said:

I would point out that the genuine signal voltages travelling through the Ethernet cable wires are huge in amplitude compared with the small PLL control voltages present in the inner circuitry of a router or switch feeding the Ethernet cable. 

I suspect you are confounding amplitude issues with frequency issues.  For the purposes of my line of questioning, SNR is irrelevant.

 

30 minutes ago, MLXXX said:

It would help if someone could actually record and upload an example of this sort of apparent audible impairment arising in an Ethernet streaming context. It would shed a practical light on the discussion.

I think, as @davewantsmoore mentioned, you would need to look at what “LF jitter does to a single tone (spreading of the base), this extrapolates out to a rising "noise floor" for more typical program material like music”.  The comment certainly correlates with what most audiophiles seem to perceive, although it is hard to be sure.

 

That said, I am not in the measurements game, and neither am I in the blind listening test game.  I will leave that to others.

Posted (edited)
On 06/12/2023 at 1:23 PM, Stereophilus said:

 

That said, I am not in the measurements game, and neither am I in the blind listening test game.  I will leave that to others.

I may be wrong but I think that very few contributions to this thread, if any, have been of the measurement or blind testing results variety.

 

On 06/12/2023 at 1:23 PM, Stereophilus said:

but I think if we couple upstream phase noise to a sensitive downstream clock where the digital stream becomes synchronous (ie, where timing matters), it could be significant.

My understanding is that the audio buffer output clock of a streaming box is never synchronous with an upstream Ethernet clock.   They run at different rates.  One device supplies data, the other device supplies decoded audio. They are independent in their clock rates.

 

If you are referring to the path from the streaming box audio output buffer, left in digital form, along a digital path to a DSP buffer inside an amp, that opens up a possible can of worms.  However that amp buffer is even more remote from the phase and frequency of upstream Ethernet clocking.
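For a sense of scale (illustrative figures only): because the two clock domains free-run, a playback buffer fills or drains at the difference of the two rates, which is why players keep a generous buffer and/or adapt the rate.

```python
# Two free-running clocks are never exactly equal, so a playback buffer
# drifts at the difference rate.  100 ppm is a typical worst-case
# crystal tolerance (assumed here for illustration).
fs_nominal = 44100.0                    # samples/s the DAC consumes
ppm_offset = 100e-6                     # source delivers 100 ppm fast
drift_per_s = fs_nominal * ppm_offset   # ~4.4 samples/s of buffer growth
buffer_margin = 44100                   # a 1 second buffer, in samples
hours = buffer_margin / drift_per_s / 3600
print(f"buffer drifts {drift_per_s:.1f} samples/s; "
      f"1 s of margin lasts ~{hours:.1f} hours")
```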

 

 

On 06/12/2023 at 1:23 PM, Stereophilus said:

I suspect you are confounding amplitude issues with frequency issues.  For the purposes of my line of questioning, SNR is irrelevant.

I note also that there is a theoretical potential for EMI from the Ethernet input PLL circuitry of a streamer box to affect its output buffer clocking. But we could ignore that as well, and restrict our consideration to what difference a wobble in the Ethernet pulse timing (typically a very small percentage of the nominal clock cycle duration) might possibly make.

 

On 06/12/2023 at 1:23 PM, Stereophilus said:

I think, as @davewantsmoore mentioned, you would need to look at what “LF jitter does to a single tone (spreading of the base), this extrapolates out to a rising "noise floor" for more typical program material like music”.  The comment certainly correlates with what most audiophiles seem to perceive, although it is hard to be sure.

Well it's not particularly clear how upstream Ethernet pulse rate jitter would be likely to trigger or induce jitter in the buffer output clocking of a streamer box, but if we entertain that as a given for discussion purposes, the effect that Dave referred to would be measurable.

 

If a test tone is supplied via the internet, the purity of that tone at the DAC output of a streamer box, or of a DAC driven by that streamer box, can be measured; or a test tone combination designed specifically for testing jitter could be sent.  Comparisons could then be made between the analogue test results with the digital test signal delivered via the internet and the forum member's home Ethernet network, and with it delivered more directly.
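For the "test tone combination", the usual candidate would be a Dunn J-test style stimulus. A minimal sketch of generating one (the parameters follow the commonly described 16-bit/48 kHz variant, so treat them as assumptions to verify rather than a reference implementation):

```python
import wave
import numpy as np

# J-test style stimulus: a tone at fs/4 plus an LSB-level square wave at
# fs/192, which stresses data-correlated jitter in the playback chain.
fs = 48000
n = fs * 10                                    # 10 s of signal
k = np.arange(n)
tone = 0.5 * np.sin(2 * np.pi * (fs / 4) * k / fs)
lsb = 1.0 / 32768.0                            # one 16-bit LSB
square = lsb * np.sign(np.sin(2 * np.pi * (fs / 192) * k / fs))
jtest = np.int16(np.clip(tone + square, -1, 1) * 32767)

with wave.open("jtest.wav", "wb") as w:        # stream this file, then
    w.setnchannels(1)                          # inspect the DAC output
    w.setsampwidth(2)                          # for sidebands around
    w.setframerate(fs)                         # the 12 kHz tone
    w.writeframes(jtest.tobytes())
```

Jitter then shows up as symmetrical sidebands around the 12 kHz tone in a spectrum of the DAC's analogue output.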

Edited by MLXXX
Posted

Good to see a sensible exchange of opinion and researched evidence going on. I'm heartened by today's discourse.

 

I'm largely staying out of threads these days, but I will pass comment on one thing based on this:

 

1 hour ago, MLXXX said:

variations in an upstream clock in a router or switch operating at say a nominal frequency of x GHz somehow interfering with a local buffering clock operating at a nominal frequency of say y MHz

 

and

1 hour ago, Stereophilus said:

It may indeed be small relatively speaking, but is it significant?  You seem to be dismissing it out of hand, but I think if we couple upstream phase noise to a sensitive downstream clock where the digital stream becomes synchronous (ie, where timing matters), it could be significant.

 

I don't want to interrupt the thought processes going on, and the hypothesising on what matters or not, but it is worth repeating: Clocks in networking devices do not handshake upstream or downstream with other clocks. The timing signal is entirely encapsulated within the data stream between devices exchanging data - that is why it is referred to as asynchronous - the clocks do not need direct, stateful connections to each other as the timing is embedded in the payload. The relative slow-pace (compared to high-rate clocks) of Gigabit Ethernet is a reason why faster OCXO clocks are unnecessary in switches.

 

The received data stream is effectively functionally-divorced from upstream clocks.

 

Keep calm and carry on 🙂

 

 

  • Like 4

Posted
45 minutes ago, Stereophilus said:

That said, I am not in the measurements game, and neither am I in the blind listening test game.  I will leave that to others.

I definitely am not into the above.  I just listen.  Very worthwhile pursuit.

 

36 minutes ago, andyr said:

to be met with disdain

Happens regularly.  The disdain comes from posters who may know the theory but have little or no actual listening experience.  To me the continual denials, confusions and theory-based explanations do not assist anyone who may want to learn from the experience of posters who have achieved beneficial outcomes with Ethernet networking.

 

This is a quote from a post in another forum:

 

"In a digital system, noise is the biggest enemy of D to A conversion. The argument that ‘bits are bits’ and that nothing in the digital chain can affect the data, and therefore the music, completely misses the reality. It isn’t the data that noise impacts, it’s the carrier and, ultimately, the conversion at the DAC that is degraded. The more noise present on the line, the larger the uncertainty at the triggering threshold and the worse the deviation from perfect conversion into an analogue signal."

 

John

 

  • Like 2
Posted
2 hours ago, Neo said:

Would love to actually hear from people putting their Ethernet system together and various experiences that they had.

 

2 hours ago, andyr said:

John ( @Assisi) has posted about his network findings ... to be met with disdain from those same members.  :sad:

Unfortunately, I am one of those who have given up posting anything about Ethernet.  The people intent on killing this as a topic are still trying. 

 

And worse still, your posts are now hidden.

Posted
2 minutes ago, Snoopy8 said:

And worse still, your posts are now hidden.

Pity. 
Useless to start a new topic, as it would attract the same people; or to suggest they start their own and discuss their big brain hypothesis. 
All up disappointing, and so much for a sensible exchange of opinion.

Neo

Posted
2 hours ago, Stereophilus said:

It may indeed be small relatively speaking, but is it significant?  You seem to be dismissing it out of hand, but I think if we couple upstream phase noise to a sensitive downstream clock where the digital stream becomes synchronous (ie, where timing matters), it could be significant.

The you "phase noise", you're talking about timing of upstream digital signals (ie. the "jitter" in the digital information).   This will not "couple".   It simply does not (I can go into why, etc.)

 

But that isn't to say that modifying anything upstream won't have some effect.

 

Let's say you change the clock in a switch.    The timing of the data in the switch does not affect anything downstream in the context of "data timing".   BUT, clocks and their power supplies are noisy things... this noise could be radiating into your DAC or amplifier or whatever..... this is compounded by setups with lots of boxes and lots of cables (ie. lots of "grounds"), and audio devices which operate with excessively wide bandwidths.

 

2 hours ago, Stereophilus said:

That said, I am not in the measurements game, and neither am I in the blind listening test game.  I will leave that to others.

I think this is smart, as defeatist as it sounds.   It is easy to call for these tests.   It is difficult to do them and get results which are uncorrupted....  and we can do such tests and get all too sure about something, which is folly if it is based on an incorrect test.

 

1 hour ago, MLXXX said:

Well it's not particularly clear how upstream Ethernet pulse rate jitter would be likely to trigger or induce jitter in the buffer output clocking of a streamer box

It doesn't.

 

The ethernet is "unpacked", processed, and then "repacked" as digital audio ... which totally decouples it in that context.

 

1 hour ago, MLXXX said:

but if we entertain that as a given for discussion purposes, the effect that Dave referred to would be measurable.

Sure.... you can see that effect if you put different clocks in a DAC, for example.

 

1 hour ago, Assisi said:

who may know the theory but have little or no actual listening experience.

How do you know that?

 

 

 

1 hour ago, Assisi said:

To me the continual denials, confusions and theory-based explanations do not assist anyone who may want to learn from the experience of posters who have achieved beneficial outcomes with Ethernet networking.

They can't listen to both?

 

Are you saying that cautions that these things won't apply in every situation amount to "denials and confusions"?   That is quite a naive thing to say. <shrug>

 

1 hour ago, Assisi said:

"In a digital system, noise is the biggest enemy of D to A conversion. The argument that ‘bits are bits’ and that nothing in the digital chain can affect the data, and therefore the music, completely misses the reality.

This is very true.

 

... but it is important to understand what noise, and in what context..... and what that all means (and what to do about it).

 

Fiddling with noise on the network side, is often not the best approach..... and sometimes when that "works", it says more about the problem than the successful solution.

 

... but of course, people will do what they do in their systems and there's no shame in that.   In some systems with lots of boxes and/or wide-bandwidth type equipment, it might be a big part of the best solution.... but this is certainly not typical of most systems.

Posted
30 minutes ago, Neo said:

Pity. 
Useless to start a new topic, as it would attract the same people; or to suggest they start their own and discuss their big brain hypothesis. 
All up disappointing, and so much for a sensible exchange of opinion.

Neo

 

Perhaps if you find the thread a snooze-fest (your words), rather than posting to say so, just put it on your Ignore Topics list.

Your post, ironically, actually adds nothing to the thread, and could very well be considered not in the "spirit of the community". 

  • Like 2

Posted

A gentle reminder, folks: public criticism of volunteers and volunteer action is not likely to go down well.

 

Yes, posts were hidden. They were hidden as they were derailing an ongoing discussion that has been good-natured and has not detracted from the "why?" posited in the opening post of the thread all those years ago.

  • Like 2
Posted

I'd suggest to those that have nothing to contribute to this discussion - contribute nothing to the discussion, and move on.

Let those who wish to debate, do so. The thread stands a chance of getting back on track and becoming a future reference for small bits of useful information for interested readers - or it will die an organic death.

  • Like 1
Posted

I've become more of an observer than contributor.  I felt quite some time ago this thread was generally too far off course and becoming lost at sea.  Even when I started the thread I felt this was bound to happen.

 

Now we've got 4 consecutive posts by moderators. 

 

So I am reflecting on the opening post:

 

Posted August 24, 2021

This thread is for ...

 

people to share how they have put their ethernet systems together, and why.  Let’s help each other understand how to assemble ethernet hardware to improve sound quality.

 

Please be constructive rather than critical.

If you wish to argue that network quality makes no difference to sound quality, please do so in other threads such as some I have started for that purpose.

 

For what it's worth at this point:

 

It was not intended to presume an ethernet system should be improved; more so, a resource for those who are inclined to explore the possibility.

 

There is a presumption, based on many observations, that sound quality could be improved (quite a probability if not only a possibility), not that people should or that it always would.

 

The 'and why' is open to interpretation however was intended to be "I did X because... someone recommended it, or I tried a few things and this sounded best, etc".  I did not anticipate technical dissection and rationale attempting to prove someone's why is technically flawed.

 

My feeling is it is a bit hard to see how so much technical discussion is constructive (even though it is interesting and informative for those compelled to write and read it).  I am wary that technical explanations could be a form of expectation bias, which might explain why they are used in marketing material (ie. what one understands technically about equipment could affect what they hear from that equipment).

 

My feeling is the technical discussion is typically more in the vein of arguing "network quality makes no difference to sound quality".  But it is not my call how it is managed.

 

8 hours ago, Marc said:

those that have nothing to contribute to this discussion - contribute nothing to the discussion, and move on.

Let those who wish to debate, do so.

I suppose it is arguable what is a contribution ... is all the debate really a contribution.

 

8 hours ago, Marc said:

"spirit of the community".

I am curious if there has been an attempt to describe this in words so it is transparently available for members to be guided by.

 

  • Love 1
Posted
1 hour ago, dbastin said:

It was not intended to presume an ethernet system should be improved; more so, a resource for those who are inclined to explore the possibility.

 

That is a very important point. Because Ethernet networking was already a mature technology at the time this thread was opened, there was good reason to expect that off-the-shelf Ethernet hardware would perform correctly, i.e. that the hardware would supply uncorrupted data and would conform to regulations concerning minimising spurious EMI emissions.

 

In the circumstances, there could have been no "presumption" when the thread was opened of the possibility of an improvement over what standard equipment would routinely provide. 

 

That though is not the attitude that took hold amongst posters to your thread, @dbastin! The attitude that took hold and held sway was that standard Ethernet equipment would typically not provide full sound quality, and that special measures are needed to attain better quality.

 

 

1 hour ago, dbastin said:

The 'and why' is open to interpretation however was intended to be "I did X because... someone recommended it, or I tried a few things and this sounded best, etc".  I did not anticipate technical dissection and rationale attempting to prove someone's why is technically flawed.

 

For some reason, the people who responded to your intended call and recounted "what sounded best" to them declined to make recordings, to take measurements, or to organise a blind test (even an informal one).

 

The uniform absence of objective evidence led to scepticism on the part of certain readers. However, those readers stayed silent for a long time because of this direction in your opening post: "If you wish to argue that network quality makes no difference to sound quality, please do so in other threads such as some I have started for that purpose."

 

In effect this thread became a sanctuary where those who wished to report a subjective improvement in sound quality were free to do so without backing up their claims with corroborating objective evidence.  And -- importantly -- without running the risk of anyone querying the technical plausibility of their claims.

 

 

*   *   *

 

Of course that is no longer the case. This thread now does include technical queries, and -- as well -- technical explanations.

 

What this thread still does not include is any objective evidence of audible improvement in sound quality.   I think that a lot of the people reading this thread are waiting for such evidence to be forthcoming, before committing themselves to substantial financial outlays.  In the meantime, they will continue to use standard equipment.

Posted
4 hours ago, dbastin said:

I am curious if there has been an attempt to describe this in words so it is transparently available for members to be guided by.

 

It is expanded upon in the website guidelines which everyone has agreed to. When they were updated and went live, a membership-wide alert went out with the details and supporting links. Supporting narrative in that alert explained that closure of the alert by the member was deemed acceptance of the guidelines.

  • Like 1
Posted
13 hours ago, Marc said:

I'd suggest to those that have nothing to contribute to this discussion - contribute nothing to the discussion, and move on.

 

4 hours ago, dbastin said:

I suppose it is arguable what is a contribution ... is all the debate really a contribution.

 

 

My suggestion was aimed at a couple of members whose posts were only visible for mere seconds - and literally had nothing to do with "ethernet" or audio. 

Posted
18 hours ago, El Tel said:

The relative slow-pace (compared to high-rate clocks) of Gigabit Ethernet is a reason why faster OCXO clocks are unnecessary in switches.

??? perhaps I've misunderstood what you are trying to say, but.....  Ethernet requires much much faster clock rates than typically used in audio.

 

They typically come at it from different directions though.

 

In audio, we have data rates in the << 500kHz range ( < 50kHz for CD) ....  but we typically use a clock in the order of 10s of MHz, and either convert the clock down, or convert the data up (eg. in an oversampling DAC architecture).

 

 

In Ethernet we need clock rates beginning in the 100 MHz range, going up to many hundreds of MHz..... and while they may "only" begin with a clock oscillator of 25 MHz .... this is scaled up using some sort of "clock synthesiser" to the hundreds of MHz needed.   Typically < ~1 ps RMS jitter is required.... but this is at a fairly high lower corner frequency (eg. they're looking at the jitter in the kHz and MHz ranges, and not even measuring jitter below a few kHz).
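To put a number on that last point (a sketch using a made-up phase-noise profile, not data for any real part): RMS jitter is an integral of phase noise over the measurement band, so a spec that only integrates from a few kHz upward simply never sees the low-frequency wander discussed earlier in the thread.

```python
import math

# RMS jitter from an assumed phase-noise profile of a 125 MHz
# synthesised clock, via the usual relation
#   sigma_t = sqrt(2 * integral of 10^(L(f)/10) df) / (2 * pi * fc)
fc = 125e6
profile = [          # (offset Hz, L(f) dBc/Hz) -- illustrative numbers
    (1e3, -110.0), (1e4, -120.0), (1e5, -130.0),
    (1e6, -140.0), (1e7, -150.0), (2e7, -150.0),
]

def integrate(points):
    total = 0.0
    for (f1, l1), (f2, l2) in zip(points, points[1:]):
        # trapezoid in linear power between profile points
        total += 0.5 * (10 ** (l1 / 10) + 10 ** (l2 / 10)) * (f2 - f1)
    return total

sigma_t = math.sqrt(2 * integrate(profile)) / (2 * math.pi * fc)
print(f"RMS jitter, 1 kHz - 20 MHz band: {sigma_t * 1e12:.2f} ps")
```

Move the lower integration limit down to a few Hz and any close-in wander starts to dominate the total - which is why "< 1 ps" on a datasheet says little about the LF jitter being discussed here.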

 

 

 

Posted
1 minute ago, davewantsmoore said:

than typically used in audio.

 

I didn't mention the clocks used in audio - I just mentioned, in isolation, faster clocks that exist for more demanding applications.

Posted
17 hours ago, Neo said:

discuss their big brain hypothesis. 

Like ... "share how they have put their ethernet systems together, and why" ???

Or address any of the "noise assumptions" outlined in the first post?

 

The word "irony" (as used by Marc) I think is very apt....  you're right, it is disappointing.

 

7 hours ago, dbastin said:

There is a presumption, based on many observations, that sound quality could be improved (quite a probability if not only a possibility)

 

I think a very important context-switch for a lot of people would be to replace the word "improved" .... with "harmed less"....  and then deeply reflect on the greater meaning of that in terms of how to construct a very high quality audio reproducer.

 

I'm not trying to say "all 'ethernet tweaks' are stupid" ... or that they won't work.... it's quite a bit more nuanced than that.

 

7 hours ago, dbastin said:

My feeling is the technical discussion is typically more in the vein of arguing "network quality makes no difference to sound quality". 

Not entirely (from me, at least).

 

It is more the general advice that many many people pick up from discussions like this which boils down to things like:

  • It will always be a problem.
  • Adding higher quality "audio grade" ethernet equipment and cables is always a good solution.
  • A system where "everything you do has an audible impact" is necessarily the marker of a very very good quality system.

There can sometimes be some contextual truth to the above things.... but they are poor generalisations.

 

7 hours ago, dbastin said:

I suppose it is arguable what is a contribution ... is all the debate really a contribution.

Use a dictionary.... the answer is yes.

 

7 hours ago, dbastin said:

I am curious if there has been an attempt to describe this in words so it is transparently available for members to be guided by.

Perhaps it's just my perceptions but there really are some incredibly super-duper-ultra left-brained people on this forum.

 

Just live and let live, and I'm sure the mods won't be too busy.
