Posted (edited)
5 minutes ago, sir sanders zingmore said:


A couple of points: the differences discussed in these discussions (Ethernet cables, power cables) are often described as really big, “night and day” type differences.

 

Also a poorly implemented blind (or double blind) test is far more likely to give a false positive than a false negative. 
 

Sorry, not sure I follow (the false positive bit)

Edited by frednork

Guest Eggcup the Dafter
Posted

Let’s step back a bit. The topic under discussion is whether Archimago’s test is adequate to show no difference in sound when used in a particular way. Since it seems that the participants here have individually rejected every assumption or shortcut we could use to answer the question, we need some way to make sense of what’s going on. I suggest we need to answer this through a set of questions.

 

1) Is the actual sound different?

2) Are any differences audible to everyone who may be subjected to them?

3) Are any differences audible to a subset of people who may be subjected to them?

4) Do any people experience a change in sound that is not explained by a difference that is audible to them?

 

You’ll notice that none of those questions asks why. We need to know what the phenomenon is first, before we can determine any reasons why. And clearly we are determined either to get stuck at the first hurdle or to ignore it and jump multiple steps.

 

Actually that’s fine for a debate. It just doesn’t lead to a proper answer...

Guest rmpfyf
Posted (edited)

.

Edited by rmpfyf
Posted
16 minutes ago, sir sanders zingmore said:

Also a poorly implemented blind (or double blind) test is far more likely to give a false positive than a false negative.

 

Sounds like a gross generalisation to me...like a bias is at play...a false negative of its own.

Posted
2 hours ago, Martykt said:

Sure, it probably helps to know that you're capable of hearing what you're testing for with sighted tests too, though if all you're doing is listening to see if a particular ethernet cable makes a difference in your system then if you can't hear a difference either way then there's probably not much point in spending the extra money anyway.

 

Yes, a test should be applicable to whatever you happen to be testing for and be able to provide a valid result.

So in this particular case, if you're testing to prove or disprove whether an ethernet cable can make a difference, and you can't tell whether a null result from a DBT is due to the cable not making a difference or to hampered senses, then the result is invalid and really doesn't help much in answering the question.

 

I think your terminology is wrong: the result is null, not invalid, as is the result of every single sighted observation.

Posted
1 hour ago, rmpfyf said:

Example: random noise in the clock sources involved does not 'add to the shag pile', it'd appear as a widening of the peak.

 

Amir's video demonstrated the opposite.

Guest rmpfyf
Posted (edited)

.

Edited by rmpfyf
  • Volunteer
Posted
58 minutes ago, acg said:

 

Sounds like a gross generalisation to me...like a bias is at play...a false negative of its own.

Poorly implemented blind tests are usually poorly implemented because some things are not adequately controlled (like a lack of careful volume matching, or the person doing the switching giving unconscious clues). 
Usually these are things that lead to differences being heard. 

Posted
15 minutes ago, rmpfyf said:

 

Where? I've seen the whole video and I don't see a conclusive demonstration of as much.

 

 

 

At about 17 minutes he switches the square wave jitter to noise.
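For what it's worth, the distinction being argued over here can be illustrated with a toy simulation. This is my own sketch, not a reproduction of Amir's or Julian Dunn's measurement, and every parameter (carrier bin, jitter rate, 0.1 rad peak deviation) is an arbitrary choice for illustration: periodic (square-wave) jitter piles its energy into discrete sidebands, while random jitter of the same magnitude spreads thinly across the spectrum.

```python
import math
import random

N = 4096      # samples in the analysis window
KC = 512      # carrier bin (pure tone)
KM = 32       # jitter-modulation bin (square-wave rate)
BETA = 0.1    # peak phase deviation in radians (illustrative)

def dft_mag(x, k):
    """Magnitude of a single DFT bin, computed directly."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

def jittered_tone(phase):
    """Carrier sine whose phase is perturbed sample by sample."""
    return [math.sin(2 * math.pi * KC * n / N + phase[n]) for n in range(N)]

# (a) periodic (square-wave) jitter: energy collects in discrete sidebands
half = N // KM // 2   # samples per half period of the square wave
square = [BETA if (n // half) % 2 == 0 else -BETA for n in range(N)]

# (b) random jitter: same peak phase deviation, but uncorrelated per sample
rng = random.Random(0)
noise = [rng.choice((BETA, -BETA)) for _ in range(N)]

sb_periodic = dft_mag(jittered_tone(square), KC + KM)  # first sideband bin
sb_random = dft_mag(jittered_tone(noise), KC + KM)     # same bin, random case

print(f"sideband, periodic jitter: {sb_periodic:.1f}")
print(f"same bin,  random jitter:  {sb_random:.1f}")
```

The periodic case puts a strong line at the carrier ± 32 bins; the random case leaves that bin near the noise floor because the same energy is smeared over all bins. Whether that smear reads as a "widened skirt" around the carrier or as a raised floor depends on the bandwidth of the jitter, which may be what the two sides of this exchange are actually disagreeing about.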

 

Posted
8 minutes ago, sir sanders zingmore said:

A couple of points: the differences discussed in these discussions (Ethernet cables, power cables) are often described as really big, “night and day” type differences.

 

That is a fair point. My thinking is that the magnitude of difference they are referring to is relative to their current baseline, which may be relatively stable; because they listen a lot, they train their hearing to get used to that baseline, until they introduce the new thing that "blows them away" (i.e. things sound different). So for them it seems like a big difference, while for others it may seem very minor or even nonexistent. It just depends on what you are comparing to as the reference baseline.

 

From my perspective, the differences I hear with these types of cables are mostly in the clarity and solidity of things far-field in the soundstage (or phantom image), as well as a level of "relaxedness" in the sound. The quality of this image depends on the quality of the system (not unexpected) but also, and even more so, on the room and how the sound coming from the speakers spreads through it. I would also say that the further you sit from your speakers, the better your room needs to perform.

 

If I sit pretty close to my 180cm tall speakers (and look like a bit of a dick) I get a highly resolved soundstage and can perceive differences in soundstage with ethernet changes. If I sit further back, in the way a lot of people have their systems set up (which definitely looks less dicklike), I find it more difficult to perceive differences. I don't think the differences are necessarily huge, and I wouldn't expect someone who is not used to listening for soundstage changes to pick them up. The other thing I would say is that people often claim that to hear these types of differences you need bat-like, golden-eared, perfect hearing. My experience is that the differences are not frequency dependent, and as long as you are not highly impaired in your ability to locate where a sound is coming from, they will be audible on an appropriately set up system.

 

Even if all other aspects of the setup are superb, if the room is contributing negatively the ability to hear a difference is greatly diminished. I have heard setups with stellar equipment where the room swamps the presentation and it sounds pretty awful.

 

Something that might be interesting to try is to optimise a setup for soundstage or phantom image. It may result in a different setup for many.

 

Not wanting to teach anyone how to suck eggs, but for those who haven't chased a better soundstage, I highly recommend it. Here are some ideas from my personal soundstage quest.

 

1. Place your speakers in a symmetrical room or, failing that, in a symmetrical part of a bigger room. Hopefully the room is not too echoey and has a reasonable amount of soft furnishings to prevent ringing. If the room has a significant level of ring, I'm not sure it is worth continuing until that is remedied.

2. Ensure your speakers are at least 1 m from each wall.

3. Optimise them within the remaining space for tonality and low-frequency room effects (e.g. not equidistant from both walls).

4. They will most likely need to be toed in quite a bit to keep the sweet spot as big as possible, to give your speakers the best possible chance of a smooth frequency response, and to increase the delay of the first reflection (especially for monopole speakers).

5. Sit in an equilateral triangle with the speakers, or not too far behind that, wherever it sounds best to you for imaging (this will most likely make you look like a dick).

6. Put up with a (most likely) compromised bass response, in a region dictated by the size of the room, that is less preferable than where you normally have your speakers set up, or fix it with DSP/subs if you can.
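As a rough illustration of why point 2 matters: the mirror-image method from basic room acoustics gives the extra path, and hence the extra delay, of the sidewall bounce relative to the direct sound. The positions below are hypothetical; the only point is that pulling a speaker further off the wall lengthens the reflection's detour.

```python
from math import dist  # Python 3.8+

C = 343.0  # speed of sound in m/s at room temperature

def sidewall_reflection_delay_ms(speaker, listener, wall_x=0.0):
    """Extra arrival time (ms) of the first sidewall bounce vs the
    direct sound, via the mirror-image method for a wall at x = wall_x."""
    image = (2.0 * wall_x - speaker[0], speaker[1])  # speaker mirrored in wall
    extra_path = dist(image, listener) - dist(speaker, listener)
    return extra_path / C * 1000.0

# hypothetical layout: listener 3 m forward and 1 m inward of the speaker
near_wall = sidewall_reflection_delay_ms(speaker=(0.5, 0.0), listener=(1.5, 3.0))
one_metre = sidewall_reflection_delay_ms(speaker=(1.0, 0.0), listener=(2.0, 3.0))

print(f"0.5 m from wall: {near_wall:.1f} ms")
print(f"1.0 m from wall: {one_metre:.1f} ms")
```

Doubling the wall distance from 0.5 m to 1 m in this layout more than doubles the initial-reflection delay, and longer initial delays are generally associated with better imaging, which is consistent with the 1 m rule of thumb above.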

 

Try that setup for a few weeks and listen to lots of stuff. Then go back to the old setup and see if you can live with the loss of soundstage. I couldn't.

Posted
10 minutes ago, sir sanders zingmore said:

Poorly implemented blind tests are usually poorly implemented because some things are not adequately controlled (like careful volume matching or the person doing the switching giving unconscious clues). 
Usually these are things that lead to differences being heard. 

 

So you've sat in on these blind tests, performed your analysis of protocol and implementation, and therefore have the data? You must have spent years on this work. Or is it just your opinion? Sounds like the latter to me. Sounds like that is how you want to see these things, and you are willing to overlook the other factors of these kinds of tests that influence the listener towards an inconclusive result. I do not wish to debate this again, but felt I needed to add some balance to your seemingly absolute comment.

 

I think I previously pointed SNAers towards a blind listening test that was aced by the participant, but the skeptic controlling it could not accept the results, seemingly because they did not fit his belief system. That summary is a gross generalisation in itself, but if you are interested in how some inconsequential little change in the digital domain was reliably heard, look for the "Blue Pill or Red Pill" thread at AudiophileStyle. If you look closely at the actual reports from the testing day, you will see some of the ways in which the method of testing and the emotional pressure of the occasion influenced the results.

Guest rmpfyf
Posted (edited)


 

Edited by rmpfyf
Posted
1 hour ago, rmpfyf said:

I don't think we'll ever put to bed the notion of 'Amir/Archimago/etc said so in a video therefore it must be

 

I thought the ASR video was just a shorthand demonstration of the study by Julian Dunn?

Posted
14 minutes ago, sir sanders zingmore said:

Poorly implemented blind tests are usually poorly implemented because some things are not adequately controlled (like careful volume matching or the person doing the switching giving unconscious clues). 
Usually these are things that lead to differences being heard. 

Ok, I understand now, but I would describe that as a test failure, or invalid, as the test cannot work as intended. A bit OT, but would you control for volume matching (aside from ensuring the volume was not changed) when testing ethernet cables?

 

My perspective on how these tests should be run contrasts with an engineering-type approach, as the typical bit of measuring equipment does not get tired, has no performance anxiety, does not have "bad" days, etc. In fact I have seen others propose that the test subject should know nothing about the test and just follow instructions. My view is that the test subject should know as much as possible about the test, have as much practice as possible at similar tests or the exact same test, and if possible be part of the process of determining what sample is used and how it is presented (as long as this doesn't negate the objectives of the test). This is because performing a sensory analysis accurately is very difficult for the participants. It's like my golf swing: as soon as I think about it, I'm screwed.

 

This might mean initially helping the subject to tell the difference between the samples and to practice doing so, perhaps on exactly the same piece of music they will be tested on, and performing practice tests much in the same way you would prepare for a difficult exam.

 

As long as, during the test, they have no way of telling which sample is being presented, all the homework previously done does not invalidate the test, unless you are trying to determine whether people with no prior knowledge of the test can tell the difference.
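The statistical reason prior homework doesn't invalidate the result is that significance is judged only on the blind trials themselves. A minimal sketch of the usual binomial calculation for a forced-choice (e.g. ABX) test follows; the 16-trial, 12-correct example is an arbitrary illustration, not a reference to any test in this thread.

```python
from math import comb

def p_value_guessing(n_trials, n_correct, p_guess=0.5):
    """One-sided probability of scoring n_correct or better by pure
    guessing in a forced-choice listening test."""
    return sum(
        comb(n_trials, k) * p_guess ** k * (1 - p_guess) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# A listener who has practised beforehand still has to beat chance here:
p = p_value_guessing(16, 12)
print(f"12/16 correct: p = {p:.4f}")  # below the usual 0.05 threshold
```

Practice changes the listener's sensitivity, not the null distribution: a guesser's odds of 12/16 are the same whether or not the listener rehearsed, so training beforehand is fair as long as the presentation during the scored trials is properly blind.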

 

From my previous experience with sensory analysis, we generally did not expect the testing to show up small differences in samples that we knew were there, as even a well-trained panel does not provide that fine a level of discrimination, even if an individual panel member does.

 

Guest rmpfyf
Posted (edited)

.

Edited by rmpfyf

Posted
3 hours ago, Ittaku said:

Well that's the thing. They ALWAYS fail at these tests of questionable changes, so either every single DBT ever conducted in the history of audiophilia has been done wrong, or something else. Some random person saying they conducted a DBT at home and heard a difference doesn't count since no one knows exactly how it was truly conducted.

It all depends on what you are trying to prove and what was done. Broadly speaking, if the goal of many of the DBTs was to show that people with unknown abilities of discrimination, when put into an unfamiliar situation with an unfamiliar sound system and room, in random listening positions, and tested once, were unable to determine a difference, then I would say the tests have been successful.

 

Of course it would be better to look at the individual tests and how they were done and go from there.

Posted
2 minutes ago, frednork said:

It all depends on what you are trying to prove and what was done. Broadly speaking, if the goal of many of the DBTs was to show that people with unknown abilities of discrimination, when put into an unfamiliar situation with an unfamiliar sound system and room, in random listening positions, and tested once, were unable to determine a difference, then I would say the tests have been successful.

 

Of course it would be better to look at the individual tests and how they were done and go from there.

You know I'm obviously referring to the ones that involved trained listeners and/or audiophiles.

Posted
4 hours ago, sir sanders zingmore said:

Understanding your personal bias very difficult

It's called awareness; some have it more than others.

Posted
1 minute ago, Ittaku said:

You know I'm obviously referring to the ones that involved trained listeners and/or audiophiles.

Sadly my mind-reading abilities are not as well refined as you imagine, or I may not have seen the ones you are referring to. Happy to have a look if you specify some.

Guest Eggcup the Dafter
Posted (edited)
3 hours ago, rmpfyf said:

 

Not so, read above. The test is inadequate. 

So: you may well be 100% correct, but others here have challenged what you assert.

Edited by Eggcup the Dafter

Posted
6 minutes ago, Ittaku said:

Ok, there are at least 50 tests referenced there, and before I spend too much of the precious time I have left on this mortal coil criticizing the methodology of some, only to find they were not the ones you thought were done well, perhaps you could point me to one or more that WAS done well as far as you are concerned. Good news: I think my mind-reading abilities are improving!!

Posted
1 minute ago, frednork said:

Ok, there are at least 50 tests referenced there, and before I spend too much of the precious time I have left on this mortal coil criticizing the methodology of some, only to find they were not the ones you thought were done well, perhaps you could point me to one or more that WAS done well as far as you are concerned. Good news: I think my mind-reading abilities are improving!!

I did not say that. What I said was that not a single one has shown any effect when questionable changes were made. So either they're all badly conducted, or something else is going on.

Posted (edited)
4 hours ago, Ittaku said:

Well that's the thing. They ALWAYS fail at these tests of questionable changes, so either every single DBT ever conducted in the history of audiophilia has been done wrong, or something else. Some random person saying they conducted a DBT at home and heard a difference doesn't count since no one knows exactly how it was truly conducted.

And to elaborate on the bolded latter part of my quote: what I mean is that when I read someone say "I blind tested X or Y and could easily tell them apart", I don't believe them. Not when all publicly conducted DBTs (of whatever calibre) fail to reproduce it.

Edited by Ittaku
Posted
4 hours ago, sir sanders zingmore said:

Also a poorly implemented blind (or double blind) test is far more likely to give a false positive than a false negative. 

In general, no. Poor implementation usually leads to more variability between test subjects, which leads to a statistical null result (potentially a false negative). This is a very common issue in DBT. However, specific methodological issues can lead to erroneous results, such as the situation you describe where blinding is not properly done. That latter scenario is usually much rarer.
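The false-negative mechanism can be made concrete with a quick Monte Carlo sketch (all numbers here are illustrative assumptions, not data from any test discussed in the thread): the same listener, run through a 16-trial blind test, reaches significance far less often when poor conditions drag the per-trial hit rate down, even though a real, detectable difference exists throughout.

```python
import random
from math import comb

def p_value(n_trials, n_correct):
    """Chance of n_correct or more by guessing (p = 0.5)."""
    return sum(comb(n_trials, k)
               for k in range(n_correct, n_trials + 1)) / 2 ** n_trials

def detection_rate(per_trial_acc, n_trials=16, alpha=0.05, runs=2000, seed=1):
    """Fraction of simulated tests reaching significance (statistical power)."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(runs):
        correct = sum(rng.random() < per_trial_acc for _ in range(n_trials))
        if p_value(n_trials, correct) <= alpha:
            passes += 1
    return passes / runs

well_run = detection_rate(0.80)  # careful test: listener right 80% of trials
noisy = detection_rate(0.60)     # fatigue, unfamiliar room etc. drop it to 60%

print(f"power, well-run test: {well_run:.2f}")
print(f"power, noisy test:    {noisy:.2f}")
```

A real but degraded ability thus tends to show up as a null result; only a flaw that leaks which sample is playing can push the error in the other direction, towards a false positive.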
