Double Blind Tests - Nothing to See Hear!!


Recommended Posts

Hi all, what does everyone think about this? No funny business: you change a component and tell the reviewer what has changed, e.g. CD player, amp, cables, speakers etc. You just don't tell them what it has changed to. Also no funny business where you tell the reviewer that something has changed but you don't change it. The reviewer then reviews the changed component based on what they hear. I assume that most pro reviewers, and even many amateur ones, would have their source material worked out.

Personally, this would be a magazine I would buy on a regular basis but I think a lot of manufacturers and reviewers would be pretty scared of blind reviewing, if not terrified!

PS. Blind reviewing is not the same as blind testing.




No company would pay for advertising in the mag, and in the end it would fold.

Cheers

Maybe it would be full of advertisements from manufacturers who truly know the worth of their products.

Cheers.




It's an interesting idea, however I think you'd have to be mindful of the fact that some amp and speaker combinations don't work well together (even if the term "synergy" is overused in the industry). Just because an MBL amplifier with a zillion watts driving some high-efficiency horns might be a mismatch doesn't make either of those pieces of gear bad, just inappropriately matched. Admittedly, at the more entry level, where the sensitivity of the speakers considered falls in a fairly narrow band, that problem would be lessened, and you could do a "Comparison of 5 moderately priced 50W amps with a very good 88dB-sensitive speaker" or similar, which would be a fairly useful shootout for someone considering that speaker.

A lot of the better reviewers do the system-context thing well at the moment, i.e. even if they are reviewing expensive speakers with very good sources (typically their own gear), they report on how things sound with cheaper CD players or phono stages.

I'd have to say, though, that as someone who reads both printed mags (Stereophile, occasionally some of the Aussie hi-fi mags) and some of the better-known websites (enjoythemusic, sonicflare, sixmoons, stereophile, positive feedback), I see there being enough different sources of reviews out there that any press shilling because of cosy ad deals with suppliers would become fairly obvious.


Fair comment, Veefy. If there is a mismatch, this should be clearly audible. Some equipment could also get damaged, especially when powering inefficient, hard-to-drive speakers with lower-powered amps.


So no more comments? C'mon, speak now or forever hold your peace! :) Otherwise we will have to believe that sighted reviews are nothing more than cheap imitations :rolleyes:


So no more comments? C'mon, speak now or forever hold your peace! :) Otherwise we will have to believe that sighted reviews are nothing more than cheap imitations :rolleyes:

Hello X

I don't know about speak now or forever hold your peace :D - this subject always seems to bring forth a lot of "passion" and pops up on a regular basis.

Best

JA




Hi all, what does everyone think about this? No funny business: you change a component and tell the reviewer what has changed, e.g. CD player, amp, cables, speakers etc. You just don't tell them what it has changed to. Also no funny business where you tell the reviewer that something has changed but you don't change it. The reviewer then reviews the changed component based on what they hear. I assume that most pro reviewers, and even many amateur ones, would have their source material worked out.

Personally, this would be a magazine I would buy on a regular basis but I think a lot of manufacturers and reviewers would be pretty scared of blind reviewing, if not terrified!

PS. Blind reviewing is not the same as blind testing.

I really don't see where you're coming from with this. What is the value of a review? It describes what one person heard, that's all. Let's imagine that, having adopted your methodology, the reviewer determined that amplifier A outperformed amplifier B. Would you then run straight out and buy A? Of course not. Does it mean that A will sound better than B in your system? No. That A will sound better than B in the majority of systems, and to the majority of listeners? Again, no. If you took both amps home and found that you preferred amplifier B, would you purchase A regardless because it 'won' the comparative test? Hardly. So, in a practical sense, what has adopting your methodology achieved? Absolutely nothing.

In fact, I'd suggest that it has made the outcome LESS reliable. Such "blind" reviewing is only possible in controlled circumstances and over short time frames - yet most components only reveal their true nature over a longer period of familiarisation, a much larger selection of programme material and a range of partnering equipment. Read reviews for entertainment, for technical insight and, if you find something in there to empathise with, put that component on your shortlist. It doesn't have to be any more complicated than that.

Incidentally, to the other poster who decried use of the term "synergy" ... it does not refer to the type of obvious electrical mismatch you describe, but to the simple fact that some components just seem to work very well in combination - the whole exceeding the sum of the parts. Far from being "overused", it has always been the key to assembling a quality system for reasonable outlay.


I really don't see where you're coming from with this. What is the value of a review? It describes what one person heard, that's all. Let's imagine that, having adopted your methodology, the reviewer determined that amplifier A outperformed amplifier B. Would you then run straight out and buy A? Of course not. Does it mean that A will sound better than B in your system? No. That A will sound better than B in the majority of systems, and to the majority of listeners? Again, no. If you took both amps home and found that you preferred amplifier B, would you purchase A regardless because it 'won' the comparative test? Hardly. So, in a practical sense, what has adopting your methodology achieved? Absolutely nothing.

Where am I coming from? I thought that was obvious. Reviewers are prejudiced, sometimes lazy, and suffer from the flaws that all humans do, being human. How can you say that removing all of these would achieve nothing? If it would achieve nothing, then there is absolutely no difference between sighted and non-sighted reviewing, is there? Or are you saying that sighted reviewing is going to give a more reliable review? Because if you are, I don't agree. But that's OK, people disagree all the time.

In fact, I'd suggest that it has made the outcome LESS reliable. Such "blind" reviewing is only possible in controlled circumstances and over short time frames - yet most components only reveal their true nature over a longer period of familiarisation, a much larger selection of programme material and a range of partnering equipment. Read reviews for entertainment, for technical insight and, if you find something in there to empathise with, put that component on your shortlist. It doesn't have to be any more complicated than that.

Why does a blind review have to happen over a short period of time? Why complicate matters with sighted prejudices and/or expectations and/or possibly lazy reviewers? Some people don't read reviews for entertainment or to empathise, but for the actual best advice. The fact that a component may take a while to appreciate hints to me that its improvements are probably subtle.

Anyway, it's just an idea which I think would produce some interesting results. Might get the reviewers' ears tuned a bit more while being under a bit more pressure.


Why does a blind review have to happen over a short period of time?

If you can explain to me how it's possible to perform a comparative review of 2 CD players in my home for 2 months without ever finding out which is which, I'm all ears ...

Some people don't read reviews for entertainment or to empathise but for the actual best advice.

So what exactly is the "actual best advice"? IF your methodology achieves its aim, then at best it might more reliably determine how a particular component performed in the context of that exact system, in that room, but still based on the reviewer's preferences. BUT here's the crux of the matter ... does that tell you any more about how the component will sound in your system, your room and to your ears than a traditional review? I really don't see how. So what has your methodology achieved, in practical terms?

If you want to determine which of two cars is faster, you can take them to a test track and find a clear winner. Much as we might wish it, audio reviews will never deal in such certainties - there are far too many variables. Read them accordingly and take from them what you will, then let your own ears decide. Simple.




I find the value of reviews is that they help to narrow down the choices. I want a review to tell me a few things: is the gear any good, and is it worth the asking price? This is a good first step and opens up options on brands which you may never have heard of or hadn't previously considered. Probably more important, though, is when the reviewer can describe the sound. Now, your opinion may differ from the reviewer's, but when they describe a component as sounding clinical or very accurate I tend to look elsewhere, whereas when a component is described as musical I'm more likely to want to hear it. Different components have different sounds, which generally reflect the preferences of the designer. This information is very useful, as you can get an idea which components are going to suit your own tastes.

After reading the reviews you can make a short list to audition. I didn't audition my amp at home but I knew what sort of sound I was after so I knew what to look for. Maybe I should have listened at home but I'm happy with my choice so it doesn't really matter.

DS


I really don't see where you're coming from with this. What is the value of a review? It describes what one person heard, that's all. Let's imagine that, having adopted your methodology, the reviewer determined that amplifier A outperformed amplifier B. Would you then run straight out and buy A? Of course not. Does it mean that A will sound better than B in your system? No. That A will sound better than B in the majority of systems, and to the majority of listeners? Again, no. If you took both amps home and found that you preferred amplifier B, would you purchase A regardless because it 'won' the comparative test? Hardly. So, in a practical sense, what has adopting your methodology achieved? Absolutely nothing.

In fact, I'd suggest that it has made the outcome LESS reliable. Such "blind" reviewing is only possible in controlled circumstances and over short time frames - yet most components only reveal their true nature over a longer period of familiarisation, a much larger selection of programme material and a range of partnering equipment. Read reviews for entertainment, for technical insight and, if you find something in there to empathise with, put that component on your shortlist. It doesn't have to be any more complicated than that.

Incidentally, to the other poster who decried use of the term "synergy" ... it does not refer to the type of obvious electrical mismatch you describe, but to the simple fact that some components just seem to work very well in combination - the whole exceeding the sum of the parts. Far from being "overused", it has always been the key to assembling a quality system for reasonable outlay.

Bravo, Toon. That is the best commentary on this issue I've yet seen on SNA. Especially valuable is the part of your comment I highlighted - which is very true and very important.


Bravo, Toon. That is the best commentary on this issue I've yet seen on SNA. Especially valuable is the part of your comment I highlighted - which is very true and very important.

It's a fair comment which I understand and appreciate. But if a component takes months to appreciate, I'll repeat myself and say that the improvements are subtle at best.



I really don't see where you're coming from with this. What is the value of a review? It describes what one person heard, that's all. Let's imagine that, having adopted your methodology, the reviewer determined that amplifier A outperformed amplifier B. Would you then run straight out and buy A? Of course not. Does it mean that A will sound better than B in your system? No. That A will sound better than B in the majority of systems, and to the majority of listeners? Again, no. If you took both amps home and found that you preferred amplifier B, would you purchase A regardless because it 'won' the comparative test? Hardly. So, in a practical sense, what has adopting your methodology achieved? Absolutely nothing.

Hi Toon.

The practicality of the matter in this instance is selling equipment. Of course we magazine readers who are looking for advice on a hugely confusing subject (that is, the less experienced of us, without the judgement of many on SNA) must rely on the impartiality of the reviewers. A good and unbiased reviewer can guide us away from making disappointing and potentially disastrously expensive mistakes in system assembly. Most buyers don't have the time to assess, over many months, difficult-to-access systems or components, especially for the extended periods prescribed. Some of the mags in circulation are obviously no more than manufacturers' flunkies and cannot be taken too seriously, but having read a glowing article in a mag that you trust, it's, again, human nature to defer to the greater knowledge and experience of the writer when using that info in system building.

I speak from experience, in that ten or so years ago I bought a Marantz PM17 amp having read a good write-up in 'What Hi-Fi', and though I now know it wasn't the best I could have done, I've enjoyed a hell of a lot of music on it over the years and have no regrets. On the other hand, three years ago I did the same with my Dynaudio Audience 82s, they having won a "group test" in 'Hi-Fi Choice', being described as "Stunning - King Cones" against half a dozen other well-regarded makes. Call me what you will, but from that article I bought them. The description suited exactly my requirements, and at almost three grand my pocket too, though only just. Since then, over time, I've come to realise that the speakers are nothing like the "magnificent" hi-fi they were made out to be.

My point is, people do buy purely on the recommendation of reviewers. It may not be wise, and is surely a sheepish thing to do, but products are made or broken by these things, and it's immensely important that the Ken Kesslers of this world are seen to be scrupulously honest in their assessments. IMHO blind reviewing would be fantastic in an ideal world, but I have the opinion that it's too risky for the writers to get into. They could end up with very eggy faces.

Grimmie




Well said Grimmie.

Unfortunately that's the very reason why blind reviews would never happen in the big mags. Manufacturers that trade on their name, due to past deeds or simply on myth, would be unwilling to present the goods for evaluation. Also, there's a whole heap of cash for comment out there, and those mags would never jeopardise their cash cows.

Hi Toon.

The practicality of the matter in this instance is selling equipment. Of course we magazine readers who are looking for advice on a hugely confusing subject (that is, the less experienced of us, without the judgement of many on SNA) must rely on the impartiality of the reviewers. A good and unbiased reviewer can guide us away from making disappointing and potentially disastrously expensive mistakes in system assembly. Most buyers don't have the time to assess, over many months, difficult-to-access systems or components, especially for the extended periods prescribed. Some of the mags in circulation are obviously no more than manufacturers' flunkies and cannot be taken too seriously, but having read a glowing article in a mag that you trust, it's, again, human nature to defer to the greater knowledge and experience of the writer when using that info in system building.

I speak from experience, in that ten or so years ago I bought a Marantz PM17 amp having read a good write-up in 'What Hi-Fi', and though I now know it wasn't the best I could have done, I've enjoyed a hell of a lot of music on it over the years and have no regrets. On the other hand, three years ago I did the same with my Dynaudio Audience 82s, they having won a "group test" in 'Hi-Fi Choice', being described as "Stunning - King Cones" against half a dozen other well-regarded makes. Call me what you will, but from that article I bought them. The description suited exactly my requirements, and at almost three grand my pocket too, though only just. Since then, over time, I've come to realise that the speakers are nothing like the "magnificent" hi-fi they were made out to be.

My point is, people do buy purely on the recommendation of reviewers. It may not be wise, and is surely a sheepish thing to do, but products are made or broken by these things, and it's immensely important that the Ken Kesslers of this world are seen to be scrupulously honest in their assessments. IMHO blind reviewing would be fantastic in an ideal world, but I have the opinion that it's too risky for the writers to get into. They could end up with very eggy faces.

Grimmie


Where am I coming from? I thought that was obvious. Reviewers are prejudiced, sometimes lazy, and suffer from the flaws that all humans do, being human. How can you say that removing all of these would achieve nothing? If it would achieve nothing, then there is absolutely no difference between sighted and non-sighted reviewing, is there? Or are you saying that sighted reviewing is going to give a more reliable review? Because if you are, I don't agree. But that's OK, people disagree all the time.

I agree with you, Dr X, that sighted reviews are quite likely biased. But I still give them credence, as all the reviewers I have come to respect do them.

If we did not read sighted tests then we would not read any reviews, as there don't seem to be any BTs. A fact of life, which may also be due to them being tedious and only useful for academic discussions.

But I would always be interested to read a BT.


A good range of opinions and interesting reading, guys. Just one comment and one question I'll throw into the ring.

I disagree with you on the value of long-term listening to a component, Dr X. In a quick audition (a couple of hours, say), components that are very detailed and slightly bright often seduce the ear ... yet, when used for long-term listening at home, they become fatiguing. Even experienced listeners who understand this still fall for it! It's even been claimed in the past that some manufacturers deliberately engineer this sound to increase sales from A/B demos. :P Some aspects of a component's sound, subtle maybe but still important to the long-term satisfaction it will provide, simply take time to be realised. That's why reputable reviewers typically conduct listening tests over many weeks or months.

I'm still waiting, good Doctor, for an explanation of your assertion that it's practical to conduct blind tests over an extended time period. The blind testing methodology that I'm familiar with requires the seated listener(s) and the system to be concealed behind an acoustic curtain, while an assistant makes all the necessary equipment changes and cues up the music. Commonly a special venue is utilised for the purpose, further adding to the expense. With the vast majority of reviews in the printed and online media currently being performed in the reviewer's home during their free time (there are vanishingly few full-time professional audio journalists, believe me), it's just not going to fly, is it?
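The methodology described above (listeners behind a curtain, an assistant switching gear and keeping score) essentially comes down to a hidden presentation schedule that only the assistant knows. As a rough, generic illustration of that bookkeeping - not any actual magazine's procedure, and with all function and component names invented for the sketch - it might look like this:

```python
import random

def make_blind_schedule(components, trials, seed=None):
    """Build a randomised presentation order, balanced across components.

    The listener only ever sees trial numbers; this schedule stays with
    the assistant until all trials are complete.
    """
    rng = random.Random(seed)
    # Repeat the component list to fill the trial count, then shuffle,
    # so each component appears (near-)equally often in a random order.
    order = [components[i % len(components)] for i in range(trials)]
    rng.shuffle(order)
    return order

def score_responses(schedule, responses):
    """Count how many of the listener's guesses match the hidden schedule."""
    return sum(1 for actual, guess in zip(schedule, responses) if actual == guess)

if __name__ == "__main__":
    schedule = make_blind_schedule(["Amp A", "Amp B"], trials=10, seed=42)
    # The listener records a guess per trial; the schedule is revealed
    # only afterwards. A listener who can't hear a difference and always
    # guesses "Amp A" scores exactly half on a balanced schedule.
    guesses = ["Amp A"] * 10
    print(score_responses(schedule, guesses))  # prints 5
```

The point of the balanced, shuffled schedule is that neither reviewer expectation nor a lopsided presentation order can inflate the score - which is precisely the control a sighted review lacks.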


I'm still waiting, good Doctor, for an explanation of your assertion that it's practical to conduct blind tests over an extended time period. The blind testing methodology that I'm familiar with requires the seated listener(s) and the system to be concealed behind an acoustic curtain, while an assistant makes all the necessary equipment changes and cues up the music. Commonly a special venue is utilised for the purpose, further adding to the expense. With the vast majority of reviews in the printed and online media currently being performed in the reviewer's home during their free time (there are vanishingly few full-time professional audio journalists, believe me), it's just not going to fly, is it?

Blind REVIEW, not blind TEST, let's just get that straight!!!! If you refer to blind TEST again any of you, I'll formally lodge a complaint to the mods that it's off topic!!!! Bhahahahahahahahahaha.....just kidding! :P

OK, certainly Toon, it's not practically as easy to do a blind REVIEW as a sighted REVIEW over a long period of time, but it's not impossible either! So there you go, you have waited for my answer and now you have it.




This topic is now closed to further replies.