StereoNET


A.I. General Discussion Thread


Talking of surgery and seeking a qualified professional: with the indulgence of Marc and the rest of the members, I would like the opportunity to tell my story.

Only if it’s ok with Marc. It would really mean a lot to me to get this burden off my chest. At the same time I respect this might not be the place to tell my story.


It's about what has happened to me in a case where a surgeon did not do his job correctly, and the ramifications it has caused me.

I think it would mean a lot for me mentally to get it off my chest. I certainly wouldn't mention any names or facilities, just the facts. I would seek your approval first, Marc, as I do not know whether this is the correct setting to discuss such matters, and I totally understand if you agree that it's not.

I suppose one of the reasons for telling this would be one, to get it off my chest and two, to read the opinions of others.

nearly all absolutely 'genuine'; they passed their exams by themselves. But in 5 or 10 years' time ... you may get a 'GPT' doctor (rather than a 'real' one). 🙄

Might say more about the quality of "exams" (regurgitating information only?) that you can pass them using "help".

Doctors obviously need to be good at many more things than what AI can help them with ... and I'd prefer them to focus on that. (If we think about it, that's typically what makes a great doctor: care, motivation, interest, interpersonal skills, and good experience at practical/physical things like pulling a rusty nail out of my hand ... or feeling up my sore leg.)


Recent studies from the past few years show that for complex cases AI achieves "significantly higher diagnostic coverage compared to specialists".

AI vs specialist correct diagnosis:

  • 76% vs 25.9%
  • 85.5% vs 20%
  • 71.4% vs 42.6%

"AI models often include the correct diagnosis in their top-10 list of possibilities far more frequently than unassisted doctors."


New tools will make doctors (and everybody) significantly better at some things ... but the way we assess competence will need to adjust (probably a lot, and that's probably long overdue too).

21 minutes ago, davewantsmoore said:

Might say more about the quality of "exams" (regurgitating information only?) that you can pass them using "help".

Great point, Dave.

I suspect that's the inevitable result of universities morphing themselves into 'businesses' - which are more focussed on increasing student numbers than delivering excellent education.

Edited by andyr

  • 4 weeks later...
  • Volunteer

Long video but worth it (in my opinion). Geoffrey Hinton (Nobel prize winner) on StarTalk.

His view is that “hallucinations” are a sign that AI’s thinking is actually more human-like.

On 11/3/2026 at 7:36 AM, sir sanders zingmore said:

Long video but worth it (in my opinion). Geoffrey Hinton (Nobel prize winner) on StarTalk.

His view is that “hallucinations” are a sign that AI’s thinking is actually more human-like.

I watched this the other day.

Great insight from Geoffrey.

Super concerning.

I think this all leads to massive social unrest in the short term.

Edited by Grizaudio

After deep diving on this topic for over a year, the Silicon Valley narrative that "AI is somehow going to provide abundance for all" appears to be a total misrepresentation of reality [certainly at least in the short term].

Based on the insight of Geoffrey, Dario and others, it's likely civilisation will experience massive social unrest, as humans lose their jobs, their earning capacity, and job-centric self-fulfilment.

You only need to look at the exodus of AI safety employees to know the focus is elsewhere. Billionaires building nuclear bunkers and buying land for food supply are not providing any comforting reassurance either.

The private sector will also struggle as consumers stop purchasing, compounded further by continual AI advancement outpacing both policy and new opportunity. The majority will suffer; a minority will prosper. The consumption paradox on full display.

All this change ends traditional finance, money, and government, raising some BIG questions about how UBI or similar is ‘actually’ going to work [and who funds it]. I personally think the promise that being entrepreneurial [in the longer term] will somehow save you is just totally illogical within the context of the larger socioeconomic environment. The only solution that makes any real sense to me is some type of social credit score, which determines income above a minimum, and which by design encourages the community to engage and help each other.

AI companies are going to be the BIG winners. We will see a massive centralisation of power, surveillance and wealth, with next to no incentive to redistribute any wealth in/from the private/public sector, and indeed a government with no means to provide UBI while capitalism dies in real time.

What does government look like in a post AGI world?

I have no idea how policy addresses these BIG questions, what a hot mess.

Maybe a super intelligence is needed to solve these BIG problems. The irony!

Edited by Grizaudio

In general I’m finding AI useful & insightful.

However, in the hifi world products evolve regularly. Examples:

  • the refinement of Class D amplification;
  • the DAC chip evolution, and certain brands later developing their own FPGA DACs;
  • for spice, add in the MQA debate.

So if/when AI digs up opinions/criticism of, say, older 1st- or 2nd-generation versions, it's not reflective of the manufacturer's latest model developments, and this can result in the wrong impression of a brand's sound characteristics, until perhaps the new model is reviewed in magazines or commented on in forums such as StereoNet, What's Best, HiFi Shark etc.

The point that AI takes multiple sources to have a definitive and comprehensive answer is correct way ahead. AI gives you the truth from many sources. To discard it as a tool is telling that you are stuck in the past and in many cases with a belief that is wrong. This can't be more true in audiophile where physical hearing if poor can't dispute any of the issues in audiophile and technical that changes sound and only challenging the numbers. This is the only reason why someone would limit the tools to argue when the answers are so easy to get today. If admin are forcing this they are ignorant and stupid and not understanding how the AI works. It also gives the sources where the answer(s) originate. So follow the rabbit hole and learn more. Stuck in the past with old geezers with A/AB obsolete tech is not what we have today. If you live in the past you shouldn't control the present.

On 10/03/2026 at 7:53 AM, Marc said:

What we have said, is lazy slab copy/pasting of AI responses is not permitted in discussion forums.

You only argue about this issue if you don't like the answer that proves you wrong and can't dispute. It is a childish approach when given comprehensive answer based on multiple sources. There are admins and mods that are trolls with no intelligence only here to argue and not giving anything. I'll give you strait I won't tolerate BS and one of them is not allowing tools that give answers that are based on multiple sources making them correct and to correct the old beliefs.

  • Administrator

Errr, did you get out of bed on the wrong side today?

1 hour ago, P.M.B.66 said:

AI gives you the truth from many sources

You couldn’t be more wrong. AI has limitations and your description shows a lack of understanding of those limitations. If you’re going to interpret AI responses as the truth, then you’re going to be wrong. Often.

1 hour ago, P.M.B.66 said:

trolls with no intelligence only here to argue

You are looking in a mirror yeah?

I find this thread ... more than a tad bizarre. 🙄

Back at the time of the '89 GFC, I was working in a startup which was developing 'neural network' models - to recognise shapes (ie. patterns). This was defence-related - iow, is the picture you (the camera!) sees ... a friendly plane ... or an enemy plane?

Is this AI? NO - simply pattern recognition.

Thus, most of today's LLMs - supposed to be AI - are simply pattern recognition. A long way from 'Intelligence' - which I would suggest is a uniquely human attribute.

Edited by andyr

On 14/03/2026 at 8:53 AM, P.M.B.66 said:

The point that AI takes multiple sources to have a definitive and comprehensive answer is correct way ahead. AI gives you the truth from many sources. To discard it as a tool is telling that you are stuck in the past and in many cases with a belief that is wrong. This can't be more true in audiophile where physical hearing if poor can't dispute any of the issues in audiophile and technical that changes sound and only challenging the numbers. This is the only reason why someone would limit the tools to argue when the answers are so easy to get today. If admin are forcing this they are ignorant and stupid and not understanding how the AI works. It also gives the sources where the answer(s) originate. So follow the rabbit hole and learn more. Stuck in the past with old geezers with A/AB obsolete tech is not what we have today. If you live in the past you shouldn't control the present.

OK, ask your A.I. information aggregator this question.

"Why is it risky to use AI to evaluate expected outcomes for audiophile system analysis?"

4 minutes ago, bob_m_54 said:

OK, ask your A.I. information aggregator this question.

"Why is it risky to use AI to evaluate expected outcomes for audiophile system analysis?"

Sorry this is copy and paste from Grok AI (but I think it’s allowed in this context)

Quote in part of a longer answer

Using AI (like large language models or chatbots) to evaluate or predict expected outcomes in audiophile system analysis—such as how an amp-speaker-DAC-cable combo will perform in terms of soundstage, tonal balance, noise floor, dynamics, or overall synergy—is risky primarily because AI lacks genuine auditory experience, relies on flawed training data, and cannot perform or verify real-world testing. While it can summarize specs or forum opinions quickly, it often leads to misleading predictions that sound plausible but fail in practice.

Here are the core risks, grounded in documented issues:

Now from Perplexity AI

Quote in part of a longer answer

Using AI to predict how an audiophile system will sound or what you will prefer is risky because both the data and the task are fundamentally mismatched with how humans actually perceive audio, and current models are prone to confident error, bias, and oversimplification.

On 14/03/2026 at 9:08 AM, P.M.B.66 said:

You only argue about this issue if you don't like the answer that proves you wrong and can't dispute. It is a childish approach when given comprehensive answer based on multiple sources. There are admins and mods that are trolls with no intelligence only here to argue and not giving anything. I'll give you strait I won't tolerate BS and one of them is not allowing tools that give answers that are based on multiple sources making them correct and to correct the old beliefs.

And another one to ask your favourite A I tool.

"can public ai give me technically accurate information regarding the operation of electronic systems?"

38 minutes ago, bob_m_54 said:

And another one to ask your favourite A I tool.

"can public ai give me technically accurate information regarding the operation of electronic systems?"

Some of this will come down to prompt design, e.g. limiting the sources AI draws answers from:

"Only source answers from professional electronic engineering literature or peer reviewed literature; do not include any answers from blogs, forums, and other non-peer reviewed sources; provide a list of references used in compiling your answer".

Asking the LLM yes/no-style questions is a bit of a black-hole mistake.

Better procedures:

  • Keep refining the answers you receive.
  • Never treat the first answer as final.
  • "Before answering, ask me 3 clarifying questions."
  • Use a "burden of proof" prompt:
    • "Summarize the data and prove all claims."
    • "Do not include a claim if you cannot cite your source."
  • Ask the AI to pick a side in an argument:
    • "Defend [argument] and refute common criticisms."
    • "Make a strong argument for [subject matter] from an abc perspective, and then argue against it from a zyx perspective."
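The patterns in this post can be composed into one reusable prompt string. The sketch below is purely hypothetical: `build_prompt` and the constant names are made up for illustration, no real AI service is called, and the resulting text would simply be pasted into whichever tool you use.

```python
# Hypothetical sketch: combining the prompting patterns discussed above
# (source restriction, burden of proof, clarifying questions) into a
# single prompt string. No AI API is involved; this only builds text.

SOURCE_RESTRICTION = (
    "Only source answers from professional electronic engineering "
    "literature or peer-reviewed literature; do not include answers from "
    "blogs, forums, or other non-peer-reviewed sources; provide a list of "
    "references used in compiling your answer."
)

BURDEN_OF_PROOF = (
    "Summarize the data and prove all claims. "
    "Do not include a claim if you cannot cite your source."
)

CLARIFY_FIRST = "Before answering, ask me 3 clarifying questions."


def build_prompt(question: str, *rules: str) -> str:
    """Join a question with any number of numbered prompting rules."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return f"{question}\n\nRules:\n{numbered}"


prompt = build_prompt(
    "Why is it risky to use AI to evaluate expected outcomes for "
    "audiophile system analysis?",
    SOURCE_RESTRICTION,
    BURDEN_OF_PROOF,
    CLARIFY_FIRST,
)
print(prompt)
```

The point of keeping the rules as separate constants is that you can mix and match them per question, rather than retyping the whole restriction every time.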

Edited by Steff

19 hours ago, andyr said:

Thus, most of today's LLMs - supposed to be AI - are simply pattern recognition. A long way from 'Intelligence' - which I would suggest is a uniquely human attribute.

Your perspective isn’t technically accurate based on how modern transformer models actually operate.

I'd highly recommend watching Geoffrey's [https://en.wikipedia.org/wiki/Geoffrey_Hinton] video posted below, where this topic is explored for the everyday person. Geoffrey is an expert [Nobel prize winner, cognitive scientist, cognitive psychologist, computer scientist, etc.], unlike all of us here.

Geoffrey categorically believes these models are thinking; he also argues consciousness itself is largely a human construct, and doesn't exist.

Let's think about your point for a second. When you train and scale transformer models on enough data, they develop emergent properties.

Abilities like reasoning and logic, which weren't specifically programmed in by the devs. This behaviour flips the 'just pattern recognition' argument on its head, completely disproving your comment above.

Right now, even the best scientists don't understand how human memory or thinking/reasoning actually works.

However, the digital approximation using neural synapses replicates human memory/thinking quite well; if a system replicates something to the point of being indistinguishable, and the outcome is identical, then the process or variance is totally irrelevant.

If you roll in the theory of [MUH], what we call intelligence is just a specific and highly complex mathematical arrangement of information.

Combine this with LLMs having PhD-level maths and coding capability, and the entirety of human history and language available to them. Just think on that for a second.

No disrespect, but I think Geoffrey might know a little more than anyone here on the topic. So it's probably best to get your understanding of the technology from someone who (a) understands cognitive function, and (b) basically kick-started AI and continues to advocate for AI controls.

Edited by Grizaudio
typos

13 minutes ago, Steff said:

Some of this will come down to prompt design

These written prompts are very average. @bob_m_54

You need to understand prompt design to get better responses.

Google has a very good overview on this: https://cloud.google.com/discover/what-is-prompt-engineering

Edited by Grizaudio

Interesting

I'm finding AI good for interpreting a speaker design I'm working on.

I'd rather sort this out now, rather than having posts deleted, as I'm only trying to share and log the build.

Is it ok to quote some of the script, i.e. "copy paste" actual build parameters/probable outcomes, or is this seen as a contravention of the guidelines? Generally this dialogue will be accompanied by real-world photos, measurements and my own commentary.

Thanks in advance

EDIT

bit of a grey area

  • StereoNET’s official editorial and publishing policy is human-authored content only. AI tools may be used for assistance with research or fact-finding but will never be used for writing or editing published material.

Most likely I'll fall into the fact finding/assistance/research category.

playdough

P.S. Feel free to offer an opinion or moderate as necessary.

Edited by playdough

4 hours ago, Grizaudio said:

These written prompts are very average. @bob_m_54

You need to understand prompt design, to better request responses.

Google has a very good overview on this : https://cloud.google.com/discover/what-is-prompt-engineering

True, but it was in response to the reply I quoted, to illustrate that not all information is from accurate sources, and that he seemed to think AI was the be-all and end-all of technical knowledge. To be able to use AI to verify something, you need to actually understand the technical aspects of the system you require information about. It is not a substitute for your own learning and understanding.
