
MLXXX

Full Member
  • Content Count

    8,396
  • Joined

  • Last visited

Community Reputation

829 Superstar

1 Follower

About MLXXX

  • Rank
    5000+ Post Club

Profile Fields

  • Region
    Australia
  • Location
    Brisbane (ex-DTV Forum member)


  1. There are differences that suggest the playback wasn't paused on the same frame. That was not helpful. To my mind the part in the following quote in bold [my emphasis] was critical for the video reproduction: "The Oppo player is connected to the OLED TV via an AudioQuest Cinnamon HDMI cable. The Oppo’s stock internal switch mode power supply was replaced with an upgrade Oppomod linear power supply. The home theater system is served by two dedicated 20 amp AC circuits. All electronics are fed ultra pure AC power by a PS Audio P5 AC regenerator and a PS Audio P10 AC rege
  2. When I was much younger, I listened (using headphones) to versions of 24-bit masters truncated to 16 bits, as distinct from being dithered, and I could hear a slight roughness in the last dying moments of reverberation. These files went up to 0dB. However, the recordings were indeed made under pristine conditions. Also, I was able to replicate that experience by performing truncation vs dither myself on nominally 24-bit files. (Noise-shaped dither was a useful advance over earlier forms.) So I don't regard dither to 16 bits as academic, where the source recording is made under very
  3. If you need to attenuate by 30dB for listening at your normal maximum listening level, you are down to 18 bits (dithered) at that maximum listening level, which is better than CD quality (16 bits, dithered) but still an actual loss of bits. I will concede that, noting however that 30dB is a voltage ratio of 31.622777:1. It would be an abuse of a DAC to operate it in such a way that even at maximum listening level it was being fed a signal at only about 1/32nd of normal strength. If you need to attenuate by 6dB for listening at your normal maximum listening level (and that could indeed routine
  4. I'm in agreement with your posts, Satanica, but will offer a point or two of possible clarification to thread readers, if I may. Just a rider on the nominal bit depth of a DAC (and I'm sorry if I'm going over old ground!): actual linear DAC performance, even with a good DAC, may in practice only reach down to the 20th bit. In recent years this has been confirmed in the many detailed reviews of DACs by Amir on the Audio Science Review (ASR) Forum. Of course there also begins to be significant DAC noise developing right down at the 20th bit. As I alluded to
  5. I note that at a practical level in 2021, a good DAC will have such low noise that its noise will not be audible when controlling system volume using DSP before the DAC, rather than controlling volume after the DAC. That is why it is feasible in 2021 to control the volume of the music digitally even in a high quality system. An exception would be if system gain is much higher than it needs to be in the first place, e.g. feeding the DAC signal into an amplifier input intended for much lower output devices, and lacking a gain control of any kind.
  6. If the above is one possible trap, the converse is another, i.e. a fear that well-designed equipment will almost certainly fall foul of "spuria". As davewantsmoore has mentioned: Yes. Can be. ... but you can see this in their output. You can see it in their output at frequencies you can hear.... you can see it in their output at frequencies light-years outside of what you can hear. As I have already suggested, setting up two [identical model] stream receivers and recording their audio outpu
  7. As this thread is not in The Great Debate of the forum, I'll try to keep my reply relatively brief. There certainly are expert panels to help evaluate speaker system prototypes. Speaker systems tend to sound radically different from each other. Subjective judgments are needed to arrive at the best compromise. With streamed audio, I understand that under normal conditions the data received is an uncorrupted copy of the data sent. So the role of an expert panel would be quite narrow. Presumably, once assembled, members of such a panel would listen out for a random "noise
  8. I imagine that view might possibly resonate with at least some of the readers of the Part A, companion thread. However in this Part B thread, I'd think a high percentage of readers would be looking for some hard evidence, as distinct from mere "belief". If a "souped up" ethernet switch can actually make an audible difference with streamed audio, I don't think it's unreasonable to ask for some audio recordings demonstrating the difference made; and an explanation of the circumstances that would trigger the audible difference. Is such evidence available? I confess I haven't looked
  9. I think listening results depend on the efficiency of the codec (e.g. AAC is more efficient than mp3) and the bitrate. For instance, at 320kbps, stereo AAC is very hard to distinguish from lossless unless you listen to critical sections of the music, and even then it helps a lot to have short excerpts available to play repeatedly, comparing the lossy against the lossless for the slightest audible difference. Some audiophiles only remember the early days of low-bitrate mp3, and don't fully appreciate how good a modern codec, at a medium to high bitrate, can be.
  10. I was not envisaging using a number of different model amplifiers to present the test sounds. I was envisaging comparing different sounds fed into a single sound reinforcement system. My comment was that whatever amplifier was used to present the test sounds, it should have plenty of "amperage"* to do the job. ______ * That is, current capability.
  11. I believe this is a broad reference to the capacity of an audio power amplifier to maintain its delivery of an undistorted waveform at frequencies where the connected speaker system's impedance becomes unusually low, and/or highly reactive. (To achieve that, the amplifier needs to have a capacity to source or sink a large current flow.) I note that unlike "maximum engine torque" which can be measured and listed in the specifications of an engine, amplifier "amperage" is a broad, descriptive term, lacking a standard measuring method. Suffice it to say that if conducti
  12. Well said! The term "amperage" is not in mainstream usage in electronics publications today, but it still appears in dictionaries and it isn't at all hard to guess what is meant when the term is first encountered. Interestingly, the way we use the word "voltage" today is a broader usage than in former times. Publications of 100 years ago would often refer to EMF, or electromotive force, rather than to voltage. The volt is a measure of EMF (or of "potential difference") just as the amp (or ampere) is a measure of current.
  13. Perhaps this analogy will help. If the power authority monitors the voltage at the street power pole* to which the mains switchboard of a house connects, and the voltage remains constant in amplitude (230.00 V), constant in frequency (50.000 Hz), constant in phase (0.000 degrees offset), and with very low harmonic distortion, the power authority can be satisfied it is meeting its obligations to supply mains power current to that house. _____________ * Assume the cable from the house to the power pole connects to only one of the three phases at the power pole
  14. I've heard the informal term "amperage" used in relation to fuses; equivalent to the more general term "current rating". Years ago we'd see the word "Amperes" in technical magazines, but today we pretty much only see "Amps". Some people still call the capacitors used in electronic circuits "condensers". And a handful of people would still say kilocycles rather than kilohertz. The inch is still in very common use in Australia for TV set diagonals. __________________ @frednork, I've been thinking of a blind test exercise involving different brands of microphones to recor
  15. You also need to consider whether audio engineering may have improved to the point where differences are not audible at all with some equipment, be that for a listener concentrating hard for a DBT, or settling back at home immersing themselves in the music. That really is a very straightforward explanation. I suggest it should not be dismissed lightly! It would not be a surprising outcome when you consider that human ears can detect loudness changes down to about 0.25dB in very, very good conditions, and 24-bit ADCs and DACs routinely operate down to at least the 20
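For readers who'd like to try the truncation-versus-dither experiment described in item 2 above, here is a rough Python sketch. The tone, amplitudes, function names and simple TPDF dither are my own illustrative choices, not the actual files or tools I used back then:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate_to_16(samples_24):
    """Quantise 24-bit integer samples to 16 bits by dropping the low 8 bits."""
    return (samples_24 >> 8) << 8

def dither_to_16(samples_24):
    """Quantise to 16 bits after adding TPDF dither spanning roughly one
    16-bit LSB. Hypothetical helper, for illustration only."""
    lsb16 = 1 << 8  # one 16-bit LSB, expressed in 24-bit units
    tpdf = (rng.integers(0, lsb16, samples_24.shape)
            - rng.integers(0, lsb16, samples_24.shape))
    return ((samples_24 + tpdf) >> 8) << 8

# A very quiet fading tone: the kind of dying reverberation tail where
# plain truncation is most audible.
n = np.arange(48000)
signal = (np.linspace(1.0, 0.0, n.size) * 200
          * np.sin(2 * np.pi * 440 * n / 48000)).astype(np.int64)

err_trunc = signal - truncate_to_16(signal)  # error correlated with the signal
err_dith = signal - dither_to_16(signal)     # error decorrelated, noise-like
```

The point of the comparison: truncation error follows the signal (audible as roughness/distortion on quiet tails), while dithered quantisation error behaves as benign, constant noise.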
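The arithmetic behind the attenuation discussion in item 3 above can be checked with a couple of lines of Python (the function names are my own; the 6.02 dB-per-bit rule of thumb is standard):

```python
import math

def db_to_voltage_ratio(db):
    """Decibels to the corresponding voltage (amplitude) ratio."""
    return 10 ** (db / 20)

def bits_lost(db_attenuation):
    """Approximate bits of resolution given up by digital attenuation,
    at about 6.02 dB per bit."""
    return db_attenuation / (20 * math.log10(2))

print(db_to_voltage_ratio(30))  # ≈ 31.62 (the 31.622777:1 ratio quoted above)
print(bits_lost(30))            # ≈ 5 bits
print(bits_lost(6))             # ≈ 1 bit
```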
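Item 5 above mentions controlling volume with DSP before the DAC. In essence that is just scaling the samples, as this minimal sketch shows (the helper name and test tone are my own illustrative choices):

```python
import numpy as np

def digital_volume(samples, gain_db):
    """Volume control in DSP, ahead of the DAC: scale the samples.
    Attenuation moves the programme down towards the converter's noise
    floor, which is why this only works well with a low-noise DAC."""
    return samples * 10 ** (gain_db / 20)

x = np.sin(2 * np.pi * 1000 * np.arange(480) / 48000)  # 1 kHz test tone
y = digital_volume(x, -20)  # 20 dB quieter: amplitude reduced tenfold
```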
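To put the "20th bit" figure mentioned in items 4 and 15 in perspective, the theoretical dynamic range of a linear converter works out as follows (a back-of-envelope sketch, ignoring dither and real-world converter noise):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an N-bit linear converter,
    about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))  # ≈ 96.3 dB (CD)
print(dynamic_range_db(20))  # ≈ 120.4 dB (the "20th bit" level)
```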