
Posted

Not sure if you guys have seen this but an interesting article on Ars Technica:

In the age of highly compressed music files playing on iPods and even lower-quality Pandora streams playing on iPhones, some artists, music producers, and others in the music industry are apparently pushing for iTunes and other digital download services to adopt higher-fidelity 24-bit files. But while a small niche of audiophiles might appreciate the move, it seems unlikely that the necessary sea change in hardware and software will happen in order to support such a move, nor do we see consumers flocking to 24-bit files in order to make it economically viable.

According to music industry executives speaking to CNN, record labels are supposedly in discussions with Apple to begin offering 24-bit music files. Most of today's digital music is encoded using 16 bits per sample at a rate of 44.1kHz, including audio CDs, MP3s, and the AAC files used by iTunes. However, master recordings (or digital remasters from analog tape) tend to be done using 24 bits per sample at a rate of 96kHz, which offers a wider dynamic range with smoother waveforms. This wider dynamic range can capture subtle aural nuances that can be lost in the conversion to 16-bit format for audio CDs.
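For a rough sense of the numbers involved, here's a quick back-of-the-envelope sketch in Python (the ~6 dB-per-bit dynamic range figure is the standard rule of thumb for linear PCM, not something from the article):

```python
def pcm_bitrate(bits, rate_hz, channels=2):
    """Raw uncompressed PCM bitrate in bits per second."""
    return bits * rate_hz * channels

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM, ~6.02 dB per bit."""
    return 6.02 * bits

# Stereo CD audio (16-bit/44.1kHz) vs a stereo 24-bit/96kHz master:
print(pcm_bitrate(16, 44_100))   # 1411200 bit/s
print(pcm_bitrate(24, 96_000))   # 4608000 bit/s
print(dynamic_range_db(16))      # ~96 dB
print(dynamic_range_db(24))      # ~144 dB
```

The wider dynamic range comes from the extra 8 bits per sample; the smoother waveforms come from sampling more than twice as often.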

Add to that the fact that the large majority of music is further compressed using MP3 or AAC formats. These lossy formats use "psychoacoustic modeling" to compress the audio waveform by eliminating certain frequencies and harmonics that humans are least likely to notice are missing.

The difference between an uncompressed 24-bit/96kHz recording master and a 256kbps, 16-bit/44.1kHz iTunes Plus track is great indeed, and some in the music industry lament that listeners just aren't hearing what the artist intended. "What we're trying to do here is fix the degradation of music that the digital revolution has caused," Jimmy Iovine, head of Interscope-Geffen-A&M, said recently during an HP event to announce its new webOS-based mobile devices.

Dr. Dre, famed hip hop artist and producer—as well as Iovine's partner in Beats Audio—agrees. "Most of you aren't hearing [music] the way it's supposed to sound. And you should—hear it the way I do." HP uses Beats Audio hardware in some of its laptops, and will also incorporate the technology in its TouchPad tablet.

According to Iovine, Universal Music Group has been working with Apple to try and transition iTunes to 24-bit audio. "Apple has been great," Iovine said during the HP event. "We're working with them and other digital services—download services—to change to 24-bit. And some of their electronic devices are going to be changed as well. So we have a long road ahead of us."

A long road is right. While most Macs and PCs can natively handle 24-bit audio playback, none of the hundreds of millions of iPods sold can play back 24-bit files, nor can most PMPs, smartphones, or other mobile devices.

"Paul McCartney can master The Beatles albums all he wants, [but] when you play them through a Dell computer, it sounds like you're playing them through a portable television," Iovine said, suggesting such sound lacks depth. But remasters and master recordings use higher-quality sampling so that the conversion process to 16-bit format for CDs can be better controlled through mastering techniques.

To play back native 24-bit audio files, Apple would have to reengineer iPods and iPhones to use hardware decoders and 24-bit D/A converters that support 24-bit audio; computer and mobile device vendors would have to do the same. Without wide device support, 24-bit iTunes tracks make little sense.

The case could be made that 24-bit audio files would sound better, assuming consumers could (or would) get access to hardware capable of playing it, but there are other considerations at play. 24-bit audio files would also be larger than current digital music tracks, taking up more storage space and more time to download. The music industry is also looking at 24-bit files as something that could carry a premium price, so tracks could cost more as well.
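To put a number on the storage difference, here's a quick sketch for a hypothetical 4-minute stereo track, uncompressed (before any AAC/MP3 or lossless compression is applied):

```python
def track_size_mb(bits, rate_hz, channels=2, seconds=240):
    """Uncompressed PCM size in megabytes for one track."""
    return bits / 8 * rate_hz * channels * seconds / 1_000_000

print(track_size_mb(16, 44_100))  # ~42.3 MB at CD quality
print(track_size_mb(24, 96_000))  # ~138.2 MB at 24-bit/96kHz
```

The raw 24/96 source is roughly 3.3x larger, so even compressed downloads would grow accordingly.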

There's some precedent for this in the digital music download market. When Apple introduced iTunes Plus in 2007, it increased the AAC bitrate from 128kbps to 256kbps. Along with the data rate increase—and elimination of DRM—came a price increase from 99¢ to $1.29 per song. Six months later, however, iTunes Plus tracks were dropped to 99¢.

(Yet more variable pricing came when Apple persuaded all record labels to standardize on the DRM-free format in 2009, but the price difference was not related to sound quality).

While some audiophiles can discern the difference between AAC tracks compressed at 128kbps versus 256kbps, or compressed tracks versus an uncompressed CD source, there's evidence to suggest most listeners can't, especially on low-end audio gear like iPods, cheap computer speakers, or compact home theater systems. Much like expensive Super Audio CDs (SACDs), the extra sound quality would only be of appreciable benefit to an extremely small niche of audiophiles with very expensive audio systems. While those customers would likely appreciate a higher quality download option, the music industry already provides them with higher-fidelity uncompressed formats, such as CDs, SACDs, and vinyl albums.

For the vast majority of listeners—many of whom are satisfied with low-bitrate streams from the likes of Pandora—a transition to 24-bit audio would be superfluous.

Source

What do you guys think? Personally I think it would be a great move. Undoubtedly though, Apple would choose their own format, Apple Lossless. What would be even greater news would be if they used FLAC for the wrapper. That would be awesome. Not going to happen though.

Posted

I've often questioned the benefits of what is marketed as "high resolution" when it comes to the lossy audio formats.

Take for instance dts. A 6 channel dts-CD with 16 bit depth and a sampling rate of 44.1kHz results in a SPDIF bitstream of approximately 1.5Mbits/s.

A recent DVD purchase which has 4 channel dts at 24 bit depth and a sampling rate of 96kHz results in the same SPDIF bitrate of 1.5Mbits/s.

IMHO there appears to be no advantage (in the case of dts) in increasing the "resolution" to a higher bit depth and sampling frequency if the resultant bitstream is the same bitrate as the lower (standard) resolution audio.

If one does the math, it would appear the "higher resolution" audio must be undergoing more compression than lower (standard) resolution audio to accommodate the information in the same bandwidth.

So I guess it raises the question. Is the same sort of compromise taking place when we compress 24 bit / 96kHz audio into the same bitrate (say 256kbits/s) MP3?

Cheers,

Alan R.

Posted

"What you talking about, Willis?"... er, Alan

In your second example each channel gets more information. You don't listen to a bitstream, you listen to speakers, four in the second example you gave, six in the first. More speakers does not equal more sound quality, just more sound.

Posted
"What you talking about, Willis?"... er, Alan

In your second example each channel gets more information. You don't listen to a bitstream, you listen to speakers, four in the second example you gave, six in the first. More speakers does not equal more sound quality, just more sound.

Before I answer your question, do you agree dts is a lossy encoding system?

Cheers,

Alan R.

Posted (edited)
High resolution lossy format = oxymoron

DTS (and DTS 24/96) are lossy formats. They cannot be 'hi-rez'. Ever.

Totally agree. Isn't marketing hype just sooooooooo wonderful? :D Starting off with say each channel as 24bit / 96kHz LPCM and then compressing it down with a lossy compression system like dts or Dolby Digital - something's gotta give.

Cheers,

Alan R.

Edited by Alan Rutlidge
Posted

Sounds like you know something about dts that I don't.

I think what has confused me is that the original post was about MP3 vs CD vs high-res downloads, whereas you have an issue with Dolby Digital and dts that may not be what they claim to be. My thoughts were about FLAC and AIFF, both of which are lossless in that they unpack to the same 0s and 1s as the original.

At face value your example appeared to be flawed as the sums allowed more data per channel, at least on paper.

Posted
Sounds like you know something about dts that I don't.

I think what has confused me is that the original post was about MP3 vs CD vs high-res downloads, whereas you have an issue with Dolby Digital and dts that may not be what they claim to be. My thoughts were about FLAC and AIFF, both of which are lossless in that they unpack to the same 0s and 1s as the original.

At face value your example appeared to be flawed as the sums allowed more data per channel, at least on paper.

I assume you've heard of dts-CD and are familiar with dts surround tracks on DVD-V which are both lossy digital audio encoding systems (as distinct from the later dts-HD Master Audio used on Blu-ray discs which isn't in the examples I originally used)?

Let's take a typical 5.1 surround dts-CD. It would be reasonable to expect the 6 channels (LF, RF, LR, RR, C and LFE) originated as 16 or 20 bit / 44.1kHz LPCM for each channel before undergoing dts encoding. Using the 16 bit depth and 44.1kHz sampling as a typical example, the total aggregate bitrate into the encoder for all 6 channels would be 16 bits x 44100 Hz sampling x 6 channels = 4.2336 Mbit/s.

The dts encoder compresses and processes the incoming data from all 6 channels into a data rate and format that can be stored on a CD-ROM and played back through the S/PDIF port of a conventional CD player / transport or DVD/CD player, where the dts signal is decoded back into the 6 channels of audio via a dts decoder as found in most HT receivers. The bitrate of the dts bitstream leaving the S/PDIF output of the CD player or transport is approximately 1.5Mbits/s, considerably less than the aggregate incoming bitrate to the dts encoder of 4.2336 Mbit/s.

So what happens in the case of 24 bit / 96kHz dts? Let's redo the calculation as per my example using 4 channels (LF, RF, LR, RR). 24 bits x 96000 Hz sampling x 4 channels = 9.216Mbits/s.

When encoded into dts the bitrate leaving the S/PDIF port on the DVD player is showing up on the processor as 1.536Mbit/s.

Notice that even though the original LPCM data streams used to encode were of a higher resolution (24bit / 96kHz) the encoded dts bitrate is virtually unchanged from the 16 bit / 44.1kHz of the dts-CD. IOW more information has to have undergone more lossy encoding to occupy the same bitrate or bandwidth.
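The compression ratios implied by the two examples above can be sketched quickly in Python (the ~1.536 Mbit/s output figure is the one quoted from the processor display):

```python
def dts_compression_ratio(bits, rate_hz, channels, out_bps=1_536_000):
    """Ratio of aggregate LPCM input bitrate to the dts output bitrate."""
    return bits * rate_hz * channels / out_bps

print(round(dts_compression_ratio(16, 44_100, 6), 2))  # 2.76 -> 16/44.1 dts-CD
print(round(dts_compression_ratio(24, 96_000, 4), 2))  # 6.0  -> 24/96 DVD track
```

Roughly twice as much compression has to be applied to squeeze the "hi-res" source into the same pipe.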

So IMHO it's Claytons hi-res (the hi-res you have when you're not having hi-res), where the encoding process must have incurred more loss with the 24/96k channels than with the 16/44.1k channels.

Cheers,

Alan R.

Posted

9.2 vs 4.2 Into 1.5 (insert lightbulb here) got ya!

My Peter Gabriel and Depeche Mode 24/96 DVDs both sound pretty good, with the solidity that I associate with high def. Mind you, I tend to find the differences between bit rates are small compared to other factors. 16 bit / 44.1kHz can be very, very good if done well. I am listening to a CD rip right now at a modest volume and it is a joy to listen to.

I agree about the dodgy claim of high res. It reminds me of a discussion at JB Hi-Fi where a sales guy wanted to sell me an Apple TV instead of the Blu-ray player I was interested in (but he didn't have in stock). "High res video," he said. "1080p?" I asked. Not even close. He could not walk away fast enough when he knew I was on to him.

Posted

Although I have not heard any DTS 24/96 DVDs, I have read on the interwebs that some of them sound exceptional.
