
Keith_W system

Featured Replies

Whether single or multibit, running a convolution filter that does anything useful at the 11.2 MHz rate of DSD256 would require a tremendous number of taps, and would be more than any PC hardware could handle, especially under an operating system such as Windows.

 

No, the number of taps determines what shape the filter can be (more taps => a sharper / steeper filter). Operating at a higher sampling rate just requires running the filter over and over again more times.
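The taps-versus-steepness point can be sketched numerically. A windowed-sinc lowpass (the helper names below are made up for illustration, not from any product in this thread) shows the transition band narrowing roughly in proportion to 1/Ntaps:

```python
import numpy as np

def lowpass_taps(num_taps, cutoff_norm):
    """Windowed-sinc lowpass FIR; cutoff_norm is cutoff / (fs/2)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(cutoff_norm * n) * cutoff_norm  # ideal lowpass, truncated
    h *= np.hamming(num_taps)                   # window to tame ripple
    return h / h.sum()                          # unity gain at DC

def transition_width(h, nfft=8192):
    """Normalised width between the -1 dB and -40 dB points (1.0 == Nyquist)."""
    mag = np.abs(np.fft.rfft(h, nfft))
    db = 20 * np.log10(np.maximum(mag / mag[0], 1e-12))
    f = np.linspace(0, 1, len(db))
    start = f[np.argmax(db < -1)]
    stop = f[np.argmax(db < -40)]
    return stop - start

for taps in (63, 255, 1023):
    print(taps, round(transition_width(lowpass_taps(taps, 0.25)), 4))
```

Each 4x increase in taps cuts the transition width by roughly 4x, independent of the sample rate the filter later runs at.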

 

 

It's possible that he is taking advantage of the CUDA capability of any nVidia hardware 

 

Yes, HQPlayer does this if available  (and hardware extensions in modern CPUs)


No, the number of taps determines what shape the filter can be (more taps => a sharper / steeper filter). Operating at a higher sampling rate just requires running the filter over and over again more times.

 

 

 

Yes, HQPlayer does this if available  (and hardware extensions in modern CPUs)

 

The number of taps determines the filter resolution, i.e. F = 1/(Ntaps * Ts) = fs / Ntaps. The lower the number of taps, the poorer the filter resolution; the simple FFT and inverse-FFT relationship applies here. For DSD256, fs = 11.2 MHz, so the filter granularity is very coarse for a given number of taps compared to a PCM filter. To get a fine-granularity filter, the number of taps has to be huge.
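The F = fs / Ntaps relationship makes the coarseness concrete: for the same tap count, resolution at the DSD256 rate is exactly 256x coarser than at 44.1 kHz:

```python
# Filter frequency resolution F = fs / Ntaps, per the relationship above.
for fs, label in ((44_100, "PCM 44.1 kHz"), (11_289_600, "DSD256")):
    for ntaps in (16_384, 1_048_576):
        print(f"{label:>13}, {ntaps:>9} taps: {fs / ntaps:10.2f} Hz resolution")
```

A 16,384-tap filter gives about 2.7 Hz resolution at 44.1 kHz but only ~689 Hz at DSD256; matching the PCM figure at DSD256 would need around 4.2 million taps.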

 

cheers

I had some further thoughts on this last night and have more or less worked out what is being done. I realise that it is feasible to run a DSD convolution in real time as claimed. It is not as CPU intensive as I first thought, so I am not so skeptical now, but it is still CPU intensive nonetheless and would chew up CPU resources quickly for long FIRs. It only requires integer arithmetic, so it is well within the realms of a 32- or 64-bit processor such as an Intel or ARM, most likely optimised or hand-coded in assembly language ;) Having said that, I don't believe the process is lossless, in the sense that the single-bit FIR coefficients are an approximation to the actual multibit coefficients, so there is no free lunch here ;)

 

cheers

Thanks zydeco. Using a multichannel DAC with the PC sending 8 channels to it allows much finer tuning of the crossover as well as simplifying the system a great deal. On its own, Acourate can achieve plenty but it won't be able to do individual driver correction, nor can it match the timing of the drivers quite as well. 

 

Why Acourate and why multichannel? Well, no other solution allows you to perform convolution in DSD256, for 8 channels, and then perform DA conversion in native DSD256. I will be able to take a native DSD256 file and play it back without conversion to any other format. Nothing else will let me do it - not MiniDSP, nor DEQX. 

 

Right now it's one PC doing everything. I might change this in the future - one PC as a music server, and a second PC to do the convolution. But then Intel will probably release a new generation of CPUs with enough power to do everything AND less heat output. We will see. 

 

The TV is in the room because I have nowhere else in the house to put it. At the moment, the receiver sends output to the 2 channel system via HT bypass on the preamp. Once the system is converted to full digital, I will have no way to route the audio from movies through the 2 channel system ... for the time being. I don't even know if it is possible. I think it COULD be done if I take a TOSLink signal from the receiver and input it into my RME HDSPe AES sound card, and then use that as an ASIO input for HQPlayer to do the convolving. If you or anybody else has any idea how to do this, I am all ears. 

 

As a stopgap solution, there will be another pair of speakers purely for movies and watching TV. 

 

Thanks. Re: TV. I didn't realise that HQPlayer could take a live external feed (in addition to playback of an audio file). The issue with convolution in A/V is the time lag introduced by the audio processing. I've gone down the path of using JRiver for both audio and video as it compensates for the lag introduced by audio processing but a big part of your plan is up-sampling to DSD256 so this doesn't make sense (and, honestly, it's kind of frustrating to be constrained to a single source.)

The number of taps determines the filter resolution, i.e. F = 1/(Ntaps * Ts) = fs / Ntaps. The lower the number of taps, the poorer the filter resolution; the simple FFT and inverse-FFT relationship applies here. For DSD256, fs = 11.2 MHz, so the filter granularity is very coarse for a given number of taps compared to a PCM filter. To get a fine-granularity filter, the number of taps has to be huge.

 

cheers

 

Yes, it was a rather weak/layperson explanation from me, sorry.

 

There is some very detailed information on this in the thread(s), but you do have to be patient as it's buried within heaps of other discussion.

 

 

When the player does the SD modulation (i.e. renders something as 1 bit) ... AFAIUI, convolving with another impulse at this stage (i.e. your filter) is only 'one extra small step'.

 

An 8-threaded processor running at 4 GHz has ~3000 clock cycles available per sample of DSD256 (of course, that's not the only thing it needs to do with its budget, but you get the point). Seems more than plausible to me. I've never actually heard anyone question it from a 'performance' plausibility standpoint.
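The ~3000-cycle figure is easy to reproduce, assuming (optimistically) that all eight threads contribute fully:

```python
# Cycle budget per DSD256 sample on a hypothetical 8-thread, 4 GHz CPU.
clock_hz = 4e9
threads = 8
fs_dsd256 = 44_100 * 256                       # 11,289,600 Hz
cycles_per_sample = clock_hz * threads / fs_dsd256
print(round(cycles_per_sample))                # roughly 2834
```

In practice memory stalls and OS scheduling eat into that budget, which is the caveat raised a few posts below.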

 

Others are even running this modulator at 512x or 1024x 1-bit rates (i.e. up to ~50 MHz sample rates).

I had some further thoughts on this last night

 

.. and I continue my flaw of not refreshing threads before I reply to them  >_<

Thanks. Re: TV. I didn't realise that HQPlayer could take a live external feed (in addition to playback of an audio file). The issue with convolution in A/V is the time lag introduced by the audio processing. I've gone down the path of using JRiver for both audio and video as it compensates for the lag introduced by audio processing but a big part of your plan is up-sampling to DSD256 so this doesn't make sense (and, honestly, it's kind of frustrating to be constrained to a single source.)

 

Compute filters with a low amount of delay. Acourate (and others) have a tutorial on this. Yes, your filters won't have quite the same resolution, but if you work at it you can get fine results.

 

If you get creative, you can switch between short and long filters depending on what you are playing (y)
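A common low-delay trick such tutorials describe is converting a linear-phase correction filter to minimum phase: the magnitude response is kept, but the energy (and hence the latency) moves to the front of the impulse response. Below is a rough numpy-only sketch of the homomorphic (cepstral) method, purely illustrative; scipy.signal.minimum_phase does this properly:

```python
import numpy as np

def to_minimum_phase(h, nfft=None):
    """Homomorphic conversion: (approximately) same magnitude response,
    energy pushed to the front of the impulse response for low latency."""
    n = len(h)
    nfft = nfft or 8 * n
    mag = np.maximum(np.abs(np.fft.fft(h, nfft)), 1e-12)
    cep = np.fft.ifft(np.log(mag)).real        # real cepstrum of log-magnitude
    folded = np.zeros(nfft)                    # fold: keep c[0], double the
    folded[0] = cep[0]                         # positive quefrencies
    folded[1:nfft // 2] = 2 * cep[1:nfft // 2]
    folded[nfft // 2] = cep[nfft // 2]
    return np.fft.ifft(np.exp(np.fft.fft(folded))).real[:n]

# Linear-phase lowpass: its peak (and delay) sits at the centre tap.
taps = 255
n = np.arange(taps) - (taps - 1) / 2
h_lin = np.sinc(0.2 * n) * 0.2 * np.hamming(taps)
h_min = to_minimum_phase(h_lin)
print(np.argmax(np.abs(h_lin)), np.argmax(np.abs(h_min)))  # centre tap vs near 0
```

The trade-off: minimum-phase filters no longer have linear phase, which is why the "short filter for video, long filter for music" switching idea above is attractive.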

  • Author

Layperson explanation works for me, Dave! To be honest all this talk about CPU taps and the mathematics behind DSD convolution is beyond me. 

 

I sit in the middle of the objective/subjective camp. Where possible, I will take measurements. Where I can't take measurements, I will listen and decide for myself if a difference exists. Usually I can tell if a difference is too small to bother. 

 

Good point about the processing overhead introduced by convolution which can potentially cause a problem with audio/video synchronization. I didn't think of that. Are there any receivers that can delay the video signal to match the audio? Does such a thing exist? 

 

I have been corresponding with another SNA'er about computer audio. I have to admit that I am a complete novice when it comes to computer audio, but I have already noticed (on another forum) that there seem to be two camps. One camp says: use as little CPU as you can get away with. That way you deal with less heat, less EMI, and so on. The other camp says: you want as much computing overhead as possible. 

 

Given that I have traditionally been an "overkill" kind of guy, and believe that the key to good sound is to have plenty in reserve, I SHOULD naturally fall into the "plenty of computer overhead" camp. But digital audio is a different kettle of fish. It is not the same as having a powerful amplifier that will never, ever clip powering sensitive speakers. I was wondering what SNA'ers thought about this. 

 

I also have another question - is it better to have plenty of RAM overhead, or is it better to use boutique RAM (like Paul Pang), even if these are not available in the capacities you get from "normal" RAM? Please don't ask me to post jitter measurements. I have no way to obtain such measurements, so I will have to go by ear. But believe me, if I was able to take such measurements, I would do so. 

Edited by Keith_W

Layperson explanation works for me, Dave! To be honest all this talk about CPU taps and the mathematics behind DSD convolution is beyond me. 

 

I sit in the middle of the objective/subjective camp. Where possible, I will take measurements. Where I can't take measurements, I will listen and decide for myself if a difference exists. Usually I can tell if a difference is too small to bother. 

 

Good point about the processing overhead introduced by convolution which can potentially cause a problem with audio/video synchronization. I didn't think of that. Are there any receivers that can delay the video signal to match the audio? Does such a thing exist? 

 

I have been corresponding with another SNA'er about computer audio. I have to admit that I am a complete novice when it comes to computer audio, but I have already noticed (on another forum) that there seem to be two camps. One camp says: use as little CPU as you can get away with. That way you deal with less heat, less EMI, and so on. The other camp says: you want as much computing overhead as possible. 

 

Given that I have traditionally been an "overkill" kind of guy, and believe that the key to good sound is to have plenty in reserve, I SHOULD naturally fall into the "plenty of computer overhead" camp. But digital audio is a different kettle of fish. It is not the same as having a powerful amplifier that will never, ever clip powering sensitive speakers. I was wondering what SNA'ers thought about this. 

 

I also have another question - is it better to have plenty of RAM overhead, or is it better to use boutique RAM (like Paul Pang), even if these are not available in the capacities you get from "normal" RAM? Please don't ask me to post jitter measurements. I have no way to obtain such measurements, so I will have to go by ear. But believe me, if I was able to take such measurements, I would do so. 

 

Your biggest problem with this kind of system is the operating system getting in the way, but there is not much you can do about it since all of the software requires it. The jitter performance will be determined by the DAC and the clock driving it, which should be very low for a Sabre DAC ;)

 

cheers

Yes, it was a rather weak/layperson explanation from me, sorry.

 

There is some very detailed information on this in the thread(s), but you do have to be patient as it's buried within heaps of other discussion.

 

 

When the player does the SD modulation (i.e. renders something as 1 bit) ... AFAIUI, convolving with another impulse at this stage (i.e. your filter) is only 'one extra small step'.

 

An 8-threaded processor running at 4 GHz has ~3000 clock cycles available per sample of DSD256 (of course, that's not the only thing it needs to do with its budget, but you get the point). Seems more than plausible to me. I've never actually heard anyone question it from a 'performance' plausibility standpoint.

 

Others are even running this modulator at 512x or 1024x 1-bit rates (i.e. up to ~50 MHz sample rates).

 

There's a limit to the filter tap length. Remember that you are talking about a CISC processor with variable clock cycles per instruction, so we can't assume the single-cycle-per-instruction performance of a RISC design. Cache hits, OS overheads and latency also matter in determining the processing speed. For example, if you wanted the ultimate filter with a 1-second impulse response, that is 11 million single-bit MACs per DSD256 sample interval, or (11x10^6)^2, roughly 10^14 MACs per second! We can cheat a bit, because single-bit multiplies can be done using a logical AND, so on a 64-bit processor we can do 64 single-bit multiplies in one instruction. The accumulate can be done using another instruction such as POPCNT. So now we are down to 10^14/64 MACs, which is still way more than even the best Intel processor can handle. So there is a limit to the filter tap length, and we are only talking about one channel!
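The AND + POPCNT trick is simple to demonstrate in miniature (Python standing in for the assembly; the function names are mine, for illustration only):

```python
# Single-bit multiply = logical AND; accumulate = population count.
# One 64-bit word thus carries 64 one-bit MACs in roughly two instructions.
def onebit_mac(sig, coef):
    """Count positions where both the signal word and coefficient word are 1."""
    return bin(sig & coef).count("1")            # AND, then POPCNT

# DSD bits really encode +1/-1, so the true dot product uses XNOR instead:
def onebit_mac_pm1(sig, coef, width=64):
    """Dot product of two {+1,-1} vectors packed as bits (1 -> +1, 0 -> -1)."""
    agree = bin(~(sig ^ coef) & ((1 << width) - 1)).count("1")
    return 2 * agree - width                     # agreements minus disagreements

print(onebit_mac(0b1011, 0b1101))          # 1011 AND 1101 = 1001 -> 2
print(onebit_mac_pm1(0b1011, 0b1101, 4))   # (+1,-1,+1,+1).(+1,+1,-1,+1) = 0
```

Even with this 64x speed-up, the arithmetic in the post above shows a 1-second FIR at 11.2 MHz is out of reach per channel, which is the point being made.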

 

cheers

I also have another question - is it better to have plenty of RAM overhead, or is it better to use boutique RAM (like Paul Pang), even if these are not available in the capacities you get from "normal" RAM? Please don't ask me to post jitter measurements. I have no way to obtain such measurements, so I will have to go by ear. But believe me, if I was able to take such measurements, I would do so. 

 

Measuring the jitter performance of a digital converter is a non-trivial exercise. Serious gear and knowledge are required, and interpreting the results is also not straightforward at all.

 

Measuring the "result" of jitter, e.g. what it looks like in the analogue output of a DAC, isn't so hard, but it isn't very specific: you don't know where in the system it happened (cause / effect).

 

 

Listening tests are a huge murky element. You don't have to carefully document them for long to see they are very unreliable (quite counter-intuitive, as it is human nature to trust our experiences and expectations). You can move your head an inch or three and get significant differences in the pressure waves.

 

 

So there is a limit to the filter tap length, and we are only talking about one channel!

 

Indeed. I don't believe I've heard of anyone running very sharp filters (i.e. steep high/low/bandpass filters, like you might in a crossover) at these extreme sample rates. I need to process 16 channels with sharp-ish filters in my system :-S

Is there, Keith, a mechanism in HQPlayer to stream audio received from the Playback Designs transport via the RME card? Or is the plan to use Acourate Convolver for this source?

  • Author

Zydeco, after I get the NADAC the Playback Designs will be surplus to requirements. It may be possible to use it as a transport to send input to the computer via the RME card, I just need to get the correct cable made up. 

  • Author

Still waiting for all my new toys to arrive:

1. Rythmik Audio based subwoofers, designed by Paul Spencer. More than a year in the making, and should be ready soon!
2. Merging+ NADAC
3. Uptone Audio Linear PSU - mine is on reserve, waiting for enough cash to do it.

In the meantime, I had a mini-GTG today and compared the system in various configurations. These were with new measurements, and therefore newly generated convolution filters. The previously noted faults of the system - computer noise transmitted into the speakers, and intermittent pops - are still audible. So here are the results, ranked from worst sounding to best. 

 

5. PC (playing FLAC via SSD) with NO convolution --> system 

All agreed that this sounded the worst. Less coherent than having convolution. 

 

4. PC (playing FLAC via SSD) + convolution --> system

Before I started, I thought that this would be the ideal configuration. It does sound good, but it is noticeably denser sounding with poorer instrument separation. 

 

3. PC (playing CD via optical drive) + convolution --> system

Surprisingly, this beat the pants off the FLAC. Noticeably lighter sounding with better dynamics. Perhaps I didn't rip the FLAC files properly. 

 

2. Playback Designs MPS-5 --> system 

Noticeably more body, better detail, better instrument separation. All the aforementioned computer noise is now gone. *gulp* ... this was the setup I was running before I started this whole computer audio thing! Have I embarked on a downgrade? However ... the computer is not yet optimized. People say that the linear PSU makes a dramatic difference. We'll see. 

 

1. PC (playing CD via optical drive) + convolution + digital volume control --> Playback Designs MPS-5 + NO PREAMP --> system

Removing the preamp, in my opinion, resulted in better sound. The preamp tended to round off the tops of notes and make the sound more "warm". Without the preamp, notes had much better attack and more bite. 

 

Given the results of this listening session, I have come to the following conclusions: 

 

1. Removing the preamp from the signal chain is probably the correct move. 

2. The PC's optical drive sounds better than FLAC. I am not sure why. I will re-attempt the experiment with a fresh FLAC rip and see if it sounds worse because of a faulty rip. 

3. The SACD player still sounds massively superior to the computer, even in a configuration where they should theoretically be equal if you believe in "bits is bits" (i.e. onboard transport on the MPS-5 vs. optical drive on the PC). 

computer noise transmitted into the speakers, and intermittent pops

 

This is not normal (for even a $20 DAC)

SACD player sounds massively superior to the computer setup, and no frigging around required. That's what I love about my MSB setup. [emoji12]

Edited by Zammo

SACD player sounds massively superior to the computer setup and no frigging around required. That's what I love about my MSB setup. [emoji12]

 

This is also my experience.

Historically and for the most part this is true.

There are some plug and play type options for computer audio starting to appear. Much less fiddly.

I would think it's not like Keith hasn't tried SACDPs?

I would think it's not like Keith hasn't tried SACDPs?

 

 

It's clear he has, davidro, as he's had the Playback Designs MPS-5 for some time. I was being somewhat tongue in cheek. I think it's noble Keith is trying his hand at a computer-based DSD system allowing DSP, but it appears to be a lot of work, and it also appears getting it sounding as good as a top-notch stand-alone player or transport/DAC combo ain't necessarily easy. I'm all for simplicity, even if I'm not eking out the last 0.5% from my system, but that could just be because I'm lazy.

Edited by Zammo

  • Author

Once this whole thing is set up, it WILL be simple. The analog signal, once it is generated, will have far fewer components in the signal path than most systems - no preamp, and no passive crossover in the speaker. The preamp and crossover functions have been moved over to digital, where I have the additional advantage of being able to apply more DSP if needed. Oh yes, there is more ... I will be able to play DSD256 files on this system. Also: internet radio, Spotify, Tidal, and so on. And I have finally come across a way to play my SACD collection (will require the purchase of more hardware, sadly!). 

 

Most computer setups I have seen use the computer as just another component. The PC is a source, it goes to a DAC, and from the DAC the rest of the system is as conventional as they come: pre/power amps, passive crossover, speakers. This system is completely different: each individual driver will have its own channel of power amplification, its own DAC, and each driver will be computer controlled and digitally corrected. I have to admit that the idea is not mine, but the money I am spending most definitely is :P 

 

The headache right now is to get the digital side of it sounding right. At the moment, it definitely does not sound right. 

 

For those who are interested in how I am implementing the NADAC, I have also ordered an RME Fireface UC: http://www.rme-audio.de/en/products/fireface_uc.php

 

I bought it to solve this problem: with the NADAC in my system, how do I take measurements? I have no way to get a microphone input to the computer (given that the NADAC lacks a mic input and ADC). Now, I could use my Focusrite 2i2 and plug it into the PC and use that as a mic preamp/ADC, but there are two issues with this:

 

1. The Focusrite is not going to be clock sync'ed to the NADAC. Apparently this is a huge problem. 

2. Acourate is not able to use one ASIO channel for input and another for output. There are ways around this (e.g. ASIO4ALL), but that is kludgy. 

 

Uli's suggestion was to remove the NADAC from the system and plug in the RME. Use the RME to take measurements and generate convolution filters, and then remove the RME and connect the NADAC for listening. This solution is not cheap - the RME Fireface UC costs $1500. To put this in perspective, it cost me more than my very first system in total. It costs more than what my entire HT setup (TV, receiver, DVD player, and speakers!) would currently fetch in the secondhand market. I had to swallow hard before I made the decision to buy an interface that I will use only for measurements, and then put away when I am not using it! 

 

I have to admit that the idea is not mine, but the money i am spending most definitely is :P



 

Just curious as to whose idea it is?

 

cheers

Edited by Tranquility Bass

  • Author

First heard of this idea when I met a guy named Jayden a few years ago. Back then there was no such thing as a Merging NADAC, the software was more primitive, and the only way to implement multiple DAC outputs was via a soundcard like my RME HDSPe AES. More recently, the idea was floated by a guy named Blizzard in another forum. I am also talking to a guy named "Dallasjustice" (also in another forum) who has actually implemented a multi-channel crossover with a NADAC. 

 

As I have said earlier, I have always believed in DSP, but to my ears, current DSP solutions do not have enough processing power. I have read elsewhere that a current-model GPU has as much processing power as 24,000 DEQXs. So why not leverage modern computing power (since it is so cheap) and do the calculations upsampled, in 64-bit floating point? 
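For what it's worth, the standard way software exploits that cheap computing power for huge tap counts is frequency-domain (FFT) convolution in 64-bit floats, which costs O(N log N) rather than O(N x Ntaps). A generic numpy illustration, not HQPlayer's or Acourate's actual implementation:

```python
import numpy as np

def fft_convolve(x, h):
    """Convolve signal x with FIR h via the FFT, in float64 throughout."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()           # next power of two >= n
    y = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(h, nfft), nfft)
    return y[:n]

x = np.random.default_rng(0).standard_normal(4096)
h = np.ones(64) / 64                           # toy moving-average "filter"
direct = np.convolve(x, h)                     # O(N * Ntaps) reference
fast = fft_convolve(x, h)
print(np.max(np.abs(direct - fast)) < 1e-10)   # same result, far fewer ops
```

Real-time convolvers use a block-wise (overlap-add or partitioned) variant of this so the latency stays bounded, but the core idea is the same.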

First heard of this idea when I met a guy named Jayden a few years ago. Back then there was no such thing as a Merging NADAC, the software was more primitive, and the only way to implement multiple DAC outputs was via a soundcard like my RME HDSPe AES. More recently, the idea was floated by a guy named Blizzard in another forum. I am also talking to a guy named "Dallasjustice" (also in another forum) who has actually implemented a multi-channel crossover with a NADAC. 

 

As I have said earlier, I have always believed in DSP, but to my ears, current DSP solutions do not have enough processing power. I have read elsewhere that a current-model GPU has as much processing power as 24,000 DEQXs. So why not leverage modern computing power (since it is so cheap) and do the calculations upsampled, in 64-bit floating point? 

 

Hi Keith

 

In case I have missed something, what are your ultimate objectives with the DSP?

 

Sometimes too much DSP processing can be a curse, according to this white paper from Grimm Audio: http://www.grimmaudio.com/site/assets/files/1088/speakers.pdf

 

cheers

As I have said earlier, I have always believed in DSP, but to my ears, current DSP solutions do not have enough processing power. I have read elsewhere that a current-model GPU has as much processing power as 24,000 DEQXs. So why not leverage modern computing power (since it is so cheap) and do the calculations upsampled, in 64-bit floating point?

Maybe because, with all the knowledge we've built up regarding human biology, we can presume there's zero audible improvement. Hi-fi, like any software, only requires a certain amount of processing power. That said, there are plenty of DSP algorithms we can apply, but as far as needing a modern GPU goes, it seems like overkill gone mad.
