S-Audio.Systems
The vast majority of recordings are mixed by sound engineers so that the sound sources are positioned correctly in space when played back through speakers in a dedicated listening room (a room with a low level of reflections). When music is played through loudspeakers, the signal from the left speaker reaches the left ear and, passing around the listener's head, also reaches the right ear (with a delay and a change in amplitude and phase response); the same happens with the signal from the right speaker, as shown in Figure 1a. Loudspeaker listening suffers from two main problems: the large influence of the listening room and the lack of directional sound radiation. With headphones, by contrast, the left earphone radiates only into the left ear and the right earphone only into the right ear (see Fig. 1b). So although headphones are free of the problems inherent to loudspeakers, the incorrect localization of sound sources (the "in-head localization" phenomenon, see Fig. 3) makes them unsuitable for high-quality music reproduction or for mixing phonograms in the studio.

To obtain correct localization of sound sources when listening through headphones, an electrical equivalent of the physical processes that occur during loudspeaker listening is required: a binaural audio processor, also known as cross-feed. Most headphone amplifiers with a cross-feed feature (RME ADI-2, SPL Phonitor, Meier Audio, Grace m902/m903/m920, HeadRoom Ultra Desktop Amp, Headstage Arrow, etc.) are based on passive 2nd- to 4th-order filters, whose amplitude and phase responses are far from optimal (at 10 kHz the phase shift is less than -70 degrees, while more than -1,600 degrees is required, and the frequency response is not optimal either) [4-6, 15-18, 24-31]. This explains the ambiguous subjective results obtained with such cross-feed processors: the already poor localization of sound sources becomes even stranger. The existing software implementations [21][22] exactly repeat their hardware counterparts in terms of amplitude and phase response and are therefore also not recommended for high-quality headphone listening.

Thus, after many years of development, a binaural audio processor was created that follows the real physical processes with good accuracy, namely, it emulates speakers located in open space at angles of ±45 degrees in front of the listener. Figure 2 shows the delay of the signal mixed by the binaural processor into the L (R) channel, corresponding to speakers placed at ±45 degrees. Such a binaural audio processor emulates the sound of near-field speakers in open space, with good azimuthal localization of sound sources when listening through headphones (see Fig. 3).

The block diagram of the "Focus" headphone amplifier is shown in Figure 4. The device has a USB input based on the CM6631A chip running in asynchronous mode with custom firmware written in C. A low-jitter Epson oscillator clocks the CM6631A. Unlike standard designs, where a DSP is necessarily paired with an asynchronous sample rate converter (ASRC), here the DSP is used in a non-standard way that requires no ASRC, which eliminates the inevitable deterioration of sound quality it causes. The upsampler converts 44.1 kHz audio to a high-resolution format using a unique custom digital filter:
The minor aliases are additionally delayed by a few milliseconds, which allows them to be masked by the Haas effect. The result is a filter with the maximum possible bandwidth and no alias-related problems. At a 44.1 kHz sampling rate, different digital filters can radically change the sound of a device: the sampling frequency is chosen too tight, since the ear needs a reproduction band slightly wider than 20 kHz, which forces a hard trade-off between computing resources, passband width and aliases. Sometimes marketing factors come into play instead, and the digital filter is intentionally made to produce a recognizable "distorted" sound unlike other brands. The digital filter used in this device (Figure 5) lets the listener rediscover recordings made at 44.1 kHz: material recorded at the classic 44.1 kHz rate now sounds as good as high-resolution recordings. When a hi-res recording is downsampled to 44.1 kHz, it is very difficult to tell which of the two is playing at any moment. For the 88.2–192 kHz sampling rates, a minimum-phase apodizing FIR digital filter [42][43] is used. Unlike the half-band digital filters typical of all DACs, this type of filter fulfills the requirements of the Nyquist–Shannon sampling theorem: it completely eliminates aliases (it is free from spectral overlap) and thereby reliably reconstructs the audio signal. Introducing the apodizing filter into the playback chain does not remove any audio information; instead, the filter removes the pre-ringing and post-ringing of all the half-band filters, both those used during the recording of the music (!) and those inside the DAC chip, while preserving the integrity of the original audio. All that remains is the post-ringing of this filter itself (Figure 6), which is effectively masked by the useful signal.
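The apodizing idea can be illustrated in a few lines. This is only a sketch, not the device's actual filter: the sample rate, tap count and 21 kHz cutoff below are illustrative assumptions. A linear-phase prototype whose cutoff sits below 22.05 kHz (so the band edges and ringing of earlier half-band filters fall in its stopband) is converted to minimum phase, which removes pre-ringing entirely:

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

FS = 88200  # one of the 88.2-192 kHz rates handled by this filter type

# Linear-phase apodizing prototype: the cutoff is placed below the
# original 22.05 kHz Nyquist frequency, so the transition bands and
# ringing of upstream half-band filters land in the stopband.
proto = firwin(511, 21000, fs=FS)

# scipy's homomorphic method yields a filter whose magnitude response
# approximates the SQUARE ROOT of the input's, so the prototype is
# squared first (self-convolution) to keep the designed response.
minph = minimum_phase(np.convolve(proto, proto), method='homomorphic')

# Minimum phase packs all the energy at the start of the impulse
# response: only post-ringing remains, nothing precedes the main peak.
```

Plotting `minph` against `proto` shows the same magnitude response but an impulse response whose peak sits at the very beginning, matching the behaviour described for Figure 6.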
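The cross-feed principle described earlier (each channel fed into the opposite ear with an interaural delay, head-shadow attenuation and low-pass filtering) can be sketched as follows. The delay, attenuation and cutoff values are illustrative assumptions for a ±45 degree placement, not the device's actual coefficients, which per Figure 2 are frequency dependent:

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 44100

def crossfeed(left, right, itd_us=270.0, shadow_db=-3.0, cutoff_hz=700.0):
    """Feed each channel into the opposite ear the way a +/-45-degree
    speaker would be heard: delayed by the interaural time difference,
    attenuated and low-pass filtered by the head shadow."""
    delay = int(round(FS * itd_us / 1e6))    # ITD in samples (~12 @ 44.1 kHz)
    gain = 10.0 ** (shadow_db / 20.0)        # interaural level difference
    b, a = butter(1, cutoff_hz / (FS / 2))   # crude head-shadow low-pass

    def opposite_ear(x):
        shadowed = gain * lfilter(b, a, x)
        return np.concatenate([np.zeros(delay), shadowed])[:len(x)]

    return left + opposite_ear(right), right + opposite_ear(left)
```

A first-order filter like this is exactly the kind of crude approximation the article criticizes in passive cross-feed amplifiers; the device replaces it with a far longer response matched to the measured interaural delay curve.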
Mathematical calculations on the 24-bit input data are performed with 28/56-bit coefficients and a 56-bit accumulator. In addition, large-amplitude subtractive dither is implemented in the DSP, which makes it possible to effectively randomize the modulator without increasing the noise at the output of the device, and to implement a digital volume control without any loss of quality. The power amplifier is a class A design capable of driving complex loads. To reduce heat dissipation, an output stage with an almost rail-to-rail swing is used. The PCB layout of the output stage minimizes the supply-current loop area and the inductance of the power supply circuits. The design features the following types of power amplifier protection:
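The subtractive-dither volume control can be sketched as below. The 24-bit word width comes from the text; the uniform one-LSB dither is a standard choice for subtractive dithering and is an assumption here, as is the helper name:

```python
import numpy as np

BITS = 24
Q = 2.0 ** -(BITS - 1)  # one LSB at 24 bits, full scale = +/-1.0

def volume_subtractive_dither(x, gain, rng):
    """Digital volume control with subtractive dither: uniform dither of
    one LSB is added before requantization and subtracted again after it,
    so the rounding error becomes white noise independent of the signal,
    and nothing is added to the output noise floor."""
    d = rng.uniform(-Q / 2, Q / 2, size=x.shape)
    requantized = np.round((x * gain + d) / Q) * Q
    return requantized - d
```

With plain rounding, low-level material acquires signal-correlated distortion after a gain change; with subtractive dither the residual error stays bounded by half an LSB and decorrelated from the program, which is why the volume control can be described as lossless in quality.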
The front panel of the device features (from left to right):
On the back panel of the device (from left to right):
Any device is just a modulated power supply, so PSU (power supply unit) quality is very important. The power transformer uses a low-capacitance split-bobbin design, which minimizes interference from the mains and minimizes ground currents between devices. The regulators used in the analog part of the DAC are series-parallel ("shunt") stabilizers with a low, fixed output impedance and less than 1 μV of noise, the best on the market today in terms of both parameters and positive influence on sound quality. The device is switched on and off automatically by the suspend signal from the USB bus. The line output is specifically designed for connection to the balanced inputs of power amplifiers. Its zero output impedance gives you all the advantages of a balanced connection (high CMRR) while retaining compatibility with unbalanced inputs of audio devices. Brief technical data:
* excluding the resistance of the output connector and the mute relay. No USB drivers are required for macOS, Linux, or Android. For Windows 7/8/8.1/10, drivers (including ASIO) can be downloaded from the download page. For correct audio output under Windows, use only ASIO (configured for 24-bit/50 ms), Kernel Streaming, or WASAPI Exclusive modes.
References:
"Directly or indirectly, all questions connected with this subject must come for decision to the ear, as the organ of hearing; and from it there can be no appeal" J.W.S. Rayleigh
© 2024 Nazar Shtybel