S-Audio.Systems
The vast majority of recordings are produced by sound engineers in such a way that the sound sources are correctly positioned in space when the recording is played over speakers in a prepared (low-reflection) listening room. When music is played over speakers, the radiation from the left speaker enters the left ear and, passing around the listener's head (which introduces a delay and a change in amplitude/phase response), also enters the right ear; the same happens with the radiation from the right speaker, as shown in Figure 1a. With headphones, however, the radiation of the left earphone is fed only to the left ear and the radiation of the right earphone is fed only to the right ear (see Fig. 1b). Accordingly, because the sound sources are localized incorrectly (the "in-head localization" phenomenon, see Fig. 3), headphones are not suitable for high-quality reproduction of music or for mixing phonograms in the studio.

To obtain correct localization of sound sources when listening to music over headphones, an electrical equivalent of the physical processes that occur when listening over speakers is needed: a binaural audio processor, also sometimes called cross-feed. All headphone amplifiers with cross-feed (RME ADI-2, SPL Phonitor, Meier Audio, Grace m902/m903/m920, HeadRoom Ultra Desktop Amp, Headstage Arrow, etc.) are based on passive 2nd- to 4th-order filters whose frequency/phase response is far from optimal (at 10 kHz the phase lag does not exceed 70 degrees, whereas more than 1600 degrees is required, and the frequency response is not optimal either) [4-6, 15-18, 24-31]. This explains the ambiguous subjective results obtained with such cross-feeds: the already poor localization of the sound sources becomes even stranger. The existing software implementations [21][22] fully replicate their hardware counterparts in terms of amplitude/phase response and are therefore also of no interest for high-quality music listening over headphones.

Therefore, after many years of development, a binaural audio processor has been created that matches the real physical processes with good accuracy, namely the emulation of speakers located in open space at an angle of +-45 degrees in front of the listener (a minimal illustrative sketch of the general cross-feed idea is given at the end of this section). Figure 2 shows the delay of the signal mixed by the binaural processor into the L (R) channel, corresponding to speakers placed at an angle of +-45 degrees. The use of such a complex binaural audio processor made it possible to obtain the sound of near-field speakers with good localization of the sound sources when listening with headphones (see Fig. 3).

The block diagram of the "Focus" headphone amplifier is shown in Figure 4. The device has the two most requested (and most commonly used) digital inputs: USB and S/PDIF. The USB input is implemented on the CM6631A chip in asynchronous mode with our own firmware written in C. The clock generators for the CM6631A are low-jitter generators from Kyocera; the currently unused generator of the two frequency grids (44.1/48 kHz) is turned off. The S/PDIF input is implemented on the WM8804 chip with a FIFO, a low PLL cutoff frequency and low jitter. Unlike standard designs, where the DSP is necessarily used together with asynchronous sample rate conversion (ASRC), the DSP in the "Focus" HA is applied in a much more refined way, so there is no need for ASRC, which eliminates the inevitable deterioration of sound quality caused by its use.
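Below is a minimal cross-feed sketch in C (the same language as the USB firmware), illustrating only the general principle described above: each ear receives its own channel directly plus a delayed, low-pass-filtered ("head-shadowed") copy of the opposite channel. The ~270 us interaural delay, the ~700 Hz one-pole shadow filter and the mixing gain are assumed example values; they do not represent the actual response of the "Focus" binaural processor, which is far more elaborate.

#include <string.h>

#define ITD_SAMPLES 13                /* ~270 us interaural delay at 48 kHz (assumed) */

typedef struct {
    float dline_l[ITD_SAMPLES];       /* delay line fed by the left channel  */
    float dline_r[ITD_SAMPLES];       /* delay line fed by the right channel */
    int   pos;
    float lp_l, lp_r;                 /* one-pole low-pass ("head shadow") states */
} crossfeed_t;

static void crossfeed_init(crossfeed_t *cf)
{
    memset(cf, 0, sizeof *cf);
}

/* Process one stereo sample pair in place. */
static void crossfeed_process(crossfeed_t *cf, float *l, float *r)
{
    const float a      = 0.088f;      /* ~700 Hz one-pole coefficient at 48 kHz (assumed) */
    const float x_gain = 0.5f;        /* level of the cross-fed (shadowed) path (assumed) */

    /* oldest samples in the delay lines = opposite-channel signal delayed by the ITD */
    float dl = cf->dline_l[cf->pos];
    float dr = cf->dline_r[cf->pos];

    /* head shadow: low-pass filter the contralateral signal */
    cf->lp_l += a * (dl - cf->lp_l);
    cf->lp_r += a * (dr - cf->lp_r);

    /* write the current samples and advance the circular index */
    cf->dline_l[cf->pos] = *l;
    cf->dline_r[cf->pos] = *r;
    cf->pos = (cf->pos + 1) % ITD_SAMPLES;

    /* each ear = its own speaker (direct) + delayed, shadowed opposite speaker */
    float out_l = *l + x_gain * cf->lp_r;   /* right speaker heard by the left ear */
    float out_r = *r + x_gain * cf->lp_l;   /* left speaker heard by the right ear */
    *l = out_l;
    *r = out_r;
}

A real emulation of speakers at +-45 degrees additionally needs the frequency-dependent delay and head-related response shown in Figure 2; the sketch only shows where the contralateral path enters the mix.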
Introducing the apodizing filter into the playback chain does not remove any audio information; instead, the filter removes the pre-ringing and post-ringing of all half-band digital filters, both those used when the music was recorded (!) and those used inside the DAC chip, while preserving the integrity of the original audio. All that remains is the post-ringing of this filter (Figure 6a (44.1/48 kHz), 6b (88.2/96 kHz), 6c (176.4/192 kHz)), which is effectively masked by the useful signal. Mathematical calculations with 24-bit input data are performed with 28/56-bit coefficients using a 56-bit accumulator.
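The stated arithmetic (24-bit input data, 28/56-bit coefficients, 56-bit accumulation) can be illustrated with a fixed-point FIR sketch in C. The tap count and Q-formats below are assumptions made only for illustration; what matches the description is the principle of a wide multiply-accumulate with no intermediate rounding until the final output.

#include <stdint.h>

#define NTAPS 64                     /* hypothetical tap count, for illustration only */

/* Fixed-point FIR: 24-bit samples (Q1.23) times 28-bit coefficients (Q1.27),
 * accumulated in a 64-bit register so nothing is rounded inside the loop.    */
static int32_t fir_wide_acc(const int32_t coef[NTAPS], const int32_t x[NTAPS])
{
    int64_t acc = 0;

    for (int i = 0; i < NTAPS; i++)
        acc += (int64_t)coef[i] * x[i];          /* 28 x 24 -> ~51-bit products   */

    acc += (int64_t)1 << 26;                     /* round to nearest              */
    return (int32_t)(acc >> 27);                 /* back to Q1.23 output;
                                                    saturation omitted for brevity */
}

Because the products are summed at full width, the only quantization takes place once at the output, which is exactly what the wide accumulator is for.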
A large-amplitude subtractive dither is also implemented in the DSP, which makes it possible to effectively randomize the modulator without increasing the resulting noise at the output of the device. Volume adjustment is carried out in the digital domain; for this type of volume control to cause no loss in quality, several rules must be observed (a conceptual sketch of digitally controlled volume with subtractive dither is given further below).

In a multi-bit SDM, each SDM-DEM-DAC combination has its own unique reproduction coloration due to the strong dependence of its distortion on the signal. One of the reasons is that the DEM non-linearity averaging time is inversely proportional to the frequency of the sigma-delta modulator and directly proportional to the bit depth of the DAC; that is, the higher the DAC resolution and the lower the operating frequency of the sigma-delta modulator, the more time is needed to average the nonlinearity. The DEM, which is in fact a 1st-order sigma-delta modulator, is subject to the same problems, namely, it produces strong limit cycles both in the Fmod/2 (Nyquist) region and directly within the audio frequency range. Therefore, a 1-bit SDM, whose distortion does not depend on the type of signal, was chosen for this design in order to obtain maximally uncolored sound.

The DAC chip actually used is the UDA1334BTS in differential mode, which has on board a 5th-order 1-bit sigma-delta modulator with a 75-tap semidigital FIR (AFIR). The differential mode of the UDA1334BTS makes it possible to compensate the signal currents in the reference voltage source and the power supply, which in turn relaxes the requirements on these circuits. The signal-to-noise ratio also increases, even harmonics are compensated, and it becomes possible to use large-amplitude subtractive dither. The excellent filtering capabilities of the semidigital FIR (AFIR) made it possible to reduce the influence of jitter to almost zero and to keep the analog part of the device minimalist. The oscillogram taken directly from the output of the UDA1334BTS (pin 14 or 16) of a 20 kHz sine wave at the 44.1 kHz sample rate in a 400 MHz bandwidth is shown below.

The power amplifier operates in class AB, which allows the power of the PSU to be reduced by a factor of 3-5 and accordingly makes it "quieter", with less penetration of interference and common-mode noise from the mains. However, a class AB output stage works well only in monolithic circuits, so the LMH6321 high-speed buffer, capable of driving a complex load, was used (many forget that a buffer can oscillate into a complex load no matter what the negative-feedback phase margin of the whole device is). The PCB layout of the output stage is implemented with the shortest possible path for the supply currents and minimum inductance of the power supply circuits.

Any device is just a modulated power supply, so PSU quality is very important. The power transformers are low-capacitance split-bobbin types, which minimizes interference from the mains and minimizes ground currents between devices. The regulators used in the digital and analog parts of the device are "Nazar's regulators", unsurpassed in terms of parameters and positive influence on sound quality. In this design, the following types of power amplifier protection are implemented:
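As mentioned above, volume is adjusted in the digital domain and a large-amplitude subtractive dither randomizes the modulator; the conceptual C sketch below illustrates that idea only. The dither generator, its amplitude, the Q-formats and the exact injection and subtraction points are assumptions, not the actual "Focus" DSP code.

#include <stdint.h>

static uint32_t prng_state = 1u;

/* Simple LCG standing in for the real dither generator (assumption). */
static int64_t dither(int shift)                 /* shift assumed < 30 */
{
    prng_state = prng_state * 1664525u + 1013904223u;
    /* uniform over +/- 2 LSB of the coarse grid ("large amplitude") */
    return (int64_t)(prng_state % (4u << shift)) - (2 << shift);
}

/* Digital volume control followed by requantization with subtractive dither.
 * x         - input sample (e.g. 24-bit)
 * gain_q23  - volume gain in Q1.23 fixed point
 * shift     - number of bits removed when requantizing for the modulator   */
static int64_t volume_with_subtractive_dither(int32_t x, int32_t gain_q23, int shift)
{
    int64_t y = (int64_t)x * gain_q23;           /* full-resolution product, nothing rounded yet */

    int64_t d = dither(shift);                   /* known pseudo-random sequence                 */
    int64_t q = ((y + d) >> shift) << shift;     /* coarse quantization seen by the modulator    */

    /* Subtracting the same d downstream cancels the added dither, leaving a
     * small residual error that does not depend on the audio signal.         */
    return q - d;
}

Because the dither sequence is known and later subtracted, the quantization error is decorrelated from the signal without raising the noise at the device output, which is the point of the subtractive (as opposed to additive) scheme described above.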
The 1 kHz full-scale signal at the output of the device in the band from 0 to 40 MHz is shown below; as can be seen, no interference from the digital part of the device could be detected.
Brief technical data:
* - excluding the resistance of the output connector and the mute relay.
On the front panel of the device (from left to right):
On the back panel of the device (from left to right):
USB drivers for WinXP/Vista/7/8/8.1/10
The creation of the entire audio reproducing system as a single device made it possible to completely eliminate interconnect cables and the problems associated with them.
References:
"Directly or indirectly, all questions connected with this subject must come for decision to the ear, as the organ of hearing; and from it there can be no appeal" J.W.S. Rayleigh |
© 2024 Nazar Shtybel |