ChatGPT is absolutely certain that interference and noise do affect a digital transfer.
It seems to have trawled audiophile sites for its information. This is hardly surprising, as audiophile sites represent by far the largest quantity of information on this and related subjects, because to engineers and scientists it is largely an irrelevant question. So the answers it has provided are based on audiophile marketing and on all the audiophiles suckered by it. The answers are therefore either just false or effectively wrong due to a very old audiophile marketing technique, the “lie of omission”.
Here are some examples:
Digital signals are more immune to noise interference than analog signals, but they are not completely immune. Noise can still impact digital signals, especially when it is introduced into the signal path before the signal is converted back to an analog audio signal.
The first statement is true but is again a “lie of omission”. Digital transfer is completely immune to low/typical consumer levels of noise/interference, but it is not immune to particularly high levels, which can cause bit errors. However, such bit errors are detected and, depending on the transfer protocol, the affected data is usually either resent, error corrected or interpolated. So, noise/interference can impact the digital signal but does not affect the digital transfer. If it did, the internet would never work!
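Just to illustrate that principle (a trivial sketch, not the actual mechanism of USB audio, TCP or any specific audio protocol): the receiver can always tell whether a chunk of data arrived intact, because the data carries a checksum, and a failed check means “detected error, resend”. A hypothetical Python example:

```python
import random
import zlib

def make_packet(payload: bytes) -> tuple[bytes, int]:
    # Transmitter appends a CRC32 checksum to each chunk of data
    return payload, zlib.crc32(payload)

def flip_one_bit(payload: bytes) -> bytes:
    # Simulate extreme interference: corrupt a single bit in transit
    data = bytearray(payload)
    data[0] ^= 0x01
    return bytes(data)

def check(payload: bytes, checksum: int) -> bool:
    # Receiver recomputes the CRC; a mismatch means "error detected, resend this packet"
    return zlib.crc32(payload) == checksum

audio_chunk = bytes(random.getrandbits(8) for _ in range(1024))
payload, checksum = make_packet(audio_chunk)

print(check(payload, checksum))                # True  -> bit-perfect delivery
print(check(flip_one_bit(payload), checksum))  # False -> detected, retransmitted
```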
While it's true that digital signals are made up of 1s and 0s, these signals still rely on precise timing and voltage levels to accurately represent the original analog waveform. Any interference that affects these timing or voltage levels can result in audible distortion in the resulting analog audio signal.
This is just false! “Any interference that affects these timing or voltage levels” does NOT result in audible distortion. It would with an analogue signal but not with a digital signal; that’s why digital was invented! Each of the protocols for transferring digital audio specifies a RANGE of timing and voltage levels. For example, a voltage level could be specified as say 5 volts, which would represent a digital “1”, while 0 volts would represent a digital “0”. A voltage of say 3 or 4v would still register as a “1”, while a voltage of 0-2v would still register as a “0”. So even pretty huge voltage errors would have absolutely no effect whatsoever, let alone cause an audible distortion! It’s a similar sort of issue with timing/jitter: a RANGE of timings is specified for each protocol (the rise time of the “eye pattern”), and if a receiver cannot resolve the timing variation specified by the protocol, it is not compliant with the protocol it’s claiming to support and is therefore faulty. In practice, the timing of the digital signal is not used directly by the DAC; it is either not used at all (in the case of asynchronous transfers) or is only used to modulate the timing generated by the DAC’s own internal clock (synchronous transfer). In either case, jitter in the digital signal is irrelevant, unless it’s so extreme that it’s significantly outside the range specified by the protocol, in which case errors would result.
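To make the threshold point concrete, here is a toy sketch using the illustrative 5V/0V levels from above (the numbers are just examples, not taken from any particular interface spec). The receiver only decides “above or below the threshold”, so quite large voltage errors still decode to exactly the same bits:

```python
def decode_bit(voltage: float, threshold: float = 2.5) -> int:
    # The receiver only asks: is the level above or below the decision threshold?
    return 1 if voltage >= threshold else 0

# Nominal 5V/0V levels with fairly large noise/interference errors superimposed.
# Intended bits: 1 0 1 0 1 0 1 0
received_levels = [4.9, 0.3, 3.1, 1.8, 5.4, -0.2, 3.8, 0.9]
print([decode_bit(v) for v in received_levels])   # [1, 0, 1, 0, 1, 0, 1, 0]
```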
A study conducted by the Audio Engineering Society (AES) in 2003 found that even relatively small amounts of jitter can have a noticeable impact on the sound quality of digital audio signals.
Exactly how are they defining “relatively small amounts”? This is at best misrepresenting the findings and at worst an outright lie! The lowest amount of jitter detected was about 3 nano-secs, with a test signal. With music it was 27ns, although that was one test subject with one specifically chosen (and very unusual) recording. In general, the threshold for jitter audibility is around 200-500ns. Is say 200 billionths of a second “relatively small”? It’s relatively tiny compared to say the blink of an eye (around 200-500 milli-secs) but, relative to the subject in question, jitter, it’s the exact opposite: “relatively massive”! An AES paper in 1997 (Benjamin and Gannon) measured the jitter of 50 cheap/common consumer CD and DVD drives and digital TV tuners from the mid-1990s. On average they produced roughly 140 pico-secs of jitter, over a thousand times less than the lower 200ns typical threshold and roughly 200 times less than even the lowest jitter ever detected with music. So, even “relatively average amounts of jitter” from ~30 years ago would have absolutely no noticeable impact, let alone “relatively small amounts” today. The bot is just quoting audiophile BS!
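If you want to check the arithmetic yourself (using the figures quoted above):

```python
threshold_ns = 200         # lower bound of the typical 200-500 ns audibility threshold
lowest_music_ns = 27       # lowest jitter ever detected with music (one subject, one unusual recording)
consumer_1990s_ns = 0.140  # ~140 pico-secs average for mid-1990s consumer drives/tuners

print(threshold_ns / consumer_1990s_ns)     # ~1429x below the typical audibility threshold
print(lowest_music_ns / consumer_1990s_ns)  # ~193x below even the lowest jitter detected with music
```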
The rest of what you quoted is much the same, i.e. either just false or true but a “lie of omission”. The EBU demonstration of micro-secs of jitter causing pronounced distortion is true but again omits the amounts of jitter actually produced by real-world equipment. Same with THD and noise (SNR): reducing them can indeed “improve sound quality”, “reduce listener fatigue”, etc., but the quote omits the fact that this is ONLY true when reducing audible levels of THD and noise. So we have the “lie of omission” again, because to be audible the level of THD has to be greater than about 1%, and what DACs have higher than 1% THD? The Apple dongle from 7 years ago, for example, cost about $8 and had a THD of 0.0011%!
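Again, the arithmetic with those figures (the ~1% audibility figure is the rough threshold cited above):

```python
audible_thd_percent = 1.0        # roughly the level THD must exceed to be audible
apple_dongle_thd_percent = 0.0011

print(audible_thd_percent / apple_dongle_thd_percent)  # ~909x below the audibility threshold
```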
G