What is sound quality, and what should you pay attention to?
Sound quality covers three aspects: the loudness of a sound, i.e. its intensity and amplitude; the pitch of a sound, i.e. its frequency, the number of vibrations per second; and the timbre of a sound, i.e. its overtone or harmonic content. Judging the sound quality of audio mainly means measuring whether these three aspects reach a certain level: whether each frequency or frequency band has adequate intensity, and whether the amplitude at each frequency point is uniform, balanced and full, i.e. whether the frequency-response curve is flat; whether the pitch is accurate and faithfully reproduces the frequency components of the original sound source, with frequency distortion and phase shift within requirements; and whether the overtones are moderate, the harmonics rich, and the timbre pleasing.
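The three attributes above map directly onto the parameters of a synthesized tone. The sketch below is a minimal illustration in Python; the function names and parameter choices are my own, not from any audio standard: amplitude controls loudness, the fundamental frequency controls pitch, and the relative strengths of the harmonics shape the timbre.

```python
import math

SAMPLE_RATE = 8000  # assumed sampling rate in Hz, for illustration only

def tone(freq_hz, amplitude, harmonics=(1.0,), seconds=0.01):
    """Synthesize a tone as a list of samples.

    amplitude -> loudness; freq_hz -> pitch;
    harmonics -> timbre (relative strength of each overtone).
    """
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Sum the fundamental (k=0) and its overtones (k>0).
        s = sum(a * math.sin(2 * math.pi * freq_hz * (k + 1) * t)
                for k, a in enumerate(harmonics))
        samples.append(amplitude * s)
    return samples

pure = tone(440, 0.5)                              # pure tone: fundamental only
rich = tone(440, 0.5, harmonics=(1.0, 0.5, 0.25))  # same pitch, richer timbre
```

Both tones have the same pitch (440 Hz) and the same nominal loudness; only the harmonic content, and therefore the timbre, differs.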
For analog audio, the more frequency components of the original sound are reproduced, and the less distortion and interference there is, the higher the fidelity and the better the sound quality. In communications, for example, sound quality is measured not only by the frequency range of the audio signal but also by indexes such as distortion and signal-to-noise ratio. For digital audio, the more completely the frequency components are reproduced and the lower the bit error rate, the better the sound quality. It is usually measured by the digital rate (or storage capacity): the higher the sampling frequency, the greater the number of quantization bits, and the more channels, the greater the storage capacity and, in general, the higher the fidelity and the better the sound quality.
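The digital-rate measure mentioned above is just sampling frequency times quantization bits times channel count. A minimal sketch (the helper names are my own; the CD-quality figures of 44.1 kHz, 16-bit, stereo are standard):

```python
def digital_rate_bps(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM digital rate in bits per second:
    sampling frequency x quantization bits x channels."""
    return sample_rate_hz * bits_per_sample * channels

def storage_bytes(sample_rate_hz, bits_per_sample, channels, seconds):
    """Storage needed for uncompressed PCM audio of the given duration."""
    return digital_rate_bps(sample_rate_hz, bits_per_sample, channels) * seconds // 8

# CD-quality stereo: 44.1 kHz sampling, 16-bit quantization, 2 channels
rate = digital_rate_bps(44100, 16, 2)          # 1411200 bps (~1.41 Mbps)
one_minute = storage_bytes(44100, 16, 2, 60)   # 10584000 bytes (~10 MB per minute)
```

This makes the trade-off concrete: raising any of the three parameters raises storage and bandwidth cost in direct proportion.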
Different types of sound have different requirements for sound quality. For example, the fidelity of speech sound quality is mainly reflected in clear, undistorted and reproducing plane sound image; the fidelity requirements of music sound are high, and the creation of space sound image mainly reflects the methods of multi-channel simulation of three-dimensional surround sound, or virtual two-channel 3D surround sound to reproduce all the sound images of the original sound source.
Audio signals serve different purposes and are governed by different compression-quality standards. Telephone-quality audio uses the ITU-T G.711 standard: 8 kHz sampling, 8-bit quantization, and a 64 kbps bit rate. AM-broadcast-quality audio uses the ITU-T G.722 standard: 16 kHz sampling, 14-bit quantization, and a 224 kbps (uncompressed) rate. High-fidelity stereo audio compression standards are formulated jointly by ISO and ITU-T: the ISO/IEC 11172-3 (MPEG-1 Audio) standard supports 48 kHz, 44.1 kHz, and 32 kHz sampling with a per-channel digital rate of 32 kbps to 448 kbps, and is suitable for CD-DA discs.
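The quoted rates follow directly from sampling frequency times quantization bits, which can serve as a quick sanity check (the 224 kbps figure for G.722 is the rate of its uncompressed PCM input; the codec itself transmits at lower rates):

```python
def pcm_rate_kbps(sample_rate_hz, bits_per_sample, channels=1):
    """PCM rate in kbps implied by a sampling frequency and bit depth,
    before any compression is applied."""
    return sample_rate_hz * bits_per_sample * channels / 1000

telephone = pcm_rate_kbps(8000, 8)       # G.711 telephone quality: 64.0 kbps
am_broadcast = pcm_rate_kbps(16000, 14)  # G.722 input: 224.0 kbps
```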
If the sound-quality target is set too high, the equipment becomes needlessly complex; if it is set too low, it cannot meet the application's requirements.
Sound perception is easily influenced by psychological and visual factors. For example, when you watch a particular musician perform in a concert hall, you naturally perceive that musician's sound as relatively louder.
First of all, people easily misunderstand the concept of high fidelity. In music reproduction, high fidelity refers to guaranteed clarity, frequency response, low distortion, and signal-to-noise ratio, not to "restoring the instrument exactly as it sounds in reality". In fact, a more "real" sound often sounds worse: almost all the "good-sounding" music we hear today is far from real. Whatever the genre, mixing engineers use all kinds of techniques to make it sound pleasing. They pursue neither truth nor falsehood, only a good listening experience, and "pleasing" is usually not "true". The instruments in a mix are not captured as-is, and they were not even recorded in a single acoustic space. Yet because the delivery format is lossless, listeners say the sound quality is good.