Why 44.1 kHz sampling rate

A theorem called the Nyquist sampling theorem states that in order to capture a signal containing frequencies up to X Hz without significant loss of quality, you need to sample at 2X Hz. The limit of human hearing is approximately 20 kHz, which hence requires a sample rate of approximately 40 kHz. This is why CDs are sampled at 44.1 kHz.
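That criterion is simple enough to sketch in a few lines of Python (the function name and constants are illustrative, not from any audio API):

```python
def required_sample_rate(max_signal_hz):
    """Minimum sample rate needed to capture max_signal_hz, per Nyquist."""
    return 2 * max_signal_hz

HEARING_LIMIT_HZ = 20_000   # approximate upper limit of human hearing
CD_RATE_HZ = 44_100         # the CD standard

print(required_sample_rate(HEARING_LIMIT_HZ))               # 40000
print(CD_RATE_HZ > required_sample_rate(HEARING_LIMIT_HZ))  # True
```

The CD rate clears the 40 kHz requirement with a little margin to spare; the reason for that margin is discussed below.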


We are all familiar with the two audio file descriptors: sample rate and bit depth. While these specifications sound routine, I often get questions from producers and mixers about the optimum settings for a given project.

This article will cover the basics and best practices for setting sample rates. Sample rate tells us how many times per second we take a measurement of an analog audio waveform as it is converted to a digital signal. Because sampling happens at a particular speed, or frequency, the sample rate defines the frequency response of an audio recording.

Specifically, the Nyquist theorem states that the highest frequency we can record is half of the sampling rate. This means a sample rate of 44.1 kHz allows for 22.05 kHz of audio bandwidth; accordingly, a 96 kHz sample rate allows for 48 kHz of audio bandwidth. If we attempt to record content above half the sample rate, or the Nyquist limit, audible artifacts called aliases occur. Analog-to-digital converters eliminate aliasing by low-pass filtering the analog signal at half the sample rate. This low-pass filter is referred to as an anti-aliasing filter.
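Aliasing can be made concrete with a small sketch (the helper below is illustrative, not a DSP-library function): a frequency above the Nyquist limit "folds" back down into the recorded band.

```python
def alias_frequency(f_in_hz, sample_rate_hz):
    """Frequency actually heard after sampling: fold f_in around Nyquist."""
    f = f_in_hz % sample_rate_hz       # the spectrum repeats at the sample rate
    return min(f, sample_rate_hz - f)  # ...and mirrors at the Nyquist limit

print(alias_frequency(10_000, 44_100))  # 10000 - below Nyquist, unchanged
print(alias_frequency(30_000, 44_100))  # 14100 - folds back into the audible band
```

A 30 kHz tone, if it reached a 44.1 kHz converter unfiltered, would come back as an audible 14.1 kHz alias, which is exactly what the anti-aliasing filter exists to prevent.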

In practice, the low-pass filter requires a transition band to operate, so we state 20 kHz as the practical upper limit for a 44.1 kHz sample rate. We know that human hearing covers from about 20 Hz to 20 kHz, so why would we need sampling rates above 44.1 kHz? One answer is that many people, including scientists, claim that humans can perceive sounds as high as 50 kHz through bone conduction. Another answer is sound design: frequencies above the audible range are discarded at standard rates. However, if the audio were recorded at 192 kHz, for example, frequencies of up to 96 kHz in the original audio would be captured.

This is obviously way outside of what humans can hear, but pitching the audio down causes these previously inaudible frequencies to become audible. Analog audio is a continuous wave, with an effectively infinite number of possible amplitude values. The audio bit depth determines the number of possible amplitude values we can record for each sample.
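The pitch-down trick just described is plain arithmetic; a quick sketch with illustrative numbers, assuming a resampling-style pitch shift where pitch and speed change together:

```python
def pitched_down_hz(original_hz, semitones_down):
    """Frequency a component lands at after pitching down by N semitones."""
    return original_hz / (2 ** (semitones_down / 12))

# A 60 kHz component captured at a 192 kHz sample rate is inaudible as-is,
# but dropping it two octaves (24 semitones) moves it into hearing range:
print(pitched_down_hz(60_000, 24))  # 15000.0
```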

The most common audio bit depths are 16-bit, 24-bit, and 32-bit. Each is a binary term, representing a number of possible values: 16-bit allows 65,536 values, 24-bit allows 16,777,216, and 32-bit allows 4,294,967,296. Systems of higher audio bit depths are able to express more possible values. With a higher audio bit depth, and therefore a higher resolution, more amplitude values are available for us to record.
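The value counts are just powers of two, easy to verify:

```python
for bits in (16, 24, 32):
    print(f"{bits}-bit: {2 ** bits:,} possible values")
# 16-bit: 65,536 possible values
# 24-bit: 16,777,216 possible values
# 32-bit: 4,294,967,296 possible values
```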

Therefore, a digital approximation of the amplitude becomes closer to the original fluid analog wave.

Increasing the audio bit depth, along with increasing the audio sample rate, creates more total points to reconstruct the analog wave.

However, the fluid analog wave does not always line up perfectly with a representable value, regardless of the resolution. As a result, the last bit of the data denoting the amplitude is rounded to either 0 or 1, in a process called quantization. This leaves an essentially randomized component in the signal. In digital audio, we hear this randomization as low-level white noise, which we call the noise floor. Like the mechanical noise introduced in an analog context or background noise in a live acoustic setting, digital quantization error introduces noise into our audio.
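Quantization can be sketched as rounding each sample to the nearest representable level (an illustrative toy; real converters differ in detail). The rounding error is the noise floor described above, and it shrinks as bit depth grows:

```python
def quantize(sample, bits):
    """Round a sample in [-1.0, 1.0] to the nearest level the bit depth allows."""
    levels = 2 ** (bits - 1)             # signed amplitude range
    return round(sample * levels) / levels

x = 0.1234567
for bits in (8, 16, 24):
    print(bits, abs(quantize(x, bits) - x))  # error shrinks with bit depth
```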

Harmonic relationships between the sample rate and the audio, along with the bit depth, can cause repeating patterns in the quantization error.

This is known as correlated noise, which we hear as resonances in the noise floor at certain frequencies. Here, our noise floor is effectively higher, taking up potential amplitude values for a recorded signal. In a process called dithering, we can randomize how this last bit gets rounded, decorrelating the error from the signal. The amplitude of the noise floor becomes the bottom of our possible dynamic range. At the other end of the spectrum, a digital system distorts when a signal exceeds the maximum value the binary system can represent.
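Dithering can be sketched as adding low-level triangular (TPDF) noise before the rounding step. This is an illustrative toy, not a production dither implementation:

```python
import random

def dither_and_quantize(sample, bits, rng=random.random):
    """Add up to +/- 1 LSB of triangular noise, then round: the rounding
    direction becomes random instead of correlating with the signal."""
    levels = 2 ** (bits - 1)
    lsb = 1 / levels
    tpdf = (rng() - rng()) * lsb  # triangular-PDF noise in (-1 LSB, +1 LSB)
    return round((sample + tpdf) * levels) / levels

print(dither_and_quantize(0.12345, 16))  # close to 0.12345, within ~1.5 LSB
```

Repeated calls on the same input now round up or down at random, which is exactly the decorrelation the text describes.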

This level is referred to as 0 dBFS. In the end, our audio bit depth determines the number of possible amplitude values between the noise floor and 0 dBFS. Do we actually need higher bit depths? This is a valid question. The noise floor, even in a 16-bit system, is incredibly low. Unless you need more than 96 dB of effective dynamic range, 16-bit is viable for the final bounce of a project.

Because the noise floor drops at higher bit depths, you essentially have more room before distortion occurs, also known as headroom.

Go back about 10 years, when the most popular form of music distribution was the audio CD. It turns out that CDs store the audio digitally using a sample rate of 44.1 kHz. But again, why specifically 44.1 kHz?

The answer lies in a device called a PCM adaptor, which attached to a video tape recording machine. For recording audio, it would first convert analog audio to digital, then encode the digital audio data in a way that it could be recorded to the tape.

For playback, it would perform the reverse operation. Complexity arose because you cannot simply send the digital data directly to the video recorder.

The video recorder was designed to accept only a specific type of waveform called a composite video signal. The most basic version of this video signal type is RS-170. With a bit of a hack, a noninterlaced, progressive-scan system can be used, with its visible scanlines drawn at 60 FPS. The signal is monochrome, meaning it carries only black and white.

You may notice that only the number of horizontal lines (the vertical resolution) is mentioned. RS-170 does not define a horizontal resolution; that is simply a product of the frequency bandwidth of your signal (basically the maximum frequency a signal can contain before something in the system can't handle it). I also mentioned that only some of the scanlines are visible. A scanline is a single horizontal line of the picture.

The 45 invisible scanlines are spent waiting for old CRT TV technology to 'retrace', or move the electron beam back to the top of the screen.

Each bit of data is encoded using a scheme known as biphase mark encoding.
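Assuming, as with Sony's PCM adaptors of the era, that three samples per channel are stored on each visible scanline (the per-line count and the 245/294 line figures are the commonly cited ones, not stated above), the arithmetic lands exactly on 44.1 kHz for both TV standards:

```python
# NTSC-based system: 60 fields per second, monochrome noninterlaced hack.
visible_lines_ntsc = 245
print(visible_lines_ntsc * 60 * 3)  # 44100

# The PAL-based variant (50 fields per second) reaches the same figure:
visible_lines_pal = 294
print(visible_lines_pal * 50 * 3)   # 44100
```

That shared result made 44.1 kHz a rate that video-tape-based recorders in both regions could support, which is how it ended up on the CD.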


