Audio editing
Countless factors come into play when enhancing audio recordings, and much depends on the kind of recording: spoken material may require a completely different set of tools than, say, the individual channels of a piece of music.
I have been optimizing music-related recordings since 1998, working almost exclusively by ear for the first 15 years. Since buying the paid version of WavePad, however, I've been using frequency analysis more and more, which reduces the overall effort quite a bit. For example, I still use some old instrument samples in my music that were recorded back then with fairly strong noise. Using frequency analysis, I was able to reconstruct that exact noise and thus had enough pure noise to filter it out of the instrument samples completely.
Once it comes to processing voice recordings, however, other methods come into play. Most microphones, even those in a high price range, pick up background noise - though in most cases this is due to the environment, not the microphone itself. Technologies have long existed to filter out such noise in real time with high quality. If a recording was made WITHOUT noise filtering, the noise must be removed in post-processing, e.g. with WavePad. What matters first of all is that the noise stays constant throughout the recording and doesn't keep changing in volume because of a noise gate. As soon as you have a few seconds of pure noise in the recording, you can use them to remove the noise from the entire recording. What sometimes remains, depending on the microphone used, is a digital bubbling, which can be removed very effectively with WavePad's "multi-band noise gating".
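The profile-based noise removal described above - capture a few seconds of pure noise, then subtract its spectrum from the whole recording - is conceptually spectral subtraction. This is a minimal NumPy sketch of that idea, not WavePad's actual algorithm; the frame size, hop and spectral floor are illustrative choices:

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=1024, hop=512, floor=0.05):
    """Reduce stationary noise using the average spectrum of a pure-noise excerpt."""
    win = np.hanning(frame)
    # Average magnitude spectrum of the noise-only excerpt (the "noise profile").
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_sample[i:i + frame] * win))
         for i in range(0, len(noise_sample) - frame, hop)],
        axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[i:i + frame] * win)
        mag = np.abs(spec)
        # Subtract the noise estimate per bin, keeping a small spectral floor
        # so bins never go fully to zero (reduces "musical noise" artifacts).
        clean = np.maximum(mag - noise_mag, floor * mag)
        # Resynthesize with the original phase and overlap-add the frames.
        out[i:i + frame] += np.fft.irfft(clean * np.exp(1j * np.angle(spec)))
    return out
```

This only works well when the noise really is stationary - exactly the condition stated above, and the reason a noise gate that keeps changing the noise level ruins the profile.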
In addition, many microphones distort the frequency balance of voices, so it almost always makes sense to make adjustments with an equalizer. Optimal equalizer settings depend on the particular voice and can vary massively, so I can't give a general recipe here. With low voices, for example, it often makes sense to strongly reduce the frequency range below 100 Hz - unless it's vocals in a music genre that relies on low vocal frequencies, in which case such a change would be totally counterproductive.
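Cutting everything below 100 Hz, as suggested above, is what a high-pass filter in an equalizer does. As a sketch of the principle (using the well-known RBJ "Audio EQ Cookbook" biquad coefficients, in plain Python rather than any WavePad feature; cutoff and Q are illustrative):

```python
import math

def highpass_biquad(samples, sr, cutoff=100.0, q=0.707):
    """Second-order high-pass filter: attenuates rumble below `cutoff` Hz."""
    # RBJ Audio EQ Cookbook coefficients for a high-pass biquad.
    w0 = 2 * math.pi * cutoff / sr
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 + cosw) / 2 / a0
    b1 = -(1 + cosw) / a0
    b2 = (1 + cosw) / 2 / a0
    a1 = -2 * cosw / a0
    a2 = (1 - alpha) / a0

    # Direct Form I: each output depends on two past inputs and outputs.
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out
```

A single biquad rolls off at 12 dB per octave below the cutoff, which is usually enough to tame rumble without touching the voice itself.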
For purely spoken recordings - e.g. explainers, vlogs and the like - the focus is on removing unnecessary segments such as pauses, coughs, excessive breathing noise and so on. Where such segments have been cut out, sudden volume jumps can appear at the edit points; fade-out/fade-in transitions can be useful there.
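Smoothing an edit point with a fade-out into a fade-in is, in effect, a short crossfade between the two remaining clips. A minimal sketch, assuming the clips are plain lists of samples (the fade length is an illustrative choice - a few milliseconds is usually enough for speech):

```python
def splice_with_crossfade(a, b, fade_len=256):
    """Join two clips, crossfading the end of `a` into the start of `b`."""
    assert len(a) >= fade_len and len(b) >= fade_len
    out = list(a[:-fade_len])
    for i in range(fade_len):
        g = i / fade_len
        # Equal-gain linear crossfade: `a` fades out while `b` fades in,
        # so correlated material keeps a constant level across the joint.
        out.append(a[len(a) - fade_len + i] * (1 - g) + b[i] * g)
    out.extend(b[fade_len:])
    return out
```

Because the gains always sum to one, splicing two segments of similar level produces no audible jump at the cut.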
These were just a few examples of the potential you have when editing audio recordings. I can only advise taking a closer look at the feature sets of both WavePad and Audacity.