The process of normalization often confuses newcomers to digital audio production. The word itself, “normalize,” has various meanings, and this certainly contributes to the confusion. However, beginners and experts alike are also tripped up by the myths and misinformation that abound on the topic.
I address the 10 most common myths, and the truth behind each, below.
Peak Normalization
First, some background: While “normalize” can mean several things (see Other Definitions at the end of this article), the myths addressed here primarily involve peak normalization.
Peak normalization is an automated process that changes the level of each sample in a digital audio signal by the same amount, such that the loudest sample reaches a specified level. Traditionally, the process is used to ensure that the signal peaks at 0dBfs, the loudest level allowed in a digital system.
Normalizing is indistinguishable from moving a volume knob or fader. The entire signal changes by the same fixed amount, up or down, as required. But the process is automated: The digital audio system scans the entire signal to find the loudest peak, then adjusts each sample accordingly.
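To make the mechanics concrete, here is a minimal sketch of peak normalization in Python with NumPy; the function name and target parameter are my own, for illustration:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = 0.0) -> np.ndarray:
    """Scale every sample by one fixed gain so the loudest peak lands at target_dbfs."""
    peak = np.max(np.abs(samples))            # scan the whole signal for the loudest sample
    if peak == 0:
        return samples                        # silent signal: nothing to normalize
    target_linear = 10 ** (target_dbfs / 20)  # convert dBfs to a linear amplitude
    gain = target_linear / peak               # one gain value for the entire signal
    return samples * gain                     # identical in effect to a single fader move
```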
Some of the myths below reflect nothing more than a misunderstanding of this process. As usual with common misconceptions, though, some of the myths also stem from a more fundamental misunderstanding – in this case, about sound, mixing, and digital audio.
Myths and misinformation
Myth #1: Normalizing makes each track the same volume
Normalizing a set of tracks to a common level ensures only that the loudest peak in each track is the same. However, our perception of loudness depends on many factors, including sound intensity, duration, and frequency. While the peak signal level is important, it has no consistent relationship to the overall loudness of a track – think of the cannon blasts in the 1812 Overture.
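A quick way to see this: the two hypothetical signals below (my own toy example) peak at exactly the same level, yet their RMS levels, a rough proxy for loudness, are miles apart:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

sine = np.sin(2 * np.pi * 441 * t)   # a sustained tone, loud throughout
click = np.zeros(sr)
click[0] = 1.0                       # a single cannon-blast-style transient

for name, sig in [("sine", sine), ("click", click)]:
    peak_db = 20 * np.log10(np.max(np.abs(sig)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(sig ** 2)))
    print(f"{name}: peak {peak_db:.1f} dBfs, RMS {rms_db:.1f} dBfs")
```

Both print a peak of 0.0dBfs, but the sine’s RMS sits around -3dBfs while the click’s is near -46dBfs.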
Myth #2: Normalizing makes a track as loud as it can be
Consider these two mp3 files, a drum track and a guitar track, each normalized to -3dB:
The second is, by any subjective standard, “louder” than the first. And while the normalized level of the first file obviously hinges on a single peak (the snare drum hit at 0:04), that only serves to better illustrate the point: Our perception of loudness is largely unrelated to the peaks in a track, and much more dependent on the average level throughout the track.
Myth #3: Normalizing makes mixing easier
I suspect this myth stems from a desire to remove some mystery from the mixing process. Especially for beginners, the challenge of learning to mix can seem insurmountable, and the promise of a “trick” to simplify the process is compelling.
In this case, unfortunately, there are no shortcuts. A track’s pre-fader level has no bearing on how that track will sit in a mix. With the audio files above, for example, the guitar must come down at least 12dB in level to mix properly with the drums.
Simply put, there is no “correct” track volume – let alone a correct track peak level.
Myth #4: Normalizing increases (or decreases) the dynamic range
A normalized track can sound as though it has more punch. However, this is an illusion dependent on our tendency to mistake “louder” for “better.”
By definition, the dynamic range of a recording is the difference between the loudest and softest parts. Peak normalization affects these equally, and as such leaves the difference between them unchanged. You can affect a recording’s dynamics with fader moves & volume automation, or with processors like compressors and limiters. But a simple volume change that moves everything up or down in level by the same amount doesn’t alter the dynamic range.
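The arithmetic makes this plain. A fixed linear gain g adds the same number of decibels, G = 20·log10(g), to every part of the signal, so the difference that defines dynamic range cancels out:

```latex
\mathrm{DR}' = (L_{\text{loud}} + G) - (L_{\text{soft}} + G)
             = L_{\text{loud}} - L_{\text{soft}} = \mathrm{DR}
```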
Myth #5: Normalized tracks “use all the bits”
Because of the relationship between bit depth and dynamic range, each bit in a digital audio sample represents about 6dB of dynamic range. An 8-bit sample can capture a maximum range of 48dB between silence and the loudest sound, while a 16-bit sample can capture a 96dB range.
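For reference, the 6dB-per-bit figure follows from the ratio between the largest and smallest magnitudes an N-bit word can encode:

```latex
\mathrm{DR} \approx 20\log_{10}\!\left(2^{N}\right) = N \cdot 20\log_{10}(2) \approx 6.02\,N\ \text{dB}
```

With N = 8 this gives roughly 48dB, and with N = 16 roughly 96dB, matching the figures above.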
In a 16-bit system, a signal peaking at -36dBfs has a maximum dynamic range of 60dB. So in effect, this signal doesn’t use the top 6 bits of each sample*. The thinking goes, then, that by normalizing the signal peak to 0dBfs, we “reclaim” those bits and make use of the full 96dB dynamic range.
But as shown above, normalization doesn’t affect the dynamic range of a recording. Normalizing may increase the range of sample values used, but the actual dynamic range of the encoded audio doesn’t change. To the extent it even makes sense to think of a signal in these terms*, normalization only changes which bits are used to represent the signal.
*NOTE: This myth also rests on a fundamental misunderstanding of digital audio, and perhaps binary numbering. Every sample in a digital (PCM) audio stream uses all the bits, all the time. Some bits may be set to 0, or “turned off,” but they still carry information.
Myth #6: Normalizing can’t hurt the audio, so why not just do it?
Best mixing practices dictate that you never apply processing “just because.” But even setting that aside, there are at least 3 reasons NOT to normalize:
- Normalizing raises the signal level, but also raises the noise level. Louder tracks inevitably mean louder noise. You can turn the level of a normalized track down to lower the noise, of course, but then why normalize in the first place?
- Louder tracks leave less headroom before clipping occurs. Tracks that peak near 0dBfs are more likely to clip when processed with EQ and effects (see the sketch after this list).
- Normalizing to near 0dBfs can introduce inter-sample peaks.
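To put a number on the headroom point, here is a minimal sketch (with hypothetical signal values, not from the article) showing a peak-normalized track clipping the moment a modest EQ-style boost is applied:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
signal = 0.25 * np.sin(2 * np.pi * 220 * t)   # a quiet track, peaking at -12dBfs

normalized = signal / np.max(np.abs(signal))  # peak-normalize to 0dBfs
boosted = normalized * 10 ** (6 / 20)         # a 6dB boost, e.g. from an EQ band

clipped = np.sum(np.abs(boosted) > 1.0)       # count samples beyond full scale
print(f"{clipped} samples would clip")        # zero headroom means instant clipping
```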
Myth #7: One should always normalize
As mixing and recording engineers, “always” and “never” are the closest we have to dirty words. Every mixing decision depends on the mix itself, and since every mix is different, no single technique will be correct 100% of the time.
And so it goes with normalization. Normalizing has valid applications, but you should decide on a track-by-track basis whether or not the process is required.
Myth #8: Normalizing is a complete waste of time
There are at least 2 instances when your DAW’s ‘normalize’ feature is a great tool:
- When a track’s level is so low that you can’t use gain and volume faders to make the track loud enough for your mix. This points to an issue with the recording, and ideally you’d re-record the track at a more appropriate level. But at times when that’s not possible, normalizing can salvage an otherwise unusable take.
- When you explicitly need to set a track’s peak level without regard to its perceived loudness. For example, when working with test tones, white noise, and other non-musical content. You can set the peak level manually – play through the track once, note the peak, and raise the track’s level accordingly – but the normalize feature does the work for you.
Myth #9: Normalizing ensures a track won’t clip
A single track normalized to 0dBfs won’t clip. However, that track may be processed or filtered (e.g. an EQ boost), causing it to clip. And if the track is part of a mix that includes other tracks, all normalized to 0dBfs, it’s virtually guaranteed that the sum of all the tracks will exceed the loudest peak in any single track. In other words, normalizing only protects you against clipping in the simplest possible case.
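To see the summing problem in numbers, here is a toy sketch (hypothetical signals, my own illustration): two tracks each peaking at 0dBfs can sum to nearly 6dB over full scale:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

# Two tracks, each already peaking at 0dBfs
track_a = np.sin(2 * np.pi * 440 * t)
track_b = np.sin(2 * np.pi * 445 * t)    # slightly detuned, so peaks drift in and out of phase

mix = track_a + track_b
peak_db = 20 * np.log10(np.max(np.abs(mix)))
print(f"mix peak: {peak_db:+.1f} dBfs")  # approaches +6dBfs: the mix bus clips
```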
Myth #10: Normalizing requires an extra dithering step
(Note: Please read Adam’s comment below for a great description of how I oversimplified this myth.) This last myth is a little esoteric, but it pops up sporadically in online recording discussions, usually in the form of a claim that “it’s OK to normalize in 24 bits but not in 16 bits, because…”, followed by an explanation that betrays a misunderstanding of digital audio.
Simply put: A digital system dithers when changing bit depth (i.e. converting from 24 bits to 16 bits). Normalizing operates independently of bit depth, changing only the level of each sample. Since no bit-depth conversion takes place, no dithering is required.
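For reference, here is a minimal sketch of what that bit-depth conversion step involves, using TPDF dither (a common choice); the helper name and scaling are my own assumptions, not any particular DAW’s implementation:

```python
import numpy as np

def to_16bit_with_dither(samples: np.ndarray) -> np.ndarray:
    """Requantize float samples in -1.0..1.0 to 16-bit integers with TPDF dither."""
    lsb = 1.0 / 32767.0                    # one 16-bit quantization step
    # TPDF dither: the sum of two independent uniform noises, spanning +/- 1 LSB
    dither = (np.random.uniform(-0.5, 0.5, samples.shape)
              + np.random.uniform(-0.5, 0.5, samples.shape)) * lsb
    quantized = np.round((samples + dither) * 32767.0)
    return np.clip(quantized, -32768, 32767).astype(np.int16)
```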
Other Definitions
Normalizing can mean a few other things. In the context of mastering an album, engineers often normalize the album’s tracks to the same level. This refers to the perceived level, though, as judged by the mastering engineer, and bears no relationship to the peak level of each track.
Some systems (e.g. Sound Forge) also offer “RMS Normalization,” designed to adjust a track based on its average, rather than peak, level. This approach more closely matches how we perceive loudness. However, as with peak normalization, it ultimately still requires human judgment to confirm that the change works as intended.
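As a rough sketch of what an RMS normalizer computes (my own illustration, not Sound Forge’s actual algorithm, and the -20dBfs default is an arbitrary assumption):

```python
import numpy as np

def rms_normalize(samples: np.ndarray, target_dbfs: float = -20.0) -> np.ndarray:
    """Scale a signal so its RMS (average) level lands at target_dbfs."""
    rms = np.sqrt(np.mean(samples ** 2))  # average level: closer to perceived loudness
    if rms == 0:
        return samples                    # silence: nothing to scale
    gain = 10 ** (target_dbfs / 20) / rms
    return samples * gain                 # note: peaks may now exceed 0dBfs and clip
```

That last comment is one reason human judgment is still required: matching average levels can push individual peaks past full scale.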
Comments
Great article with some good info. Though I do almost always normalize my tracks to -3dB before mixing and post-editing. I find that it leaves me with enough headroom to go almost anywhere from there.
Thanks for posting
Dave
A post above mentioned that there’s a story that Stevie Wonder owned a radio station in the 70’s and used to have test mixes played over the air at night to see how they sounded. Although I don’t know if it’s true, I personally know of a similar situation.
Back in the 70’s I worked at a radio station in Florida that was right across the street from a recording studio associated with the Shelby Singleton group (the man who bought Sun Records from Sam Phillips). This studio was used by the artists on the various Shelby Singleton labels (SSS, Sun, and mostly Plantation Records artists). I was the D.J. that worked 6:00 P.M. to midnight at the radio station, and it was not uncommon for the studio’s owner, producer, and chief engineer to have someone run a tape over to our radio station and ask me to play it so they could judge what the song sounded like coming through our equipment and out over the air. We had a fairly nice Ampex reel-to-reel in the main studio hooked into the main mixing console, which went to a rack-mounted compressor/limiter and then out to a line amp that sent the signal to our transmitter. That alone was probably adding a lot of extra “coloring” to the sound. Although I didn’t know much about recording at the time, I have often wondered just how much the radio station’s compression hurt or helped those songs. As you know, record companies used to supply radio stations with “Promo” records that were often mixed just for radio.
I don’t know about the Stevie Wonder myth, but I know I did it quite a few times for another recording studio.
Awesome stuff….. ;-)
Thx_4_sharin’
Thanks for normalizing that for us
Normalisation is nothing more than adding gain.
Nice post there! Normalization is one of the most used tools, and apparently not a lot of people know how to do it properly
best regards
Nice list. It took me a while to realize that normalization didn’t change the dynamic range. Even though it sounded like it did.
Terrific job at explaining this misunderstood topic. There are so many subcategories that normalization could be a part of that I have always tried to use a different name to describe its function. Thanks for touching on the subject of “why and where it shouldn’t be used”.
Excellent! Now I will allow headroom in my samples for when I EQ them: +1dB, +2dB, etc. Gratitude.
Back in college, when I casually asked for some tips on improving the quality of my voice recordings, I had an “audio engineer” try to tell me that normalization and compression are the same thing. There was nothing I could say to un-convince him of that. And he was the production manager/de-facto person-in-charge of the university’s radio station. Eek.
Also, I didn’t realize until just before I left this comment that this post is from 2008! Still good advice and just as relevant six years later. I guess a lot of audio principles hold true despite any recording/mixing technology advances (or maybe there haven’t really been any advances). Thanks for busting these myths that still remain in the heads of a lot of sound amateurs, and probably even audio pros, today.
Regarding myth #5 on using “all available bits”:
A lot of musicians and producers tend to use effects and plugins that fully utilize all the bits available. (Consider for example adding a synth plugin, or even dynamic compression tools).
If your average track level is way below 0dBFS you will naturally need to decrease the level of the added synthesized track to match the remaining tracks, thus reducing the overall quality of the production.
Thus, in my view, unless you are working exclusively with recorded audio subject to no plugins or your recorded audio peaks just below 0dBFS, your mix will potentially gain some bit related quality from normalization.
Also, regarding Myth #4 on affecting the dynamic range:
For most practical purposes, it is true that normalization (or simply altering the track level) does not affect the dynamic range. In theory, though, this is not strictly true.
Consider a signal consisting of only a square wave. On any DAW this signal will consist of a series of discrete values, either A or B. Because of the discrete nature of such signals, it is not possible to guarantee that the ratio A/B (proportional to the dynamic range) remains the same when you change the overall volume (multiply by a number not equal to 1).
As an example, consider the integers A = 4 and B = 3, and apply a gain of 1.5. Now A*1.5 = 6 is still an integer value, but B*1.5 = 4.5 is not, and thus cannot be represented exactly. If the system rounds 4.5 down to 4, we have the ratios A/B = 4/3 prior to the gain and 6/4 after, representing a change in the dynamic range of the signal.
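The commenter’s arithmetic is easy to check with a toy script (same numbers, my own illustration):

```python
a, b, gain = 4, 3, 1.5
a2, b2 = round(a * gain), round(b * gain)  # 6 and 4 (Python rounds 4.5 down to the even 4)
print(a / b, a2 / b2)                      # 1.333... before vs 1.5 after: the ratio shifted
```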
That being said, with a humongous number of discrete values available, the effect of normalization or volume altering on dynamic range is practically negligible.
More goes into normalization than I had previously thought.
I never did peak normalization, just RMS normalization for the final mix. A big thanks to Sound Forge 9. Still using old software because it operates faster with an old CPU. Hehe
Your suggestions do not seem to be reflected in commercial downloaded music. (Explain that?) I find they rarely if ever leave any headroom; in fact, many times there is some clipping. Not that any distortion is detectable… Granted, I have not been messing (editing) with my music as long as others, but I’ve found that just a tiny bit of clipping is not hurtful.
I’ll answer my own comment:
https://en.wikipedia.org/wiki/Loudness_war – So I guess I should leave headroom, and these commercial productions should also, but do not.
Is it just me, or is this “normalization” the literal exact same thing as…. turning up the volume? or Amplify? Tell me the definition 1000 times if you’d like, but it still doesn’t make any sense. and feels like an inside joke everyone is in on.
“It turns up every signal at the same time, the exact same level. Dynamic range doesn’t change.” Etc., etc., etc.
Am I missing something here? Why not press ‘Amplify’ instead?
and no, don’t explain the difference. You will just say the exact same thing with 2 different sets of similar words, as if one had any varying properties from the other.
Is normalization like a hipster word or something? Too school for cool to say “Amplify”, or “Turn up the volume”?
I’d ask for help, but again, this inside joke you’re all in on trolls me.
Bullies…
Normalize means “amplify such that the highest peak reaches 0dB”
Let’s call chicken beef now. It’s still chicken… but it’s beef now, too.
Thanks for the advice! Really helpful. Love your blogs.
What about ReplayGain and EBU R128???
@choops.. instead of talking BS you should use your brain and try to learn something.
amplify just means to amplify.
but when you normalize, you normalize “against” something you have measured before.
you normalize a BUNCH of songs. you don’t just amplify the volume randomly. that is quite a difference.
sorry for my english.. but i hope because it is so simple you will get it…..
Hi. I need help with “NORMALIZATION”. I’m trying to digitize my vinyl and tape collection (50+ years’ worth) and I’m wondering if I should normalize the files I’m accumulating from my cassettes and reel-to-reels (the line output from my equipment yields very low signals, about -12 to -6dB). Any help would be greatly appreciated.
Jose, you should try to get your levels matched such that your recorded file almost clips, but not quite. Then you won’t need to normalize (to bring the peak volume up) and you’ll have captured the greatest dynamic range possible with your equipment. You may need some type of preamp between your source equipment and recorder to get where you need to be with the levels.