In the music world, the “Loudness Wars” have been a topic of discussion for decades. Even Wikipedia has a page about it! Over time, music has become increasingly loud, but critics say the trend decreases fidelity and enjoyment for the listener. Good mastering can tighten up a mix and make a song pop, but bad mastering can render a mix unlistenable.
History of Loudness in Television
Television has also had its struggles with loudness. You used to have to go running for the remote because commercials were louder (by an uncomfortable amount) than TV shows. The commercials didn’t sound good by any means, but the goal was to make them as loud as possible. I worked with mixers who took pride in “putting the meat in the sausage,” even if their mixes sounded like a distorted, compressed mess.
Luckily, in the US this was remedied in 2012 with the CALM Act, which forced broadcasters to regulate loudness. Without getting into the nitty gritty, it established loudness specs: anyone mixing for television has to take a reading of the average loudness of a mix. If it’s too loud, Quality Control (QC) kicks it back and it doesn’t go to air.
Loudness for Music
In the past few years, we’ve seen loudness measurements start to creep into music mixing. Some of the streaming services (like Spotify, YouTube and Pandora) automatically adjust song playback for more consistent loudness for listeners. As a result, music mixers, songwriters, mastering engineers and other content creators are starting to learn about and pay attention to loudness. But when no restrictions are being placed, it’s hard to know what’s “right” or “wrong” to do.
What I’ve learned from working with broadcast specs in television is that ultimately it’s just a number. If a mix sounds good, it sounds good. If my broadcast mix sounds good but it’s out of spec, I’ll just globally lower the mix until it is in spec. Part of making this work is how you do your processing – but more on that in a bit.
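Since the fix is a single global offset, the arithmetic is simple. Here’s a minimal sketch, assuming the integrated loudness was already measured with a BS.1770-style meter (metering itself isn’t shown) and using the ATSC A/85 target of -24 LKFS; the measured value is hypothetical:

```python
# Minimal sketch: bring an out-of-spec mix into spec with one global
# gain offset instead of re-compressing. The measured loudness is a
# hypothetical meter reading, not computed here.

TARGET_LKFS = -24.0  # ATSC A/85 target used for US broadcast under the CALM Act

def gain_to_spec(measured_lkfs, target_lkfs=TARGET_LKFS):
    """dB offset that moves the mix from its measured loudness to the target."""
    return target_lkfs - measured_lkfs

def apply_gain(samples, gain_db):
    """Scale raw audio samples by a dB gain (amplitude, 20*log10 convention)."""
    linear = 10 ** (gain_db / 20.0)
    return [s * linear for s in samples]

# A mix that metered 3 dB hot gets pulled down 3 dB across the board:
offset_db = gain_to_spec(-21.0)  # -3.0
in_spec = apply_gain([0.5, -0.25, 0.1], offset_db)
```

Because the whole mix moves together, the internal balances (and whatever processing made it sound good) are untouched; only the one loudness number changes.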
In TV/film, how important is it for the masters of music tracks to be at a certain loudness level?
The short answer: the loudness level of the music track doesn’t matter as long as the mix sounds good.
If you’ve written a song (or have a song in a music library) that’s getting used in a TV show, film, commercial, etc., when the tracks get to the mix stage, the re-recording mixer will probably drop it by 6–10 dB right off the bat. We’re used to getting tracks from a lot of different sources that are mastered differently and at different loudness levels. When a popular song is licensed, the record label usually delivers the same version that was on the album. The actual loudness number of a song doesn’t matter as much as sound quality. If a song is audibly over-compressed or the compression is distracting, that can make someone pass on it. The same goes for a bad mix. If someone wouldn’t listen to your song because of the quality or a rough mix, they probably won’t put it in a movie. If your instrument balances are all over the place, the panning is weird, or the mix is too bright or lacking low end, that can get in the way of the song.
Don’t assume anyone will remix your song to use it. You might only get a few seconds of attention from a producer, picture editor, music supervisor or music editor who’s looking for a track. It’s worth putting your best work forward for consideration when you can.
The other reason loudness levels don’t matter in the music tracks we get is that in TV, we’re measuring loudness over a length of time. We get one reading for an entire spot (like 30 seconds), an act of television (the time between commercial breaks – usually 7–10 minutes), or an entire film. That reading includes dialog and sound fx, and the music is probably sitting lower than it was in the original mix.
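A toy calculation shows why the song’s own number washes out. This uses plain power averaging as a crude stand-in for a real BS.1770 integrated measurement (no K-weighting or gating), with hypothetical per-second dB levels:

```python
import math

# Toy "integrated" reading: average per-second dB levels on a power
# basis and report one dB number for the whole act. NOT a real
# BS.1770 meter (no K-weighting, no gating) -- just the averaging idea.

def integrated_db(levels_db):
    """Collapse a list of per-second dB levels into one time-averaged dB figure."""
    powers = [10 ** (db / 10.0) for db in levels_db]
    return 10 * math.log10(sum(powers) / len(powers))

# Nine minutes of dialog around -27, with one minute of music underscore
# sitting well below it: the quiet minute barely moves the number.
act = [-27.0] * 540 + [-35.0] * 60
reading = integrated_db(act)  # ~ -27.4
```

Whether that minute of music was mastered loud or quiet before the mixer pulled it down to underscore level, the single reading for the act barely changes.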
Mixing Score Full Range
In popular music, it’s best to keep mixes full range. With film scoring, it may be OK to mix in context when you’re mixing the whole score. If I’m mixing music for a composer (where one composer is doing the bulk of the cues for a film or show), I might mix the music closer to film mix level (which ends up down 6–10 dB). But that’s usually more the case for orchestral scores (vs popular music styles), and we’re mixing in 5.1. If they decide to release a soundtrack, we’ll have to go back and remix it in stereo anyway, and that’s when I would mix full range and be thinking about loudness.
Advice for composers/songwriters
Do your processing/mastering on the stems. It’s a common music mixing technique to process on the master aux (vs processing just the drums, vocals, etc.), but this is bad for TV/film because then the stems don’t sonically match the original mix.
If your track has 5 major elements (a beat, bass line, vocals, rhythm instruments, anything odd/unique that would stick out or make it hard to loop), have those 5 stems output and ready in addition to your final mix. The stems (when played together) should match the sound and level of the original mix. This is where people tend to mess up. As a re-recording mixer, I don’t want to get more than 8 stems for a song unless there’s some incredibly special reason for it. I’ve had songwriters give me 30 “stems” for a song – but that’s not stems in the film/TV sense of the word. It may take some combining to get down to 8, and it takes a little planning to combine these tracks (especially if you’re using processing, because your reverbs, effects, etc. all have to be separated). But in general, lead vocals (all tracks summed), other vocals (summed), and drums (summed) should have their own stems.
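A quick way to catch a mismatch before delivery is a null-style check: sum the stems sample by sample and compare the result against the final mix. A minimal sketch, assuming the stems and mix are already decoded into aligned lists of float samples (file I/O and alignment are out of scope):

```python
# Sketch: verify that stems, summed together, reproduce the final mix.
# Stems and mix here are hypothetical lists of float samples; a real
# check would decode the audio files and align them first.

def sum_stems(stems):
    """Sample-wise sum of equal-length stem tracks."""
    return [sum(samples) for samples in zip(*stems)]

def stems_match_mix(stems, mix, tolerance=1e-4):
    """True if the summed stems match the mix within a small tolerance."""
    summed = sum_stems(stems)
    return len(summed) == len(mix) and all(
        abs(a - b) <= tolerance for a, b in zip(summed, mix)
    )
```

If this check fails, something on the master bus (or a missing effect return) isn’t represented in the stems – exactly the mismatch a re-recording mixer will notice.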
I’ve had many times where a producer or editor (film/TV) picked out a music track to use, had something minor they didn’t like about it (a vocal part, some frill like a cymbal or harp, etc.), and we had to ditch the track because we didn’t have stems (or couldn’t get them quickly enough), couldn’t edit out the issue, or the stems didn’t match the original. On the other hand, I’ve seen a single track get used for dozens of commercials, and none of the mixes sounded the same because the stems allowed a lot of options. A good track (especially to a music editor) can yield a lot of different combinations from the same stems.
Pro tip: Check how your mix sounds in context. Rip a movie or video that has no music, stick your music track under it (lowered to where it’d sit in a mix), and just see how it feels. Music sounds different when it’s loud and the only thing you’re focused on, but when it’s used for TV/film it’s probably going to be underscore, where it’s more of an instrumental (with dialog as the lead). It might change what samples you use or how loud/bright your cymbals are once you can tell what’s getting in the way of dialog.