In the world of layering hundreds of sound sources together, Hollywood re-recording engineers have all the tools and tricks they need to ensure that every sonic texture is heard, felt, even seen, and gives the audience its maximum impact. What they do is non-intuitive to the uninitiated, and it isn’t something you can figure out from a few YouTube videos. They’re also not going to tell you. In this article I’ve broken down the most important element of their work into a simple visual concept for an audio truth: the Audio Cube.
1. Faders Aren’t the Sharpest Tool in the Mix Shed

Beginning mixers and uninitiated independent filmmakers grab faders to solve mix issues. It’s cool. It’s the obvious thing to do. It certainly allows one to ensure that the dialog is loudest while the music combats it and the sound effects. It also ensures that the ambiences and foley aren’t heard, and the entire mix sounds tinny or shrill. Yes, it’s what most folks who haven’t watched the MZed Pro Member Education – or who haven’t been mixing for 30 years – do. But faders are like an axe when you’re carving masterpieces. They’re good for getting the wood off the tree, but they’re no finessing tool. Remember that faders move the entire frequency spectrum of a sound up or down in volume. Eighty percent of the issues we find in a mix have to do with specific problematic frequencies in a sound, as in a basic dialog grade. If we never address those frequencies or deal with the intricacies of Distance to Camera Mixing, our mixes are never going to have the $50,000,000 impact we need them to have. How do we solve this?
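To see why the axe metaphor holds, here’s a minimal Python sketch (the names and numbers are mine, purely illustrative): a fader move is one scalar multiplied across the whole signal, so the harsh band and the warm band you wanted to keep come down together.

```python
import numpy as np

def fader(signal: np.ndarray, gain_db: float) -> np.ndarray:
    """A fader move is broadband: one scalar applied to every frequency."""
    return signal * 10 ** (gain_db / 20)

# Example: a dialog line with warm body at 200 Hz and harshness at 3 kHz.
fs = 48_000
t = np.arange(fs) / fs
dialog = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)

# Pulling the fader down 3 dB to tame the harsh 3 kHz also pulls down
# the 200 Hz warmth -- the axe takes the whole branch, not the twig.
quieter = fader(dialog, -3.0)
```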
2. Use EQ to Shape a Sound to Fit

Equalization, or EQ, is the tool we use to whittle away the chaff and reveal the beauty of a sound. It also allows us to layer it next to other sounds so we can have a harmonious mix. You can also use multiband compression for this, but the basic tool is EQ. There are so many aspects of how to shape a sound to fit into the various genres – even in music – that it would take hundreds of exhaustive pages of reading to learn it all. Forget that nonsense. Instead, I give you something which can be the axiom for your inspiration, education and success: the Audio Cube.
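Before we get to the cube, here’s what one band of corrective EQ looks like under the hood: a minimal sketch of a peaking filter built from the widely used RBJ “Audio EQ Cookbook” biquad formulas (the function name and settings are my own, illustrative only).

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """One peaking-EQ band (RBJ cookbook biquad): boost or cut gain_db
    around center frequency f0, leaving the rest of the spectrum alone."""
    a_lin = 10 ** (gain_db / 40)            # square root of the linear gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

# Example: tame a "woofy" 250 Hz buildup in a clip by 4 dB,
# instead of pulling the whole fader down.
fs = 48_000
clip = np.random.randn(fs)                  # stand-in for a recorded clip
shaped = peaking_eq(clip, fs, f0=250, gain_db=-4.0, q=1.4)
```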
3. Each Sound Must Have Its Unique Space in the Audio Cube
No matter what kind of genre or sense of impact you’re creating, every sound in your mix must have its own unique space in the Audio Cube. What is the Audio Cube? It’s a fictional way of looking at what your mix sounds like. A mix has a multi-faceted existence including structural placement, frequency relationships and distance. These distinctions allow us to have a holophonic potential for answering the “where does this sound go” question. Think of your mix as this cube, the sum of which holds the sound of your mix. Sounds toward the front of the cube are more present: less reverb, more compression. Sounds toward the back are very reverberant and are difficult to hear distinctly. Sounds toward the left of the cube sit in the left speakers, and sounds toward the right of the cube sit in the right speakers. Sounds toward the top of the cube represent the high end of the frequency spectrum, and those at the bottom represent the low frequencies. Lastly, the louder a sound is, the brighter and easier it is to “see” in the mix. Thus, by altering the frequencies, pans and presence of a sound, a mixer can place any sonic signal in any number of positions for any number of reasons. This also works in surround environments; instead of being in front of the cube, you’re in the middle of it.
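If it helps to see the metaphor as code, here’s a hypothetical sketch of how each cube axis could translate into a concrete mix move. The axis ranges, names and mappings are entirely my own invention, not any standard:

```python
from dataclasses import dataclass

@dataclass
class CubePosition:
    """A sound's spot in the Audio Cube, each axis from -1.0 to +1.0
    (my own hypothetical parameterization of the metaphor)."""
    x: float  # left (-1) to right (+1)  -> pan position
    y: float  # low (-1) to high (+1)    -> frequency emphasis
    z: float  # back (-1) to front (+1)  -> presence

def cube_to_mix_moves(pos: CubePosition) -> dict:
    """Translate a cube position into the tools that realize it."""
    return {
        "pan": pos.x,                           # pan pot setting
        "eq_emphasis_hz": 200 * 4 ** (pos.y + 1),  # ~200 Hz up to ~3.2 kHz
        "reverb_send": (1 - pos.z) / 2,         # further back = wetter
        "compression": (1 + pos.z) / 2,         # further forward = tighter
    }

# A lead vocal: dead center, upper-mid emphasis, right up front.
print(cube_to_mix_moves(CubePosition(x=0.0, y=0.5, z=0.9)))
```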
This cube example is a perfect algorithm, or set of procedures, for mixing in film and music. We use tools like EQ to shape the “footprint” of a sound on the vertical axis of the cube. We use reverb and compression to shape the presence of a sound front-to-back. The example of the Audio Cube shown here demonstrates a musical experience. So, for example, if your bass drum sound and your electric bass sound seem to take up the same space, you have to decide who needs to move where. Is the bass drum too woofy? Too sharp? Does the nature of the genre lend itself to a hard or “subby” electric bass sound? All of these binary questions give you clues as to where to place your instruments or sounds in your mix. Notice the guitar and the lead vocal in the same example. Now imagine a searing sound effect against dialog in a similar fashion. You would need to carve the frequencies of the dialog (the lead vocal) out of the sound effect (the guitar). “Mark, doesn’t that mess with the sound of the SFX?!” Yep. But the crazy thing about the brain is that it’ll automatically replace what’s missing as long as the higher overtones of the sound are present. In other words, you want to avoid carving out the highest frequencies of a sound and carve out anything else instead. Fortunately, dialog, which is usually what needs to take the “center stage” of a mix, doesn’t require the very highest frequencies to be carved out of another sound, because those frequencies aren’t critical to the voice (most of the time). So, what happens in this case is: the sound of the dialog is carved out of the sound effect, the brain makes up the difference, and both the sound effect and the dialog can be strong in the mix – while the dialog is easily able to be heard. Dope.
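Here’s a sketch of that carve, assuming the dialog’s critical presence band sits roughly around 1–4 kHz (a common rule of thumb, not something measured from any particular mix). The dip is broad in the mids and leaves the SFX’s highest frequencies alone:

```python
import numpy as np
from scipy.signal import lfilter, freqz

def peaking_dip(fs, f0, gain_db, q):
    """Coefficients for an RBJ peaking biquad used as a cut (gain_db < 0)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48_000
sfx = np.random.randn(fs * 2)          # stand-in for a searing sound effect
# Carve a broad 6 dB pocket around 2 kHz so the dialog can live there.
b, a = peaking_dip(fs, f0=2000, gain_db=-6.0, q=0.7)
carved = lfilter(b, a, sfx)

# Sanity check: about -6 dB at 2 kHz, essentially flat again well above it,
# so the SFX's high overtones survive and the brain fills in the dip.
w, h = freqz(b, a, worN=4096, fs=fs)
```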
This is a drastic example, and most of the time you can position sounds free of each other just by panning or judicious EQ contouring. Know this: if you have two sounds which cannot be separated into distinct places, then you have a war. And the only way that war can be won is with a sledge-hammer fader – and one sound will lose.
The final point to keep in mind is that it is extremely rare to pull up a channel and have it sound perfect. An individual instrument or signal almost always has frequencies sticking out and making it difficult to mix. It is always good practice to get good sounds from each of your clips or tracks before attempting to blend them with the rest of the mix. The same paradigm applies to an individual channel’s audio footprint as when you integrate it into the overall mix. Try to fix not only frequency and compression problems at the micro level (one channel/instrument at a time), but also think about where you want that sound’s footprint to sit. If you “premix” your individual channels/instruments in this way, as sketched below, it makes the process of integrating them in a mix much easier.
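As one possible shape for that premix pass, here’s a sketch under my own assumptions: an 80 Hz rumble roll-off and a crude instantaneous compressor with no attack/release smoothing. Real settings come from listening, not from these placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def premix_channel(x, fs, hpf_hz=80.0, thresh_db=-18.0, ratio=3.0):
    """A bare-bones per-channel premix: roll off rumble, then tame peaks,
    so the channel arrives at the mix already well-behaved."""
    # 1. High-pass the sub-bass rumble that eats headroom.
    sos = butter(2, hpf_hz, btype="highpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    # 2. Crude sample-by-sample compression above the threshold
    #    (a real compressor would add attack and release smoothing).
    level_db = 20 * np.log10(np.maximum(np.abs(y), 1e-9))
    over_db = np.maximum(level_db - thresh_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return y * 10 ** (gain_db / 20)

fs = 48_000
channel = np.random.randn(fs)          # stand-in for one raw channel
ready = premix_channel(channel, fs)    # easier to place in the cube now
```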
4. Reverb Is As Important As EQ and Compression
Don’t forget that although EQ and compression are powerful tools – your most powerful tools – for moving sounds around in the Audio Cube, reverb acts just as powerfully for sending sounds backward, free and clear of loud forward sounds. In fact, sometimes you can push the fader louder on sounds which have been given a rearward presentation, and some wonderful mix surprises result.
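Here’s a sketch of that send-it-backward move, faking a room with a synthetic exponentially decaying noise burst as the impulse response. A real mix would use a proper reverb plugin or a measured IR, and the distance parameter is my own framing:

```python
import numpy as np

def push_back(dry, fs, distance=0.5, decay_s=1.2, seed=0):
    """Fake front-to-back depth by blending in a synthetic reverb tail.
    distance: 0.0 = right up front (all dry), 1.0 = far back (all wet)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * decay_s)) / fs
    ir = rng.standard_normal(t.size) * np.exp(-t / (decay_s / 4))
    ir /= np.sqrt(np.sum(ir ** 2))            # normalize tail energy
    wet = np.convolve(dry, ir)[: dry.size]
    return (1 - distance) * dry + distance * wet

fs = 48_000
ambience = np.random.randn(fs)                # stand-in for an ambience bed
far_away = push_back(ambience, fs, distance=0.8)  # deep in the cube
```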
5. The Size of the Cube Warps with Presentation Volume
The available space in the Audio Cube literally warps depending on how loud your mix is. Ironically, it’s backward from what you’d think: the louder the presentation, the smaller the cube. “Mark, what!? Why?!” It’s the nature of how the brain is always lying to you, and how the brain’s perception of sound dramatically shifts as sound pressure levels get louder. The louder a sound is, the more the brain perceives a “loudness contour” effect (the equal-loudness, or Fletcher-Munson, phenomenon). Loudness contour is the idea that highs and lows are enhanced, leaving mids feeling softer. This is actually a great thing as an audience member in an 85 dB dialog-norm theater, because it’s a lot more FUN!! But as a mixer who is mixing in a studio at 85 dB dialog norm, you have to know that human brains also compress those highs and lows neurologically, so the dynamic range is severely reduced. Is it actually reduced? Nope. But after 60 seconds of 85 dB, your audience’s brains will begin shutting down their ears’ sensitivity, and it’ll be a while before they get it back. The good news is that this loss of sensitivity only occurs on the vertical axis of the cube. In other words, your Audio Cube is going to get shorter. Pan and depth are still very much intact. So be aware: when you’re mixing at a strong volume, there will be a sense of “bunching up” in the highs and lows, and you’ll have to try extra hard to keep separation.
What’s worse is that if the 85 dB mix is supposed to also play at 75 dB, it’s critical that you test your mix at 75 dB and see if you have the same separation and impact. Of course, if the 75 dB mix is for the internet, then you also need to take into account that you’ll actually lose all frequencies below 85 Hz, because no one’s iPhone or computer speaker will output those frequencies – a shortened vertical axis again.
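You can audition that shortened axis before it surprises you. Here’s a rough sketch that simulates a small speaker with a steep high-pass at the 85 Hz mark from above; it’s a crude stand-in, not a model of any real device:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def small_speaker_check(mix, fs, cutoff_hz=85.0):
    """Crudely simulate a phone/laptop speaker by discarding everything
    below cutoff_hz, then listen for what survives."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, mix)

fs = 48_000
mix = np.random.randn(fs * 10)          # stand-in for a 10-second bounce
phone_version = small_speaker_check(mix, fs)
# Listen: does the bass-driven impact still read with nothing below 85 Hz?
```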
The way to solve all of this is to test your mix on every version of its possible presentation and make a careful “average” so that it works on all of them. I seek to mix at 80 dB dialog norm, because it tends to translate the best “average” to internet, DVD/BRD and theatrical presentation. It also saves the heck out of my hearing.
Whether Hollywood mixers know it overtly or only intuitively, they’re always taking these 5 things into account in their mixes (along with 400 other things). But if you can keep these bits in mind as you do your mix, you’ll be a quarter mile ahead of other independent filmmakers who push faders. Blech.
Had your own experience with the Audio Cube? Let us know or Tweet about it!