EDUCATIONAL INFO

Learning Materials For The Aspiring Music Producer & Engineer

Enhancing Rhythm and Musical Tonality Through Equalization

The three fundamental elements of music are melodic structure, harmonic structure, and rhythmic structure. All three elements appear as distinct characteristics of the audio waveform that can be manipulated at the mixing stage to alter and/or enhance an instrument's role in the performance. One of the most fundamental of these processes is equalization of the original waveform.

Among engineer/producers these days, the approach to equalization varies widely. One common approach is to make every instrument sound as good as possible by EQ'ing it in solo. Another is to follow other people's EQ recommendations or, at times, simply to guess. In too many cases, the type of EQ curve, the Q (bandwidth), and the amount of gain or cut are simply not considered. I have often seen sessions where every instrument was EQ'd to sound its best on its own, yet the overall mix would have sounded better if no EQ had been applied and the engineer had simply balanced the levels of the instruments. What is happening in this situation is that the engineer is EQ'ing each instrument with no clear strategy or vision of the final mix.

Another argument I hear from up-and-coming engineers is that analog EQ is always superior to digital. This is simply not true, for many digital EQ plug-ins have come a long way in quality and especially in versatility. Digital notch filters and de-essers, for example, are vastly superior to their analog counterparts.

The biggest dilemma I see today is that very few engineers take the time to understand what role an instrument is playing in the production. Is the guitar performing a rhythmic, melodic, or harmonic role, or a combination of these? What role are the keyboards playing? In most productions, certain instruments will perform specific musical roles that have been worked out in pre-production. For example, a fingerpicked guitar part may perform a rhythmic role, the grand piano a harmonic role, and the saxophone a melodic role. To establish each instrument's particular role, the equalization applied should enhance the role that instrument is performing.

What will be presented in this handout is how to enhance, and at times alter, an instrumental performance for a designed role in a production. The goal is to take a performance and shape it for a melodic, rhythmic, and/or harmonic function through equalization.

When mixing, a good practice is to first set all the faders to a good balance without any EQ or compression. It is never a good idea to jump into a mix and start soloing and equalizing instruments individually. It is imperative to EQ and then always return to the full mix to confirm that the EQ is filling the desired need. Seasoned engineers are more successful at equalizing audio at the beginning of a mix because they already know exactly how the final mix should sound. Once a mix is roughly balanced, the engineer will hear where EQ is actually needed.

When the engineer starts to EQ immediately, the overall mix will usually end up sounding too bright. Many engineers start by adding mid-range and top end to the drums, then add more mid-range and top end to the guitars. By the time the lead vocal needs to be treated and balanced, it will sound incredibly dull in relation to the previously equalized instruments. When this occurs, the engineer will at times add 8dB to the top end and 6dB to the mid-range of the vocal to make it match the mix. This is a perilous way to mix, because the engineer has now split the lead vocal into two audio elements, with the presence of the vocal totally dominating its resonance!

First we will look at sections of the frequency bandwidth and the role each plays in relation to its musical function.

20Hz-40Hz

In the 20Hz-40Hz range we hear the sub low end, where the listener tends to "feel" rather than hear the frequencies. This is the range that dance clubs and heavy rock use to maximize the low-end punch of a kick drum and the boom of a bass, usually a synth bass. It is also the range that sound designers like to utilize when generating and enhancing a related visual sound effect such as an earthquake, collision, thunder, or rumble.

This range needs to be rolled off when the audio has been recorded through a microphone, especially a condenser mic, in order to reduce undesirable rumble and plosives and to remove unwanted low frequencies from vocals and acoustic instruments.
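
For readers who want to try this roll-off outside a DAW, here is a minimal sketch using SciPy, assuming a 16-bit mono recording loaded from disk; the 40Hz cutoff, filter order, and file names are illustrative choices, not fixed rules.

```python
# Sketch: roll off sub-low rumble/plosive energy below ~40 Hz with a high-pass filter.
# The cutoff, filter order, and file names are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("vocal_take.wav")          # hypothetical mono recording
audio = audio.astype(np.float64) / 32768.0            # assume 16-bit PCM, scale to +/-1.0

# 4th-order Butterworth high-pass at 40 Hz, run as second-order sections
sos = butter(4, 40, btype="highpass", fs=rate, output="sos")
cleaned = sosfiltfilt(sos, audio)                     # zero-phase filtering of the whole take

wavfile.write("vocal_take_hp.wav", rate, cleaned.astype(np.float32))
```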

The Fundamental Range of Music: 40Hz-2kHz

The frequency range between 40Hz and 2kHz is where most of the musical tonality resides. The sound characteristics associated with it are "body", "music", and "pitch recognition". Almost all musical fundamentals used in pop and rock are in this range. This is where the sustain and resonance of an instrument reside, such as an acoustic guitar strum or a piano chord. (In the diagram this is the frequency content of the "C" and "D" sections.)

The Audio Waveform

40Hz-200Hz

In the 40Hz-200Hz range is the audible low end, where we can hear the fundamental bass note, musically known as the "root". This is where the fundamentals of the bass guitar strings sit, with the open "E" at roughly 41Hz. The low notes of the bass guitar are reasonably noticeable in this range, but they are also heard through their higher overtones in the 80Hz-200Hz range and, at times, in even higher frequency ranges.
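
To see where that 41Hz figure comes from, the fundamentals of the open strings can be computed from equal temperament with A4 = 440Hz; a quick sketch follows, where the semitone offsets are simply the open strings of a standard 4-string bass.

```python
# Equal-tempered fundamentals of the open strings of a 4-string bass (A4 = 440 Hz).
# Shows why the open E sits near 41 Hz and the other strings fall in the 40-200 Hz range.
A4 = 440.0

def note_freq(semitones_from_a4: int) -> float:
    """Frequency of a pitch the given number of semitones above/below A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

open_strings = {"E1": -41, "A1": -36, "D2": -31, "G2": -26}  # semitone offsets from A4
for name, offset in open_strings.items():
    print(f"{name}: {note_freq(offset):5.1f} Hz")
# E1 = 41.2 Hz, A1 = 55.0 Hz, D2 = 73.4 Hz, G2 = 98.0 Hz
```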

200Hz-800Hz

In the 200Hz-800Hz range we have the low-frequency musical fundamentals. This is where the tonal sound of an instrument resides, especially acoustic instruments like the guitar and piano. This is also the range that gives a strong sense of tonality and body to a vocal. It is also where many unrelated overtones of a kick drum sit, which is why a lot of mixers prefer to lower the frequency content of this range. When an artist asks the engineer to add more body or music to their instrument, this is the range that is equalized. It is also the range that can easily build up and become cloudy when there are a lot of instruments in the production.

800Hz-2kHz

In the 800Hz-2kHz range are the higher-frequency musical fundamentals. This is where the tonal sound of electric instruments resides, such as electric guitars and synthesizers. A vocalist congested with a cold will generate a very nasal sound in this range. This is also where much of the frequency content of dialogue sits, and it is a good range for a dynamic electric bass guitar sound when the bass is played with a pick.

The Presence Range: 2kHz-15kHz

The frequency range between 2kHz and 15kHz is where most of the sonic presence resides. This is also the area where rhythm is defined in a performance. Almost all of the frequency content of the attack of drums, piano, guitars, etc. resides in this range. Looking at the diagram, it can be concluded that the audio content in the "A" and "B" sections sits in a frequency range above 2kHz.

2kHz-5kHz

The 2kHz-5kHz range is considered low mid-range. This is where the midrange of electric instruments resides. A boost in this range will give an electric guitar presence and will articulate a rhythmic idea. Also affected in this range are the attack of drums, rock vocals, the pick sound of a bass guitar, and the attack of an electric keyboard like a Fender Rhodes.

5kHz-10kHz

In the 5kHz-10kHz range we have the high mid-range. This is where the midrange of a lot of acoustic instruments resides. A boost in this range will give presence to an acoustic guitar or piano and help define its rhythmic role in a performance. For rock singers who do not have a lot of high-frequency content in their voices, this is a range to emphasize. Cymbals sound trashy and harsh in this area, and a lot of vocal sibilance sits in this range.

10kHz-15kHz

In the 10kHz-15kHz range we have the top end. This is where the overtones reside that lend an instrument its brilliance and sheen. This is the range to enhance when condenser microphones are used as drum overheads. A boost in this range will give an acoustic guitar a sparkling sound and make lead vocals sound very intimate. It is also a good area to add an "air" quality to string and woodwind instruments.

Above 15kHz

Above 15kHz we have the ultra top end. This is what some engineers call the "air and silkiness" range.

Music as it relates to frequency range is easily determined and recognized by the ear. As previously stated, most musical tone content exists below 2kHz and above 40Hz. Once we start to climb above 2kHz we get into higher harmonics with a noise-like quality. Anything below 40Hz is extraneous, unwanted low end, and anything above 15kHz is usually for the dogs.

Equalization

If we look at the waveform of a piano note we will notice that the initial audio we hear is mostly transient "noise" in the mid-range and high end. This is principally a percussive sound ("A", the attack) consisting mostly of noise and non-harmonic elements (frequencies above 2kHz). This is the part of the waveform that establishes the rhythm.

What I describe as the noise elements generated when a piano is played are the physical sound of the hammer hitting a metallic object and the very first vibration of the string(s), before any recognizable musical pitch has been produced. This is the frequency content created when two hard objects come into contact with each other. The same process occurs when a drumstick first strikes a drum, or when a guitar pick hits the string.

The next audio we hear is the string beginning to vibrate, the initial creation of the fundamental note and the first series of overtones. The frequencies of the tonal characteristics in this section are mostly below 2kHz ("B" and "C" sections). The soundboard is initially excited and starts to resonate.

The next section of sound occurs as the soundboard vibrates sympathetically, producing frequencies mostly below 2kHz ("C" section). The strings are still vibrating and continue to excite the soundboard while the volume slowly dissipates.

The last sound we can identify is the reverb of the enclosed environment, where almost all of the frequency content is below 1kHz ("D" section). If the environment is large and very reflective, the reverb decay can last for seconds. If the environment is small and dead sounding, the decay will be very short, well under a second.

So to conclude, the attack of the note is generally composed of mid-high frequencies above 2kHz, while the musical tonality and resonance are mainly composed of frequencies below 2kHz.
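
A practical way to hear this split for yourself is to divide a recorded note at roughly 2kHz and audition the two halves separately. A rough sketch using SciPy follows; the crossover frequency, filter order, and file name are my assumptions, not a fixed recipe.

```python
# Sketch: split a recorded note at ~2 kHz so the "attack" band (above 2 kHz) and the
# "tonality" band (below 2 kHz) can be auditioned separately.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, note = wavfile.read("piano_note.wav")         # hypothetical mono piano note
note = note.astype(np.float64) / 32768.0            # assume 16-bit PCM, scale to +/-1.0

crossover = 2000  # Hz, the dividing line discussed above
low_sos = butter(4, crossover, btype="lowpass", fs=rate, output="sos")
high_sos = butter(4, crossover, btype="highpass", fs=rate, output="sos")

tonality = sosfiltfilt(low_sos, note)   # body, pitch, resonance ("B"-"D" sections)
attack = sosfiltfilt(high_sos, note)    # percussive transient content ("A" section)

wavfile.write("piano_tonality.wav", rate, tonality.astype(np.float32))
wavfile.write("piano_attack.wav", rate, attack.astype(np.float32))
```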

If you want to enhance a musical idea in a production with more pitch-based elements, add EQ in the 200Hz-2kHz range. Set a "Q" factor (bandwidth) of approximately 1 and sweep the center frequency through that range until you find the sweet spot where the central notes of the instrument stand out. This will add music and body to the sound, enriching the pitch characteristics of the instrument.
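
As a rough illustration of that sweep, here is a minimal peaking-EQ band built from the widely used Robert Bristow-Johnson "Audio EQ Cookbook" biquad formulas; the candidate center frequencies, +3dB gain, and noise test signal are illustrative assumptions, and in practice you would audition each setting on the actual track rather than read numbers off a screen.

```python
# Sketch: a peaking EQ band (RBJ "Audio EQ Cookbook" biquad) swept through the
# 200 Hz - 2 kHz fundamental range with Q = 1. Centers, gain, and the noise test
# signal are illustrative; on a real mix you would listen rather than print numbers.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(center_hz, gain_db, q, fs):
    """Biquad coefficients (b, a) for a peaking EQ band."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * center_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
rng = np.random.default_rng(0)
test_signal = rng.standard_normal(fs)            # one second of noise as a stand-in track
for center in (250, 400, 630, 1000, 1600):       # candidate "sweet spot" centers to sweep
    b, a = peaking_eq(center, gain_db=3.0, q=1.0, fs=fs)
    boosted = lfilter(b, a, test_signal)
    change = 20 * np.log10(np.std(boosted) / np.std(test_signal))
    print(f"{center:5d} Hz boost applied, overall level change {change:+.2f} dB")
```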

If you want the instrument to sound more rhythmic, add EQ in the 2kHz-10kHz range. Electric instruments will sound more rhythmic in the 2kHz-5kHz range, and acoustic instruments will sound more rhythmic in the 5kHz-10kHz range. This frequency enhancement will promote the rhythmic role of an instrument's performance ("A" and "B" sections).

It is also important to mention that cutting frequencies in certain ranges also aids in shaping the role of an instrument's performance. For example, if I wanted to enhance the musical role of an electric guitar, I could boost the 400Hz-1.4kHz range by +3dB and dip all of the frequency content above 2kHz by -3dB. If I wanted to enhance the rhythmic role of an electric guitar, I could dip the 400Hz-1.4kHz range by -3dB and boost the frequency content between 2kHz and 5kHz by +3dB.
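
If you want to translate that 400Hz-1.4kHz boost into a single band on a parametric EQ, the geometric-mean center and the Q = center/bandwidth relationship give a reasonable starting point; treating 400Hz and 1.4kHz as the band edges is my simplifying assumption.

```python
# Quick check: translate the 400 Hz - 1.4 kHz "musical" boost into one peaking band.
# Treating 400 Hz and 1.4 kHz as the band edges is a simplifying assumption.
import math

low_edge, high_edge = 400.0, 1400.0
center = math.sqrt(low_edge * high_edge)    # geometric-mean center frequency
q = center / (high_edge - low_edge)         # Q = center frequency / bandwidth

print(f"center = {center:.0f} Hz, Q = {q:.2f}")   # about 748 Hz at a Q of roughly 0.75
```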

When dealing with the equalizer's "Q" (bandwidth), the higher the center frequency of the band you are working with, the wider the bandwidth should be (a Q of roughly 0.7-1.0). It is only when the center frequency drops below 2kHz that we start to tighten the Q (roughly 1.0-1.2).

In the low frequency range a tighter Q is often desirable. However, when equalizing a bass guitar the engineer must be careful not to make the Q so narrow that only one or two bass strings are affected. If the engineer wants the bass guitar to sound fuller in the low end, he should use a center frequency of 140Hz with a Q (around 1.0) wide enough to also reasonably affect 80Hz and 200Hz.
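
As a quick sanity check on that 140Hz example, the standard relationship between Q and band edges puts a Q of 1.0 at roughly the span described above; exact behavior varies with each equalizer's Q convention, so treat the numbers as approximate.

```python
# Sanity check: band edges of a peaking filter centered at 140 Hz with Q = 1.0,
# using f_edge = f0 * (sqrt(1 + 1/(4*Q^2)) -/+ 1/(2*Q)).
# Exact edges vary with each equalizer's Q convention; treat these as approximate.
import math

f0, q = 140.0, 1.0
k = math.sqrt(1 + 1 / (4 * q * q))
low, high = f0 * (k - 1 / (2 * q)), f0 * (k + 1 / (2 * q))

print(f"band spans roughly {low:.0f} Hz to {high:.0f} Hz")   # about 87 Hz to 227 Hz
```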

If an engineer desires to add more mid-range to an electric guitar to enhance its rhythmic role and give the sound a sense of presence, then he should find a sweet spot between 2kHz and 5kHz (Q = 0.8). If the engineer discovers he is using a lot of gain, he should tighten the Q (to around 1.0) and use less gain. Note that in some equalizers the Q automatically narrows as the gain is increased.