Today, I will continue discussing the types of EQ from the previous post, focusing on Parametric EQ, Dynamic EQ, and Baxandall EQ.
5) Parametric EQ
Parametric EQ was designed by the renowned engineer George Massenburg. It is commonly found as a built-in EQ in DAWs.
This type of EQ allows you to select the frequency you want to adjust and set the Q Factor, which determines the bandwidth of the adjustment.
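To make the Q Factor concrete, here is a minimal sketch (my own illustration, not any particular plugin's math) of the standard relation: bandwidth = center frequency / Q, with the center frequency being the geometric mean of the two -3dB band edges.

```python
import math

def band_edges(f0, q):
    """Return the lower and upper -3dB edges of a peaking band.

    Uses the standard relation bandwidth = f0 / Q, with the center
    frequency f0 being the geometric mean of the two edges.
    """
    half = 1.0 / (2.0 * q)
    factor = math.sqrt(1.0 + half * half)
    return f0 * (factor - half), f0 * (factor + half)

# A 1 kHz band with Q = 2 covers a 500 Hz bandwidth.
lo, hi = band_edges(1000.0, 2.0)
print(round(hi - lo))             # bandwidth in Hz
print(round(math.sqrt(lo * hi)))  # geometric mean lands back at f0
```

A higher Q therefore means a narrower, more surgical adjustment; a lower Q means a broader, more musical one.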
These are examples of Parametric EQs!
Parametric EQ is used not only for musical purposes but also for addressing issues with sound sources. It can handle problems such as proximity effect from microphones, Singer’s Formant (a specific resonance found in vocalists), room resonance, and high-pitched squeaking noises from strings.
6) Dynamic EQ
Dynamic EQ combines the functions of a compressor with an EQ. It works similarly to a multiband compressor but with slight differences in how bands are set and how it operates.
A multiband compressor splits the signal into bands using crossover (cut) filters, while a Dynamic EQ defines its bands according to ordinary EQ settings.
As mentioned in the previous post, cut filters cause a phase shift of π/2 for every 6dB/oct. This means a multiband compressor can color the sound through phase changes even before any compression is applied.
Most Dynamic EQs do not let you set the ratio, attack, and release the way a compressor does. Typically, you can only set the threshold and the amount of gain reduction or boost.
Because of this, the compression curve of a Dynamic EQ can appear different. The pink curve is from Fabfilter’s Pro-Q3, and the red curve is from Initial Audio’s Dynamic EQ. Although both are Dynamic EQs, their behavior is quite different.
Interpreting Fabfilter’s unique curve, it seems to compress the sound up to a certain point, after which it stops compressing and simply reduces the volume of the sound.
In simpler terms, for Pro-Q3, if the sound exceeds a certain level, it follows the dynamic curve (green) rather than the set yellow curve, acting more like a simple EQ.
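The behavior described above can be sketched in a few lines. This is a deliberately simplified, static model of my own (no attack/release smoothing, and real plugins shape the curve differently): the band is attenuated by however far the signal exceeds the threshold, but never by more than a fixed range, after which the curve runs parallel to unity like a plain EQ cut.

```python
def reduction_db(level_db, threshold_db, range_db):
    """Simplified Dynamic EQ band: reduction grows with the amount the
    signal exceeds the threshold, but is capped at `range_db`, after
    which the EQ behaves like a static cut."""
    overshoot = max(0.0, level_db - threshold_db)
    return min(overshoot, range_db)

print(reduction_db(-30.0, -20.0, 6.0))  # → 0.0  (below threshold: no change)
print(reduction_db(-16.0, -20.0, 6.0))  # → 4.0  (4 dB over: 4 dB reduction)
print(reduction_db(-5.0, -20.0, 6.0))   # → 6.0  (far over: capped at the range)
```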
However, their usage is generally similar. Dynamic EQs are often used when you want to reduce specific frequency bands. You can choose based on your preference.
7) Baxandall EQ
The Baxandall EQ was published royalty-free by Peter Baxandall in 1952 in Wireless World magazine, which helped it become widely used.
Due to its minimal phase issues, it is frequently used in mastering. It’s effective for adjusting broad frequency ranges, such as lifting high or low frequencies.
Examples include Tilt EQ and Dangerous’s Bax EQ.
This concludes the overview of various types of EQ. Besides these functional classifications, there are also categories like Zero Latency/Linear Phase/Normal EQs, and whether they are used for musical (Tone Shaping) or technical (Surgical) purposes.
I will continue with these topics in the next post.
Hello, this is Jooyoung Kim, an engineer and music producer.
There are numerous types of EQs available.
Today, I will describe some of these EQs.
Cut Filter, Band Pass Filter
Shelving EQ
Notch Filter
Graphic EQ
The first four are covered in today’s post; the rest will follow next time:
Parametric EQ
Dynamic EQ
Baxandall EQ
1) Cut Filter, Band Pass Filter
Cut filters are quite common and widely used. Low Cut and High Cut filters are frequently applied.
Low Cut filters are used to reduce low-frequency noises like vibrations from the floor or other low-frequency disturbances.
High Cut filters reduce high frequencies to create a lo-fi sound or to achieve a specific sound characteristic.
Low Cut filters are also known as High Pass filters because they let higher frequencies pass through. Similarly, High Cut filters are known as Low Pass filters.
The steepness of the reduction is usually labeled in -6dB per octave (-6dB/oct) steps, or in Poles (1 Pole equals -6dB/oct). Typical slopes are -6dB/oct, -12dB/oct, -18dB/oct, -24dB/oct, and so on.
While not exactly the same, a Band Pass filter can be thought of as a combination of these two filters.
These filters significantly alter the phase.
The phase shift graph above shows the phase change when a -12dB/oct Low Cut filter is applied. You can see a phase shift of π (3.14) in the low-frequency range.
Comparing this with other phase graphs, you will realize that this is quite a significant phase shift. A large phase shift means that the sound will be quite different from the original. Therefore, using Cut filters indiscriminately can result in a sound that is far from the intended one.
I have previously discussed issues caused by phase cancellation.
Each Pole causes a phase shift of π/2. Using a steep Low Cut filter like -24dB/oct can result in a phase shift of up to 2π, so it’s generally not recommended to use it excessively.
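The π/2-per-Pole figure can be checked numerically. Below is a small sketch of mine using cascaded identical one-pole high-pass sections (a real -12dB/oct Butterworth spaces its poles slightly differently, but the low-frequency limit is the same): each pole contributes atan(fc/f) of phase shift, which tends to π/2 as the frequency drops toward zero.

```python
import math

def highpass_phase(f, fc, poles=1):
    """Phase shift (radians) of `poles` cascaded one-pole high-pass
    sections with cutoff `fc`. Each section contributes atan(fc / f),
    approaching pi/2 as f -> 0."""
    return poles * math.atan2(fc, f)

fc = 100.0
# Far below cutoff, a -12 dB/oct (2-pole) low cut approaches pi:
print(highpass_phase(1.0, fc, poles=2))
# A -24 dB/oct (4-pole) slope approaches 2*pi:
print(highpass_phase(1.0, fc, poles=4))
# At the cutoff itself, each pole contributes pi/4:
print(highpass_phase(fc, fc, poles=2))
```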
However, use it when necessary.
2) Shelving EQ
Shelving EQ, also known as Shelving Filter, adjusts the volume of frequencies in a shelf-like shape, as the name suggests.
It is used to lift or lower an entire frequency band.
As shown in the image above, Shelving EQs cause less phase shift, making them a good alternative to Cut filters.
3) Notch Filter
Notch filters can be used to eliminate resonances that are difficult to control with other EQs or to create specific musical effects.
It is quite rare to use Notch filters in mixing. They are typically used for problematic sources that are hard to manage otherwise. I personally use them perhaps once a year in mixing.
In music production, Notch filters can be used on synthesizers to create interesting effects by modulating frequencies over time with an LFO.
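The LFO-swept notch can be sketched very simply. This is my own illustration with made-up parameter values, not any synth's actual modulation code: the notch's center frequency just follows a sine LFO.

```python
import math

def notch_sweep(f0, depth_hz, rate_hz, t):
    """Center frequency of a notch modulated by a sine LFO.
    f0 and depth_hz in Hz, rate_hz in Hz, t in seconds."""
    return f0 + depth_hz * math.sin(2.0 * math.pi * rate_hz * t)

# A notch centered at 800 Hz, sweeping +/-400 Hz twice per second:
for t in (0.0, 0.125, 0.25, 0.375):
    print(round(notch_sweep(800.0, 400.0, 2.0, t)))
```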
4) Graphic EQ
A Graphic EQ offers a row of bands at fixed frequencies, each with its own gain slider. With a Constant Q design, the bandwidth (Q) of a band stays the same as you change its gain. With a Variable (Non-Constant) Q design, the bandwidth changes along with the gain adjustment.
These internal settings are usually described in the manual, so it’s best to read it for proper usage.
The phase shift is minimal. The common Bell-type Parametric EQ, which I will explain next time, also changes phase in a similar way.
In studio mixing, Graphic EQs are rarely used because parametric EQs are more convenient. Still, knowing how they behave can be useful, especially if you also do live mixing.
Describing Parametric EQ, Dynamic EQ, and Baxandall EQ would make this post too long, so I will continue in the next article.
The main point I wanted to convey today is the importance of considering phase changes when using EQs.
If the sound is different from what you intended after adjusting the frequencies, it is often due to phase changes.
If it sounds good to your ears, that’s what matters. However, understanding what to watch out for and why can lead to more efficient and faster decision-making.
Hello, this is Jooyoung Kim, engineer and music producer.
Today, I want to talk about panning, which controls the left and right placement of instruments. To explain panning, let’s first discuss how to create a stereo image.
Creating a Stereo Image
Stereo imaging starts with microphone recording techniques. On the left, we have AB stereo miking, and on the right, we have XY stereo miking.
AB stereo miking forms a stereo image by utilizing the time difference between sounds arriving from the left and right. In contrast, XY stereo miking relies on the volume difference between sounds arriving from the left and right.
Inspired by this, panning moves the audio source left and right by manipulating volume differences, much like the XY stereo miking method.
A question may arise: How do we create these volume differences to achieve panning? This is defined by the Pan Law.
Pan Law
Pan Law settings in DAWs typically include 0dB, -3dB, -4.5dB, and -6dB.
The reason for these settings: if you pan an instrument by simply lowering the volume of one side, the overall perceived level drops as the instrument moves off-center. That is what happens when the Pan Law is set to 0dB.
With -3dB, the center is attenuated by 3dB, so the perceived loudness stays consistent as you pan a source left or right.
For -4.5dB and -6dB, the center volume is reduced by the respective amounts, making the sound appear louder as it is panned left or right.
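Here is a minimal sketch of one common -3dB (constant-power) implementation, using sine/cosine gains. This is my own illustration; each DAW's exact curve differs.

```python
import math

def pan_gains(pan):
    """Constant-power (-3 dB pan law) left/right gains for pan in
    [-1, 1] (-1 = hard left, 0 = center, +1 = hard right)."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

def to_db(gain):
    return 20.0 * math.log10(gain)

left, right = pan_gains(0.0)
print(round(to_db(left), 2))   # → -3.01 (each side sits ~3 dB below full scale)
l, r = pan_gains(1.0)          # hard right: left goes (essentially) silent
print(round(r, 6), round(l, 6))
```

Because the two gains satisfy left² + right² = 1 at every position, the total acoustic power stays constant as the source moves, which is exactly why the center ends up 3dB down.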
This might sound complicated, but there’s no need to overthink it. Just be aware that there are various panning settings.
In practice, adjusting the volume balance while panning is common, so you don’t need to worry too much about it.
Haas Effect
I also want to discuss the Haas Effect. As mentioned earlier, AB stereo miking creates a stereo image by the time difference in sounds arriving at two microphones.
Similarly, what happens if the same sound is played with a time delay between the left and right speakers? The answer is that it will sound biased towards the side that plays first.
This technique can make mono sources sound like stereo. However, from my experience, recording a double take sounds more natural and fuller than creating a stereo image with the Haas Effect.
While it’s useful to know, it’s generally better to use this technique only when necessary.
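The Haas trick itself is just a short delay on one channel. A minimal sketch (my own toy example operating on raw sample lists, assuming a 48 kHz sample rate):

```python
# Haas-effect sketch: duplicate a mono signal and delay one channel by a
# few milliseconds so the image pulls toward the earlier (left) side.
SAMPLE_RATE = 48000

def haas_widen(mono, delay_ms):
    """Return (left, right) sample lists where the right channel is the
    same signal delayed by `delay_ms` milliseconds."""
    delay_samples = int(SAMPLE_RATE * delay_ms / 1000.0)
    left = mono + [0.0] * delay_samples   # pad so both channels match
    right = [0.0] * delay_samples + mono
    return left, right

mono = [0.5, 0.25, -0.25]
left, right = haas_widen(mono, 15.0)  # 15 ms sits in the typical Haas range
print(len(right) - len(mono))         # → 720 samples of pre-delay at 48 kHz
```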
There are various ways to express panning.
For example, Logic uses a scale from -64 to +63, while Cubase and Pro Tools use -100 to +100. Some DAWs use clock-face representations.
There’s also Balance panning which, for stereo sources, adjusts the relative level of the existing left and right channels rather than repositioning the source.
When you send a panned source to another destination via a Send, the channel’s panning is not automatically applied. For this reason, DAWs provide a Send Panning function so the signal can be sent with the panning applied.
Hello! I’m Jooyoung Kim, a mixing engineer and music producer. In this post, I’ll be sharing the process behind my single album released in 2021. Although the timeline might feel a bit off since I wrote this a while ago, I hope you enjoy reading about the journey.
The inspiration for the song began near a construction site close to my home.
From the year before last to last year, there was continuous construction right next to my house, which was extremely noisy. Towards the end of the construction, they seemed to be in a hurry, working even in the early mornings and sometimes on weekends, which was incredibly stressful. At the same time, I was dealing with some personal family issues that made things even more suffocating.
So, I pre-wrote the lyrics to express my desire to escape this situation. I started composing the song more concretely from February this year.
I wanted the lyrics to be fairly simple but impactful, so I aimed to incorporate rock elements. I also thought using unconventional industrial percussion could effectively convey the feeling of noise.
Thus, I used a snare drum with significant reverb and a strong attack, along with various metal percussion sounds and tubular bells used in orchestras.
When I reopened the project file, I noticed I had used over 100 tracks for the mix (with 46 tracks for the instruments).
For the vocals, I used a Peluso P87 microphone, and for the guitar, a Peluso CEMC6. Both were recorded through the HA-73EQ.
Additionally, I recorded the guitars using different Nashville tunings for the left and right channels.
Apart from the percussion, I paid special attention to the strings. I arranged the parts separately using sheet music and included them through BBCSO.
Looking at some of the sheet music, you can see there are quite a few notes.. ^^;;
Orchestrating these instruments took as long as mixing them. Once I get more familiar with it, I’ll be able to work faster, but for now, I put a lot of intent into each part.
Another critical part I focused on was the beginning and end guitar sections. I tried to enhance the groove and rhythm through bass movement, and I hope it came across as intended.
In terms of mixing, the EMT250 I purchased at the time played a significant role. Its broad and expansive reverb matched the song perfectly.
I got the R2 at a very reasonable price, yet its natural reverb exceeded its cost. So, I used it for the overall hall reverb.
Explaining every detail of the mix would take too long.. Another unique aspect of this mix was adding parallel saturation and heavy compression at the final mix stage to make the sound a bit noisier.
Other than that, there wasn’t any notable processing. The pitch and rhythm tuning, along with the volume automation, took the most time. Aligning the string tracks with the timing and articulation was a painstaking process, revisited countless times.
Thus, after completing all the composing, arranging, recording, and mixing, I sent the track to a mastering studio.
I’m always curious about how different places produce sound, and I wanted to visit the studio to check the sound directly if possible. So, I booked a session at 821 Sound Studio but had to conduct it online due to COVID-19.
On the mastering day, the engineer sent me the files, and I was immediately impressed by how well the sound was captured, so I gave my approval right away.
I had been focusing on the higher frequencies and missed some details in the lower range, but the engineer brought out the groove in the low end perfectly, which was very satisfying.
(The engineer provided two versions: one with more aggressive processing and another balanced version. The aggressive one brought out the groove I intended.)
My track is listed in the discography on the lower left side of the 821 Sound website.
Examining the waveform later, I was surprised by how dynamic it remained despite looking heavily compressed. It reaffirmed the importance of mastering engineers.
I had to wait for about three weeks for the appointment and felt bad about not being able to visit the studio. The engineer kindly invited me to visit once the COVID-19 situation improved, and I promised to do so when possible.
I also worked on the music video simultaneously. I collaborated with PD Kilsangmun, who operates as a one-man team.
We had a Zoom meeting before the shoot to discuss the direction, color, and feel of the music video.
I wanted to shoot in a setting similar to the backgrounds in Rain’s “Gang” music video and Henry’s loop station scenes from JTBC’s “Begin Again.” I also wanted the color tones similar to Kim Dong-ryul’s “Lonely Voyage” and Replay by Wonder K.
The PD had a different interpretation of “noise,” viewing it as the overwhelming information in the world and wanting to depict an escape from it.
After exchanging ideas, the PD sent a plot. Unlike my previous music videos, the PD handled the entire story this time.
The initial plan was to film people overwhelmed by information like noise in a 3D effect, but we decided to go with a more traditional music video approach if that proved too challenging.
We chose an abandoned factory in Pohang for its low cost and nearby sea for additional shooting locations.
A few days before the shoot, I realized we hadn’t discussed costumes. Since I didn’t have the budget to buy new outfits, I used my own clothes.
The PD wanted a work uniform or khaki pants look, but I mostly had semi-formal or denim styles. So, we decided on a white t-shirt and blue jeans, with the PD providing an additional shirt.
We scheduled the shoot for Sunday, July 18th. As the PD works solo, he drove us to the location (I felt bad since I have a dormant license..).
We started early at 5 AM and arrived at the beach in Pohang around 10 AM. We began with the beach scenes as it would be difficult to shoot after sunset.
The sky seemed ready for rain, and the forecast predicted rain in the afternoon.
We shot the beach scenes, including multiple takes of running scenes from various angles. The PD ran alongside me, both of us sweating but with some relief from the breeze.
After about two hours of shooting until noon, we debated whether to eat. Both of us tend to feel unwell if we eat on important days, so we skipped lunch as well.
Instead, we had some snacks and rested at a cafe before heading to the abandoned factory.
The factory was dusty, with lots of dust kicked up during drone shots.. ^^;;
Eventually, my clothes were soaked, and shooting became difficult. I had a spare white t-shirt, so I switched between the two.
Despite the physical toll, it felt liberating once everything was done. As we hadn’t eaten all day, I offered to treat the PD to anything he wanted on the way back. However, most places at the rest stop were closed except for a few restaurants.
Feeling sorry, I promised to buy him a drink once the music video was out, but COVID-19 has made meeting up difficult.
We also took photos for the album cover at the factory. Ironically, the first photo we took turned out to be the best, which I edited for the album cover.
The PD drove us back, and we returned around midnight.
Two weeks later, on July 27th, the editing was complete. We decided to go with the second option without the 3D effects as the PD found it challenging.
The song turned out to be more niche this time and wasn’t featured prominently elsewhere, but thankfully Bugs featured it on its main page.
That’s the end of this song’s production diary. I couldn’t write in great detail as time has faded some memories, but I hope you found it interesting! 🙂