Basics of Mixing – 8.2 The History and Types of Reverb

Hello, this is Jooyoung Kim, music producer and audio engineer. Today, I’ll be discussing the history and various types of reverb.

Shall we dive in?

Valière, J.-C., Palazzo-Bertholon, B., Polack, J.-D., & Carvalho, P. (2013). "Acoustic Pots in Ancient and Medieval Buildings: Literary Analysis of Ancient Texts and Comparison with Recent Observations in French Churches." Acta Acustica united with Acustica, 99. doi:10.3813/AAA.918590

The image above is from a paper on “Acoustic Pots” found in ancient and medieval architecture. These pots were embedded in walls to function as a type of Helmholtz Resonator.

That might sound too technical, but a Helmholtz resonator is essentially a cavity with a narrow neck that resonates at one specific frequency, and it can be tuned to absorb that frequency. The concept survives in modern applications such as car muffler and intake design, though that's a much more recent development.
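The frequency a Helmholtz resonator targets can be estimated from its geometry alone. Here is a minimal sketch of the standard formula; the end-correction factor and all of the example dimensions below are my own illustrative assumptions, not values from the paper:

```python
import math

def helmholtz_frequency(neck_area, neck_length, cavity_volume, c=343.0):
    """Resonant frequency (Hz) of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L_eff)),
    where L_eff extends the neck by ~0.6 * radius at each open end."""
    radius = math.sqrt(neck_area / math.pi)
    l_eff = neck_length + 2 * 0.6 * radius  # end-corrected neck length
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Hypothetical pot-sized cavity: 1 litre, with a 2 cm diameter, 3 cm long neck
f = helmholtz_frequency(neck_area=math.pi * 0.01 ** 2,
                        neck_length=0.03,
                        cavity_volume=0.001)
print(round(f))  # ~150 Hz with these made-up dimensions
```

Changing the cavity volume or neck size retunes the resonator, which is presumably how differently sized pots could each target a different frequency.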

In ancient times, the Maya built temples like the Temple of Kukulcán at Chichén Itzá, where echoes off the staircase produce fascinating chirped sounds.

The Greeks also designed spaces with excellent acoustic properties, enabling sound to be heard clearly from specific spots.

Back then, without microphones, these architectural advances allowed sound to be projected effectively, and this often included reverb.

Moving to the Modern Era

In the modern era, Bill Putnam Sr., founder of UREI and Universal Audio, was among the first to experiment with artificial reverb on a commercial recording. He used it on "Peg o' My Heart" by The Harmonicats (1947).

For this track, they recorded instruments, played the sound in a studio bathroom, and re-recorded it to capture the reverb. If you’ve ever sung in the shower, you’ll know exactly the kind of reverb I’m talking about! This was the earliest form of what we now call an echo chamber.

Echo Chamber (Chamber Reverb)

Inspired by this, studios started building dedicated rooms for reverb, known as echo chambers.

The image above shows one of the echo chambers at the famous Abbey Road Studios. For those familiar with plugins, this might ring a bell.

Waves created a plugin called Abbey Road Chambers, based on these very rooms. Notice the tiled walls, similar to bathroom tiles, used to reflect sound. The process is simple: sound is played through speakers in the chamber and captured with microphones.

By the way, the classic speakers you see in that plugin are B&W 800D speakers. The 800 series is a dream for many, though the price is quite steep, even second-hand. Hopefully, I’ll own a pair of 801D4s someday…

Anyway, there are plenty of plugins that emulate these echo chambers. While the sound quality is great, the cost of building these rooms is astronomical.

Imagine dedicating an entire room just for reverb—it’s quite an investment! Unless, of course, money is no object…

Plate Reverb

Plate reverb was developed as a more cost-effective alternative to the echo chamber.

This type of reverb works by vibrating a large metal plate with a transducer, while pickups attached to the plate capture the resulting vibrations. The tone varies depending on the metal used, giving plate reverb its distinctive sound.

While these units could weigh up to 250 kg and were still quite expensive, they were far more affordable than building a dedicated reverb room.

Digital Reverb

To reduce the size and cost further, digital reverb was invented. The image above shows the first commercial digital reverb, the EMT 250, released in 1976.

Spring Reverb

Spring reverb was originally developed for use in Hammond organs to create reverb effects.

The technology was later licensed to Fender, leading to the inclusion of spring reverb in Fender guitar amps. Its operating principle is similar to plate reverb's: a transducer drives a set of springs, and a pickup captures the vibrations. Because it has been embedded in guitar amps for so long, it has a familiar and pleasant sound when used with guitars.

Shimmer Reverb

Shimmer reverb feeds a pitch-shifted copy of the signal, typically up an octave, back into the reverb, producing a characteristic shimmering effect. It's perfect for when you want that lush, expansive sound.

Hall, Studio, and Other Reverbs (Convolution Reverb)

Reverbs labeled hall or room are actually quite tricky to classify: they can be produced algorithmically, or generated from an impulse response (IR) of a real space through a process called convolution. Here I'll focus on the convolution approach.

Let me briefly explain what an impulse is: it's an extremely short, high-amplitude signal. Mathematically, it's modeled by the Dirac delta function δ(x), where:

  • δ(x) = ∞ if x = 0
  • δ(x) = 0 if x ≠ 0
  • its integral from -∞ to ∞ equals 1.

Such an impulse can be used to measure the frequency response of a speaker or a room. When measuring a space, we usually play a signal known as a sine sweep instead, record it, and mathematically transform the recording through deconvolution to recover the impulse response.

While this might sound complicated, you can think of it as recording a sine sweep in WAV format and using it to create a reverb through calculation.

With IR reverbs, you can also use other sounds like snare hits or kick drum samples as IR files to create unique effects.
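The convolution itself is simple to sketch. Below is a minimal, educational direct convolution in plain Python; real plugins use FFT-based convolution for speed, and the tiny three-sample IR here is a made-up example, not a measured response:

```python
def convolve(dry, ir):
    """Direct convolution: every IR sample becomes a scaled,
    delayed copy of the dry signal, and the copies are summed."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

# A single click convolved with a toy 'room tail' returns the IR itself:
dry = [1.0, 0.0, 0.0]       # one impulse, then silence
ir = [0.5, 0.25, 0.125]     # fake decaying reverb tail
wet = convolve(dry, ir)
print(wet)  # [0.5, 0.25, 0.125, 0.0, 0.0]
```

This also shows why any WAV file, including a snare or kick sample, can serve as an IR: the math doesn't care whether the file came from a real room.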

Logic has its Space Designer plugin for this, Cubase has REVerence, and Pro Tools has Space.

In the end, the reverbs we use on our computers can be divided into two types: algorithmic reverb and convolution reverb.

Conclusion

That covers the history and types of reverb. I may have gone off on a tangent at times, but if some of it was too complex, don’t worry! You don’t need to fully understand every detail—music is all about what sounds good, after all.

On a different note, I finally received permission from Universal Audio to use some photos for my book. I’ll post an update when the book is ready!

See you in the next post!

Basics of Mixing – 6.1 Compressor

Hello, this is Jooyoung Kim, an engineer and music producer.

Today, I’d like to talk about compressors.

Why do we use compressors in mixing?

First, the most fundamental role of a compressor is to level the dynamics.

When the dynamic range (the difference between the loudest and softest sounds of an instrument) is large, it can cause issues where vocals or individual instruments are not clearly heard. It can also result in instruments sounding like they are moving forward and backward in the mix when listening through speakers. By controlling dynamics well, it becomes easier to increase the overall loudness during mastering.
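That leveling action can be sketched as a static gain curve: below a threshold the signal passes through, and above it the excess level is divided by a ratio. The threshold and ratio values below are arbitrary illustrations, not recommendations:

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static downward-compressor curve, working in dB:
    below the threshold, unity gain; above it, the amount
    over the threshold is divided by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0 (below threshold: untouched)
print(compress_db(-6.0))   # -15.0 (12 dB over becomes 3 dB over)
```

A peak at -6 dB is pulled down by 9 dB while quiet material passes untouched, which is exactly the narrowing of dynamic range described above.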

Second, compressors can change the groove of the music.

Depending on when the compressor kicks in and out, and how it compresses, it can alter the groove of the instrument source.
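One common way that "kicking in and out" is realized is a one-pole envelope follower with separate attack and release smoothing; a rough sketch, with time constants that are made-up examples:

```python
import math

def envelope(signal_abs, fs=48000, attack_ms=10.0, release_ms=200.0):
    """Track |signal| with different smoothing speeds for rising
    (attack) and falling (release) level. This asymmetry is what
    lets a compressor grab fast and let go slowly."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal_abs:
        coeff = a_att if x > env else a_rel
        env = coeff * env + (1.0 - coeff) * x
        out.append(env)
    return out

# A 10 ms burst: the envelope rises quickly, then decays slowly,
# so gain reduction lingers after the hit. That lag is what
# reshapes the groove of a drum or bass part.
burst = [1.0] * 480 + [0.0] * 4800
env = envelope(burst)
```

Shorter release times make the compressor "breathe" with the rhythm; longer ones smooth it out.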

Third, compressors can change the tone of the source through saturation.

Based on the harmonic distortion and frequency response characteristics of the compressor, it can add different textures to the original source.

Fourth, compressors can provide a sense of unity.

A compressor applied to a bus can impart its unique saturation and groove to the entire group of instruments, helping them blend well together.

For these various complex reasons, we use compressors.

In this sixth chapter of Mixing Basics, we will cover:

  1. How to use a compressor
  2. Types of compressors based on their operating principles
  3. Noteworthy compressors
  4. Various other dynamic processors (decompressors, expanders, gates, de-essers, multiband compressors, etc.)

In the next post, we’ll start by discussing how to use a compressor.

Basics of Mixing – 5.4 Phase Issues in EQ

Hello, this is Jooyoung Kim, an engineer and music producer.

Today, I’d like to discuss a crucial aspect to consider when adjusting EQ: phase issues.

The image above shows the phase-change graph when using the Brickwall slope in FabFilter Pro-Q 3.

Phase response is in reality a continuous curve. But plotting it without limits would require an ever-taller graph, so the vertical range is usually restricted to 2π: whenever the line runs off the top or bottom, it reappears at the opposite edge and continues. This is known as phase wrapping.
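In code, the wrapping that analyzers apply is just a modulo operation mapping any phase angle into a single 2π range; a minimal sketch:

```python
import math

def wrap_phase(phi):
    """Map any phase angle (radians) into (-pi, pi],
    the wrapped form shown by phase analyzers."""
    return math.pi - (math.pi - phi) % (2 * math.pi)

print(wrap_phase(2.5 * math.pi))  # pi/2: one full turn removed
print(wrap_phase(-math.pi / 2))   # -pi/2: already in range
```

The jagged jumps in the plot are artifacts of this display convention, not discontinuities in the filter itself.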

Even allowing for the wrapped display, the point stands: steep phase shifts can significantly affect the sound. Extreme phase changes can make a track sound as if an unintended modulation effect were applied, so steep filters should be used carefully.

Because of these issues, Linear Phase EQ was developed. A linear-phase EQ delays every frequency by the same amount, so it introduces no relative phase shift. However, it has a side effect known as Pre-Ringing.

  • Pre-Ringing Phenomenon

Pre-Ringing occurs when using Linear Phase EQ, causing the filter to ring before the transient rather than after it. Try bouncing your track through a linear-phase EQ: as shown in the image above, you'll notice a small waveform appearing before the original attack that wasn't there originally.

Beyond purely digital EQs, many plugin emulations of analog EQs alter the phase and frequency response simply by being inserted, even with every control at zero.

For instance, consider the commonly used Maag EQ4 for boosting high frequencies.

On the left is the frequency response graph when only the Maag EQ4 plugin is applied without any adjustments, and on the right is the phase change graph under the same conditions.

Here’s what we can deduce about using EQ:

  1. Applying an EQ can change the basic frequency response from the start.
  2. Non-Linear Phase EQs will inevitably cause phase changes.
  3. Linear Phase EQs can introduce Pre-Ringing, creating new sounds that were not there originally.
  4. EQ plugins or hardware with Harmonic Distortion can add extra saturation to the sound.

Understanding these points is crucial when adjusting EQ.

Of course, there are many excellent engineers who achieve great results without knowing all these details. Ultimately, the most important thing is that the sound comes out well, regardless of understanding the underlying principles.

However, I personally feel more comfortable when I have a solid understanding of the fundamentals. So, knowing this information can never hurt.

That’s all for today. I’ll see you in the next post!

Basics of Mixing – 5.3 Using EQ for Different Purposes

Hello, I’m Jooyoung Kim, an audio engineer and music producer.

Today, we’ll explore the use of EQ for different purposes. EQ is generally categorized into two types: Tone Shaping and Surgical.

1) Tone Shaping EQ

Tone Shaping EQ is used for:

  1. Altering the tone of instruments
  2. Changing the tone of instruments through the saturation provided by the EQ itself
  3. Adjusting the vertical position of instruments within the stereo image

Examples of Tone Shaping EQs include the Pultec EQ, the renowned Neve 1073, and the API 550 and 560.

Digital EQs like the Pro-Q 3 can also be used for Tone Shaping, though they add no saturation of their own.

2) Surgical EQ

Surgical EQ is used to solve problems in the audio source. It’s used for addressing proximity effects, resonances, sibilance (often handled by a de-esser but sometimes with EQ), and various other unpleasant sounds that can occur during recording.

For these tasks, EQs without inherent coloration are preferred, typically used with a high Q factor (narrow bandwidth). It also helps to use an EQ with a band-solo (audition) feature that lets you listen to just the affected frequencies in isolation.
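The narrow cut itself is usually a high-Q notch or peaking filter. As a reference sketch, here are the notch coefficients from Robert Bristow-Johnson's well-known Audio EQ Cookbook; the 120 Hz "hum" frequency and Q of 30 are illustrative choices, not a prescription:

```python
import cmath, math

def notch_coeffs(f0, fs, q):
    """RBJ Audio EQ Cookbook notch biquad, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def magnitude(b, a, f, fs):
    """Evaluate |H(z)| on the unit circle at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = notch_coeffs(f0=120.0, fs=48000.0, q=30.0)
print(round(magnitude(b, a, 120.0, 48000.0), 6))   # 0.0: hum removed
print(round(magnitude(b, a, 1000.0, 48000.0), 3))  # 1.0: rest untouched
```

The high Q keeps the cut surgical: the response is essentially flat even one octave away from the notch, which is the whole point of problem-solving EQ.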

I mainly use the bx_hybrid V2 because I'm familiar with it, but most modern digital EQs include such a band-solo feature, so any of them should work fine.

  • Conclusion

Using Tone Shaping EQ effectively requires an understanding of stereo imaging and tonal concepts. Surgical EQ, on the other hand, necessitates the ability to identify problems by ear. Ultimately, it takes practical experience to develop these skills.

I’m not claiming to be a highly experienced or notable expert, but I’ve found that there’s a significant difference between knowing these concepts in theory and applying them in practice.

Good luck to everyone studying sound engineering!