Basics of Mixing – 6.7 Limiters and Clipping

Hello, I’m Jooyoung Kim, an engineer and music producer.

We’ve discussed various processors that control dynamics. Today, let’s talk about limiters and clipping.

Let’s dive right in!

Limiters

A limiter is a type of compressor. Generally, when the ratio exceeds 10:1, we call it a limiter. When it reaches ∞:1, it’s often referred to as a brickwall limiter.

Limiters are processors that aggressively compress sound to prevent it from exceeding a certain volume level. Familiar examples are guitar effects like distortion and overdrive, which are essentially extreme limiters. In mastering, a limiter is used at the final stage to keep the level from ever exceeding a set ceiling.
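
To make those ratios concrete, here's a minimal Python sketch of a compressor's static transfer curve (the gain computer only, ignoring attack and release; the function name and threshold are just illustration choices):

```python
import numpy as np

def static_out_db(in_db, threshold_db, ratio):
    """Static transfer curve: below the threshold the signal passes
    untouched; above it, level increases are scaled down by 1/ratio."""
    over = np.maximum(in_db - threshold_db, 0.0)
    return in_db - over * (1.0 - 1.0 / ratio)

levels = np.array([-20.0, -10.0, -6.0, 0.0, 6.0])  # input peaks, dBFS
for ratio in (4, 10, 20, float("inf")):  # inf:1 = brickwall limiting
    out = static_out_db(levels, threshold_db=-10.0, ratio=ratio)
    print(f"ratio {ratio:>4}:1 ->", np.round(out, 2))
```

At inf:1 the output simply never rises above the threshold, which is exactly the brickwall behavior described above.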

Push any limiter hard enough and the waveform shows its tops and bottoms being cut off. This truncation is called clipping, and it introduces strong harmonic distortion that we perceive as a distorted sound.

Distortion-type limiters result in noticeable clipping, producing a heavily distorted sound. To minimize such distortion, some compressors/limiters include a feature called soft clipping.

Clipping / Soft Clipping

Elysia Alpha Compressor with Soft Clipping Function

Soft clipping gently smooths out the sharp edges of clipping. When a sine wave undergoes limiting with soft clipping, the result is a waveform that doesn’t have the abrupt cuts seen in regular clipping.
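
Here's a minimal numpy sketch of the difference. The thd_ish helper is my own rough distortion measure for illustration, not a standards-compliant THD:

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate                # exactly 1 second of audio
sine = 1.5 * np.sin(2 * np.pi * 100 * t)  # deliberately over full scale

hard = np.clip(sine, -1.0, 1.0)  # flat tops: abrupt corners, harsher harmonics
soft = np.tanh(sine)             # rounded tops: gentler harmonic content

def thd_ish(x):
    """Rough ratio of non-fundamental energy to the fundamental."""
    spec = np.abs(np.fft.rfft(x))
    fund = spec[100]  # with 1 s of audio, bin k sits at k Hz
    return np.sqrt(np.sum(spec ** 2) - fund ** 2) / fund

print(f"hard clip distortion ratio: {thd_ish(hard):.3f}")
print(f"soft clip distortion ratio: {thd_ish(soft):.3f}")
```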

While soft clipping still introduces distortion, the result is smoother than hard clipping. Using limiters or soft clipping helps increase the overall loudness of a track.

The reason for boosting volume is that people tend to perceive louder music as higher quality. However, equal LUFS (Loudness Units relative to Full Scale) values do not always mean equal perceived loudness. In vocal music, for example, a track with prominent vocals can seem louder than another track at the same LUFS reading.
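
If you want to see that loudness gain in numbers, here's a sketch using the third-party pyloudnorm package (pip install pyloudnorm) to meter a clean sine against a soft-clipped one at the same peak level; the drive amount is an arbitrary choice:

```python
import numpy as np
import pyloudnorm as pyln  # third-party ITU-R BS.1770 loudness meter

rate = 48000
t = np.arange(rate * 5) / rate
sine = np.sin(2 * np.pi * 997 * t)

clipped = np.tanh(4.0 * sine)   # drive hard into a soft clipper
peak = 10 ** (-1 / 20)          # then normalize both to -1 dBFS peaks
sine *= peak / np.max(np.abs(sine))
clipped *= peak / np.max(np.abs(clipped))

meter = pyln.Meter(rate)
print("clean  :", round(meter.integrated_loudness(sine), 2), "LUFS")
print("clipped:", round(meter.integrated_loudness(clipped), 2), "LUFS")
```

Same peak level, but the clipped signal meters several LU louder: that's the whole trade of limiting, loudness for distortion.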

Even if you’re not mastering your own tracks, considering these aspects during mixing can help you create better productions.

Next time, I'll explore time-based effects like reverb and delay. See you then!

Choosing Speakers by Reading Spinorama Charts!

Hello! This is Jooyoung Kim, an engineer and music producer.

Today, I’d like to explain Spinorama, a concept anyone interested in sound and speakers should know. Let’s get started!

Example of a Spinorama Graph

First, let’s briefly look at the history of how Spinorama measurements were developed.

Spinorama was created in the 1980s by Dr. Floyd Toole, a leading authority on speaker acoustics, while he was working at the National Research Council of Canada. In the 1990s, it was further refined in collaboration with Harman International. It has since been incorporated into standards issued by the American National Standards Institute (ANSI) and the Consumer Electronics Association (CEA).

Standard Method Of Measurement For In-Home Loudspeakers

The measurement process, as shown above, involves taking a measurement every 10 degrees both horizontally and vertically in an anechoic chamber. That's 36 positions per orbit, and since the two orbits share the front and rear points, it comes to 72 - 2 = 70 unique data points.

This looks intense…

The collected data is distilled into six curves, plotted together in what's known as a Spinorama chart.

KEF R3 META

Let’s look at the Spinorama graph for my recently purchased KEF R3 META. The vertical axis is dB SPL (the unit we often use to measure sound levels, like airplane noise), and the horizontal axis is Hz (the unit of frequency).

  1. The top blue line is the On Axis response, representing the frequency response directly in front of the speaker. Manufacturers commonly provide this graph, but it lacks comprehensive information.
  2. The second orange line is the Listening Window response: the average of 9 measurements (on axis, ±10 degrees vertically, and ±10, ±20, and ±30 degrees horizontally). This approximates the response you can expect across a typical listening area.
  3. The third red line represents Early Reflections, showing the response of the early reflected sound. It averages 8 measurements taken at ±40, ±60, and ±80 degrees horizontally, and ±50 degrees vertically. Comparing it with the On Axis and Listening Window responses shows how the reflected sound will differ from the direct sound.
  4. The light blue Sound Power response averages all 70 measurements. The more this graph parallels the other graphs without significant fluctuations, the better the speaker’s acoustic performance.
  5. The green Early Reflections DI (Directivity Index) is the difference between the Listening Window and Early Reflections responses. This graph lets you see at a glance how the reflected sound diverges from the direct sound.
  6. The brown Sound Power DI is the difference between the Listening Window and Sound Power responses. Research suggests that smoother changes in both DI graphs are preferred by listeners (I'd provide the exact study, but finding it would take some time… I'll update if I come across it later). A rough sketch of how these averages and DI curves are computed follows this list.
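
To make those averages concrete, here's a rough Python sketch. The data layout, the plain RMS averaging, and the random stand-in responses are my own simplifications (the actual standard specifies its own weighting), so treat it as the idea rather than the implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 200  # frequency bins per measurement

# Hypothetical layout: responses[(plane, angle_deg)] = linear magnitude per bin.
# Two orbits of 36 positions, sharing front and rear points -> 70 unique spots.
responses = {(plane, a): 1 + 0.05 * rng.standard_normal(n_bins)
             for plane in ("horizontal", "vertical")
             for a in range(-170, 190, 10)}

def rms_average(keys):
    """RMS-average a set of measurements (simplified: no solid-angle weighting)."""
    stack = np.stack([responses[k] for k in keys])
    return np.sqrt((stack ** 2).mean(axis=0))

db = lambda x: 20 * np.log10(x)

# Listening Window: on axis, +/-10 deg vertical, +/-10/20/30 deg horizontal
lw = rms_average([("horizontal", 0), ("vertical", -10), ("vertical", 10)]
                 + [("horizontal", a) for a in (-30, -20, -10, 10, 20, 30)])
sp = rms_average(list(responses))  # Sound Power: average over every position

sp_di = db(lw) - db(sp)  # Sound Power DI, in dB
print("Sound Power DI in the first few bins:", np.round(sp_di[:5], 2))
```
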
Genelec 8351B
  1. The On Axis chart shows the basic frequency response.
  2. The closer the Listening Window response is to the On Axis response, the more similar the sound will be for the listener and those around them. This indicates good off-axis performance, meaning the sound remains consistent even if the listener moves slightly.
  3. The more aligned the Early Reflections, Sound Power, and On Axis graphs are, the higher the preference among listeners. If it’s hard to judge, check the DI graphs for a consistent slope.

This gives a basic understanding of Spinorama charts.

Of course, Spinorama charts have their limitations. As the title suggests, you shouldn’t choose a speaker based solely on these charts. However, they are a fundamental indicator for understanding a speaker’s performance, making them valuable knowledge for anyone in music or sound.

In future posts, I’ll discuss near-field measurements by the German company Klippel.

Finally,

https://www.spinorama.org/

This site offers Spinorama charts for many speakers measured so far. Since it aggregates data from various sources, make sure to choose highly reliable sources in the settings tab for accurate information.

I hope this post is helpful for you! See you in the next post!

Basics of Mixing – 5.4 Phase Issues in EQ

Hello, this is Jooyoung Kim, an engineer and music producer.

Today, I’d like to discuss a crucial aspect to consider when adjusting EQ: phase issues.

The image above shows the phase-response graph when using the Brickwall slope in FabFilter Pro-Q 3.

Phase is actually continuous, but drawing it that way would require an ever-taller graph. So the vertical range is usually limited to 2π, and whenever the line runs off the top or bottom it reappears at the opposite edge and continues. This is called phase wrapping, and it's much easier to see than to explain in words.
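
If the wrapping is hard to picture, this short scipy sketch prints the same phase both ways; the 8th-order Butterworth high-pass is just an arbitrary stand-in for a steep EQ cut:

```python
import numpy as np
from scipy import signal

# Steep high-pass as a stand-in for an aggressive low-cut EQ
b, a = signal.butter(8, 100, btype="highpass", fs=48000)
freqs = np.linspace(10, 24000, 2048)  # skip 0 Hz (gain is zero there)
_, h = signal.freqz(b, a, worN=freqs, fs=48000)

wrapped = np.angle(h)           # what EQ plugins draw: jumps at +/-pi
unwrapped = np.unwrap(wrapped)  # the actual accumulated phase

total = np.degrees(unwrapped[0] - unwrapped[-1])
print(f"total phase rotation across the band: ~{total:.0f} degrees")
```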

Even with that display quirk in mind, steep phase changes can significantly affect the sound. Extreme phase shifts can make a track sound as if an unintended modulation effect were applied, so aggressive filters should be used with care.

Because of these issues, Linear Phase EQ was developed. Linear Phase EQ does not cause phase issues. However, it introduces a phenomenon known as Pre-Ringing.

  • Pre-Ringing Phenomenon

Pre-ringing occurs with Linear Phase EQ: the processed sound starts ringing before the transient it belongs to. Try bouncing a track through a Linear Phase EQ; as shown in the image above, you'll notice a waveform appearing ahead of the original that wasn't there before.
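
You can also reproduce pre-ringing offline with scipy: build a linear-phase FIR low-pass, feed it an impulse standing in for a sharp transient, and check how much energy lands before the peak. The filter length and cutoff here are arbitrary:

```python
import numpy as np
from scipy import signal

fs = 48000
h_lin = signal.firwin(501, 1000, fs=fs)  # linear-phase FIR low-pass
h_min = signal.minimum_phase(h_lin)      # same magnitude, minimum phase

x = np.zeros(4096)
x[2048] = 1.0  # a single-sample impulse = idealized transient

for name, h in (("linear ", h_lin), ("minimum", h_min)):
    y = np.convolve(x, h)
    peak = np.argmax(np.abs(y))
    pre = np.sum(y[:peak] ** 2) / np.sum(y ** 2)  # energy before the peak
    print(f"{name} phase: {100 * pre:4.1f}% of the energy arrives before the peak")
```

The linear-phase filter smears nearly half its energy ahead of the transient, while the minimum-phase version keeps almost everything after it: that leading smear is what you hear as pre-ringing.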

Beyond purely digital EQs, many plugin emulations of analog EQs alter both the frequency and the phase response simply by being inserted, even before you adjust anything.

For instance, consider the Maag EQ4, a plugin commonly used for boosting high frequencies.

On the left is the frequency response graph when only the Maag EQ4 plugin is applied without any adjustments, and on the right is the phase change graph under the same conditions.

Here’s what we can deduce about using EQ:

  1. Merely inserting an EQ can change the basic frequency response, even before you touch any controls.
  2. EQs that are not linear phase (that is, ordinary minimum-phase EQs) will inevitably cause phase changes (see the sketch after this list).
  3. Linear Phase EQs can introduce Pre-Ringing, creating new sounds that were not there originally.
  4. EQ plugins or hardware with Harmonic Distortion can add extra saturation to the sound.
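
As a concrete look at point 2, here's a sketch of a standard "RBJ cookbook" peaking filter, the same minimum-phase family most digital EQ bells are built on; the +6 dB boost at 1 kHz is an arbitrary setting:

```python
import numpy as np
from scipy import signal

fs = 48000
f0, gain_db, q = 1000.0, 6.0, 1.0  # +6 dB peaking bell at 1 kHz

# RBJ audio-EQ-cookbook peaking filter coefficients
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

w, h = signal.freqz(b, a, worN=2048, fs=fs)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))

# The gain change never comes alone: the phase rotates around the bell
for f in (250, 500, 1000, 2000, 4000):
    i = np.argmin(np.abs(w - f))
    print(f"{f:>5} Hz: {mag_db[i]:+6.2f} dB, {phase_deg[i]:+7.2f} deg")
```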

Understanding these points is crucial when adjusting EQ.

Of course, there are many excellent engineers who achieve great results without knowing all these details. Ultimately, the most important thing is that the sound comes out well, regardless of understanding the underlying principles.

However, I personally feel more comfortable when I have a solid understanding of the fundamentals. So, knowing this information can never hurt.

That’s all for today. I’ll see you in the next post!

Basics of Mixing – 2.3 Digitalization of Sound

Hello, I’m Jooyoung Kim, a mixing engineer and music producer.

Today, I want to talk about how analog sound signals are digitized in a computer.

The electrical signals output by a microphone preamp or DI box are continuous analog signals. Since computers cannot record continuous signals, they must be converted into discrete ones. This is where the ADC (Analog to Digital Converter) comes into play.

Here, the concepts of Sample Rate and Bit Depth come into the picture.

The sample rate refers to how many times per second the signal is sampled.

The bit depth refers to how finely the amplitude of the electrical signal is divided.

For example, consider a WAV file with a sample rate of 44.1kHz and a bit depth of 16 bits. This file records sound by sampling it 44,100 times per second and divides the amplitude into 65,536 levels (2^16).

A file with a sample rate of 48kHz and a bit depth of 24 bits samples the sound 48,000 times per second and divides the amplitude into 16,777,216 levels (2^24).
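
Here's a rough sketch of what those level counts mean in practice, using a simplified mid-tread quantizer (real converters also apply dither, which this ignores):

```python
import numpy as np

def quantize(x, bits):
    """Round a [-1, 1) signal onto a uniform grid of 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

rate = 48000
t = np.arange(rate) / rate
x = 0.8 * np.sin(2 * np.pi * 440 * t)

for bits in (8, 16, 24):
    err = x - quantize(x, bits)  # quantization error
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:>2}-bit: {2 ** bits:>12,} levels, quantization SNR ~ {snr:5.1f} dB")
```

Each extra bit doubles the number of levels, which is why higher bit depths leave a quieter noise floor.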

In a DAW (Digital Audio Workstation), these digital signals are manipulated. To listen to these digital signals, they need to be converted back into analog electrical signals.

This conversion is done by a DAC (Digital to Analog Converter).

The image above shows a simple DAC circuit that converts a 4-bit digital signal into an analog signal.
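
The arithmetic idea behind such a circuit is simple enough to sketch; the 5 V reference here is hypothetical:

```python
V_REF = 5.0  # hypothetical reference voltage

def dac_4bit(code):
    """Map a 4-bit code (0..15) onto 16 evenly spaced voltage steps."""
    assert 0 <= code < 16
    return V_REF * code / 16

for code in (0b0000, 0b0101, 0b1111):
    print(f"{code:04b} -> {dac_4bit(code):.3f} V")
```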

These analog signals can pass through analog processors like compressors or EQs and then go back into the ADC, or they can be sent to the power amp of speakers to produce sound.

Various audio interfaces

Audio interfaces contain these converters, along with other features like microphone preamps, monitor controllers, and signal transmission to and from the computer, making them essential for music production.

Topping’s DAC

However, those who do not need input functionality might use products with only DAC functionality.

Inside these digital devices, there are usually IC chips that use a signal called a Word Clock to synchronize different parts of the circuit.

To synchronize this, devices called Clock Generators or Frequency Synthesizers are used.

In a studio, there can be multiple digital devices, and if their clocks are not synchronized, timing errors creep in. Short-term timing variation, called jitter, can produce unwanted noises like clicks, while a sustained rate mismatch makes the recordings gradually drift apart (I experienced the latter while recording a long jazz session in a school studio where the master clocks of two devices were set differently).
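
That gradual shift is easy to estimate. A back-of-the-envelope sketch, assuming a hypothetical (but realistic) 50 ppm rate mismatch between two unsynchronized clocks:

```python
ppm_mismatch = 50  # hypothetical clock tolerance between the two devices
take_minutes = 30  # length of the recording

drift_s = take_minutes * 60 * ppm_mismatch * 1e-6
print(f"after {take_minutes} min the two recordings are ~{drift_s * 1000:.0f} ms apart")
```

Roughly 90 ms over a half-hour take: more than enough to smear the timing between two machines.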

To prevent this, digital devices are synchronized using an external clock generator. If you are not using multiple digital devices, the internal clock generator of the device should suffice, and there is no need for an external clock generator.

An article in the magazine Sound On Sound (SOS) even pointed out that using an external clock generator does not necessarily improve sound quality.

Today, we covered Sample Rate, Bit Depth, ADC (Analog to Digital Converter), DAC (Digital to Analog Converter), Word Clock, and Jitter.

While these fundamental concepts can be a bit challenging, knowing that they exist is essential if you’re dealing with audio and mixing. If you find it difficult, just think, “Oh, so that’s how it works!” and move on.

See you in the next post!