Basics of Mixing – 3.1 Console and DAW

Hello! This is Jooyoung Kim, a mixing engineer and music producer.

Today, I will finally talk about the functions of the mixer, on an analog console and in a DAW.

Shall we begin?

In the days when all recording processes were done analog, mixing was performed using analog mixers and tape.

Here is a video I found on this topic. If you are curious about analog recording, it's worth a watch.

The transition from analog to digital began with the release of Digidesign’s (now AVID) Sound Tools.

Sound Tools consisted of a DAW program called Sound Designer, DSP cards, and hardware that acted as an audio interface, all designed exclusively for the Mac.

Later, this program evolved into Pro Tools, a representative DAW.

That lineage helps explain why Pro Tools became the industry standard and why Macs are still so common in studios today.

As we moved from analog to digital, DAWs developed by incorporating analog functionalities into computers. Therefore, understanding the functions of an analog mixer can make it easier to approach mixing with a DAW.

The DAW mixer window that you need to get familiar with if you’re into mixing

The interface of the mixer window is also designed similarly to an analog mixer. Let’s take a closer look at a mixer.

  • Analog Mixer and Signal Flow

I wanted to bring a picture of a larger console, but the details were hard to see clearly.

Let’s start from the left.

Each channel passes through a series of stages, from top to bottom:

1) Pre section: mic preamp and input gain
2) Insert section: compressor and EQ
3) Send/Return section: routing to external effects
4) Post section: panning and output gain

This configuration of a single channel is called a channel strip, and a mixer consists of multiple channel strips. The DAW mixer window is organized in a similar sequence.

The signal usually flows from top to bottom, and this path is called the ‘signal flow.’ Each DAW has a different signal flow, so you need to learn the signal flow of your specific DAW.
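To make that top-to-bottom order concrete, here is a minimal Python sketch of a channel strip. The function and the equal-power pan law are my own illustration, not any particular DAW's internals:

```python
import numpy as np

def channel_strip(signal, input_gain_db=0.0, pan=0.0, fader_db=0.0):
    """Toy channel strip, processed top to bottom like the diagram."""
    # Pre section: input gain (the mic preamp / trim stage)
    x = signal * 10 ** (input_gain_db / 20)
    # Insert section: a real strip would run its EQ/compressor here
    # Post section: channel fader, then equal-power panning to stereo
    x = x * 10 ** (fader_db / 20)
    theta = (pan + 1) * np.pi / 4             # pan in [-1, 1] -> [0, pi/2]
    return np.stack([x * np.cos(theta), x * np.sin(theta)])

mono = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)    # 1 s of A4
stereo = channel_strip(mono, input_gain_db=-6.0, pan=-0.3)   # shape (2, 44100)
```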

I usually prefer Cubase for mixing, but the current project is in Logic, so I brought the Logic mixer window. Here, you can see that each channel strip is quite similar to an analog mixer.

Let’s check the Send section in the DAW mixer window and then return to the analog mixer.

  • Send Section

The analog mixer I brought doesn’t specifically say Send but is labeled FX. This Send function allows you to send the signal from each channel strip to a separate Send channel to apply effects independently.

Some might wonder why not just apply effects in the Insert section.

In the past, studio reverb and delay units were large and expensive. Applying such effects to each channel individually was nearly impossible. Additionally, sending the sound separately through the Send section provided the advantage of processing it independently.

This feature remains in modern DAWs.

In mixing, the Send section is primarily used for applying delay, reverb, and sometimes modulation effects like phaser or chorus, as well as saturation effects like distortion.
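To see why a send is economical, here is a minimal sketch: every channel contributes some amount of its signal to one shared effect, instead of each channel needing its own copy. The simple_reverb here is a hypothetical stand-in for the big, expensive studio unit:

```python
import numpy as np

def simple_reverb(x, decay=0.4, delay=2205):
    """Hypothetical stand-in effect: a single feedback delay."""
    y = x.copy()
    for n in range(delay, len(y)):
        y[n] += decay * y[n - delay]
    return y

def mix_with_sends(tracks, send_levels):
    """One shared effect instance, fed per-channel send amounts."""
    dry = sum(tracks)                                  # all channels, dry
    send_bus = sum(a * t for a, t in zip(send_levels, tracks))
    wet = simple_reverb(send_bus)                      # one effect unit
    return dry + wet                                   # return mixed with dry
```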

Next, let's look at the group/aux section and the bus.

  • Group and Aux Channels, and Bus

Group/Aux channels are mostly seen in large analog mixers. They are used to bundle similar instrument groups for collective control.

In Cubase, the concept of a bus isn’t used, making it more intuitive. However, in Logic and Pro Tools, the bus concept can be a bit confusing.

A bus is a signal path that combines audio signals from multiple tracks. This explanation might sound complex, but think of it as an additional step before the Aux track.

In Logic and Pro Tools, the bus function is used to create groups or apply effects like reverb or delay through Send.
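In code terms, that "additional step" is nothing more than a summing point. A hypothetical sketch of a drum group (the track names and fader value are made up):

```python
def drum_group(kick, snare, overheads, group_fader_db=-3.0):
    """Several tracks routed to one bus, controlled with one fader."""
    bus = kick + snare + overheads              # a bus is just summed audio
    return bus * 10 ** (group_fader_db / 20)    # one move adjusts the group
```

In Logic or Pro Tools, you would set each drum track's output to, say, Bus 1 and create an Aux channel with Bus 1 as its input; the addition above is all the bus itself does.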

  • Master Channel

All tracks ultimately converge at the master channel, which is usually the Stereo Out channel in standard mixing.

It is crucial to ensure that the digital peak does not exceed 0 dBFS on the master channel.

Although 32-bit float processing means the audio is not permanently damaged even if it peaks inside the DAW, it's good practice to manage digital peaks anyway, both to meet industry-standard deliverable specs and to communicate cleanly with other engineers.
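Checking the peak is simple arithmetic: the level in dBFS is 20·log10 of the largest absolute sample value, and it should stay at or below 0. A minimal sketch:

```python
import numpy as np

def peak_dbfs(x):
    """Highest sample magnitude, in dB relative to digital full scale."""
    peak = np.max(np.abs(x))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

master = 1.2 * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000)
print(f"{peak_dbfs(master):+.2f} dBFS")    # +1.58 dBFS: over full scale
```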

This should provide a basic understanding of the tracks and their functionalities.

See you in the next post!

Basics of Mixing – 2.5 Volume and Signal Level

Hello! I’m Jooyoung Kim, a mixing engineer and music producer.

Today, let’s talk about volume and signal levels.

If you’re interested in sound, you’ve likely heard the term “Equal Loudness Contour.”

This term refers to the fact that our ears perceive different frequencies at different volumes. The curves that connect sounds perceived as being of equal loudness are known as equal loudness contours.

Looking at the graph, you can see that at the same volume, we are less sensitive to the lows and the ultra-highs and most sensitive to the upper mids, around 3-4 kHz.

(*Recently, the standard for Equal Loudness Contour was revised from ISO 226:2003 to ISO 226:2023.)

This phenomenon occurs partly because the ear canal behaves like a tube closed at one end (by the eardrum).

A tube closed at one end resonates most strongly at a wavelength four times its length. The external auditory meatus (ear canal) is typically about 2.5 cm long, so it resonates with sound waves around 10 cm in wavelength.

Since the speed of sound is typically taken to be 340 m/s, the resonant frequency works out to approximately 3,400 Hz. Resonances in the ossicles further help make high frequencies more audible.
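Written out as a quick check (both inputs are the rough figures from above):

```python
speed_of_sound = 340.0    # m/s, the usual rough value
canal_length = 0.025      # m, a typical ear canal length
wavelength = 4 * canal_length                 # closed tube: lambda = 4 * L
resonant_freq = speed_of_sound / wavelength   # f = v / lambda
print(resonant_freq)                          # 3400.0 Hz
```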

Why is this important to know?

The equal loudness contours show that as volume increases, the lows and highs sound more balanced. This is why music mixed at high sound pressure levels (SPL) can sound tinny or weak when played at lower volumes.

So, what is an appropriate volume level for mixing? Generally, 80 dB SPL is used as a standard. Famous mastering engineer Bob Katz recommends using 83 dB SPL.

(Here, dB SPL is the decibel unit for acoustic sound pressure in the air, the same unit used to describe things like airplane noise or the noise from the apartment upstairs.)

I found a video from PreSonus that discusses how to set speaker volume. For those working in home studios, 80 dB SPL may sound quite loud. Personally, I work at around 70-75 dB SPL, since 80 dB is painful for my ears. Just make sure you're not working at too low a volume either.

Now, let’s move on to basic signal levels.

In audio, there are four main types of levels:

1) Microphone Level / Instrument Level

  • These signals are very weak and need to be boosted to line level using a microphone preamp or DI box.

2) Line Level

  • This is the level at which audio equipment typically communicates. It’s used in audio interfaces, mixers, hardware EQs, compressors, and other devices.

3) Speaker Level

  • To play the signal through speakers, it needs to be amplified to speaker level. This requires a power amplifier. Active speakers have built-in power amplifiers, whereas passive speakers need an external power amplifier.

4) Mixing Level

  • The levels dealt with during mixing are almost all line levels.

Line level is usually divided into two categories: Pro Line Level and Consumer Line Level. Pro Line Level is based on +4 dBu, while Consumer Line Level is based on -10 dBV.

  • dBu is a unit based on 0.775 Vrms.
  • dBV is a unit based on 1 Vrms.

(Brief explanation of RMS: electrical audio signals are AC, so simply averaging them would yield zero. Instead we square the signal, average it, and take the square root; this root mean square (RMS) value expresses the effective level.)

Converting between the two units: +4 dBu is about 1.228 Vrms, while -10 dBV is about 0.316 Vrms, a gap of roughly 11.8 dB. Pro nominal levels are therefore much hotter, and consumer equipment typically has less headroom, which can cause level-matching problems when it is connected to pro gear.
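Those two reference voltages make the conversion a few lines of arithmetic. A quick sketch of where that roughly 11.8 dB gap comes from:

```python
import math

def dbu_to_vrms(level_db):
    return 0.775 * 10 ** (level_db / 20)    # dBu: 0 dBu = 0.775 Vrms

def dbv_to_vrms(level_db):
    return 1.0 * 10 ** (level_db / 20)      # dBV: 0 dBV = 1 Vrms

pro = dbu_to_vrms(4.0)            # +4 dBu  -> ~1.228 Vrms
consumer = dbv_to_vrms(-10.0)     # -10 dBV -> ~0.316 Vrms
print(20 * math.log10(pro / consumer))      # ~11.79 dB apart
```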

However, modern high-fidelity equipment often has high signal levels, so it’s becoming less of a concern.

That’s about all you need to know about signal levels.

With this foundational knowledge, we’ve covered the basics needed for mixing. In the next article, we’ll look at DAW functions in detail.

See you in the next post!

Basics of Mixing – 2.4 Speaker Placement and Listening Techniques

Hello! This is Jooyoung Kim, a mixing engineer and singer-songwriter.

To mix effectively, you need to listen to sound accurately.

What does it mean to listen to sound accurately? It can be a long discussion, but let’s focus on two main points:

  1. Minimize distortion (from the room, objects, speaker baffle, speaker unit limitations, etc.)
  2. Listen from the correct position.

These two principles form the foundation.

Generally, stereo speakers are arranged in an equilateral triangle. The angle marked as 30 degrees in the diagram above is called the Toe-In Angle. This angle can be adjusted slightly based on personal preference.

Additionally, the tweeter, which reproduces high frequencies, should be positioned close to ear level. This is because high frequencies are more directional and may not be heard well if the tweeter is placed too high or too low. Various stands are used to achieve this positioning.

However, recommended angles and placements can vary by manufacturer, so it’s best to start with the manual and then adjust as needed.

When changing placements, it’s important to measure and identify where the issues are. With some training, you can listen to a track and identify boosted or cut frequencies, giving you an idea of where the problems lie. Measurement, however, makes it easier to pinpoint specific issues you might miss by ear.

One of the simplest and free measurement programs is REW (Room EQ Wizard), which I introduced a long time ago.

You can use an affordable USB microphone like the miniDSP UMIK-1 for easy measurement, or, if budget allows, a measurement microphone like the Earthworks M50.

By measuring, you can understand various factors beyond just frequency response, such as phase, harmonic distortion, and reverberation time. This helps you identify and solve problems in your workspace.

Doing all this ensures you hear the sound as accurately as possible, allowing you to understand what proper sound and mixing should be.

So, you’ve set up your speakers correctly. How should you listen to the sound?

Of course you listen with your ears, but that's not all I mean: I'm suggesting you listen to the sound in layers.

In a typical 2-way speaker, the tweeter is on top, and the woofer is on the bottom, so high frequencies come from above and low frequencies from below. Consequently, low-frequency instruments seem to be positioned lower, and high-frequency instruments higher.

If your listening distance and room support it, well-made hi-fi tallboy speakers can make mixing easier.

That was about the vertical plane. Now, let’s talk about the front-to-back dimension.

When someone whispers in your ear versus speaking from afar, there are noticeable differences:

  1. Whispering sounds clearer (more high frequencies, less reverb)
  2. Whispering sounds louder.

These principles determine whether instrument images appear in the front or back. Panning also moves them left and right.
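Those cues translate directly into mixing moves: to push a sound back, turn it down, dull its highs, and (via a send) give it more reverb. A toy sketch of the first two moves, with a deliberately crude one-pole filter; none of this is a production-ready technique:

```python
import numpy as np

def one_pole_lowpass(x, a):
    """Crude lowpass: each output sample leans on the previous one."""
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * x[n] + (1 - a) * y[n - 1]
    return y

def push_back(x, amount):
    """amount in [0, 1]: 0 = up close, 1 = far away."""
    duller = one_pole_lowpass(x, a=1.0 - 0.9 * amount)   # fewer highs
    return duller * (1.0 - 0.5 * amount)                 # and quieter
```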

If you’re not familiar with this concept, try closing your eyes and identifying where each instrument is located in a mix.

Since stereo images vary with different speakers, it’s crucial to understand how your speakers reproduce images. Reference tracks are essential for this.

For example, I always listen to Michael Jackson’s albums and the MTV live version of “Hotel California” when I switch speakers. Michael Jackson’s songs are well-mixed for their age, and the live version of “Hotel California” is superbly mixed except for the vocals.

Let’s wrap it up for today. Creating the best acoustic environment in your room is essential for effective mixing.

My environment isn’t perfect either, but I’m continuously improving it..!

See you in the next post!

Basics of Mixing – 2.3 Digitalization of Sound

Hello, I’m Jooyoung Kim, a mixing engineer and music producer.

Today, I want to talk about how analog sound signals are digitized in a computer.

The electrical signals output by a microphone preamp or DI box are continuous analog signals. Since computers cannot record these continuous signals, they must be converted into discrete signals. This is where the ADC (Analog to Digital Converter) comes into play.

Here, the concepts of Sample Rate and Bit Depth come into the picture.

The sample rate refers to how many times per second the signal is sampled.

The bit depth refers to how finely the amplitude of the electrical signal is divided.

For example, consider a WAV file with a sample rate of 44.1kHz and a bit depth of 16 bits. This file records sound by sampling it 44,100 times per second and divides the amplitude into 65,536 levels (2^16).

A file with a sample rate of 48kHz and a bit depth of 24 bits samples the sound 48,000 times per second and divides the amplitude into 16,777,216 levels (2^24).
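A minimal sketch of what those two numbers mean: generate a tone, take sample_rate samples per second, and snap each sample to the nearest of the available amplitude levels:

```python
import numpy as np

def digitize(duration_s, freq_hz, sample_rate=44100, bit_depth=16):
    """Sample a sine tone, then quantize it to 2**bit_depth levels."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate  # sample times
    analog = np.sin(2 * np.pi * freq_hz * t)     # the 'continuous' signal
    steps = 2 ** (bit_depth - 1) - 1             # 16 bit -> +/-32767 steps
    return np.round(analog * steps) / steps      # snap to nearest level

x = digitize(1.0, 440, sample_rate=48000, bit_depth=24)   # 48,000 samples
```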

In a DAW (Digital Audio Workstation), these digital signals are manipulated. To listen to these digital signals, they need to be converted back into analog electrical signals.

This conversion is handled by the Digital to Analog Converter, or DAC.

The image above shows a simple DAC circuit that converts a 4-bit digital signal into an analog signal.
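Reduced to arithmetic, the idea is that each of the four bits contributes a binary-weighted share of a reference voltage (the 1 V reference here is an arbitrary assumption):

```python
def dac_4bit(code, v_ref=1.0):
    """Map a 4-bit code (0..15) onto a fraction of the reference voltage."""
    assert 0 <= code <= 15
    return v_ref * code / 15     # each bit adds a binary-weighted step

print(dac_4bit(0b1010))          # code 10 of 15 -> ~0.667 V
```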

These analog signals can pass through analog processors like compressors or EQs and then go back into the ADC, or they can be sent to the power amp of speakers to produce sound.

Various audio interfaces

Audio interfaces contain these converters, along with other features like microphone preamps, monitor controllers, and signal transmission to and from the computer, making them essential for music production.

Topping’s DAC

However, those who do not need input functionality might use products with only DAC functionality.

Inside these digital devices, IC chips rely on a timing signal called a Word Clock, one pulse per sample, to keep the different parts of the circuit running in step.

This clock signal is produced by circuits called clock generators or frequency synthesizers.

In a studio, there can be multiple digital devices, and if their clocks are not synchronized, timing errors known as jitter and clock drift occur. These can produce unwanted clicks, or the recordings can gradually slip out of sync (I experienced this while recording a long jazz session in a school studio where the master clocks of two devices were set differently).

To prevent this, digital devices are synchronized using an external clock generator. If you are not using multiple digital devices, the internal clock generator of the device should suffice, and there is no need for an external clock generator.

An article in the magazine Sound On Sound (SOS) even noted that using an external clock generator does not necessarily improve sound quality.

Today, we covered Sample Rate, Bit Depth, ADC (Analog to Digital Converter), DAC (Digital to Analog Converter), Word Clock, and Jitter.

While these fundamental concepts can be a bit challenging, knowing that they exist is essential if you’re dealing with audio and mixing. If you find it difficult, just think, “Oh, so that’s how it works!” and move on.

See you in the next post!