Life Update: Live Sound Engineer, Mixing Instructor, and Thesis Work (Oct 26, 2025)

Hello everyone, this is mixing engineer and music producer Jooyoung Kim.

It’s been another busy week, so this one’s going to be a short life update post. ^^;


Last Saturday, we held the second “Frisketch x Yeonjun Yoon | Um” concert.

I worked as the sound director again. We used the same venue as the June concert, but this time we positioned the piano differently.

In the meantime, I picked up another Peluso P87 and added two RØDE NT55s for ambience. The sound came out much closer to what I had in mind this time.

Because of the speaker placement, I decided to run the mix in mono. Since the only instrument was the piano, the main mic (P87) captured it beautifully.

That said, when the artist mentioned, “I wish the piano tone were a bit less metallic,” I completely agreed. Haha.


Then on Tuesday and Wednesday, I assisted Sound Director Sung-won Yang in his class “Mixing with IR Reverb” at the Arko Arts Human Resources Institute in Ilsan.

On Wednesday afternoon, I took over and led the mixing lecture myself.

Sometimes I wonder if I made things too difficult, but the topics I consider most important in mixing tend to be the challenging ones.

So I told the students, “It’ll all make sense later—trust me,” and just went for it. Haha.

Honestly, I had so much I wanted to cover, but time was short. I trimmed and trimmed until the lecture fit the schedule perfectly—but it still felt a bit like a Spartan session.

Hopefully it wasn’t too much to absorb all at once.


By the way, I don’t think I’ve mentioned this here before, but I’ve been teaching major practical courses at my graduate school since finishing my master’s degree.

I used to give private lessons only to a few close acquaintances, but now that I’m officially teaching as part of the program, I’ve been thinking a lot more seriously about pedagogy and teaching methods.


Also, my master’s thesis has finally appeared on D-Collection (the archive of theses from South Korean universities).

The topic is the same as my journal publication, though since it’s an earlier research version, the experimental conditions may feel a bit rougher.

I wanted to cite my published journal paper in the thesis abstract (which is normally standard practice), but unfortunately the publication and submission dates overlapped too closely.

I even contacted the university library afterward, but they said the submission was already finalized and online revisions weren’t possible.

Still, since the journal was published first, there’s no real issue academically.

(For context: a thesis isn’t considered an official publication in the way a journal paper is; once your advisor approves it, the degree is granted.)

Interestingly, I couldn’t find another case online where the timing overlapped this perfectly. ^^;


As for my recent live recordings, I’ve finished most of the mixing, and now my Mac Studio is running endless deep learning sessions again.

Compared to my old Windows PC with a GTX 1080, the Mac runs quieter and stays much cooler.

I’m redoing the experiment I failed back in May, and this time I plan to take my time and turn it into a proper paper.

I already got IRB approval, so I’m hoping the training finishes soon.

(Each CNN run takes about 20 days, by the way… hahaha… ha… 😭)

I’m planning to try a WaveNet model as well, but I’m slightly worried it might overrun the IRB deadline. 😭


That’s how things have been lately.
See you in the next post!

Won the Gold Prize in the DTM Koshien (甲子園) + Life Update (Oct 17, 2025)

Hello, this is Mixing Engineer and Music Producer Jooyoung Kim.

Lately I’ve mostly been posting about sound engineering projects, but this time, I finally did something more in line with my role as a music producer.

Last month, I submitted one of my tracks to the DTM Koshien, organized by Movement Production in Japan.

To be honest, I’d been so busy afterward that I almost forgot about it—but out of a total of 431 entries, my track placed in the top 11 and was officially awarded the Gold Prize.

All of the winning entries are labeled as a “Gold Prize,” so I suppose it’s more like being recognized as a finalist. (The video thumbnail says “Gold Prize Nominee,” but it really is a Gold Prize… ^^)

The song I submitted is called Seiun (星雲, read “Seongun” in Korean, meaning “nebula”).
Originally, it was a full track with two verses, but since the competition required only a one-verse submission, I merged the first and second verses into a single version.

Also, the bass in the song was performed by me and painstakingly edited note by note. As with most of my works, I handled everything on my own—vocals, instruments, arrangement, mixing, and mastering.

I really appreciate that, just like with Sonicwire before, Movement Production judged the entries fairly and gave the award to a foreigner like me. I’d like to release the full version with both verses someday, but that’s something I’ll need to discuss further.

The awards ceremony and event, along with the grand prize announcement, will take place in Tokyo next Sunday. I’d really love to attend, but between my limited Japanese and the cost of the flight ticket… it’s not easy.

Money’s been draining fast these days. For instance, I just got my TOEFL results today—and I completely messed up the speaking section, so I’ll need to retake it… another 300 USD gone. This will be my third attempt…

These photos are from last Tuesday’s performance of Practice Piece: Triptych, held at a venue called Seoul National University Power Plant. It featured simultaneous live recording and amplification of saenghwang (Korean mouth organ), drums, processed vocals, piano, and synthesizers. The artist is considering releasing a live album, and I’m currently working on that.

Tomorrow I’m also working as a sound director for a small concert, and next week I’ll be assisting as a lecturer at a place called Arko.

Fortunately, I do have some projects lined up, like live recording for concerts and teaching—but all my earnings immediately go into buying gear like microphones and cables, so I’m broke again. ^^; I suppose that’s just the fate of those of us in music and sound.

If only I had hit my target TOEFL score this time, I could have worried less for a while…

On top of that, I’ve also submitted a paper with a fairly simple topic. If it gets rejected again, it’s going to hurt, but at least I managed to finish it well enough to submit. Time to start writing the next one.

Anyway, that’s a quick update from me!
See you in the next post. 🙂

I Earned the Stage Sound Engineer Level 3 Certification

Hello, this is Jooyoung Kim—sound engineer and music producer.

In Korea, there is a government-issued certification called Stage Sound Engineer (Level 3 is the entry level, followed by Levels 2 and 1).
It doesn’t have a direct equivalent in the US, UK, or Canada, but you can think of it as something like a formal audio engineering license, proving both practical and theoretical knowledge in live sound.

As I’ve been working in the audio field, I realized that while practical skills are essential, having an official certification also helps when listing credentials on a résumé. For a long time, I wasn’t sure if it was worth pursuing—but I figured if I didn’t get it this year, it would only become harder later. So, I decided to take the exam.


Studying for the Written Exam

I had already bought some textbooks back when I ambitiously wanted to “master all of audio engineering.”
Unfortunately, the exam content had been updated recently, which meant my older materials were out of date.

At first, I tried to get by without buying the new edition, but after checking last year’s exam questions, I realized too many things had changed. So, I finally bought the updated books just two days before the exam and studied them intensely.

In total, I prepared for about ten days—definitely a crash course. The audio-related parts were manageable thanks to my background, but the legal regulations and stage-specific terminology were quite difficult. Memorization has never been my strong suit (I struggle a lot even with English vocabulary these days!).

I didn’t go through the entire book cover to cover, but I solved past exams one set per day and focused on reviewing the parts I got wrong. It was a very “efficient cramming” strategy.


The Practical Exam

Since much of the practical portion overlapped with my usual work, I didn’t need to prepare too heavily.

The main part was a listening test: adjusting pink noise with a 15-band graphic EQ to balance different frequency ranges, and identifying test tones across the EQ bands.

Because I couldn’t find a simple 15-band graphic EQ plugin anywhere, I actually built one myself as a VST3 and AU plugin. If anyone needs it, I uploaded it here:

🔗 GitHub – JYKlabs/15-Band-Graphic-EQ

Mac users can simply extract the files and place them in /Library/Audio/Plug-Ins/VST3 and /Library/Audio/Plug-Ins/Components.

Windows users can place the VST3 file in their VST3 plugin directory. (Since I only built it on Mac, I haven’t tested it on Windows yet.)

The plugin is extremely minimal—no extra features, just a straightforward EQ.
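For anyone curious what a single band of such an EQ does under the hood, here is a minimal Python sketch (not the plugin’s actual source, which is on GitHub): one peaking biquad per the standard RBJ Audio EQ Cookbook formulas, centered on the usual 2/3-octave ISO frequencies that a 15-band unit covers.

```python
import numpy as np
from scipy.signal import freqz

# Standard 2/3-octave center frequencies used by 15-band graphic EQs (Hz).
CENTERS = [25, 40, 63, 100, 160, 250, 400, 630,
           1000, 1600, 2500, 4000, 6300, 10000, 16000]

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ Audio EQ Cookbook peaking filter; returns normalized (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Boosting the 1 kHz band by 6 dB gives exactly 6 dB of gain at 1 kHz.
fs = 48000
b, a = peaking_biquad(1000, 6.0, 2.1, fs)  # Q ≈ 2.1 ~ 2/3-octave bandwidth
_, h = freqz(b, a, worN=[1000], fs=fs)
print(round(20 * np.log10(abs(h[0])), 2))  # → 6.0
```

Fifteen of these in series, one per center frequency, is essentially the whole device.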

During the actual exam, there were 10 listening questions in total. The first five (identifying effects) were fairly easy, but the last five (detecting EQ adjustments applied to music or noise) were much harder. Since the exam environment was different from my usual studio setup, I struggled a bit.

Also, I tend to think of EQ in terms of musical intervals, but the test was structured entirely in octave relationships, which threw me off at first.

Still, since passing only required 6 correct answers out of 10, I managed to make it through. Thankfully, my hearing was in decent condition that day (sometimes ear fatigue can really mess me up).


Final Thoughts

Unlike in South Korea, many Western countries don’t offer official government-issued certifications specifically for live or stage sound engineering. Instead, recognition and credibility often come from trusted industry certifications, educational credentials, or portfolio evidence.

For example, the Certified Audio Engineer (CEA) credential from the Society of Broadcast Engineers (SBE) is well-regarded and requires both experience and passing a technical exam. For those focused on live sound, programs like Berklee’s Live Events Sound Engineering Professional Certificate offer structured, practical training.

Even if you already have solid skills, it can sometimes be difficult to secure projects or convince clients without something official to show. That’s where certifications and structured programs help: they provide a clear, external validation of your abilities and open doors that pure experience alone may not.

At the end of the day, audio work is unpredictable: sometimes you’re mixing in a studio, other times you’re troubleshooting live sound under pressure. The more prepared you are, the easier it is to adapt.

Thanks for reading, and I’ll see you in the next post!

How to Set the Subwoofer Crossover Frequency?

Hello, I’m Jooyoung Kim, an engineer and music producer.

In my previous post, I mentioned that I had written a paper on subwoofers, right? On August 12, my paper, Group Delay-Driven Crossover Optimization for Subwoofer Satellite Systems at Listening Position, was officially published.

I had planned to write about it as soon as the paper was out, but time has been tight lately… ^^;;

This post is about how to set the crossover frequency for subwoofers.

The motivation behind this was pretty straightforward. Not only studios but also many individual users incorporate subwoofers into their setups. However, there’s surprisingly little guidance out there on how to properly set the crossover frequency.

I myself use two subwoofers!

From a perceptual perspective, there are papers suggesting that humans don’t easily localize sound below a certain frequency (commonly cited somewhere around 80–120 Hz), so the crossover should be set below that threshold. But when it comes to numerical analysis, the only paper I could find was Dr. Bharitkar’s Automatic Crossover Frequency Selection for Multichannel Home-Theater Applications.

In that paper, the claim was that a flatter frequency response in the low-frequency range is ideal. However, dips in very narrow frequency bands often don’t show up clearly in numerical calculations of variance.

I was convinced there had to be a better approach. So last summer, I bought a measurement microphone and started taking measurements without a clear plan.

By experimenting with different crossover frequencies, I collected a ton of data and made an initial discovery: there’s a correlation between Group Delay (or Excess Group Delay) and the frequency response.

After trying various configurations, I found that Excess Group Delay wasn’t as strongly correlated, but peaks in the Group Delay (whether positive or negative) corresponded to dips in the frequency response. Moreover, the smaller the absolute value of the Group Delay, the less pronounced those dips became.
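As a toy illustration of that correlation (this is not the paper’s measurement data, just a simple model): a direct sound plus a single attenuated reflection produces comb-filter dips, and the group delay of the combined response spikes, negatively in this case, at exactly those dip frequencies.

```python
import numpy as np

# Toy model: direct sound plus one attenuated reflection,
# H(w) = 1 + a * exp(-j * w * tau), with tau = 1 in normalized units.
a, tau = 0.9, 1.0
w = np.linspace(0.01, 2 * np.pi - 0.01, 8192)
H = 1 + a * np.exp(-1j * w * tau)

mag_db = 20 * np.log10(np.abs(H))
gd = -np.gradient(np.unwrap(np.angle(H)), w)  # group delay = -d(phase)/dw

# The magnitude dip and the group-delay peak land at the same frequency (w = pi).
print(round(w[np.argmin(mag_db)], 2), round(w[np.argmax(np.abs(gd))], 2))
```

The deeper the dip (reflection level closer to the direct sound), the larger the group-delay excursion, which is the same tendency described above.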

I conducted experiments in a university classroom and my own workspace, using a Finite Element (FE) model to demonstrate this correlation. My conclusion was that the crossover frequency should be chosen to minimize the maximum absolute value of the Group Delay in the low-frequency range.

Here’s the mathematical expression for it:

ω_oc = arg min over ω_c ∈ [ω_LC, ω_HC] of max |GD(ω_i)|, with α and β applied as low-frequency correction factors

Looks a bit daunting, doesn’t it? ^^ Let me break down the terms:

  1. ω_oc: The optimal crossover frequency (angular frequency is typically denoted by ω).
  2. ω_LC: The lower bound of the crossover frequency (Low Crossover).
  3. ω_HC: The upper bound of the crossover frequency (High Crossover).
  4. GD(ω_i): The Group Delay value at frequency ω_i.
  5. α, β: Correction factors for the low-frequency range.

I included α and β because I noticed that Group Delay can vary significantly outside the adjustable crossover frequency range. These correction factors help account for that.

Setting the crossover frequency this way not only benefits phase response (since Group Delay is the rate of change of phase) but also improves the frequency response. (For those diving deeper: this is trivial in minimum-phase systems, but real-world systems aren’t always minimum-phase, which makes this approach meaningful.)
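That parenthetical is easy to verify numerically. As a sanity check, a pure two-sample delay should show a constant group delay of two samples, whether you differentiate the unwrapped phase yourself or use SciPy’s closed-form routine:

```python
import numpy as np
from scipy.signal import freqz, group_delay

b, a = [0.0, 0.0, 1.0], [1.0]  # pure 2-sample delay
w, h = freqz(b, a, worN=1024)  # w in rad/sample

# Group delay as the negative derivative of the unwrapped phase:
gd_numeric = -np.gradient(np.unwrap(np.angle(h)), w)

# SciPy's closed-form computation agrees:
_, gd_scipy = group_delay((b, a), w=w)
print(np.allclose(gd_numeric, 2.0), np.allclose(gd_scipy, 2.0))  # → True True
```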

Additionally, I applied 4th-order Linkwitz-Riley filters to both the satellite speakers (the main speakers in a subwoofer-satellite system are often called “satellite speakers”) and the subwoofer, while carefully aligning timing and phase. These conditions are critical for the approach to work.
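For reference, a 4th-order Linkwitz-Riley filter is simply a 2nd-order Butterworth cascaded with itself, and the property that makes it attractive for crossovers, the low-pass and high-pass legs summing to a flat magnitude with no polarity flip, is easy to check in a few lines (the 80 Hz crossover below is only an example value, not a recommendation):

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs, fc = 48000, 80.0  # e.g. an 80 Hz sub/satellite crossover

def lr4(fc, fs, btype):
    """4th-order Linkwitz-Riley: a 2nd-order Butterworth cascaded with itself."""
    sos = butter(2, fc, btype=btype, fs=fs, output='sos')
    return np.vstack([sos, sos])

w, h_lp = sosfreqz(lr4(fc, fs, 'low'), worN=4096, fs=fs)
_, h_hp = sosfreqz(lr4(fc, fs, 'high'), worN=4096, fs=fs)

# LP + HP sums to an allpass: flat magnitude, both legs in phase at crossover.
summed = np.abs(h_lp + h_hp)
print(np.allclose(summed, 1.0))  # flat to within numerical precision
```

This flat-sum behavior is exactly why the timing and phase alignment mentioned above matters: any extra delay between subwoofer and satellites breaks the in-phase condition at the crossover.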

I was working on a tool to automatically measure and output audio based on this method, but analyzing the data to select the optimal crossover frequency turned out to be quite time-consuming. With other papers and projects piling up, I’ve had to put it on hold for now.
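The selection step that such a tool would automate is conceptually simple. Here is a sketch with hypothetical group-delay measurements standing in for real ones (the candidate frequencies and values below are made up for illustration):

```python
import numpy as np

def pick_crossover(gd_by_candidate):
    """Given {candidate_crossover_hz: measured group-delay array over the low
    band}, return the candidate whose worst-case |group delay| is smallest."""
    return min(gd_by_candidate,
               key=lambda fc: np.max(np.abs(gd_by_candidate[fc])))

# Hypothetical group-delay measurements (seconds) at three candidate settings:
measured = {
    60:  np.array([0.004, -0.011, 0.006]),
    80:  np.array([0.003, -0.005, 0.004]),
    100: np.array([0.009, -0.014, 0.007]),
}
print(pick_crossover(measured))  # → 80 (smallest peak |GD| of the three)
```

The time-consuming part isn’t this comparison but gathering a clean group-delay curve for every candidate crossover setting, which is why the measurement side of the tool is the real work.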

If I get some free time, I’d love to revisit it. It’d be amazing if a company like Genelec saw this and added it as a feature… haha. And if they wanted to sponsor me, that’d be even better… ^^;;

I tried to explain this in a straightforward way, but the topic itself isn’t exactly simple, so I hope I got the point across clearly! 😅

If you’re curious about the detailed setup or experimental process, feel free to check out the paper or reach out to me directly.

Until next time! 😊