Liberty’s Highlights

Going deep on Constellation Software with Mostly Borrowed Ideas (MBI)

Podcast #7 with MBI

This is podcast #7, I hope you enjoy it! 🧡

📃 Full transcript:

🎧 Listen on Spotify:

🎧 Listen on Apple Podcasts:

📺 Watch on YouTube

🔥 Constellation Software deep dive by MBI:

🎧 See also my other podcast with MBI:

Liberty’s Highlights is reader-supported. To support my work, consider becoming a free or paid subscriber. 💙


🎤 What I learned recording this one 🎧

This was my first podcast edited on Descript (thanks to friend-of-the-show David Senra (🎙) of Founders Podcast for the recommendation).

It truly is 10x software.

I still spent way too much time tweaking and editing the pod, but if I had done it using my previous Audacity workflow, it would have taken me *wayyyy* longer.

The way Descript does it is incredibly clever.

You upload your audio file to the cloud, they use machine learning to transcribe it, and the text is matched to the audio. So when you make changes to the text, the same changes are made to the audio (i.e. you can cut unnecessary words, or even move around or delete whole paragraphs).
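
Here’s a rough sketch in Python of the core idea as I understand it: each transcribed word carries timestamps, so edits to the text map directly to spans of audio to keep or cut. (The words, timestamps, and filler list are made up for illustration; this is not Descript’s actual code or pipeline.)

```python
# Minimal sketch of text-based audio editing (illustration only, not Descript's pipeline):
# every transcribed word has timestamps, so deleting words from the text
# tells you exactly which audio spans to keep.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the recording
    end: float

# Hypothetical transcript with word-level timestamps (what the speech model would produce).
transcript = [
    Word("So", 0.0, 0.2), Word("uh", 0.2, 0.6), Word("Constellation", 0.6, 1.4),
    Word("is", 1.4, 1.5), Word("is", 1.5, 1.7), Word("a", 1.7, 1.8),
    Word("serial", 1.8, 2.3), Word("acquirer", 2.3, 2.9),
]

# Edit the *text*: drop filler words and immediate repeats.
FILLERS = {"uh", "um"}
kept = []
for w in transcript:
    if w.text.lower() in FILLERS:
        continue
    if kept and w.text.lower() == kept[-1].text.lower():
        continue  # repeated word
    kept.append(w)

# The surviving words define which audio segments to keep.
segments = [(w.start, w.end) for w in kept]
print(segments)  # feed these (start, end) spans to any audio cutter
```

Everything after that is just splicing the kept spans back together in order.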

When you think about it, it’s amazing that so much compute is being used to help us more conveniently edit an audio file.

A cluster of powerful GPUs and complex AI models in the cloud went over my audio just so I could more easily cut some “uhs” and “ums” and delete repeated words... 🤯

Descript also has a “Studio Sound” filter which uses AI to try to clean up audio (removing echoes, normalizing volume, compressing dynamic range, removing background noises like computer fans, etc). It works pretty well, especially if you tweak its intensity: at 100% it can make voices sound a bit unnatural, but at 30-40% it makes them sound better in a pretty naturalistic and transparent way (MBI recorded on a laptop’s internal microphone, and to me, he sounds much better than on a lot of podcasts I’ve heard).
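
To give a sense of why a partial intensity can sound more natural, here’s a tiny sketch of a wet/dry blend: mix the processed signal back with the original in proportion to the intensity. (This is just my own illustration of the concept; I have no idea how Studio Sound is actually implemented under the hood.)

```python
# Illustration of an "intensity" control as a wet/dry mix (my own sketch, not Descript's code):
# 0.0 = untouched audio, 1.0 = fully processed audio.

import numpy as np

def apply_filter_at_intensity(dry: np.ndarray, wet: np.ndarray, intensity: float) -> np.ndarray:
    """Linear blend of the raw ("dry") and processed ("wet") signals."""
    intensity = float(np.clip(intensity, 0.0, 1.0))
    return (1.0 - intensity) * dry + intensity * wet

# Fake one second of 48 kHz audio: a voice-like tone plus noise,
# and an idealized "processed" version with the noise stripped out.
sr = 48_000
t = np.linspace(0, 1, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
dry = voice + 0.05 * np.random.randn(sr)  # raw laptop-mic recording
wet = voice                               # fully cleaned-up output

mixed = apply_filter_at_intensity(dry, wet, intensity=0.35)  # roughly "35% Studio Sound"
```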

Beatles vs Neil Young

I probably spent *way* too much time editing the podcast, making tiny edits, tightening things up, trying to improve how the voices sound with various plug-ins…

As with music, I feel like there’s a spectrum for podcast editing that goes from “Beatles” to “Neil Young”. Either you spend months in the studio looking for just the right sound, or you play it live once and then put that directly on the album.

Lots of podcasters basically release whatever they recorded without touching it much, if at all, while others will re-listen to it 2-3x, doing edit passes each time to remove breath sounds, crutch words, stuttering, long pauses, etc.

Some of my favorite podcasts are more on the Beatles side, like Cortex or ATP, for example, with great sound quality and everything being very tight. So that’s what I’m trying to emulate. Not saying I’m on that level, but I enjoy trying and learning…

You can’t really judge how well I’m doing because you haven’t heard the raw audio, but I’m doing it largely for my own pleasure, because I enjoy polishing things up. (The raw audio was 1h30min while the final edit is 1h09min, but I didn’t cut any real content!)

Hopefully, the end result is fairly transparent: you can’t tell that there are like 800 edits, and whatever you can notice is still a better experience than what the raw audio would’ve been ¯\_(ツ)_/¯

