The Lazy (and Efficient) Way to Sort and Label My Interview Video Footage for My Clients

Speed has a way of unlocking so many different efficiencies and new workflows. It gets me giddy!

Case in point: I just finished filming a massive set of interviews, at least a dozen people, with two different camera angles and backup audio to coordinate.

Normally, I’d have to go through every single video manually. With a good process, this is not really that difficult. It just takes a second to sit down, think it through, and put all of those different resources together.

But swapping out the SD card in between takes is a hassle. And I truly despise renaming files, moving them, and sorting them at the end of the day.

My Pre-Production Tagging Plan

In advance, I usually work out a full-on video plan with my clients, including approving our overall script, what goes on the teleprompter, and the questions asked in the interview. This saves me so much time in post-production for reference, spelling, and whatnot.

The nice part? Now I have data that I can export as markdown files, a happy format for all AIs, and feed into Claude Code as well as Google’s Gemini CLI.

I’m able to prompt the CLI: “Okay, go ahead and make a directory structure for me.” This includes naming the primary video each person appears in, then creating subfolders for each person.

I can also have it create a rename schema in a CSV file. This lets me put in a job code, the name of the person interviewed, the film take number, etc.

Poof: a consistent naming scheme.
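To make the idea concrete, here is a minimal sketch of turning a rename-schema CSV into consistent filenames. The column names (job_code, person, take) and the filename pattern are my own illustrative guesses, not the exact schema from my workflow.

```python
import csv
import io

# Illustrative schema, the kind of CSV the CLI can generate for you.
SCHEMA_CSV = """job_code,person,take
ACME01,Jane Doe,1
ACME01,Jane Doe,2
ACME01,Raj Patel,1
"""

def build_name(row: dict, ext: str = ".mp4") -> str:
    """Turn one schema row into a consistent filename."""
    person = row["person"].replace(" ", "")  # strip spaces for a clean name
    return f'{row["job_code"]}_{person}_T{int(row["take"]):02d}{ext}'

names = [build_name(r) for r in csv.DictReader(io.StringIO(SCHEMA_CSV))]
print(names)
# ['ACME01_JaneDoe_T01.mp4', 'ACME01_JaneDoe_T02.mp4', 'ACME01_RajPatel_T01.mp4']
```

Once every clip follows one pattern like this, sorting and searching become trivial.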

Helping the AI to Recognize Files with Transcripts

How do I get the AI to understand that a certain file corresponds to a certain folder?

Transcripts. Audio transcripts make the context of video and audio files easy to understand. I create transcripts really quickly using MacWhisper with NVIDIA Parakeet, a new model that’s 750x faster than OpenAI’s Whisper.

That speed is phenomenal. I just drag and drop a bunch of files in, bulk transcribe them in place next to each file, and then let the AI do the work of studying the transcripts, renaming the files based on them, and copying them over into the folders I need. All while it sorts my files in the background. In one run!

It’s actually pretty incredible how much mental load this takes off your brain. And you may be wondering: how can you trust its capability? What if it renames the wrong file?

This is where context engineering is so important. I make sure my markdown file has the right cues inside it, matching whatever language is being used on set. And, to make it easier for me, during the filming process I announce the relevant video tag out loud for the mics to pick up (i.e., for the transcript to capture).

For example, if the markdown file said ‘video one introduction,’ then I would also say that out loud during filming. I would give other cues too, almost like live tagging my video with my voice. Then, when I transcribe, that tag comes up in the text and the AI can match the clip to the right resources.
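The matching itself can be as simple as normalizing the transcript text and looking for known tags. Here is a rough sketch; the tag list and the punctuation-stripping rule are my own assumptions, since in practice the AI does this fuzzily from the markdown file.

```python
import re

# Hypothetical tag list pulled from the pre-production markdown plan;
# these exact phrases are illustrative.
KNOWN_TAGS = ["video one introduction", "video two product demo"]

def find_spoken_tag(transcript: str):
    """Return the first known tag spoken in a transcript, or None."""
    # Lowercase and strip punctuation so "Video one, introduction!"
    # still matches the written tag.
    text = re.sub(r"[^a-z ]", "", transcript.lower())
    for tag in KNOWN_TAGS:
        if tag in text:
            return tag
    return None

print(find_spoken_tag("Okay, rolling. Video one, introduction! Take three."))
# video one introduction
```

Announcing the tag on camera is what makes this reliable: the cue travels with the footage itself, so no SD-card shuffling can separate them.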

I found that Gemini does a better job of handling a larger context with all of the transcripts, especially if you’re doing thirty-minute to hour-long interviews split over many different clips. It’s also really good at handling large copy commands, because my files tend to be multiple gigabytes each.

I’ve also noticed that Claude Code crashed several times trying to run massive copy commands. Still, I prefer Claude Code over Gemini at this time.

Automate it all!

It’s just incredible that I can basically let the computer interpret the context, rename the files, and sort them into place — work an intern would typically do for me!

There is so much headroom for automation. I can run this workflow by hand now, but I could set up a watch folder via Hazel that grabs relevant files, runs them through a transcriber, sends them to the AI for analysis, and renames them. All in one drop. It just becomes that much easier to organize all of your files.
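As a rough Python stand-in for that watch-folder idea, here is what the per-file pipeline might look like. Everything here is illustrative: the stub transcriber (which in the real workflow would be MacWhisper/Parakeet), the tag-to-folder map, and the folder names are all assumptions.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical mapping from spoken tags to destination folders.
TAG_TO_FOLDER = {"video one introduction": "01_intro"}

def transcribe(clip: Path) -> str:
    """Stub: the real workflow calls an actual transcriber here."""
    return "video one introduction take two"

def route_clip(clip: Path, library: Path) -> Path:
    """Transcribe a dropped clip, match its spoken tag, and file it away."""
    text = transcribe(clip)
    for tag, folder in TAG_TO_FOLDER.items():
        if tag in text:
            dest = library / folder / clip.name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(clip, dest)  # copy, don't move: originals stay safe
            return dest
    raise ValueError(f"no known tag heard in {clip.name}")

# Demo on a throwaway temp directory standing in for the drop folder.
with tempfile.TemporaryDirectory() as tmp:
    drop = Path(tmp) / "C0042.mp4"
    drop.write_bytes(b"fake video data")
    filed = route_clip(drop, Path(tmp) / "sorted")
    print(filed.relative_to(tmp))  # sorted/01_intro/C0042.mp4
```

Note the design choice of copying rather than moving: until you trust the pipeline, the untouched originals are your safety net.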

Some people would be scared to consider this methodology and workflow. Because who wants to trust their computer with file commands that can rm -rf my /home directory?

Well (mostly) I like to stay on the riskier side of things. Ready to dangerously skip permissions with me?