I built a rot factory

technology·December 27, 2025
brain-rot

Intro

Have you ever thought that the world would be better off with more AI slop? Well, I didn't, but that didn't stop me from building an app that creates it!

Brain Rot Web App

High-level overview

This app automatically creates short videos using AI. You define a persona (an audience or content idea), and the system generates a video built from images and prompts. Once the persona is set up, everything else runs on its own through a daily cron job.
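For the curious, the daily trigger is just Cloud Scheduler poking the service on a cron schedule. Below is a minimal sketch of creating that job with the Python client; the project, region, endpoint, and schedule are placeholders, not the real config.

```python
from google.cloud import scheduler_v1

# Placeholder values for illustration only.
PROJECT, REGION = "my-project", "us-central1"
WORKER_URL = "https://brainrot-worker.example.com/run-daily"

client = scheduler_v1.CloudSchedulerClient()
parent = f"projects/{PROJECT}/locations/{REGION}"

# One HTTP job that hits the app every morning; the app then
# generates and posts a video for each configured persona.
job = scheduler_v1.Job(
    name=f"{parent}/jobs/daily-brain-rot",
    schedule="0 9 * * *",          # every day at 9am
    time_zone="America/New_York",
    http_target=scheduler_v1.HttpTarget(
        uri=WORKER_URL,
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)
client.create_job(parent=parent, job=job)
```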

Web app screenshot

Tech Stack

  • Next.js
  • Python FastAPI
  • Google Cloud (Cloud Run, Cloud Scheduler, YouTube Data API, Firestore, Firebase, Vertex AI)
  • ElevenLabs
  • Tilt
  • Docker
  • FFmpeg

Demo Video

Technical Breakdown

The app is built around three main pieces: the UI, the backend, and the worker. The UI is where users upload their videos and music, which later get enhanced with AI-generated vocals and captions using ElevenLabs. The backend mostly acts as the glue, coordinating requests and passing things along, but the real work happens in the worker.
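To give a feel for how thin that glue layer is, here's a rough sketch of what a backend endpoint could look like, assuming FastAPI plus Firestore and made-up field names; the real request model carries more than this.

```python
from fastapi import FastAPI
from google.cloud import firestore
from pydantic import BaseModel

app = FastAPI()
db = firestore.Client()

class RenderRequest(BaseModel):
    # Hypothetical fields; the real model has more metadata.
    persona_id: str
    video_gcs_uri: str
    music_gcs_uri: str | None = None

@app.post("/jobs")
def create_job(req: RenderRequest):
    # The backend just records the job; the worker picks it up
    # and does the actual video/audio processing.
    doc = db.collection("jobs").document()
    doc.set({**req.model_dump(), "status": "queued"})
    return {"job_id": doc.id, "status": "queued"}
```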

The worker is where all the heavy lifting lives. It handles video processing, audio overlays, caption generation, and all the long-running tasks that would be way too slow to do in real time.

One thing worth calling out is how uploads work. Before building this app, I had zero experience with resumable uploads, but they ended up being a core part of the system. To support long-form videos, the app uses GCS-native resumable uploads, which let users upload large files straight from the frontend to Cloud Storage. The video gets broken into 10MB chunks and uploaded piece by piece, which makes the whole process way more reliable, even for files up to 5GB.
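One way to wire that up (a sketch, not the exact code) is to have the backend mint the resumable session with the GCS Python client and hand the session URL to the frontend, which then PUTs the 10MB chunks directly to the bucket.

```python
from google.cloud import storage

BUCKET = "brainrot-uploads"  # hypothetical bucket name

def start_resumable_upload(object_name: str, content_type: str, origin: str) -> str:
    """Create a GCS resumable upload session and return its URL.

    The frontend PUTs 10MB chunks to this URL with Content-Range
    headers, so the file never has to pass through the backend.
    """
    blob = storage.Client().bucket(BUCKET).blob(object_name)
    # `origin` locks the session to the browser's origin for CORS.
    return blob.create_resumable_upload_session(
        content_type=content_type,
        origin=origin,
    )
```

If a chunk fails, the client can ask the same session URL how much has been persisted and resume from there, which is what makes multi-gigabyte uploads tolerable on flaky connections.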

This setup keeps the backend lightweight, improves upload stability, and makes it much easier for users to upload large videos without things randomly failing.

Once a video is fully uploaded, the worker grabs the bucket URL, temporarily downloads the file, and starts processing it. So what does that processing actually look like?

There are four main things the worker does:

  • Trims the video to the selected length
  • Adds background music
  • Generates and overlays captions
  • Applies subtitles

All of this is done using FFmpeg, which is honestly one of the most powerful tools that doesn’t get talked about enough. With FFmpeg, we’re able to manipulate pretty much any media format we need, all directly from Python.
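As a taste of what that looks like, here's a stripped-down sketch of the worker step: pull the video down from the bucket, then let one FFmpeg invocation trim it, mix in the music, and burn in the subtitles. Paths, filter settings, and the single-pass approach are illustrative, not the actual pipeline.

```python
import subprocess
from google.cloud import storage

def process_video(video_uri: str, music_path: str, subs_path: str,
                  length_s: int, out_path: str) -> None:
    """Sketch of the worker's processing step (illustrative only)."""
    # Pull the uploaded video from GCS down to a local temp file.
    bucket_name, _, blob_name = video_uri.removeprefix("gs://").partition("/")
    local_in = "/tmp/input.mp4"
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(local_in)

    # One FFmpeg pass: burn the subtitles onto the video, mix the
    # original audio with the background music, trim to length.
    subprocess.run([
        "ffmpeg", "-y",
        "-i", local_in,
        "-i", music_path,
        "-filter_complex",
        f"[0:v]subtitles={subs_path}[v];"
        "[0:a][1:a]amix=inputs=2:duration=first[a]",
        "-map", "[v]", "-map", "[a]",
        "-t", str(length_s),
        out_path,
    ], check=True)
```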

Diagram

Wrap it up

That’s about as deep as I’ll go for now, but it works! I’m already posting YouTube Shorts, sending AI slop to a screen near you!