AI08: For the love of Stable Diffusion.

Back when Runway ML confidently hallucinated physics and Stable Diffusion thought "deeper" meant "infinitely recursive"—a collection of early generative video experiments from 2022-2023, complete with all the glorious AI slop.


The Trash We're Attached To

Look, I know this looks like trash. A lot of this is exactly what people call "AI slop" now—that distinctive jankiness of early generative tools struggling to understand basic physics while confidently rendering impossible scenarios. But there's a certain attachment to it, because at the time this was a brand-new technique.

Imagine learning the fundamentals of animation over a week. That's what learning these tools was like.

Especially getting your own Stable Diffusion install running on your own computer. On a Mac. Which kept throwing errors whose solutions weren't on any online forum, so you had to figure these things out yourself. So yeah, I'm still putting up the slop. And believe it or not, each of these took a whole night to create. A few of the earlier ones were made in a Google Colab notebook instead. That's right, that's how old this technique is.
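For context, here's roughly what that local setup looked like. This is a minimal sketch assuming the Hugging Face diffusers library on Apple Silicon; the checkpoint name and prompt are placeholders, not my exact configuration.

```python
# Minimal sketch of running Stable Diffusion locally on a Mac (assumed setup,
# using the diffusers library; checkpoint and prompt are just examples).
import torch
from diffusers import StableDiffusionPipeline

# Apple Silicon uses the MPS backend instead of CUDA, which is where most of
# the forum advice (written for NVIDIA GPUs) stopped being useful.
device = "mps" if torch.backends.mps.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to(device)
pipe.enable_attention_slicing()  # the usual trick to fit in laptop memory

image = pipe("an orange flower, macro photo", num_inference_steps=30).images[0]
image.save("test_frame.png")
```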

Sometimes the most valuable experiments are the ones that fail in ways nobody's documented yet. You learn by breaking things that barely work to begin with.



Deeper in the Flower

Made with: Stable Diffusion (Google Colab), After Effects

This is the first one where I started from a photo I'd shot and used it to explore the image's darker side. An early experiment with recursive zooming through an orange flower's fractal geometry. The animation uses iterative prompting to maintain visual coherence while continuously diving deeper into the flower's structure, where petals become patterns become new petals.

It's less about technical perfection and more about discovering what happens when you let AI interpret "deeper" literally. Made in Google Colab back when I was figuring out how to bridge traditional motion design sensibilities with generative tools that have their own ideas about what comes next.
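If you're curious what that iterative zoom loop roughly looks like, here's a minimal sketch using the diffusers img2img pipeline. The checkpoint, crop factor, strength, and prompt are illustrative assumptions, not the exact settings or notebook I used at the time.

```python
# Sketch of a recursive "zoom deeper" loop: crop the center of the last frame,
# scale it back up, and re-diffuse it with a low strength so each frame stays
# coherent with the previous one. Assumed workflow, not the original notebook.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # example checkpoint
).to(device)

frame = Image.open("orange_flower.jpg").convert("RGB").resize((512, 512))
frames = [frame]

for i in range(24):  # one diffusion pass per output frame
    w, h = frame.size
    # Crop the center and scale it back up, so each pass dives "deeper".
    crop = frame.crop((w // 8, h // 8, w - w // 8, h - h // 8)).resize((w, h))
    # Low strength keeps the new frame visually tied to the previous one.
    frame = pipe(
        prompt="macro photo of an orange flower, fractal petals, dark moody light",
        image=crop,
        strength=0.35,
        guidance_scale=7.5,
    ).images[0]
    frames.append(frame)

# Save the frames for compositing in After Effects (or any video tool).
for i, f in enumerate(frames):
    f.save(f"flower_zoom_{i:03d}.png")
```

A loop like this, run frame by frame on a laptop, is exactly the kind of thing that eats a whole night.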

The whole thing took an entire night to render and composite. For something that lasts maybe 15 seconds.


Vancouver Animation (Floating Everything Edition)

Made with: Stable Diffusion, After Effects

I was in Vancouver when I made this, so I thought I should make something for my community. It's a short animation of Vancouver, but as you can see, the model is more than happy to make everything float.

The buildings drift. The perspective warps. Nothing stays anchored to anything else because early Stable Diffusion had zero concept of structural integrity or gravity. There's no point fighting the noise, so I figured trying to make an artwork that includes the noise is the best way forward.

When your tools insist on being broken in specific ways, you either fight it or design around it. I chose the latter.


Forest Time Lapse (One Million Years, Give or Take)

Made with: Stable Diffusion, After Effects

In this one, I tried to create a time lapse of a forest. It is not accurate, but it does get the mood right. Feels like a time lapse of one million years or so.

Trees morph into other trees. Seasons blend into each other without transition. The forest breathes and shifts in ways no actual forest would, but there's something compelling about watching an algorithm hallucinate deep time. It doesn't understand geological timescales or ecological succession, but it understands change, and that's enough.

This is why I kept exploring. The results weren't what I asked for, but they were often more interesting than what I imagined.


Style Transfer Experiments (Before We Knew What to Call It)

Made with: Stable Diffusion (prompt-based style transfer)

In this example, you'll see me attempting what we'd now call style transfer, except I'm doing it with Stable Diffusion and nothing but prompts. Back then, there wasn't a clean "style transfer" button—you just threw prompts at the algorithm and hoped it understood what you meant by "in the style of..."

Sometimes it worked. Sometimes it produced something adjacent to what you wanted. Sometimes it gave you complete nonsense. But the nonsense often had character, a distinctive AI-hallucination quality that felt worth preserving.
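For the curious, a rough sketch of how prompt-only style transfer can work as a single img2img pass, again assuming the diffusers library; the source photo, checkpoint, and style prompt are placeholders rather than my originals.

```python
# Prompt-based "style transfer" sketch: the style lives entirely in the prompt,
# and strength decides how far the result drifts from the source photo
# (higher = more style, less photo). Assumed setup, illustrative values.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # example checkpoint
).to(device)

source = Image.open("source_photo.jpg").convert("RGB").resize((512, 512))

styled = pipe(
    prompt="the same scene, in the style of a ukiyo-e woodblock print",
    image=source,
    strength=0.6,
    guidance_scale=8.0,
).images[0]
styled.save("styled_frame.png")
```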


Cat on a Tightrope (Between Skyscrapers)

Made with: Runway ML (early version)

A cat cycling across a tightrope strung between New York skyscrapers. This was made back when AI video generation was still figuring itself out—the animation is janky, the physics don't quite work, and the cat's relationship with the bicycle is... interpretive at best.

The AI had big ideas about urban tightrope cycling but zero understanding of how cats, bikes, or gravity actually work together. Another whole night of rendering for something that barely makes sense.

But that's kind of the point. This was about testing what Runway could do before these tools got polished, when the results were unpredictable and the mistakes were half the charm.


Surfing Over New York

Made with: Runway ML (early version)

A surfer rides invisible waves through the New York skyline. The physics make no sense, the surfboard's relationship with gravity is purely theoretical, and the city below exists mostly as a suggestion.

This was early-stage Runway, before the tools learned to smooth everything out. The AI confidently generated a surfer gliding through air above skyscrapers without bothering to explain how or why. The result is less "action sports" and more "fever dream with a surfboard."

The most interesting results sometimes come from asking AI tools to do things they absolutely shouldn't be able to do. You learn their limitations by watching them confidently produce impossible scenarios.


Astronaut in Steampunk Airship

Made with: Runway ML (early version)

An astronaut aboard a steampunk airship, looking down at... something. The specifics don't really matter because early Runway had its own ideas about what "steampunk airship" meant, and those ideas were only loosely tethered to mechanical logic or narrative coherence.

The astronaut exists, the airship exists, the looking-down happens—but the relationship between these elements is more vibes than physics. Astronauts plus steampunk plus aerial perspective equals... well, this.

Another night spent rendering something that makes no structural sense but somehow still works as an image.


What I Learned From Broken AI

These experiments taught me more about generative tools than any successful render could. You learn an algorithm's boundaries by watching it fail. You understand its biases by seeing what it invents when it doesn't know what you're asking for.

Setting up Stable Diffusion on a Mac that didn't want to cooperate. Troubleshooting errors that had no Stack Overflow answers. Spending entire nights rendering 15-second clips that looked like what everyone now dismisses as "AI slop."

But here's the thing about slop: it's only slop if you're trying to make it look clean. If you accept that early generative tools have a specific aesthetic—warped physics, floating objects, morphing structures, hallucinated details—then you can design with those limitations instead of against them.

Early Runway ML and Stable Diffusion were like collaborators who spoke a different language. They'd interpret your prompts through their own strange logic, producing results that were technically what you asked for but executed in ways you'd never imagine.

That gap between intention and execution is where the interesting work happens. Now that these tools have gotten more sophisticated, that unpredictable chaos has mostly disappeared. Which makes these early experiments feel like artifacts from a specific moment in AI development—when the algorithms were confident but wrong, ambitious but confused.

I still use generative video tools (Runway, Luma.ai, Veo, Krea.ai), but the current versions are too good at what they do. They've learned to produce technically correct results, which is great for production work but less interesting for experimentation.

These videos remain reminders that sometimes the best creative partner is one that doesn't fully understand what you're asking for. And sometimes spending a whole night rendering something that barely works teaches you more than a week of perfect outputs ever could.








