OpenAI is shutting down Sora this week. The web app goes dark April 26th. I’ve been poking at it for a while, and I’ll be honest: I’m a little sad about it.
Sora was genuinely fun to play with. Not in a “this is going to replace my camera” way, but in a “huh, that’s impressive and strange” way. You’d describe a scene, hit generate, and out would come something that looked like footage from a dream — usually wrong about physics, often wrong about hands and faces, but legitimately cinematic in ways that felt like they shouldn’t be possible yet. I had a good time. Now it’s gone. OpenAI is pivoting to enterprise and coding tools, which is where the money apparently lives.
Here’s the thing, though: that closing door doesn’t really change where AI video stands for working videographers. Because the actual story isn’t about Sora — it’s about how fast this whole space moved, and what the real blockers still are.
It got here faster than I expected.
Two years ago, AI video generation was a party trick. Blurry, five-second clips of morphing shapes that sort of resembled what you asked for. The technology felt like it was a decade away from being useful. It wasn’t. Costs have dropped roughly 60 percent since early 2025, 1080p output is now table stakes on most major platforms, and the generation time for a usable clip has gone from “go get lunch” to “go get coffee.” That’s not nothing. That’s actually kind of remarkable.
But it’s still expensive, and the math doesn’t lie.
If you’re thinking about incorporating AI-generated footage into actual client work, the billing model will get your attention fast. Kling — currently one of the better options — runs about $0.50 per second at 1080p. Which means a 30-second clip costs roughly $15. That might sound manageable until you realize you’re not generating one clip — you’re generating twenty, iterating on prompts, throwing out the ones where the physics went wrong, trying again. A morning of experimentation can burn through real money before you have a single usable shot.
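That iteration math is worth making explicit. Here's a back-of-the-envelope sketch using the per-second rate quoted above; the `session_cost` helper is mine, and the numbers are illustrative, not a quote from any vendor's rate card:

```python
# Rough cost of iterating toward one usable shot with a pay-per-second
# API. Rate and clip length are the figures discussed above.

def session_cost(rate_per_second: float, clip_seconds: int, attempts: int) -> float:
    """Total spend for `attempts` generations of a clip_seconds-long clip."""
    return rate_per_second * clip_seconds * attempts

# Kling at roughly $0.50/s, 30-second clips:
print(session_cost(0.50, 30, 1))   # one clip: 15.0
print(session_cost(0.50, 30, 20))  # twenty iterations: 300.0
```

The point isn't the arithmetic, it's the multiplier: the attempt count, not the per-clip price, is what dominates the bill.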
Cloud APIs like Runway and Sora 2 are priced similarly. Runway’s entry plan gives you around 62 ten-second clips per month before you hit the ceiling. For casual experimentation, fine. For production volume, it adds up fast.
Local is an option, if your machine is up to it.
The open-source side of AI video has gotten legitimately good. Wan 2.1 (from Alibaba) runs on consumer hardware — you can get started with as little as 8GB of VRAM, which puts it in reach of a lot of modern creative workstations. HunyuanVideo, from Tencent, produces some of the best-looking open-source output I’ve seen, but it wants 48GB of VRAM to really stretch its legs, which means a serious GPU investment before you see the results. LTX-Video is the speed champion — genuinely fast generation at decent quality. All of these are real options, not vaporware.
The honest summary: local generation works, but VRAM is the hard ceiling. System RAM matters less than you’d think. Your GPU matters enormously. If you’re running 16–24GB of VRAM you have options. If you’re on an older card with 8GB, Wan 2.1 will technically run but you’ll feel the constraints.
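If you want to turn that into a quick self-check, here's a minimal sketch mapping VRAM to the models discussed above. The 8GB and 48GB thresholds come from this post; the 16GB figure for LTX-Video is my assumption (the post doesn't state its requirement), so treat all of these as ballpark rather than vendor-certified minimums:

```python
# Rough mapping from available VRAM to the open-source video models
# discussed above. Thresholds are ballpark figures, not official specs.

def local_options(vram_gb: int) -> list[str]:
    options = []
    if vram_gb >= 8:
        # Wan 2.1 starts at ~8GB, though you'll feel the constraints there.
        options.append("Wan 2.1")
    if vram_gb >= 16:
        # Assumption: LTX-Video fits comfortably on a mid-range card.
        options.append("LTX-Video")
    if vram_gb >= 48:
        # HunyuanVideo wants ~48GB to really stretch its legs.
        options.append("HunyuanVideo")
    return options

print(local_options(8))   # ['Wan 2.1']
print(local_options(24))  # ['Wan 2.1', 'LTX-Video']
print(local_options(48))  # ['Wan 2.1', 'LTX-Video', 'HunyuanVideo']
```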
Where I actually spend time with it.
Adobe Firefly is where I do most of my experimentation. It’s not the most powerful option, but it’s accessible, it integrates with tools I’m already in, and the credit system means I can play without watching a dollar counter tick up every time I hit generate. The output quality has improved a lot in the last year. For B-roll experiments and effects work, it’s a reasonable sandbox.
Here’s where I think this lands for working videographers: AI video isn’t ready to replace production work, and I don’t think it’s going to be anytime soon for anything that requires real control over real environments. But it’s also not a gimmick anymore. It’s a legitimate tool for the right use cases — and the right use cases are still being figured out by everyone, including me.
The technology got here faster than I thought it would. The business model for using it is still getting worked out.
If you want to see where I first started taking AI image generation seriously as a threat to a specific corner of the photography market, that post is here. And if you’re using AI tools in your workflow and running into context and efficiency problems, this one covers that.


