AI4AnimationPy: Every frame generated, not authored.
I stumbled on this repo a few days ago and genuinely couldn't believe it wasn't all over my feed. So I built a way for you to try it yourself. But first, let me explain why I think this matters.
What's actually happening here
TLDR: Every animation frame here is generated, not authored by animators.
You know how game characters move? In most games and animated productions, that motion is the result of enormous manual labor. Animators hand-craft hundreds of individual motion clips (walking, running, jumping, turning), and engineers wire them together with complex logic: if the character is going faster than X and turning more than Y degrees, blend into this clip. It works, but it's brittle, expensive, and you can always feel the seams.
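To make that concrete, here's a toy sketch of the hand-authored approach. The clip names and thresholds are invented for illustration; real games use far larger blend trees, but the shape of the logic is the same.

```python
def pick_clip(speed, turn_degrees):
    """Select a pre-authored animation clip from simple threshold rules.

    This is the brittle part: every new movement style means new clips
    and new branches, and the seams show at the thresholds.
    """
    if speed < 0.1:
        return "idle"
    if abs(turn_degrees) > 45:
        return "sharp_turn_left" if turn_degrees > 0 else "sharp_turn_right"
    if speed > 3.0:
        return "run"
    return "walk"
```

Every combination of speed and turn the designers didn't anticipate falls awkwardly between branches, which is exactly the problem a learned model sidesteps.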
AI4AnimationPy does something different. Instead of hand-crafting all that logic, it trains a neural network on real motion-capture data (actual humans and animals moving) and then uses that network to generate the next pose, every single frame, in real time.
When you move the character left, the AI isn't looking up "turn left animation." It's computing, from everything it knows about how bodies move, what this body should look like right now.
The result is motion that feels alive in a way pre-authored animation rarely does.
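The core idea above is an autoregressive loop: each frame, a network takes the current pose plus the user's control input and predicts the next pose, which is fed back in as the next input. Here's a minimal sketch of that loop with a toy stand-in for the trained model; none of this is the repo's actual API, and the "network" is just a function that nudges each joint value toward its control target.

```python
def toy_network(pose, control, rate=0.2):
    """Stand-in for a trained model: move each joint toward its target.

    In the real system this would be a neural network forward pass;
    the loop structure around it is the part being illustrated.
    """
    return [p + rate * (c - p) for p, c in zip(pose, control)]

def run_frames(pose, control, n_frames):
    """Generate n_frames poses, feeding each output back in as input."""
    trajectory = []
    for _ in range(n_frames):
        pose = toy_network(pose, control)  # one model call per frame
        trajectory.append(pose)
    return trajectory

# Two "joints" start at rest and are steered toward new targets.
frames = run_frames([0.0, 0.0], [1.0, -1.0], 10)
```

Because the pose is recomputed every frame from the current state, the character responds immediately to control changes instead of waiting for a clip to finish, which is where the "alive" feeling comes from.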
AI4Animation to AI4AnimationPy
AI4AnimationPy isn't a new idea. Facebook Research has been working on this for years, originally built on top of Unity.
But Unity was always an awkward fit for AI research. Preparing training data that should have taken minutes took hours. You couldn't easily experiment. The ML research community lives in Python, and Unity is not Python.
The new Python version (AI4AnimationPy) changes that.
- Setup that used to take 4+ hours now takes minutes.
- Researchers and hobbyists can actually tinker.
- And because it's native PyTorch, the neural network is fully accessible, not a black box bolted onto a game engine.
Why this matters beyond research
The implications are bigger than they first look. Traditional animation is one of the most expensive bottlenecks in game development and film. Systems like this point toward a future where you describe how you want something to move and a network handles the rest, not just for AAA studios, but for indie developers, hobbyists, and eventually anyone.
We're already at the point where you can load a human character, a dog, or your own custom mesh, and watch it move with AI-driven locomotion: fluid, responsive, and requiring no hand-authored animation at all.
That's not a research demo anymore. That's something you can run on your own machine today.
Try it yourself
I was so impressed by this that I built a whole custom UI for experimenting not only with the built-in example meshes (human and dog) but also with custom meshes, complete with a one-click launcher.
It works on all operating systems, even on very old machines. It even runs fast on my old Intel Mac! I can't believe this kind of real-time inference now works everywhere, on an ordinary laptop.
With the app, you get a local browser playground where you can:
- Load and control a human character with AI-driven locomotion
- Switch to a quadruped (dog) controller with gait transitions
- Try additional examples derived from the human/dog meshes
- Import your own rigged GLB to play
- Experiment with inverse kinematics in real time
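Since inverse kinematics is one of the playground modes, here's what the idea looks like in its simplest form: a generic 2-D two-bone solver using the law of cosines. This is a textbook method sketched for illustration, not the repo's actual solver (which works on full 3-D skeletons).

```python
import math

def two_bone_ik(target_x, target_y, l1, l2):
    """Return (shoulder, elbow) angles so a 2-bone chain tip hits the target.

    l1 and l2 are the bone lengths; angles are in radians, with
    elbow = 0 meaning a fully straight arm.
    """
    d = math.hypot(target_x, target_y)
    d = min(d, l1 + l2 - 1e-9)  # clamp targets the chain cannot reach
    # Elbow bend from the law of cosines: d^2 = l1^2 + l2^2 + 2*l1*l2*cos(e).
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: aim at the target, minus the offset the bent elbow introduces.
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

In the playground this is what lets you drag a hand or foot to a point and have the rest of the limb follow plausibly, every frame.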
This is early and rough around the edges, but the underlying technology is the real thing: published research from Facebook/Meta, running locally, for free. If you've ever been curious about how game animation actually works and where it's going, there's no easier way to see it in action.
Build your own!
This is genuinely exciting, and it makes me very optimistic about locally hosted AI in 2026. If you love what you see and want to experiment, it's very easy with Pinokio. Simply switch into "dev" mode, open your favorite agent, and ask it whether it can customize the app around your own idea. If it can, build it.
