Introducing SuperImg
This is a video. It's also a TypeScript function.
render gets called once per frame. sceneProgress goes from 0 to 1 over the clip's duration. The return value is an HTML string. That string gets rasterized into a pixel frame, and the frames get encoded into an MP4.
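A minimal template in this shape might look like the sketch below. The exact export contract SuperImg expects is an assumption here; the point is the pure (progress) → HTML mapping:

```typescript
// Hypothetical minimal template: one pure render function.
// The export shape is an assumption, not SuperImg's documented API.
export function render(sceneProgress: number): string {
  // Fade a title in: opacity tracks progress directly.
  const opacity = sceneProgress;
  return `
    <div style="width:100%;height:100%;display:flex;align-items:center;justify-content:center;background:#0b0b0f">
      <h1 style="color:white;opacity:${opacity}">Hello, SuperImg</h1>
    </div>`;
}
```

Because the function is pure, calling it twice with the same progress value yields byte-identical HTML.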
That's the whole model. A video is a pure function of time. Same input, same output — deterministic, testable, composable, like any other TypeScript module.
Adding easing
The std library ships with every template. std.tween() composes easing and interpolation in one call, and std.math.clamp bounds a value to a range:
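The real std signatures aren't shown here, so the sketch below uses stand-in implementations of clamp and tween (names mirror the text; the actual API may differ) to illustrate how the two compose into an eased fade:

```typescript
// Stand-in helpers to illustrate the pattern; the real std library's
// signatures may differ.
const clamp = (v: number, min: number, max: number): number =>
  Math.min(max, Math.max(min, v));

// easeOutCubic: fast start, smooth deceleration into the end value.
const easeOutCubic = (t: number): number => 1 - Math.pow(1 - t, 3);

// tween composes an easing curve with linear interpolation in one call.
const tween = (
  t: number,
  from: number,
  to: number,
  ease: (t: number) => number
): number => from + (to - from) * ease(t);

// A fade that completes over the first 40% of the clip.
function fadeOpacity(sceneProgress: number): number {
  const t = clamp(sceneProgress / 0.4, 0, 1); // bound time to 0–1
  return tween(t, 0, 1, easeOutCubic);        // map through the curve
}
```

Swapping easeOutCubic for a different curve changes the motion without touching the rest of the fade logic.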
Two functions turn a mechanical linear fade into smooth deceleration: clamp bounds time to a 0–1 range, and tween maps it to the value you need with your chosen easing curve. Swap in easeOutBounce or easeOutElastic and the character of motion changes completely.
Making it data-driven
Hardcoded text is a demo. Real templates need data. Add a defaults object and the values become overridable at render time:
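A sketch of the pattern follows; the defaults export name and the override mechanics are assumptions based on the description, not a documented contract:

```typescript
// Hypothetical data-driven template: defaults are fallbacks,
// overridable per render. The exact contract is an assumption.
type Props = { title: string; color: string };

export const defaults: Props = { title: "Hello", color: "#ffffff" };

export function render(
  sceneProgress: number,
  props: Props = defaults
): string {
  return `<h1 style="color:${props.color};opacity:${sceneProgress}">${props.title}</h1>`;
}
```

Rendering with an override like `{ title: "Launch day", color: "#22d3ee" }` produces a different video from the same module.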
defaults provides the fallback values. At render time, pass overrides — a different title, a different color — and the template produces a different video from the same code.
Rendering
One command to go from template to MP4:
npx superimg render template.ts -o video.mp4

The pipeline: TypeScript compiles to a render function, which is called once per frame to produce HTML; a headless browser rasterizes each frame's HTML to pixels, and ffmpeg encodes the pixel frames into a video file. Templates also run in the browser with live preview at 60fps, and SuperImg ships a <Player> React component for embedding in apps.
Why a function, not a component tree
Other tools for programmatic video give you a component tree and a timeline. SuperImg gives you a function. No component lifecycle, no virtual DOM, no reconciler — just a string of HTML per frame, rasterized and encoded.
This makes templates portable. They're plain TypeScript modules that return strings. You can unit test them with assert.equal. You can generate them from other code. You can run them anywhere JavaScript runs — browser, CLI, CI, serverless. The mental model is (time) → HTML → pixels, and nothing else.
It also means AI can write, edit, and debug templates. There's no JSX tree to understand, no timeline API to learn — just a function that returns HTML. Any LLM can produce that without special training or tooling.
The trade-off is intentional: you give up declarative composition for total control over every pixel in every frame.
Try it
No install needed — open the playground and start building in the browser.
When you're ready to render locally:
npx superimg init my-project
cd my-project
npx superimg dev template.ts

Every frame is just HTML.