Last weekend I live coded visuals at the 2025 Net Gala using a new tool I've been working on. It's called DiMattina; you can play with it here, and it looks something like this:
It's a ton of fun! Super versatile and gives me a kind of satisfying analog squishy softness that I struggle to find working with digital visuals.
The tool was inspired by conversations I had with Anton Marini about shader tricks he used to emulate the Rutt-Etra, a legendary analog video synthesizer that worked by carefully bending the electron beam that formed images onto cathode ray tube monitors into geometric patterns.
The late Bill Etra said that the synthesizer gave him "complete plasticity with the raster." I am drawn to the softness of its edges and the simplicity of its constraints -- I love that you can see the lines that make up the images, like the moving electric brushstrokes of an impressionist painting.
Around 2009 Anton worked with Bill Etra to build a VDMX plugin that reproduced the effect on GPUs. Anton's observation was that you could drive it with two textures: one texture to determine the geometry of the scanlines and the other to determine their color. Any sources could be used for either texture, allowing the artist to mix and match and experiment.
In early 2023 Anton described this trick to me while our dogs played1 at the local dog park. This Mastodon thread captures the aftermath of that conversation, which ends in an appropriate shitpost.
My contribution to this legacy is the observation that the texture sources driving the raster geometry and color could come from a live coded system, namely Hydra. My hunch was that this could give you the flexibility and liveness of Hydra with the "plasticity" of the Rutt-Etra, all on modern GPUs, running in the browser.
Not all hunches pan out but this one absolutely did.
The initial version of DiMattina was built in a spasm of hyperfocus in late January 2023 and progressed rapidly from the first working prototype
towards something deeply satisfying
The full process was documented on Mastodon in a series of threads: Jan 25, Jan 26, Jan 28, Jan 29, Jan 29, Jan 30, Jan 30, Jan 31.
The underlying graphics code was a mess, the integration with Hydra was limited, and nothing was optimized. But the fact that I kept making things with it that made me smile was evidence enough that I had found something worth building on. It got shelved as a successful prototype while I turned to other priorities, until I picked it back up last month to prepare for the 2025 Net Gala.
I always feel that the best thing you can do with a good prototype is throw it out and start from scratch with all the benefits of what you've learned and none of the baggage. So the current implementation is a total rewrite of what I did in January 2023.
The pipeline begins with a largely unmodified Hydra, integrated using the excellent hydra-synth npm package. DiMattina treats two of Hydra's outputs specially: one determines the geometry of the scanlines and the other determines their color.
Other outputs are available for use as in normal Hydra but are not treated specially by the system. Hydra never draws to a canvas -- we access the underlying framebuffers and textures directly and feed them into our own pipeline.
To get the buttery smoothness that makes this effect what it is we absolutely need unclamped floating point textures. Even though floating point numbers are the bread and butter of GPU programming, textures by default store color information with a single byte per channel. That's only 256 levels each of red, green, blue, and alpha. Plenty to render an image, but not enough precision to avoid seeing stair-steps in this effect. Float textures use a floating point number for each channel -- a full four bytes of precision per channel. We also need linear interpolation on float textures, which requires an extension that is not always available.
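To make the precision argument concrete, here's a small illustrative JavaScript sketch (mine, not DiMattina code) of what byte-per-channel storage does to a smooth ramp:

```javascript
// Illustrative sketch: why 8-bit channels stair-step. A byte channel can
// store only 256 distinct levels (0/255 .. 255/255), so a smooth ramp
// collapses onto those levels when written into a byte texture.
function quantize8(v) {
  return Math.round(v * 255) / 255; // what a byte-per-channel texture stores
}

// Sample a smooth ramp at 1001 fine steps and count distinct stored values.
const samples = [];
for (let i = 0; i <= 1000; i++) samples.push(quantize8(i / 1000));
const distinct = new Set(samples).size;

console.log(distinct); // at most 256 levels survive -- the stair-steps
```

A float texture stores the ramp values as-is, so no such collapse occurs and the interpolated scanline geometry stays smooth.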
Enabling floating point textures and linear interpolation in Hydra was a pain because its underlying graphics library regl is more or less hardcoded for WebGL 1 and written in a way that makes hacking on it difficult and frustrating. I need WebGL 2 to support another part of my approach, and making regl play nice with WebGL 2 + float textures + linear interpolation ended up being a hackjob that is probably leaving some performance on the table. Something to revisit in the future!
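For context, the capabilities at stake can be probed like this -- a minimal sketch, assuming a WebGL 2 context (the helper name and the stand-in object are mine; the extension names are the real ones):

```javascript
// Sketch of the capability check (not DiMattina's actual code). In WebGL 2,
// float textures are core, but two extensions gate what this effect needs:
//   EXT_color_buffer_float   -> lets us render INTO float framebuffers
//   OES_texture_float_linear -> lets us sample float textures with linear filtering
function floatPipelineSupported(gl) {
  return (
    gl.getExtension("EXT_color_buffer_float") !== null &&
    gl.getExtension("OES_texture_float_linear") !== null
  );
}

// Exercised here with a stand-in object; in the browser you'd pass a real
// WebGL2RenderingContext from canvas.getContext("webgl2").
const fakeGl = {
  getExtension: (name) =>
    ["EXT_color_buffer_float", "OES_texture_float_linear"].includes(name)
      ? {}
      : null,
};
console.log(floatPipelineSupported(fakeGl)); // true
```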
The framebuffers are never drawn to the screen but are visible in GPU debug tools like spector.js. Here's a dump from a live coding session: the geometry texture is visible as "Frame buffer: 0", the color texture as "Frame buffer: 1", and the final image in the corner as "Canvas frame buffer".
With Hydra and regl hacked to use WebGL 2 and float textures, the next step is to rasterize the scanlines.
The scanlines are rasterized as strips of triangles2. A conventional approach would be to set up attribute buffers on the CPU populated with triangle corner positions and an index buffer to stitch them together, but that's not what I did. DiMattina actually has no geometry: it computes all its vertex positions in the vertex shader using gl_VertexID (only available in WebGL 2 -- justifying the headache of the previous step!) and carefully coordinated uniforms and draw count. I knew from the jump that I would want to play with the density of the scanlines during a performance, which meant that updating the number of scanlines would have to be very fast -- no slower than updating a uniform. If I had to recompute attribute buffers whenever the scanline density changed then I couldn't do this:
And that would be a shame!
gl_VertexID is a built-in GLSL variable that resolves to the "ID" or index of the current vertex being processed, counting up from 0 to the draw count passed to gl.drawArrays. We divide the screen into a grid, with rows defining scanlines and columns breaking them up horizontally. With rows, columns, and vertex IDs it's straightforward to compute a UV coordinate for each point making up the scanlines.
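To make the index math concrete, here's a CPU-side JavaScript mirror of the same bookkeeping the vertex shader performs (the function name and return shape are mine, for illustration):

```javascript
// Mirror of the shader's gl_VertexID arithmetic: given a vertex ID and the
// scanline grid dimensions, recover which quad corner, which scanline, and
// which column the vertex belongs to, plus a texel-center UV for sampling.
function vertexToUV(vertexId, cols, rows) {
  const triangleVertex = vertexId % 6;        // corner within the 6-vertex quad
  const lineIndex = Math.floor(vertexId / 6); // which segment along the scanlines
  const row = Math.floor(lineIndex / cols);   // which scanline
  const col = lineIndex % cols;               // position along that scanline
  // The +0.5 samples the center of the texel, as in the shader below.
  return {
    triangleVertex,
    u: (col + 0.5) / cols,
    v: (row + 0.5) / rows,
  };
}

console.log(vertexToUV(0, 4, 3));  // first corner of the very first segment
console.log(vertexToUV(25, 4, 3)); // a corner of the first segment in row 1
```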
Given a UV coordinate for each point on the scanline we can sample the geometry texture to determine the position of that scanline on the screen, treating the red component as the x coordinate and the green component as the y coordinate3. Incrementing the column number we can determine the positions of the next two points as well which, combined with a few uniforms, gives us enough information to build and rasterize our triangles without any seams between them, all in the vertex shader with no attributes.
// vert.glsl
#version 300 es
precision highp float;

uniform vec2 resolution;
uniform vec2 scan_resolution;
uniform sampler2D geometry_texture;
uniform float beam_thickness;

out vec2 uv;

void main() {
  float i = float(gl_VertexID);
  float triangleVertex = mod(i, 6.0);  // which corner of the 6-vertex quad
  float lineIndex = floor(i / 6.0);    // which segment along the scanlines
  float cols = scan_resolution.x;
  float rows = scan_resolution.y;
  float row = floor(lineIndex / cols);
  float col = mod(lineIndex, cols);

  // Texel-center UVs for this segment's endpoints and the endpoint after.
  vec2 t0 = vec2(col, row);
  vec2 t1 = vec2(col + 1.0, row);
  vec2 t2 = vec2(col + 2.0, row);
  vec2 uv0 = (t0 + 0.5) / vec2(cols, rows);
  vec2 uv1 = (t1 + 0.5) / vec2(cols, rows);
  vec2 uv2 = (t2 + 0.5) / vec2(cols, rows);

  // The geometry texture's red/green channels hold the point positions.
  vec2 p0 = texture(geometry_texture, uv0).rg;
  vec2 p1 = texture(geometry_texture, uv1).rg;
  vec2 p2 = texture(geometry_texture, uv2).rg;

  // Perpendiculars to this segment and the next; using the next segment's
  // normal at the shared endpoint keeps adjacent quads seamless.
  vec2 dir = normalize(p1 - p0);
  vec2 dir2 = normalize(p2 - p1);
  vec2 normal = vec2(-dir.y, dir.x);
  vec2 normal2 = vec2(-dir2.y, dir2.x);

  vec2 positions[6];
  positions[0] = p0 + normal * beam_thickness;
  positions[1] = p1 + normal2 * beam_thickness;
  positions[2] = p0 - normal * beam_thickness;
  positions[3] = p1 + normal2 * beam_thickness;
  positions[4] = p1 - normal2 * beam_thickness;
  positions[5] = p0 - normal * beam_thickness;

  vec2 uvs[6];
  uvs[0] = uv0;
  uvs[1] = uv1;
  uvs[2] = uv0;
  uvs[3] = uv1;
  uvs[4] = uv1;
  uvs[5] = uv0;

  uv = uvs[int(triangleVertex)];
  vec2 xy = positions[int(triangleVertex)];
  vec2 aspect = vec2(resolution.y / resolution.x, 1.0);
  gl_Position = vec4(xy * aspect, 0.0, 1.0);
}
// frag.glsl
#version 300 es
precision highp float;

in vec2 uv;
out vec4 fragColor;
uniform sampler2D color_texture;

void main() {
  fragColor = texture(color_texture, uv);
}
// main.js
const drawCount = scan_resolution.w * (scan_resolution.h + 1) * 6;
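For a sense of why density changes are cheap, here's a hedged sketch (an assumed shape, not the actual DiMattina source) of what changing scanline density costs -- a uniform update and a new count, nothing more:

```javascript
// Illustrative: with no attribute buffers, changing scanline density never
// touches GPU memory layout -- it just changes a uniform and the draw count.
function drawCountFor(scan) {
  return scan.w * (scan.h + 1) * 6; // mirrors the drawCount expression above
}

// Hypothetical helper; uniform name matches the vertex shader's scan_resolution.
function setScanDensity(gl, program, scan) {
  const loc = gl.getUniformLocation(program, "scan_resolution");
  gl.uniform2f(loc, scan.w, scan.h);
  gl.drawArrays(gl.TRIANGLES, 0, drawCountFor(scan)); // no buffer rebuild
}

console.log(drawCountFor({ w: 64, h: 100 })); // 38784 vertices for this density
```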
DiMattina has two "modes", solid mode (outlined above) and a non-solid mode that is more CRT / scanline-like. In this mode I draw the red, green, and blue components as separate draw calls, slightly offset, and blend them with
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
to give that gorgeous analog edge.
This could probably be done in one draw call rather than three but I didn't bother with it too much -- the performance sinks are elsewhere.
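For intuition, here's a toy JavaScript simulation (mine, not GL code) of what blendFunc(SRC_ALPHA, ONE) -- additive blending -- does to the three passes:

```javascript
// Toy model of gl.blendFunc(gl.SRC_ALPHA, gl.ONE): each pass adds
// src.rgb * src.alpha onto whatever is already in the framebuffer,
// clamped to 1, so the offset red/green/blue passes sum into a glow.
function blendAdd(dst, src, alpha) {
  return dst.map((d, i) => Math.min(1, d + src[i] * alpha));
}

// Start from black, lay down the three single-channel passes.
let fb = [0, 0, 0];
fb = blendAdd(fb, [0.8, 0, 0], 0.9); // red pass (slightly offset in practice)
fb = blendAdd(fb, [0, 0.8, 0], 0.9); // green pass
fb = blendAdd(fb, [0, 0, 0.8], 0.9); // blue pass

console.log(fb); // each channel ends near 0.72 -- overlaps brighten toward white
```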
That's about it. Everything else is more or less conventional and unremarkable. DiMattina rests primarily on how good Hydra is and I am grateful to Olivia and the contributors for making such an amazing and adaptable live coding system. And of course indebted to Anton for his generosity in walking me through his techniques and opening my eyes to this approach.
Anton has an adorable dog named Joika who our late sweet old girl Kitty loved to death. Kitty would bark non-stop at Joika to play, which Joika would... tolerate. Admittedly "play" is used loosely in this text. Rest in peace sweet Kitty. ↩︎
My first prototype was rasterized with lines. I have an irrational affection for OpenGL line rasterization and will often turn to it in situations like this, but the reality is that it's very poorly supported and modern drivers will not render line thickness beyond a single pixel... I toed the line (🥁) as long as I could, but as soon as I moved over to triangles everything was beautiful and perfect forever. ↩︎
I don't currently use the blue coordinate on the geometry texture for anything. ↩︎