TL;DR: I used optimal transport to reshape strange attractors into arbitrary text. Play with the demo.


Did you know that you can combine optimal transport (a.k.a. transport theory) with chaos theory, specifically strange attractors, to draw arbitrary shapes?

I have long felt that reshaping the point densities of beautiful-looking strange attractors (some examples here from Paul Bourke's website) should be possible in a rather straightforward manner.1

I mean, look at these examples from Bourke's page, and tell me they can't become a "T" and an "O" with a wee bit of imagination:

Bending Chaos to Your Bidding

I've toyed with a few ways to do that. One approach lets the particles evolve in their original Euclidean space, preserving the richness of the chosen attractor's dynamics, while subjecting that space to a diffeomorphism that warps it so the attractor's density matches that of the desired shape (typically binary: maximized in "filled" regions, zero elsewhere). That "works", but the results don't look as good as one would hope. It also tends to be a little slow, since the diffeomorphism has to be learned (for instance, by training a small MLP on the fly).

There is a more direct and effective way to blend chaos and order: using gradient-based optimization on the particles' state vectors. In particular, minimizing a loss based on the Sliced Wasserstein Distance (SWD) can push the empirical distribution formed by the particles (each of which can be seen as a realization of a random variable over the 2D plane) to match the distribution defined by some target shape.

Here, the "sliced" aspect of the SWD is what keeps things fast enough to run in real time; the \( \mathcal{O}(n^2)\) cost of computing a full pairwise distance matrix bites very quickly once you have 1,000-100,000 particles.
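To make the "sliced" trick concrete, here is a minimal NumPy sketch (my own illustration, not the demo's actual code): each random 1D projection of the two point clouds is an optimal transport problem that sorting solves exactly, so a slice costs \( \mathcal{O}(n \log n) \) instead of quadratic.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Monte-Carlo estimate of the sliced Wasserstein-2 distance
    between two equal-size 2D point clouds.

    In 1D, optimal transport between equal-size empirical measures
    reduces to matching sorted samples, so each slice costs
    O(n log n) rather than the O(n^2) of a pairwise cost matrix.
    """
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, np.pi, n_proj)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # (n_proj, 2)
    px = np.sort(x @ dirs.T, axis=0)  # sorted 1D projections, (n, n_proj)
    py = np.sort(y @ dirs.T, axis=0)
    return float(np.mean((px - py) ** 2))
```

A cloud compared with itself gives zero, while a shifted copy gives a strictly positive distance. In a real-time setting like the demo's, one would minimize this quantity with an autodiff framework (e.g. PyTorch) rather than evaluate it in NumPy.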

A key advantage of using gradient-based optimization on top of the original dynamics is that it's very easy to keep things looking interesting, and play around with the relative strength of the forces acting on the particles. It also feels very natural when one remembers that the continuous analog of gradient descent, gradient flow, is a dynamical system, just like the one that produces the original attractor.2
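The blend of the two dynamical systems can be sketched as follows. This is an illustrative toy, not the demo's code: I assume a Clifford attractor as the source map, plain gradient descent instead of the Adam used in practice, a made-up learning rate, and a uniform stand-in target. The per-slice gradient of the sliced W2 loss is available in closed form, since the 1D optimal matching pairs the k-th smallest projections.

```python
import numpy as np

def clifford_step(p, a=-1.4, b=1.6, c=1.0, d=0.7):
    """One iteration of the Clifford attractor map, applied to all particles."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.sin(a * y) + c * np.cos(a * x),
                     np.sin(b * x) + d * np.cos(b * y)], axis=1)

def swd_grad(x, target, n_proj=32, seed=None):
    """Closed-form gradient of the sliced W2 loss w.r.t. particle positions.

    For each random direction, the k-th smallest projection of x is matched
    to the k-th smallest projection of the target, and the squared residual
    is differentiated through the projection.
    """
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, np.pi, n_proj)
    g = np.zeros_like(x)
    for t in thetas:
        d = np.array([np.cos(t), np.sin(t)])
        px, py = x @ d, np.sort(target @ d)
        order = np.argsort(px)
        diff = np.empty_like(px)
        diff[order] = px[order] - py  # residual against the sorted matching
        g += 2.0 * diff[:, None] * d[None, :] / len(x)
    return g / n_proj

# Blend chaos and order: a step of the chaotic map, then a descent step.
rng = np.random.default_rng(0)
particles = rng.uniform(-1.0, 1.0, (500, 2))
target = rng.uniform(-1.0, 1.0, (500, 2)) + np.array([2.0, 0.0])
for _ in range(50):
    particles = clifford_step(particles)                 # chaotic dynamics
    particles -= 0.5 * swd_grad(particles, target)       # transport "force"
```

The learning rate here plays the role of the demo's "SWD strength" knob: at 0 you get the bare attractor, and as it grows the transport term increasingly dominates the dynamics.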

Who Knew an "O" Could Be So Trippy?

At first, I just tried to use this idea to reshape the particles into an "O", with a few other loss terms such as using a signed distance field to penalize the distance to the shape's boundary (this turns out to be unnecessary). My first attempts led to the discovery of very peculiar artifacts in extreme numerical regimes, which I turned into the following visualization (complete with music—courtesy of ElevenLabs).

It starts "normal" (as normal as a strange attractor can be!), morphs into an "O" for a brief moment, then all hell breaks loose. Very trippy. LSD ain't got nothing on nonlinear dynamics; perhaps because it is nonlinear dynamics all the way down? The brain agrees.

Hey Claude, Make It Run on the Web

After gleefully playing around with this glitch art and getting my proof-of-concept scripts to work locally, I fed two of my messy Python experiments, which used Taichi and PyTorch, to the freshly-released Claude 4.5 Opus, and asked it to make a Web-based, GPU-accelerated implementation of this idea.

Here is the result. It is pretty cool. I swear. It's best viewed on a computer.

If you let the demo run, the particles will settle into the shape of the target text, which can be edited freely in real-time (try it!). The weighting of the transportation loss will then be pushed to excessive values, then brought back down, in an endless cycle. I encourage you to play around with the system's various knobs; you can get very interesting-looking results by doing so.

Some things to try: when the system is in a state where the text is legible, try typing your own text and watch how the particles reshape themselves. Uncheck the "auto-ramp" feature to play around with the "SWD strength" slider by yourself—that parameter is arguably the system's most important knob, as it controls the contribution of the order imposed by the transportation loss.

Ew, Vibe-Coding

Claude converted my Python-Taichi-Torch mess into a runnable Web-based demo on the very first attempt; beyond that, I re-prompted it a few times to add some quality-of-life features.

It technically runs on mobile devices, but the sliders get squished. While I have no doubt that it could do it, coaxing Claude into turning this from "demo" to "production-ready webapp" will lead to sharply diminishing returns. Still, not bad for an entirely vibe-coded port; Claude's Web interface won't even let me edit the code by hand, so every line had to come from the LLM.

I must say there is something refreshing about being able to feed an LLM the "hard" parts of a problem, and getting a readily-shareable React app out in less time than it would normally take to set up a build environment.

Before I get lynched by a rabid anti-vibe-coding mob, let me say that I, too, enter a state of profound existential sadness when I have the misfortune of coming across giant AI-generated PRs, whose authors have lost critical thinking to the point of offloading even their communication to the LLM.

And yes, I could have made it myself. This little demo is not nearly as polished as it would have been in that case. But would I have made it myself, given the time investment? Probably much later, or not at all. As much as I enjoy playing around with this, it is not my main activity or obligation.

This, right here, is the main value of vibe coding to me. It doesn't replace expert engineering; it replaces not doing something, not trying something, because competing demands win. There is so much to do, so much to learn, so much to experiment with, and so little time. If AI can catalyze this process, then it is a pure win, and it becomes a tool for growth rather than cognitive atrophy. This is an incredibly valuable balance to find.

What's Next? You Tell Me!

Is this useful? Probably not; at least not as an end product. The process, however, was definitely enriching.

Is it pretty? I think so. Do you?

I have decided to call this fun little side project "StrangeDraw" (or strangedraw—capitalization TBD), in honor of the strange attractors that inspired it and that give it texture. I find the idea of drawing with chaos fascinating.

There are countless ways to extend this. One doesn't have to be limited to text, or even to vector shapes in general. Color-matching can be incorporated into the optimization objective. The target shape itself can change over time. Various "source" attractors can be used. Per-particle optimal transport can complement, rather than replace, a warping of the manifold on which the particles evolve. Music could be generated from the dynamics themselves. It's a great playground at the crossroads of several fields: optimization, dynamical systems, differential geometry, and art.

If I can find the time and there is interest, I might do a proper GitHub release at some point in the future, turning this from a one-off demo into a proper library or tool. If you think this would be cool, please shoot me an email, open an issue on this tracking repo, or leave a comment on Hacker News! I am debating how to approach it. Perhaps I will use it as an excuse to write some Julia code3. Or maybe I'll see how usable IREE is for writing retargetable compute kernels using MLIR Linalg4.

In the meantime, feel free to read the code of the React demo I linked; or better yet, to experiment with the relevant concepts by yourself.

I hope this sparked your interest as much as it did mine!


1

See Shashank Tomar's Show HN post from last month for a discussion of the topic, and to play around with more beautiful-looking strange attractors right from your browser.

2

Things do get a bit more hairy when you introduce Adam, as I did, in order to get better convergence.

3

Julia has the incredible DynamicalSystems.jl and ChaosTools.jl, which have been drawing my attention ever since I suffered at the metaphorical hands of the mad yet brilliant XPP/XPPAUT.

4

The promise of supporting CPU, Vulkan, CUDA, Metal and WebGPU all at once, for free, without hardcoding block sizes, seems very enticing. On top of that, did you know this intermediate representation is technically higher-level than OpenAI's Triton? That blew my mind.