In this post, I’ll go over making a fluid simulation that looks like this:
I’ll cover the details of sewing it, what parts you’ll need, and how to program it. All the source code can be found on my GitHub. The code for the standalone fluid simulation is in this repository: https://github.com/macmv/fluid-sim, and the code for the microcontroller is in this repository: https://github.com/macmv/fl-stm32.
This was heavily inspired by a project from mixtela, which can be viewed here: https://www.youtube.com/watch?v=jis1MC5Tm8k.
This project is meant to serve as an interesting design piece that you could sew onto the shoulder of a jacket, or on the outside of a backpack. It is not washable, as it contains sensors that can’t withstand water. However, those sensors sit underneath the outer fabric, so it can withstand light rain easily, and the LED strips on the outside are quite water resistant.
This runs off 3 AAA batteries. If around half the LEDs are on at full brightness, which is the case most of the time, the LEDs will draw about 0.6 amps at 5v combined. The 3 AAA batteries hold about 1 amp-hour of charge, so we can expect this to last about an hour and a half on a full charge. The microcontroller and sensors draw only a couple of milliamps in comparison, so they can be ignored for battery life purposes. Additionally, the brightness could be turned down, which would increase the battery life by up to 10x.
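As a sanity check, that battery-life arithmetic can be written out as a tiny function. The 0.6 A and 1 Ah figures are the rough estimates above, not measurements:

```rust
// Rough battery-life estimate. Inputs are the ballpark figures from the
// text: ~1.0 Ah of usable AAA capacity and ~0.6 A of LED draw.
fn battery_life_hours(capacity_ah: f32, draw_a: f32) -> f32 {
    capacity_ah / draw_a
}
```

With those numbers, `battery_life_hours(1.0, 0.6)` comes out to roughly 1.67 hours, matching the "about an hour and a half" estimate.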
The simulation itself revolves around a microcontroller and an inertial measurement unit (IMU). The microcontroller runs the fluid simulation, and the IMU measures what direction gravity is in, so that the fluid flows downwards. The display is built out of a strip of Neopixels, which are individually addressable LEDs.
The microcontroller I’m using is the STM32F401CCU6. The important bits of that name are the “STM32,” which is the type of microcontroller, and the “F4,” which is the class of microcontroller. The F4 class has native floating point instructions, which is why it’s fast enough to run this entire simulation. The IMU I’m using is the MPU-6050, which is a widely available chip (at least at the time of writing). This IMU is a combined accelerometer and gyroscope. Only the accelerometer is used in this project.
Lastly, I’m using a strip of Neopixels. These are cheap and can be easily purchased in a pre-assembled strip. I’m specifically using one that is built on a flexible PCB, so that the entire assembly is flexible.
Here’s all the materials you’ll need:
The first step is to cut the Neopixels into short strips. The goal is to make a grid of these LEDs, so cut them up in such a way that you have even length strips. You’ll want to attach these in alternating directions. The input/output side of the Neopixels is important, and they’ll only work in one direction. The goal is to attach the output of one row to the input of the next, so that’s why they’re in opposite directions.
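This alternating-direction layout matters later in software: because every other row runs backwards, a grid coordinate has to be mapped to a "serpentine" index along the chained strip. A minimal sketch of that mapping (the function and parameter names here are illustrative, not taken from the project's code):

```rust
/// Map a grid coordinate (x, y) to an index along the chained LED strip,
/// assuming rows are wired in alternating (serpentine) directions.
/// `width` is the number of LEDs per row. Illustrative sketch only.
fn strip_index(x: usize, y: usize, width: usize) -> usize {
    if y % 2 == 0 {
        // Even rows run in the strip's forward direction.
        y * width + x
    } else {
        // Odd rows run backwards, so the x coordinate is mirrored.
        y * width + (width - 1 - x)
    }
}
```

For an 8-wide grid, the last pixel of row 0 is index 7, and the pixel directly below it (the first physical pixel of row 1) is index 8, which is exactly why the rows are wired output-to-input in opposite directions.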
The Neopixel strips are quite simple, as they only have three wires going through them. So, you can sew through them if you’re careful, and attach them directly to the fabric. If you aren’t comfortable sewing through the LED strips, you can also use fabric glue, or simply sew over them in a couple places. Once you’re done, you should have something that looks like this:
Now, we need to attach the LED strips together. This is where you’ll need a soldering iron, or if you don’t have one, to get creative with some conductive thread. Each +5v, GND, and signal line needs to be attached. The output of each row will be attached to the input of the next row, so you should attach them with short, 1-inch or so wires. I soldered mine directly together, but you can also use conductive thread for this part.
Next, we need to attach the microcontroller, IMU, and the battery. The orientation of the IMU is important. The code is written to expect the +Z direction to go vertically upwards with the LED strips. So, if you attach the accelerometer in a different orientation, you’ll need to edit the code to make the lights actually display in the correct orientation.
The microcontroller and battery can be attached anywhere. I put the microcontroller near the start of the LED strip, so that it was easy to wire them together.
The microcontroller and IMU are attached with an I2C bus. I2C (Inter-Integrated Circuit) is a protocol that allows multiple chips to communicate over just two shared wires, instead of running individual signal wires to each chip. We’re only going to use I2C to connect two chips in this circuit, but it’s still convenient for this.
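Once the raw accelerometer bytes arrive over I2C, decoding them is pure arithmetic. As a sketch: per the MPU-6050 datasheet, each axis is a big-endian 16-bit two's-complement value, and at the default ±2 g range the scale is 16384 LSB per gravity. (The actual I2C transaction is omitted here, since it depends on the HAL in use.)

```rust
/// Decode one axis of raw MPU-6050 accelerometer data into units of g.
/// The sensor returns big-endian 16-bit two's-complement values; at the
/// default ±2 g full-scale range, the datasheet gives 16384 LSB per g.
fn accel_to_g(high: u8, low: u8) -> f32 {
    let raw = i16::from_be_bytes([high, low]);
    raw as f32 / 16384.0
}
```

With the patch lying flat and the IMU mounted as described, the +Z axis should read close to 1 g, and the decoded vector tells the simulation which way is down.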
The Neopixels use a different communication protocol that uses a single signal wire. This protocol is far less flexible than I2C, and so can only control a single strip of Neopixels at a time.
All that being said, the circuit we’re going to build will look a bit like so:
The purple squares on the right are the Neopixels. We’ve already attached all of them together. So, the only wiring left to do is between the microcontroller, IMU, and the battery.
The IMU needs to be attached to the microcontroller over the SCL and SDA pins. It also needs 3.3v power, so attach it to the 3.3v pin on the microcontroller. The microcontroller and Neopixels both run off of 5v power, so we attach the 5v battery pack to both the first Neopixel and the VBAT pin on the microcontroller.
Once it’s finished, it should look like this:
Uploading the code to the microcontroller is simple. Clone the GitHub repository for the code, from here: https://github.com/macmv/fl-stm32. Then, install Rust from their website: https://rust-lang.org/tools/install/.
If you used a different orientation for the IMU, or a different length of LED strip, you might need to adjust some constants, and you should do that now. Once the code is updated, compile it with this command:
cargo build --target thumbv7em-none-eabihf --release
Then, hold the BOOT button on the microcontroller, and plug it into the computer. Upload the file in target/thumbv7em-none-eabihf/release/fl-stm32 to the board, and it should start working. If you need to change any constants again, you can always re-compile and re-upload the code, although you will need to unplug the board and press the BOOT button again each time.
The rest of this tutorial talks about the nitty gritty of the fluid simulation itself, so if you just wanted the fluid simulation patch, feel free to stop here. Read on for lots of math and interactive demos!
Particle based fluid simulations revolve around attempting to keep all the particles an even distance away from each other. Or, more specifically, trying to keep all the particles at an even density. The density at any particle can be estimated by counting how many particles are nearby, and how close they are. Then, the particles can be moved towards or away from neighboring particles to maintain a consistent density throughout.
This problem of maintaining constant density can be solved by repeatedly inching the particles towards/away from each other, and then re-computing all of their densities. This is done using an algorithm known as gradient descent. Most of the math done here is based off the paper “Position Based Fluids,” which can be read here: https://mmacklin.com/pbf_sig_preprint.pdf.
Before getting into any of the math, we’ll start off by defining our particles. Each particle is a point in space, with a position and mass. All particles have the same mass in this simulation. We’ll start by just applying a force of gravity to them. I’ll be using Verlet integration, which is a method of applying a force to a particle over discrete time steps, without losing accuracy over time. Specifically, for each particle, we run the following code:
let velocity = particle.position - particle.prev_position;
let accel = GRAVITY * DELTA_TIME * DELTA_TIME;
// Remember the old position, so the next step can derive velocity from it.
particle.prev_position = particle.position;
particle.position = particle.position + velocity + accel;
DELTA_TIME is 1/60th of a second, as we are running this simulation at 60 frames per second. GRAVITY is a vector pointing downwards, with the force of gravity that we’re using. If we run this in a simulation, we get something like so:
As you can see, that’s not very convincing. If you click on the simulation, you can push the particles around, but aside from that, the particles don’t react to each other at all.
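The Verlet step itself can be checked in isolation. Here's a minimal 1D sketch (the real simulation uses 2D vectors) that drops a single particle from rest and returns its final position:

```rust
// A 1D sketch of the Verlet update above: drop one particle from rest
// under constant gravity, and return its final position in meters.
const DELTA_TIME: f32 = 1.0 / 60.0;
const GRAVITY: f32 = -9.81;

fn simulate(steps: u32) -> f32 {
    let mut position = 0.0_f32;
    let mut prev_position = 0.0_f32;
    for _ in 0..steps {
        // Velocity is implicit: it's the difference between the last
        // two positions. That's what makes this Verlet integration.
        let velocity = position - prev_position;
        let accel = GRAVITY * DELTA_TIME * DELTA_TIME;
        prev_position = position;
        position = position + velocity + accel;
    }
    position
}
```

After 60 steps (one simulated second), the particle ends up near the analytic free-fall distance of ½·g·t² ≈ 4.905 m, which is the "without losing accuracy over time" property in action.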
The goal of a fluid simulation is to space out the particles as evenly as possible. To do that, we need to know two things: how “dense” the fluid is at every particle, and what direction to push each particle to make them spaced out evenly. Intuitively this makes sense—if there aren’t enough particles in a given space, then some nearby particles should expand to fill that space. If there are too many particles in a space, the particles should push outwards to take up more space.
Let’s start with the density function. Density is defined as the number of particles in a given space. Because we only have a few particles, we’ll use the distance between particles to approximate density. As the particles get closer to each other, we’ll increase the density estimate. To make this density estimate behave nicely, we’ll pass the distance between particles through a function to smooth out these estimates. The function I’m using is the poly6 kernel. This function is defined as follows:
When graphed, this looks like so:
Of course, the distance is always positive. So, we will only be using the right half of this graph. The result is still clear: as particles get closer together, the density estimate goes up. Once the particles are right on top of each other, the force tapers off a bit. This helps the simulation not explode—if particles right on top of each other produced infinite density, the solver would try and push those particles apart with infinite force, and the whole simulation would break down.
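As a sketch, here's the poly6 kernel with the standard 3D normalization constant, 315/(64πh⁹), from the SPH literature. The normalization used in the actual project may be tuned differently:

```rust
use std::f32::consts::PI;

/// The poly6 smoothing kernel: W(r, h) = 315/(64πh⁹)·(h² − r²)³ for
/// r < h, and 0 otherwise. This uses the standard 3D normalization;
/// the project's constant may differ.
fn kernel_poly6(distance: f32, radius: f32) -> f32 {
    if distance >= radius {
        // No influence outside the smoothing radius.
        return 0.0;
    }
    let h2 = radius * radius;
    let d2 = distance * distance;
    315.0 / (64.0 * PI * radius.powi(9)) * (h2 - d2).powi(3)
}
```

The key properties match the graph: the kernel is largest at distance 0, decreases smoothly as particles move apart, and drops to exactly zero at the smoothing radius.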
Now, in code, we can compute the density for every particle:
for particle in self.particles {
    let mut estimated_density = 0.0;
    for neighbor in self.index.neighbors(particle) {
        let distance = (particle.position - neighbor.position).length();
        let weight = kernel_poly6(distance, self.radius);
        estimated_density += PARTICLE_MASS * weight;
    }
    particle.density = estimated_density;

    // NOTE: This is wrong! I'll expand on this
    // later, but it'll at least get us started.
    particle.density_error = (particle.density / REST_DENSITY - 1.0).max(-0.005);
}
Once we’ve computed the particle’s density, we compute the error in that density. That is, how far away the density is from our target density. I’ll get into why it’s wrong later, but this illustrates the basic idea: the density error is how far away the particle is from its ideal density.
Next, we need to figure out how particles know what direction to move in. We’ll do this using a gradient. The gradient is a function that takes a 2D vector and returns a 2D vector. The vector passed into this function is the delta between a particle and one of its neighbors. The returned vector is the direction that the particle should move to increase its density. Graphing the gradient function looks like so:
Or, if we look at the magnitude of the gradient at Y=0:
This function is notably spiky. The gradient increases as the distance between the particles gets smaller. There is a gap right at X=0 and Y=0, but that’s because we don’t know what direction to move two overlapping particles.
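A sketch of such a gradient is the spiky kernel gradient common in the SPH literature, −45/(πh⁶)·(h − r)²·r̂. I'm assuming the standard 3D normalization here; the project's constants may be tuned differently:

```rust
use std::f32::consts::PI;

/// Gradient of the spiky kernel: −45/(πh⁶)·(h − r)²·r̂, where r̂ is the
/// unit vector along `delta`. Standard SPH normalization; the project's
/// constant may differ. `delta` is (particle − neighbor).
fn kernel_spiky_gradient(delta: [f32; 2], radius: f32) -> [f32; 2] {
    let distance = (delta[0] * delta[0] + delta[1] * delta[1]).sqrt();
    if distance >= radius || distance <= 1e-6 {
        // No gradient outside the kernel, and no defined direction for
        // two exactly overlapping particles (the "gap" at X=0, Y=0).
        return [0.0, 0.0];
    }
    let scale = -45.0 / (PI * radius.powi(6)) * (radius - distance).powi(2) / distance;
    [delta[0] * scale, delta[1] * scale]
}
```

This matches the graph's shape: the magnitude grows sharply as the particles approach each other, falls to zero at the smoothing radius, and is undefined (returned as zero here) at exact overlap.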
We can now compute the gradient between every pair of particles, and multiply that gradient by our density_error. In code, that looks like this:
let mut position_deltas = [vector![0.0, 0.0]; N];
for particle in self.particles {
    let mut total_position_delta = vector![0.0, 0.0];
    for neighbor in self.index.neighbors(particle) {
        let delta = particle.predicted - neighbor.predicted;
        let distance = delta.length();
        let kernel_gradient = kernel_spiky_gradient(delta, self.radius);
        total_position_delta += (particle.density_error + neighbor.density_error)
            * PARTICLE_MASS
            * kernel_gradient;
    }
    position_deltas[particle] = total_position_delta / REST_DENSITY;
}
The density error between a particle and its neighbor is just the sum of their errors. So we multiply that by the gradient vector, and the result is a vector that points the particles towards or away from each other, depending on how close they are. This vector can be summed over a particle and all of its neighbors, and that sum represents the direction the particle should move to minimize its own error. Minimizing its error also means making the density even throughout, and so this loop will have the desired effect of maintaining a constant density.
Note that we store these position deltas in an array for later processing. We don’t want to move any particles before the others have been processed. This is crucial, as we want the force between any two particles to be equal and opposite to each other.
That’s pretty much the entire simulation. We just need to apply those position deltas, and then re-run this loop and the previous loop a handful of times. Each step of gradient descent isn’t all that accurate, but the more times it is run, the more accurate it becomes. Putting that all together gives us something like this:
This is looking pretty good! The particles are reacting to each other, and making a sort of squishy substance. But it doesn’t really look like water. To fix this, we’ve got to go fix that density_error I mentioned before.
Also, I’m glossing over a lot of tuning constants and such. If you want to recreate this yourself, I highly recommend taking a look at the paper I’ve linked, as well as my implementation on GitHub. The guts of the simulation itself are in this file: https://github.com/macmv/fluid-sim/blob/main/fl-sim/src/lib.rs.
To fix the density_error, we need to fix the compressibility of the fluid. Water is not compressible, and we’re trying to simulate something like water. The current density_error does not account for that. Instead, the particles at the bottom of the simulation are pushing outwards with the same force as the particles at the top of the simulation. But, the particles at the bottom are being squished more by the particles above, so they end up forming a denser mesh than the particles above. This is what makes the fluid feel so squishy in its current state.
The fix is to use the sum of the magnitude of the spiky gradient around each particle. While a bit of a mouthful, we can think of it as the amount that the particle is being pushed. If there’s a particle right in between two others, the density won’t be super high (because we have the poly6 kernel for density). But, the spiky gradient will produce a large vector between the middle particle and each of its neighbors. By using the magnitude of this spiky gradient, we can see that this particle is getting “pushed” strongly by its neighbors, and so the density error should be higher.
In code, that looks like modifying our first loop like so:
for particle in self.particles {
    let mut estimated_density = 0.0;
    let mut gradient_sum = vector![0.0, 0.0];
    let mut gradient_sum_squared = 0.0;
    for neighbor in self.index.neighbors(particle) {
        let delta = particle.predicted - neighbor.predicted;
        let distance = delta.length();

        // This is our old density estimate.
        let weight = kernel_poly6(distance, self.radius);
        estimated_density += PARTICLE_MASS * weight;

        // While we're here, let's also compute the gradient.
        let gradient = kernel_spiky_gradient(delta, self.radius)
            * (PARTICLE_MASS / REST_DENSITY);
        gradient_sum += gradient;
        gradient_sum_squared += gradient.norm_squared();
    }
    gradient_sum_squared += gradient_sum.norm_squared();

    // This is our old 'density error'.
    let density_constraint = (estimated_density / REST_DENSITY - 1.0).max(-0.005);
    particle.density_error = -density_constraint / gradient_sum_squared;
}
We need to also add the gradient from this particle itself, so that’s why we’re computing both gradient_sum and gradient_sum_squared. If we use this new density error, the simulation immediately looks far better:
Specifically, the particles at the bottom aren’t getting squished at all, and the fluid doesn’t feel squishy any more. It actually responds like a splash of water, and it takes up a consistent amount of space. We’ve got a new problem now though: the surface is constantly moving around. This can be fixed up with a new corrective force, known as tensile correction.
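Before moving on, the corrected update can be sanity-checked with a tiny two-particle sketch. This mirrors the shape of the two loops above but is not the project's code: the kernels use the standard 3D SPH normalizations, and the constants are illustrative rather than the project's tuned values:

```rust
use std::f32::consts::PI;

// Illustrative constants, not the project's tuned values.
const RADIUS: f32 = 1.0;
const PARTICLE_MASS: f32 = 1.0;
const REST_DENSITY: f32 = 1.0;

// Standard poly6 kernel (315/(64πh⁹) normalization).
fn kernel_poly6(distance: f32, radius: f32) -> f32 {
    if distance >= radius {
        return 0.0;
    }
    315.0 / (64.0 * PI * radius.powi(9))
        * (radius * radius - distance * distance).powi(3)
}

// Standard spiky kernel gradient (−45/(πh⁶) normalization).
fn kernel_spiky_gradient(dx: f32, dy: f32, radius: f32) -> (f32, f32) {
    let distance = (dx * dx + dy * dy).sqrt();
    if distance >= radius || distance <= 1e-6 {
        return (0.0, 0.0);
    }
    let scale = -45.0 / (PI * radius.powi(6)) * (radius - distance).powi(2) / distance;
    (dx * scale, dy * scale)
}

/// Run one corrected solver iteration on two particles placed 0.1 apart,
/// and return their new separation.
fn solve_pair() -> f32 {
    let mut positions = [(0.0_f32, 0.0_f32), (0.1_f32, 0.0_f32)];

    // Density pass: each particle's only neighbor is the other one.
    let mut errors = [0.0_f32; 2];
    for i in 0..2 {
        let j = 1 - i;
        let dx = positions[i].0 - positions[j].0;
        let dy = positions[i].1 - positions[j].1;
        let distance = (dx * dx + dy * dy).sqrt();
        let density = PARTICLE_MASS * kernel_poly6(distance, RADIUS);

        let (gx, gy) = kernel_spiky_gradient(dx, dy, RADIUS);
        let (gx, gy) = (gx * PARTICLE_MASS / REST_DENSITY, gy * PARTICLE_MASS / REST_DENSITY);
        // With one neighbor, the per-neighbor sum and the total gradient
        // are the same vector, so the squared sum is just counted twice.
        let gradient_sum_squared = (gx * gx + gy * gy) * 2.0;

        let density_constraint = (density / REST_DENSITY - 1.0).max(-0.005);
        errors[i] = -density_constraint / gradient_sum_squared;
    }

    // Position-delta pass, then apply the deltas all at once.
    let mut deltas = [(0.0_f32, 0.0_f32); 2];
    for i in 0..2 {
        let j = 1 - i;
        let dx = positions[i].0 - positions[j].0;
        let dy = positions[i].1 - positions[j].1;
        let (gx, gy) = kernel_spiky_gradient(dx, dy, RADIUS);
        let s = (errors[i] + errors[j]) * PARTICLE_MASS / REST_DENSITY;
        deltas[i] = (s * gx, s * gy);
    }
    for i in 0..2 {
        positions[i].0 += deltas[i].0;
        positions[i].1 += deltas[i].1;
    }

    positions[1].0 - positions[0].0
}
```

The pair starts over-dense at a separation of 0.1, and one iteration pushes the particles apart symmetrically, which is exactly the incompressibility behavior the corrected error is meant to produce.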
This is the last bit of making a convincing fluid simulation. The particles along the top are all quite noisy, and are allowed to get quite close to each other. We don’t really want that, and we can fix that with a tensile correction term. If we add a repelling force when particles are right next to each other, we can make the top of the fluid a lot more stable.
The tensile correction force I used looks like this:
While it’s quite small, it still has the desired effect. It’ll always be negative, because it just pushes particles away from each other when they get too close. We can add it into the position_deltas calculation like so:
let correction = tensile_correction(distance, self.radius);
total_position_delta += (particle.density_error + neighbor.density_error + correction)
    * PARTICLE_MASS
    * kernel_gradient;
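For reference, the tensile correction in the Position Based Fluids paper has the form s_corr = −k·(W(r)/W(Δq))ⁿ. A sketch using the paper's suggested values of k = 0.1, n = 4, and Δq = 0.2·h, which are not necessarily the values tuned for this project:

```rust
use std::f32::consts::PI;

// Standard poly6 kernel (315/(64πh⁹) normalization).
fn kernel_poly6(distance: f32, radius: f32) -> f32 {
    if distance >= radius {
        return 0.0;
    }
    315.0 / (64.0 * PI * radius.powi(9))
        * (radius * radius - distance * distance).powi(3)
}

/// Tensile correction from the Position Based Fluids paper:
/// s_corr = −k·(W(r)/W(Δq))ⁿ. The constants k = 0.1, n = 4, and
/// Δq = 0.2·h are the paper's suggested values, used here as a sketch.
fn tensile_correction(distance: f32, radius: f32) -> f32 {
    const K: f32 = 0.1;
    const N: i32 = 4;
    let delta_q = 0.2 * radius;
    -K * (kernel_poly6(distance, radius) / kernel_poly6(delta_q, radius)).powi(N)
}
```

The term is always negative, equals −k exactly when two particles sit at the reference distance Δq, and grows rapidly as they get closer, which is what keeps surface particles from clumping together.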
And that makes the surface of the fluid behave much more smoothly:
That’s just about it. With tensile correction enabled, that’s the complete simulation that I’ve been using. This produces good enough results for my purposes, and it is fast enough to run on a microcontroller. I would like the fluid to be a little more “splashy”—that is, I’d like to see more separated particles splash into the air when the fluid moves around. But it works well enough as is, and I still think it’s pretty convincing, especially given that there are only 128 particles in these examples.