Fluid still failing
It’s been a busy week with little to show for it. As I wrote last time, I more or less gave up on the SPH particle-based method, and opted to fix my grid method instead. That turned out to be harder than I expected.
As a first attempt, I tried a hybrid front-tracking method known as the MAC (marker-and-cell) method. In this method, the velocity calculations are done on a grid, but particles flow through this velocity field to keep track of where the fluid volume is. Cells containing a particle are considered “full”, the others are “empty”.
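To give an idea, the marker bookkeeping boils down to something like the sketch below. This is a minimal Python sketch rather than my actual code: a real MAC grid stores velocities on cell faces and interpolates bilinearly, but here I look up cell-centred velocities to keep it short.

```python
import numpy as np

def advect_markers(particles, u, v, dx, dt):
    """Move marker particles through the grid velocity field.

    particles: (N, 2) array of (x, y) positions, modified in place
    u, v:      velocity components sampled at cell centres, shape (ny, nx)
    """
    ny, nx = u.shape
    for p in particles:
        # look up the velocity of the cell the particle is in
        # (a real implementation would interpolate bilinearly)
        i = min(max(int(p[1] / dx), 0), ny - 1)
        j = min(max(int(p[0] / dx), 0), nx - 1)
        p[0] += dt * u[i, j]
        p[1] += dt * v[i, j]

def mark_cells(particles, nx, ny, dx):
    """A cell is 'full' if it contains at least one marker particle."""
    full = np.zeros((ny, nx), dtype=bool)
    for x, y in particles:
        i = min(max(int(y / dx), 0), ny - 1)
        j = min(max(int(x / dx), 0), nx - 1)
        full[i, j] = True
    return full
```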
As it turns out, this approach is not without its own slew of problems. First, you need a heck of a lot of particles, because shearing of the fluid stretches groups of particles out over large regions. If the last particle departs from a cell that should contain fluid, that cell is marked as empty, which changes the pressure there and thereby has a large influence on the calculations. Even with 25 particles per cell initially, which substantially slows down the simulation, I still had this problem. In the long term, you would need some scheme to redistribute particles evenly, but the scientific literature is notoriously vague on this issue.
Second, it’s very black-and-white. Either a cell is full, or it’s empty. This means that you will need a finer grid than with the volume-of-fluid (VOF) methods to get the same accuracy. A grid twice as fine means four times the memory consumption, but that’s the least of my concerns. Because it also requires a smaller timestep and more iterations of the pressure solver, doubling the grid resolution requires about ten times the computational power.
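The back-of-the-envelope version of that estimate, for a 2D grid with a CFL-limited timestep (the pressure-solver factor is a rough guess on my part):

```python
cells = 2 * 2     # twice as fine in both directions: 4x as many cells
steps = 2         # the CFL condition halves the allowed timestep: 2x as many steps
solver = 1.25     # the pressure solver also converges a bit more slowly (rough guess)
print(cells * steps * solver)   # roughly 10x the work
```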
So that’s that for the hybrid method. I went back to the literature to see if another VOF-style method would suit me better. Eventually, I ran into a wonderful paper from 1997 by Murray Rudman titled Volume tracking methods for interfacial flow calculations. Rudman compares several VOF schemes and concludes that the classical 1981 scheme by Hirt and Nichols that I’d been using isn’t very good at all. He even mentions the “flotsam and jetsam” problem that I’d been seeing, of water droplets being shed all over the place. But there are several good alternatives.
First, there is the SLIC scheme by Noh and Woodward (1979), which is supposedly even simpler than Hirt-Nichols, but I couldn’t access the original paper online or find another description of it. Too bad; maybe I will go to the university library to look it up.
Second, there is the Youngs scheme from 1982, which is highly accurate and doesn’t suffer from the “flotsam and jetsam” problem. Youngs’s paper is also not freely available, but according to Rudman, his description of the algorithm is not detailed enough to implement it unambiguously. Luckily, Rudman himself describes in detail how his own implementation works. Most of the description is taken up by a page-sized table full of equations. Fairly easy to type in, but a nightmare to debug … especially because Rudman seems to have made some small mistakes here and there. You can imagine that my current implementation does not quite do what I want.
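For a rough idea of what a Youngs-style scheme does, as I understand it: in each mixed cell, the interface normal is taken from the gradient of the volume-fraction field, a line with that normal is positioned so it cuts off exactly the cell’s fluid fraction, and the face fluxes follow geometrically from that line. The sketch below shows only the normal-estimation step, with plain central differences; it is not Rudman’s table of equations, and certainly not my broken implementation.

```python
import numpy as np

def interface_normals(f):
    """Estimate interface normals from a volume-fraction field f (ny, nx).

    The normal points out of the fluid, i.e. along -grad(f). This sketch
    uses simple central differences on interior cells; the real scheme
    uses a wider stencil, and the line-positioning and flux steps that
    follow are where all the case distinctions (and my bugs) live.
    """
    nx_comp = np.zeros_like(f)
    ny_comp = np.zeros_like(f)
    nx_comp[:, 1:-1] = -(f[:, 2:] - f[:, :-2]) / 2.0
    ny_comp[1:-1, :] = -(f[2:, :] - f[:-2, :]) / 2.0
    length = np.hypot(nx_comp, ny_comp)
    length[length == 0.0] = 1.0   # empty or full cells have no interface
    return nx_comp / length, ny_comp / length
```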
Another thing I tried was to hack around the problems in the working Hirt-Nichols code. I tried to detect loose droplets, but this didn’t work, because a droplet sometimes spans two cells. Where would you draw the line? And how do you do this without complicating the algorithm? Also, what do you do with droplets once you’ve detected them? I tried to mitigate fluid loss by keeping track of how much fluid went missing and redistributing it equally over the partially filled cells. Then it struck me that this is unacceptable for my game: it would mean that the algorithm can be exploited to “teleport” fluid across walls!
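For completeness, the redistribution hack really was as simple as it sounds; something along these lines (a sketch with made-up names, where frac is the volume-fraction field and lost is the volume that went missing during the timestep):

```python
import numpy as np

def redistribute_lost_fluid(frac, lost):
    """Spread the lost fluid volume equally over all partially filled cells.

    frac: volume fractions in [0, 1], shape (ny, nx)
    lost: total volume (in cell units) that went missing this timestep
    This conserves volume, but it happily puts fluid back on the wrong
    side of a wall, which is why I threw it out.
    """
    partial = (frac > 0.0) & (frac < 1.0)
    n = np.count_nonzero(partial)
    if n == 0:
        return frac
    result = frac.copy()
    result[partial] += lost / n
    # cells can overflow past 1 here; a real version would carry the
    # excess over to the next timestep
    return result
```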
I have now spent three weeks on the fluid dynamics. I’ve just been stumbling around in the dark during this last week, and it’s time to move on. I decided I need a solution, and I need it quick. So I sent an e-mail to my computational fluid dynamics professor asking for advice. We’ll see.