Another way to do blobby geometry is signed distance fields (SDFs). You define geometric solids by specifying a function that returns, for any point in space, how far that point is from the surface of the solid. [1]
If solids are defined this way, it is easy to smoothly interpolate between two or more of them. When rendering to screen, similar techniques still have to be used to convert them to screen geometry in realtime.
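A minimal sketch of that smooth blending, assuming the polynomial smooth-minimum popularized by Inigo Quilez's articles (the function names here are my own):

```python
import math

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def smooth_min(a, b, k=0.5):
    """Polynomial smooth minimum: blends two distance values so the
    union of the two solids gets a rounded seam instead of a hard crease."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

# Two unit spheres whose smooth union forms one blobby solid.
p = (0.9, 0.0, 0.0)
d1 = sdf_sphere(p, (0.0, 0.0, 0.0), 1.0)
d2 = sdf_sphere(p, (1.8, 0.0, 0.0), 1.0)
blended = smooth_min(d1, d2, k=0.5)  # smaller than min(d1, d2) near the seam
```

The key property is that `smooth_min` equals the plain `min` when the two distances differ by more than `k`, so blending only kicks in where the solids are close.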
The latest technical marvel that uses this tech is Media Molecule's Dreams. With SDFs they built rich modelling tools (constructive solid geometry) that produce geometry for physics and a painting-style renderer, all running on a PS4. The Learning From Failure talk [2] was quite inspiring about the lengths they went to in order to achieve their artistic goals.
Shameless plug: a couple of years ago I implemented a node editor for distance functions and a renderer as a final project for a UI course at uni. https://github.com/frabert/MarchOfTheRays Raymarching is a very interesting concept, no polygons at all!
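The core of a raymarcher (sphere tracing) is only a few lines: step along the ray by the distance the SDF guarantees is empty. A hedged sketch, with names of my own choosing:

```python
import math

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value, which is a
    guaranteed-safe step, until we land on the surface or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

unit_sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
# A ray from z = -3 straight toward the origin hits the unit sphere at t = 2.
hit = raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), unit_sphere)
```

No polygons anywhere: the scene is just the `sdf` callable, which is why CSG and smooth blending compose so naturally.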
It shows how naively aggregating charge from across the entire field leaves a lot of performance on the table, and how huge efficiency gains can be achieved. The page also includes descriptions of metacubes and metatori.
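For reference, the naive aggregation being improved on looks like this in Python (assuming the classic inverse-square falloff; the data layout is my own):

```python
def field(p, balls):
    """Naive metaball field: every ball contributes charge at every sample
    point, with a 1/r^2 falloff. O(pixels * balls) -- exactly the cost the
    smarter schemes avoid by skipping balls too far away to matter."""
    total = 0.0
    for (cx, cy, strength) in balls:
        r2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
        if r2 > 0.0:
            total += strength / r2
    return total

balls = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
# A point is "inside" the metaball surface where the field exceeds a threshold.
inside = field((1.0, 0.0), balls) >= 1.0
```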
I checked this as a reference when I implemented this p5js sketch [1] (note that the link takes you to a blog post about how it works and the inspiration behind it; the first link in that post is the sketch proper), but I never went into the hard optimisations once I got it working. I should revisit it, since it's relatively slow.
Let me just point out how amazing online resources are, now, compared to "not that long ago".
Let's go back in time to 1994. You are a teenager armed with a computer, modem, possibly internet access but more likely an online service such as CompuServe. Since you also have a C or Pascal compiler, and are inspired by the latest trend of 3D graphics in PC games, you download the comp.graphics.algorithms FAQ.
You read something about 'Marching Cubes' but don't really grasp anything from it. You also probably don't have access to the textbook or papers that also describe it.
Instead of this incredible visual demonstration, you get a couple paragraphs of text and some ASCII art:
----------------------------------------------------------------------
Subject 5.10: What is the marching cubes algorithm?
The marching cubes algorithm is used in volume rendering to
construct an isosurface from a 3D field of values.
The 2D analog would be to take an image, and for each pixel, set
it to black if the value is below some threshold, and set it to
white if it's above the threshold. Then smooth the jagged black
outlines by skinning them with lines.
The marching cubes algorithm tests the corner of each cube (or
voxel) in the scalar field as being either above or below a given
threshold. This yields a collection of boxes with classified
corners. Since there are eight corners with one of two states,
there are 256 different possible combinations for each cube.
Then, for each cube, you replace the cube with a surface that
meets the classification of the cube. For example, the following
are some 2D examples showing the cubes and their associated
surface.
- ----- + - ----- - - ----- + - ----- +
|:::' | |:::::::| |:::: | | ':::|
|:' | |:::::::| |:::: | |. ':|
| | | | |:::: | |::. |
+ ----- + + ----- + - ----- + + ----- -
The result of the marching cubes algorithm is a smooth surface
that approximates the isosurface that is constant along a given
threshold. This is useful for displaying a volume of oil in a
geological volume, for example.
References:
"Marching Cubes: A High Resolution 3D Surface Construction Algorithm",
William E. Lorensen and Harvey E. Cline,
Computer Graphics (Proceedings of SIGGRAPH '87), Vol. 21, No. 4, pp. 163-169.
[Watt:Animation] pp. 302-305 and 313-321
[Schroeder]
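The corner-classification step the FAQ describes is the easy part to code: pack each corner's above/below state into one bit of an index, which then selects a surface configuration from a lookup table. A sketch (names my own):

```python
def corner_index(corner_values, threshold):
    """Classify each corner of one cell against the threshold and pack the
    result into a bit index -- the lookup key the FAQ describes: 2^8 = 256
    cases for cubes, 2^4 = 16 for the 2D squares drawn above."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value >= threshold:
            index |= 1 << bit
    return index

# 2D example: three corners above the threshold, one below.
case = corner_index([0.2, 0.9, 0.8, 0.7], threshold=0.5)  # -> 0b1110 == 14
```

The hard part, which the FAQ glosses over, is authoring the 256-entry table mapping each index to triangle edges; in 2D that table has only 16 entries, which is why marching squares is the usual starting point.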
Although the resources weren't great, most people in CG think marching cubes is "obvious" because the algorithm was independently discovered by a large number of people (without needing a guide like this!)
Wasn't the algorithm wrapped up in a software patent until about 10/15 years ago? I'm sure I looked into using this when I was in the games industry but couldn't due to the patent.
Bookmarking this thread as it's evolving into an encyclopedia of metaball algorithms. And this is just geometry. Realistic fluid physics and chemical reactions extends the scope ad infinitum ;)
One more from GPU Gems 3: Point-Based Visualization of Metaballs on a GPU
I wonder if it is possible to have it so that when circles are moving towards each other they behave as spheres, but once they have “touched” and begin to move away they take on the metaball behavior. That way they look like droplets when separating, but don't “magically” seek out other droplets when moving towards each other.
Should be easy to isolate objects that haven't touched anything yet. Determining when to start isolating already touching objects would be harder. Probably some kind of threshold is needed.
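The bookkeeping for the easy half ("has this ball ever touched anything?") is a sticky flag per ball; a sketch, with a made-up (x, y, r) data layout:

```python
def update_touched(balls, touched):
    """Mark every pair of balls that currently overlap as touched.
    Untouched balls can be drawn as plain circles; touched ones switch to
    the metaball-field rendering. Flags are sticky: once True, always True."""
    for i in range(len(balls)):
        for j in range(i + 1, len(balls)):
            (x1, y1, r1), (x2, y2, r2) = balls[i], balls[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2:
                touched[i] = touched[j] = True
    return touched

balls = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (10.0, 0.0, 1.0)]
touched = update_touched(balls, [False] * len(balls))  # -> [True, True, False]
```

Deciding when to *clear* the flag again (so droplets re-isolate after fully separating) is the thresholding question raised above; a distance hysteresis would be one option.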
When I experimented with 2D metaballs a decade ago, I started with a low-resolution scan and then subdivided every square whose corners weren't either all inside or all outside, until I reached individual pixels. Except for a few glitches where just checking the corners wasn't enough, this worked rather well and was easy to implement.
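A sketch of that subdivision scheme (the grid sizes and the `inside` predicate are made up for illustration):

```python
def subdivide(x, y, size, inside, min_size=1, cells=None):
    """Recursively subdivide a square whose corners disagree about being
    inside the surface; squares with unanimous corners are skipped, so the
    work concentrates on the boundary. Note the caveat from the comment
    above: a cell with unanimous corners can still clip a thin feature."""
    if cells is None:
        cells = []
    corners = [inside(x, y), inside(x + size, y),
               inside(x, y + size), inside(x + size, y + size)]
    if all(corners) or not any(corners):
        return cells                  # fully inside or outside: stop early
    if size <= min_size:
        cells.append((x, y, size))    # boundary cell at pixel resolution
        return cells
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            subdivide(x + dx, y + dy, half, inside, min_size, cells)
    return cells

# Low-resolution scan first (8x8 cells of size 8), then refine only where
# corners disagree. Test shape: a circle of radius 20 centred at (32, 32).
inside_circle = lambda px, py: (px - 32) ** 2 + (py - 32) ** 2 <= 400
boundary = []
for cx in range(0, 64, 8):
    for cy in range(0, 64, 8):
        subdivide(cx, cy, 8, inside_circle, cells=boundary)
```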
I used metaballs and marching cubes on a Kinect game project back in college and had no idea how it actually worked (I copied and pasted a lot of code). This description was very helpful for finally understanding it.
You could get something visually similar by animating sprites with blurry edges, and thresholding the composite image. I can’t imagine that would be too expensive to do at high resolution in JavaScript.
Depending on the blur function, the result could be identical. Rendering blurred balls means brute-force calculation of each metaball's contribution to each rendered pixel: the larger your blurred image, the higher your fill rate, which would choke performance for a large number of balls. You also end up with sampled (pixelized) output, which doesn't give you normals or similar geometry information.
If your metaballs are mostly the same size, the blurring step can happen ahead of time with a commutative convolution, stored in a texture.
Then, rather than applying a threshold to the resulting pixel/voxel contribution, you can apply a sigmoid function (or something cheaper approximating it) and get smooth edges.
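One cheap sigmoid-like approximation is the standard GLSL `smoothstep`; a Python transcription:

```python
def smoothstep(edge0, edge1, x):
    """Cheap sigmoid-like ramp (the standard GLSL smoothstep): 0 below
    edge0, 1 above edge1, a smooth cubic in between -- soft anti-aliased
    edges instead of a hard threshold cutoff."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

smoothstep(0.45, 0.55, 0.5)  # mid-ramp, approximately 0.5
```

Narrowing the `[edge0, edge1]` band sharpens the silhouette toward a hard threshold, so the edge softness is directly tunable.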
You can cheat normals like you can with any other sort of height map, by measuring the differences between neighboring fragments.
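A sketch of that height-map normal trick via central differences (the blob function here is a made-up example):

```python
def normal_from_field(field, x, y, eps=1.0):
    """Estimate a surface normal from a scalar field treated as a height
    map: central differences give the gradient, which tilts an 'up' vector."""
    dx = field(x + eps, y) - field(x - eps, y)
    dy = field(x, y + eps) - field(x, y - eps)
    nx, ny, nz = -dx, -dy, 2.0 * eps
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# Hypothetical radially symmetric blob: normals should lean away from centre.
blob = lambda x, y: max(0.0, 1.0 - 0.01 * (x * x + y * y))
n = normal_from_field(blob, 3.0, 0.0)  # leans in +x, mostly points up
```

On a GPU this is the same finite-difference trick done by sampling neighbouring fragments/texels of the blurred image.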
[1] https://www.iquilezles.org/www/articles/distfunctions/distfu... [2] https://www.youtube.com/watch?v=u9KNtnCZDMI