Around The World, Part 20: Two steps forward, one step back

Work on the game has been slow due to lack of time, but worse, I’ve let it drift off in the wrong direction. I need to be more careful about scope creep. But first, let’s talk about the progress that I am happy about.

No more ECS

In a previous post, I described my newly written entity-component system (ECS). I had some reasonable arguments for structuring the game as an ECS, most notably that Godot provides no way to add the same functionality to all nodes regardless of their type – you’d need multiple inheritance for that.

I did mention the workaround of putting the functionality onto a child node, but that’s a fairly heavyweight solution. However, does that matter in practice? Perhaps not. Moreover, I’ve found another approach: put the nodes into a group, and have a separate node that applies some operation to all nodes in that group. This only works for code, not for data, but it turns out I don’t need that – I initially wanted to store a 64-bit node position, but storing a 32-bit position relative to the floating origin works just as well.
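For illustration, here’s a minimal sketch of that group-based approach, using Godot’s C# API. The class and group names are mine, invented for this example; the real code is certainly structured differently:

```csharp
using Godot;

// Hypothetical node that applies one operation to every member of a group.
// This replaces giving all node types a shared base class, which Godot's
// single-inheritance node hierarchy doesn't allow.
public partial class OriginShifter : Node
{
    // Illustrative group name; any node that should move with the
    // floating origin calls AddToGroup("origin_relative") on itself.
    private const string GroupName = "origin_relative";

    // Called when the floating origin jumps by `delta`.
    public void ShiftOrigin(Vector3 delta)
    {
        foreach (Node node in GetTree().GetNodesInGroup(GroupName))
        {
            if (node is Node3D spatial)
                spatial.Position -= delta;
        }
    }
}
```

Since the group only tags nodes and carries no data, this works for shared behaviour but not for shared state – which matches the limitation described above.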

And my ECS came with a number of drawbacks. Adding a new node-based entity required me to add code in five different places:

  1. Define a component to hold the data.
  2. Create a scene to represent the entity in the scene tree.
  3. Implement a system to work on that data and sync the changes to the scene.
  4. Register the system to be executed every frame.
  5. Register the scene to be added to the tree whenever the component is added.

This was just way too cumbersome, for little benefit. With Godot’s node-based approach, I’d only need the scene and a script, and the link between the two is automatically managed.

And because all the values were tucked away in components and systems in pure C# code, I lost the ability to modify scene properties in the editor while the game is running. I didn’t realize at the time how useful that feature is.

Another argument for using an ECS was that it would simplify the implementation of saving and (especially) loading. I now realize that isn’t true. Yes, streaming a bunch of components from disk using some predefined serialization format is easier than reconstructing a scene tree from (e.g.) a JSON object. However, after loading those components, I’d still need to reconstruct that scene tree anyway. The work is just moved into the systems that update nodes from components, but it still needs to be implemented.

So in the end, I decided to throw out the ECS and port everything back to a more customary Godot scene tree. Everything? No, there were a few key benefits that I wanted to keep.

First, there’s dependency injection. Systems allowed injecting resources and queries into their constructors, and this really helped to decouple the code. Since we’re now doing nodes again, I wrote a very simple Injector node, which is triggered whenever a node is added to the scene tree. It uses reflection to scan the new node for any fields with the [Inject] annotation, and provides them with values from an array of pre-registered injectable objects. Functionally it’s almost the same as Godot’s singletons, but makes the dependency more explicit, which I like.
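The core of that injector is just a reflection scan. Here’s a sketch of the idea with the Godot specifics stripped out; the attribute name matches the post, but the `Registry` shape and example types are my own invention:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Marker attribute for fields that should be filled in by the injector.
[AttributeUsage(AttributeTargets.Field)]
public class InjectAttribute : Attribute { }

public static class Injector
{
    // Pre-registered injectable objects, keyed by their concrete type.
    private static readonly Dictionary<Type, object> Registry = new();

    public static void Register(object service) =>
        Registry[service.GetType()] = service;

    // Scan the target for [Inject] fields and fill them from the registry.
    // In the game, this would run whenever a node enters the scene tree.
    public static void InjectInto(object target)
    {
        const BindingFlags flags =
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        foreach (FieldInfo field in target.GetType().GetFields(flags))
        {
            if (field.GetCustomAttribute<InjectAttribute>() == null)
                continue;
            if (Registry.TryGetValue(field.FieldType, out object value))
                field.SetValue(target, value);
        }
    }
}

// Hypothetical example types, purely for illustration.
public class Clock { public double Hours; }
public class Ship { [Inject] public Clock Clock; }
```

Compared to a Godot autoload singleton, the dependency is visible right at the field declaration, and a test can register a fake `Clock` before the scene loads.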

Second, there’s the global event bus. I haven’t actually implemented this yet, and maybe I won’t need to, but I’m definitely keeping it in mind.

LayerProcGen

Earlier this year, Rune Skovbo Johansen, a.k.a. runevision, released LayerProcGen, a principled framework for procedural generation of infinite worlds. My worlds aren’t infinite, but they are big enough that we can’t generate and store them in their entirety, so the same principles apply. The idea is that procedural generation happens in layers, and each layer is generated in chunks as usual. Each layer can only depend on the layers below it, but it can request chunks from a larger area so that it has some context to work with:
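The dependency rule can be sketched in a few lines of C#. This is my own toy version of the principle, not LayerProcGen’s actual API, and it ignores spheres, threading, and chunk unloading entirely:

```csharp
using System.Collections.Generic;

// Toy sketch of layered generation: a layer generates one of its chunks
// by first requesting chunks from the layer below it over a padded area,
// so it has surrounding context to work with. Names are illustrative.
public abstract class Layer
{
    private readonly Dictionary<(int X, int Y), object> _chunks = new();

    // Extra margin of lower-layer chunks needed around each own chunk.
    protected virtual int Padding => 1;
    protected Layer Below { get; }

    protected Layer(Layer below = null) => Below = below;

    public int ChunkCount => _chunks.Count;

    public object GetChunk(int x, int y)
    {
        if (_chunks.TryGetValue((x, y), out var chunk))
            return chunk;
        // Satisfy the dependency on the layer below over a padded area.
        if (Below != null)
            for (int dy = -Padding; dy <= Padding; dy++)
                for (int dx = -Padding; dx <= Padding; dx++)
                    Below.GetChunk(x + dx, y + dy);
        chunk = Generate(x, y);
        _chunks[(x, y)] = chunk;
        return chunk;
    }

    protected abstract object Generate(int x, int y);
}
```

Requesting one chunk at the top of the stack transitively pulls in a pyramid of chunks below it, which is exactly the layered structure in the diagram.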

Example of layers
Source: the LayerProcGen documentation, licensed under MPL 2.0

I realized that I was essentially already doing some of the things LayerProcGen helps with, but in a more ad-hoc way. So I decided it would make sense to switch over to this framework. I couldn’t use Rune’s code directly even though it’s in C#, because it assumes a flat world (curse those spheres!). Fortunately, the implementation isn’t rocket science so I just wrote my own.

I now have layer-by-layer, chunk-by-chunk generation working, distributing the work over several threads to speed it up. The game looks exactly the same as before, but the code is better organized and easier to build on top of.

Around this point, I got sidetracked a bit.

Full 3D

I’d previously settled on 3D rendering, but with a mostly top-down camera:

Top-down perspective

On a whim, with my newfound powers of editing the scene tree while the game is running, I moved the camera away from the top-down perspective, and put it in a third-person perspective behind the ship. And it looked… rather nice. Oh dear.

Third-person perspective behind the ship

This screenshot doesn’t even have any land in it, but it already has a much more immersive feel than the top-down view. It would also add interesting gameplay elements, such as distant coasts actually being less clearly visible, and having to do more work to match your surroundings to a chart.

I figured that we have a full 3D scene already, so it shouldn’t be too much work to use this perspective instead of top-down, right? So down the rabbit hole I went, not realizing how deep it was.

Sky

We can now see the sky, and it looks rather drab and boring – not even properly blue. Indeed, that’s the best you can get with Godot out of the box, so I had to write my own sky shader. I did that, implementing a pretty standard path tracer with single scattering, which made sunsets about 100× prettier:

Sky test scene with sunset

It automatically works with moonlight too:

Moonlit scene on the water

Aerial perspective

Okay, we now have some atmospheric scattering going on, but it’s only applied to the sky and not to any other objects:

Without aerial perspective

The faraway islands are still a harsh green, rather than fading into the distance. Fortunately, Godot has a checkbox to add some fake aerial perspective, which mimics the effect of light scattering into the ray between the camera and the distant mountains. This instantly made the scene look much better:

With aerial perspective

The effect is fake and might be limiting later once I start adding fog, but it’s better than nothing. Good enough for now.

Painterly rendering

With this new third-person perspective, the low-poly ship model contrasted weirdly with the highly detailed waves and terrain. I already had a solution in mind for that. There weren’t any cameras in the Age of Sail, so much of what we know has come to us in the form of paintings. So wouldn’t it be cool if the game looked like an oil painting as well? I’d been planning to apply a post-processing filter to do just that.

I looked around in the literature, and found that there are essentially two ways to make images look like paintings. On the one hand, there are filters that modify the image in some clever way, so that the result looks somewhat like brush strokes. On the other hand, some techniques create and render actual brush strokes. The challenge with that is temporal coherence: we render 60 frames per second, but we don’t want entirely new brush strokes to appear every frame, because that would cause way too much flicker.

The anisotropic Kuwahara filter by Kyprianidis et al. belongs to the first category, which is the easier one to implement, so I wanted to try it first:

Example of anisotropic Kuwahara filter

This YouTube video shows an implementation in TouchDesigner, applied to some drone videos of a mountainous landscape, and it looks absolutely gorgeous (skip ahead to 1 minute):

I got most of the way through implementing this, using Godot’s new compositor effects and compute shaders, when I realized that this rabbit hole was too deep. The somewhat naive, but still GPU-based implementation was taking almost 100 milliseconds per frame; I’d need to get it down to 3-4 ms to still run smoothly on older hardware. And some bug was causing it to look more like a bad JPEG than a painting.

And even if I fixed the bugs and somehow made it 30 times faster, this filter might still not achieve the look I’m after, because the input doesn’t have nearly as much detail as a photograph or video. This filter is essentially about removing detail, whereas I might be better off with something that adds detail instead, i.e. individual brush strokes, with all the temporal stability issues that that entails.

Thump! Thump!

Alice felt that she was dozing off, when suddenly, thump! thump! down she came upon a heap of sticks and dry leaves, and the fall was over.

At this point, I belatedly realized that going full 3D wasn’t as quick and easy as I’d originally estimated. The sky shader needed more work to get rid of the green horizon. The painterly rendering was the kind of stuff that academics build entire careers on, but without it, I’d need to craft more detailed models. And on top of all that, I only have clear skies so far – I haven’t even begun to implement real-time, dynamically changing clouds yet.

It was becoming clear that I would have to cut scope, and put the camera back where I had planned it. I even considered going to 2D entirely, but that would essentially mean starting from scratch and wasting even more time. Instead, let’s climb back out of this hole to the surface, take a breath of fresh air, and forge ahead with what I’ve got already.