A look behind the scenes at Frozen Fractal.
Yes indeed, Frozen Fractal’s first officially released game is here! It’s called Patchy, and it’s a retro arcade-style land-grabbing game for Android. This post is about its inception, and it also describes some bits of the technical implementation.
Patchy is the spiritual successor to my one-weekend Ludum Dare entry Park to Park. (And of course, Park to Park is itself a spiritual successor to the 1981 arcade game Qix.) But I made several important changes.
The original Qix and also Park to Park let you move only horizontally and vertically. This makes sense with keyboard controls. With the freedom of a touch screen however, allowing for movement in any direction is much more fun. It also makes the bouncers’ trajectory harder to predict.
In Qix, the player’s movement is restricted to previously drawn edges. I saw no need for that and thought it might be hard to control on a touch screen, so I let the player move anywhere they please outside the arena.
The other notable change was the addition of the tracers, which traverse the edge of the arena. These are critical to fast gameplay, because without them, you could just wait on the edge forever until a good opportunity arises. Because the player is not confined to edges, the tracers needed laser swords to increase their range. Their purpose is to force the player to go out into the arena and expose themselves to danger, and they fulfil that purpose quite effectively!
I chose to use libgdx, a cross-platform game library originally developed for Android. The great advantage of libgdx is that you can run the very same game on desktop Java, so there is no need for slow emulators or annoying apk installs. The result is a much faster development cycle. I highly recommend it.
Anyway, one long Easter weekend of coding later, I had a game. A game with horrible, Ludum Dare style code quality, public fields, copypasta everywhere – but a game nonetheless, and that’s what counts in the end. Then the fun started.
The game ran fine on desktop, Nexus 7 and Galaxy Nexus, but achieved only around 17 frames per second on my trusty Nexus One. It may be a few years old, but it’s not ancient, and Patchy’s graphics are really simple. I didn’t want to release something that was going to have crappy performance on all but the latest phones. What was going on?
Fortunately, the Android SDK comes with really great profiling tools. Traceview doesn’t just show a call graph like a traditional profiler does; it also shows a timeline detailing the exact sequence of calls, and context switches between threads.
Running with profiling enabled dropped the framerate to about 12, but I thought nothing of it at the time. The output was surprising though: a lot of time was spent in updating camera matrices. Just a handful of floating point operations! (Okay, maybe in the hundreds.) Later this turned out to be an artifact of the profiler; instrumenting profilers add some overhead to each method call, and camera updates were performing a lot of tiny method calls for operations such as vector additions and dot products.
So I optimized away as many camera updates as I could, updating only if something had changed. Although the updates all but vanished from the profiling output, this didn’t help the framerate one bit. The next biggest chunk of time was spent in text rendering. Building up the text character by character into a vertex buffer, taking care to use correct kerning and such, turned out to be fairly expensive. This, too, consisted of a lot of tiny method calls and was – with hindsight – probably a profiling artifact. Fortunately, text is fairly static, so I could use libgdx’s BitmapFontCache. Again, this had a big impact on the profiling data, but didn’t change the framerate one bit.
The next biggest chunk of time, around 18%, was spent on framebuffer operations. The cool, CRT-like retro rendering effect is achieved by first drawing the entire scene to an offscreen 240x360 framebuffer, then downscaling that twice by a factor of 2, then blending the three using linear filtering to achieve a blur/glow effect (and throwing in some scan lines afterwards). Now I found that, by commenting out this code and just rendering the scene to the screen, the Nexus One achieved a framerate of a steady 60 fps!
I tried to optimize, but nothing worked. Using just one downscaled framebuffer instead of two didn’t help. Using no downscaling at all didn’t help either. Various combinations of clearing and not clearing the framebuffer, as suggested here, didn’t help either. Without the profiler, I was left guessing which operations were taking so long.
I could have instrumented the code by hand, but since it was still slow after ripping out almost everything that mattered, it seemed pointless. I concluded that framebuffer operations are just inherently slow on this device, and left it at that. Shipping, after all, is a feature.
This was the first app I’ve ever published to the Google Play Store, and I was pleasantly surprised by the process. Exporting a signed apk from Eclipse is really easy using the wizard, and uploading it to the Play Store is a breeze. Configuration of pricing and distribution countries/carriers is also really easy in the new Developer Console. I don’t know what the old one was like, but the huge yellow bar keeps nagging me that it’s going away soon, so I don’t suppose I need to care.
Most of the release time was spent on creating promotional graphics. I drew the banner above in Gimp, mainly because I was too lazy to reboot to Photoshop. I’m not completely happy with it, but shipping is a really, really important feature, so once it had some colours and things in it, I called it done.
I chose to make it a paid app because this is the least hassle for me. I do realize that payment makes for a significant barrier to entry, even though the price is as low as I could set it (GBP 0.49, EUR 0.59, USD 0.99). My main objective with this project was not to get rich; it was to get some experience actually releasing something on Android, figure out how the process works, what to do and what to avoid. My next Android game will probably be a free one with in-game ads, and possibly in-game purchases.
Reception (or lack thereof)
I did no advertising or promotion at all, apart from posting to Twitter, Facebook and Google+. This led to a handful of friends buying it to try (thanks everyone!), but not much else. Patchy has been in the Play Store for a few days now, and I’ve seen only one or two purchases from outside my normally under-represented home country of the Netherlands.
Thinking about how the Play Store works, this isn’t too surprising: new apps don’t show up anywhere, except in search results. And nobody will be searching for “patchy” unless they already know about it. Some people might be searching for “arcade”, or “retro”, but I don’t think it’ll show up high for those generic terms either. I think the best chance for new paid games to “make it” is to end up in the “top new paid” category; the apps at the bottom of those typically have only 100-500 downloads. This doesn’t seem too hard to achieve, and would then drive early adopters and hopefully bump you to one of the other, more prominent categories. There used to be a section in (what was then called) the Android Market with new/updated apps, but with Android’s popularity these days, it probably became unwieldy. Pity.
But hey, it’s been only a few days since release. Patchy has received one review so far, it’s not from anyone I know, and it’s five stars out of five, so who knows… As far as I’m concerned, having launched is a success worth celebrating in itself!
Ladies and gentlemen, Frozen Fractal presents… Bigcanvas! It’s an infinite online canvas that anyone can draw on. The ‘why’ is described within the app itself, so have a look! This blogpost focuses on the technical aspects, i.e. the ‘how’.
Just for fun and challenge, efficiency was one of the main design goals. This mainly manifests itself in the design of the server, storage format and the wire protocol. But to understand those, you first need to know how things fit together in the web browser.
The canvas is broken up into blocks of 512×512 pixels, each with base-64 integer coordinates. A particular point within a block is indicated with another pair of integer coordinates, in regular decimal notation this time. Each block is drawn on screen using an HTML5 canvas element. Scrolling simply means moving the canvases around, and adding and removing them as needed.
Drawing is a bit trickier. Because the canvas is broken up into blocks, a single line segment may overlap multiple blocks. We could simply draw it in all of the blocks that it overlaps, and rely on the canvas to clip it correctly. The problem is that the coordinate range within a block is limited (by the wire protocol, see below), so not every segment can be represented within each block. We therefore need to clip the line segment manually to fall within a small margin around each block. That’s all done on the client side.
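As a sketch, that clipping can be done with a standard algorithm such as Liang–Barsky. The version below is in Go for concision (the real code runs client-side in JavaScript), and the 8-pixel margin is an illustrative guess, not the actual Bigcanvas value:

```go
package main

import "fmt"

// clipSegment clips the segment (x0,y0)-(x1,y1) to the rectangle
// [xmin,xmax]×[ymin,ymax] using the Liang–Barsky algorithm.
// ok is false if the segment misses the rectangle entirely.
func clipSegment(x0, y0, x1, y1, xmin, ymin, xmax, ymax float64) (cx0, cy0, cx1, cy1 float64, ok bool) {
	dx, dy := x1-x0, y1-y0
	t0, t1 := 0.0, 1.0
	p := [4]float64{-dx, dx, -dy, dy}
	q := [4]float64{x0 - xmin, xmax - x0, y0 - ymin, ymax - y0}
	for i := 0; i < 4; i++ {
		if p[i] == 0 {
			if q[i] < 0 {
				return 0, 0, 0, 0, false // parallel to this edge and outside it
			}
			continue
		}
		t := q[i] / p[i]
		if p[i] < 0 {
			if t > t1 {
				return 0, 0, 0, 0, false
			}
			if t > t0 {
				t0 = t
			}
		} else {
			if t < t0 {
				return 0, 0, 0, 0, false
			}
			if t < t1 {
				t1 = t
			}
		}
	}
	return x0 + t0*dx, y0 + t0*dy, x0 + t1*dx, y0 + t1*dy, true
}

func main() {
	// Clip a long horizontal stroke segment to a 512×512 block,
	// extended by an 8-pixel margin on every side.
	x0, y0, x1, y1, ok := clipSegment(-100, 256, 600, 256, -8, -8, 520, 520)
	if ok {
		fmt.Println(x0, y0, x1, y1)
	}
}
```

After clipping, each block receives only the part of the segment that lies within its own (slightly enlarged) coordinate range, so the limited per-block coordinates are never exceeded.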
At no point do we send or store pixel data. Instead, we work with ‘strokes’, i.e. a series of points that together make up a squiggle that the user drew.
Strokes are sent to the server via HTML5 WebSockets. This was a challenge, because the WebSockets standard does not (yet?) allow transmission of binary data, and mandates the use of UTF-8 strings. Invalid UTF-8 causes the connection to close. Note that UTF-8 may not contain null ('\0') bytes. Code points between 1 and 127 (inclusive) are encoded as themselves. Code points of 128 and up require more than one byte to encode and are therefore quite wasteful. So the wire protocol was designed to use only bytes in the range 1–127.
By far the most data sent across the wire will consist of strokes, so it is important to represent these efficiently. I started out with 127×127 blocks, so that each coordinate within a block would fit in a single byte. A stroke would thus look like xyxyxy.... But I soon discovered that this block size was too small, causing things to run slowly on Firefox and, to a lesser extent, on Chrome. So I switched to 512×512 blocks. This requires 9 bits per coordinate. The trick I use is to break each coordinate (x and y) up into a part modulo 127 and a part divided (rounding down) by 127. The modulo part fits in one character; the other part is at most 4, so it fits in 3 bits. Those 3 bits from x and y combine into 6 bits, which fit into another character. So each point is now three characters:
x%127 + 1
y%127 + 1
(((x/127) << 3) | (y/127)) + 1
It’s ugly, but it works well. All the encoding and decoding is done on the client side; the server is just a dumb data switch and does not need to know the format, except to the extent that it can check it is valid (which is easy and fast).
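A minimal sketch of this packing scheme and its inverse — written in Go for concision here, although in Bigcanvas the encoding and decoding actually happen in client-side JavaScript:

```go
package main

import "fmt"

// encodePoint packs a point in a 512×512 block into three bytes,
// each in the range 1–127, so the result is valid, compact UTF-8
// with no null bytes.
func encodePoint(x, y int) [3]byte {
	return [3]byte{
		byte(x%127 + 1),
		byte(y%127 + 1),
		byte((x/127)<<3|y/127) + 1,
	}
}

// decodePoint is the inverse of encodePoint.
func decodePoint(p [3]byte) (x, y int) {
	hi := int(p[2]) - 1
	x = (hi>>3)*127 + int(p[0]) - 1
	y = (hi&7)*127 + int(p[1]) - 1
	return x, y
}

func main() {
	e := encodePoint(511, 300)
	x, y := decodePoint(e)
	fmt.Println(e, x, y) // round-trips back to 511, 300
}
```

Since every byte stays in the range 1–127, a stroke of n points costs exactly 3n bytes on the wire.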
For extensibility, the string with coordinates is wrapped inside a JSON object. Later on, I might add stroke properties such as width and colour.
Of course, we wouldn’t want to send the entire canvas contents to a client when they connect. Therefore, a client sends ‘subscription’ messages to the server, requesting to be kept up to date on the contents of particular blocks. When one of these blocks changes, the server pushes out the new strokes (but not the existing ones) to all subscribed clients. When a block scrolls out of view, the client sends an ‘unsubscribe’ message so it no longer receives updates.
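The server-side bookkeeping this implies can be sketched as a map from block coordinates to subscriber sets; the names and types below are illustrative, not the actual Bigcanvas code:

```go
package main

import "fmt"

// hub tracks which clients are subscribed to which blocks.
type hub struct {
	subs map[string]map[int]bool // block coordinates -> set of client IDs
}

func newHub() *hub {
	return &hub{subs: make(map[string]map[int]bool)}
}

func (h *hub) subscribe(block string, client int) {
	if h.subs[block] == nil {
		h.subs[block] = make(map[int]bool)
	}
	h.subs[block][client] = true
}

func (h *hub) unsubscribe(block string, client int) {
	delete(h.subs[block], client)
}

// recipients returns the clients that a new stroke in the given
// block must be pushed to.
func (h *hub) recipients(block string) []int {
	var ids []int
	for id := range h.subs[block] {
		ids = append(ids, id)
	}
	return ids
}

func main() {
	h := newHub()
	h.subscribe("3,5", 1)
	h.subscribe("3,5", 2)
	h.unsubscribe("3,5", 1) // client 1 scrolled block 3,5 out of view
	fmt.Println(h.recipients("3,5")) // prints [2]
}
```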
The server is written in Google’s Go. This entire project was in part an excuse for me to become familiar with that language. If you’ve never seen Go before, you can think of it as a high-level C with garbage collection and very powerful, easy-to-use concurrency primitives.
Currently, the server is not distributed in any way; it just runs on a single machine. I did design it so that it can easily be sharded if the need arises. This could be done by region in the canvas (each server serves a particular area) but this would create hotspots if many users are active in the same area. It could also be done by hash (each server serves a set of blocks based on a hash of their coordinates), but this requires a client to have connections to many more servers, and open and close them as they move around. Neither solution is ideal; probably some area-based approach combined with splitting/combining of areas is most scalable. But that’s for another day.
Blocks are stored on the filesystem in flat files, one file per block. For small blocks (< 1 kB) this is a bit wasteful, but it will have to do for now. Of course, empty blocks are not stored at all, which is part of the secret ‘infinity’ sauce.
To avoid filesystem limits on the number of files per directory, we compute the SHA1 hash of the block’s coordinates, take the first three and second three characters, and create a directory structure out of that. As an example, block 3,5 would be stored in the file 95a/c05/3,5. If we conservatively assume a maximum of 4096 files per directory, this allows for 2^36 files. If each block is 4 kB, this takes 256 TB of disk space, so the directory structure is not going to be the limiting factor.
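The path computation can be sketched in a few lines of Go (the function name is mine, and error handling is omitted):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// blockPath maps a block's coordinate string to a nested file
// path: the first three and second three hex characters of the
// SHA1 of the coordinates become two directory levels.
func blockPath(coords string) string {
	sum := sha1.Sum([]byte(coords))
	h := hex.EncodeToString(sum[:])
	return filepath.Join(h[0:3], h[3:6], coords)
}

func main() {
	fmt.Println(blockPath("3,5"))
}
```

Because SHA1 spreads blocks evenly over the 4096 × 4096 possible directories, no single directory fills up long before the others.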
Inside each block file, the strokes are concatenated as a series of strings, each prefixed with a header specifying the length. This format is exactly the same as the wire format, which means that if a client requests a block, the server can simply and brainlessly stream out the entire file.
The advantage of flat files is that we can trivially append to them. If someone draws a new stroke, we simply append it verbatim to the existing file. This is implemented in the server code by storing the stroke in a Go ‘channel’. A single goroutine runs an infinite loop that takes strokes from the channel and writes them to disk. This prevents problems with concurrent writes, and ensures the least amount of disk thrashing (although I still expect disk seeks to become the first performance bottleneck).
This storage format is far from efficient, but it was easy to implement. The server currently caches all data into memory anyway, so as long as that still fits, it doesn’t really matter.
Of course, a truly infinite canvas is impossible, because computers have limited memory. Even if we don’t store the actual contents, the coordinates themselves can become arbitrarily large. But I’ve given it my best shot, storing coordinates as strings in base 64. Assuming that we’ll hit some limit at 256 characters (be it URL length or filename length), this gives 64^256 different coordinates. This number is so large that even Google’s built-in calculator refuses to compute it. But we need two of these babies, an x and a y coordinate, so that leaves ‘only’ 128 characters each. On an 80 ppi screen, this comes down to some 2.5 × 10^230 metres. For reference: the size of the visible universe is only about 10^27 metres.
Frankly, this is a bit scary. I intentionally did not add an erase option, and this thing is open to the entire internet, entirely anonymously. I have no idea what this will lead to. Will it bring out people’s inner artist and allow beauty to be created? Will people find a use for it, such as online collaborative sketching? Will it become a display of love, like when people carve hearts into tree barks? Will the immature insist on drawing penises all over the place? Will griefers paint everything black and ruin it for everyone? I have no idea, and that’s exciting!
Today I’m feeling more superficial and have been doing some UI design work. This image had been in my head for some time, but I was happy to discover that it works on paper, too.
The overall positioning of the elements on screen isn’t much different from what I’ve got today, because it gives a symmetric and balanced look.
The thick frame has the dimensions of a 980×661 browser viewport, which by some freaky coincidence is exactly the viewport of Safari on the iPad in landscape mode (but it’s a sane size for desktop browsers as well). The iPad would be the ideal platform for Turtle Paint, except for one teensy detail: the keyboard would take up half the screen. I suppose it would work okay in portrait mode, and meanwhile I’m hoping that Apple will come up with decent speech recognition at some point.
It might be hard to see in this sketch, but the backdrop is supposed to be a beach – beaches being the place where one finds turtles, right? (I could have done sewers instead, but that would have been a dark and smelly affair.) I’m planning to have some windsurfing, swimming and sunbathing turtles in the background, but those might not make it in the first iteration. There will be plenty of space for them to the sides, if I make the backdrop wider than the iPad frame; much like the entire thing will extend below the bottom of that frame, which will still be visible on desktop computers.
As you can see, I’m going for a very physical look and feel. The easel is leaning back slightly, making the canvas non-rectangular; you won’t be able to draw outside the canvas proper. The palette contains gobs of paint to select the colour to use, three or four different brush sizes, and a painter’s knife to erase (scratch paint off). The latter might need a tool tip to reveal its purpose, but I don’t think it’ll be a showstopper. I’ll also try giving the canvas some texture, and make the paint 5% transparent to let it shine through. It might look really classy.
Unfortunately, not everything is easily captured in that way. There are “Clear” and “Skip word” buttons, for which I couldn’t find suitable metaphors. Several other elements are also simply text. Although “3 of 5” and “4 of 10” could be represented graphically as a row of icons (e.g. checkboxes), I have yet to come up with a way of blending them in with the painting-on-the-beach theme. Maybe the guess list on the right side should fade into the horizon, to give the impression of depth, but I’m not sure how far I can shrink it before it becomes too small to fit all the guesses for the current round.
Interface elements appear and disappear depending on what is going on at the time. The “lobby”, where you choose a game to join, has the same backdrop but no easel or any of the other game elements. If we have joined a game, but are still waiting for it to start, the easel is shown without the canvas. The palette at the bottom is only shown while it’s your turn to paint. If it’s your turn to guess, it is replaced by the guess box, and the controls in the bottom right corner disappear. Everything will be smoothly animated; as I’m not much of an animator, I’ll restrict the animation to things that can be done with CSS3 transitions.
Next up is to convert all this from paper to Inkscape. That is the difficult step!
Once upon a time, over a decade ago, I wrote a simple program in C++Builder to help my father solve crossword puzzles and cryptograms. It would let you type a word with blanks, such as f....al, and it would tell you which words would fit.
When my father asked whether I still had the program, to install it on his new computer, I decided to rewrite it from scratch. This time round, it is a web app using Ruby, Sinatra and Liquid. The result is at the unimaginative URL crosswordfinder.frozenfractal.com.
Of course, for anyone with basic shell-fu, it’s as simple as
aspell -l en dump master | egrep '^f....al$'
but for the other 99% of the population, this app might be of some use. It currently contains wordlists for Dutch, US and UK English, but I can easily add more on request.
Also, this is Frozen Fractal’s first open source application. The source is on GitHub.
- Immaturity. Often, there are five different Node.js libraries that do approximately the same thing, but all of them are buggy and none do it exactly the way you’d want.
- Documentation. Nonexistent for most Node.js modules; most just provide one or two canonical usage examples. You could argue that I should simply read the source, but that is often difficult if you don’t have the high-level picture of how things fit together.
- Callback spaghetti. For instance, the way I’m importing a word list into the MongoDB database backend takes no less than seven levels of nested callbacks: create database connection, open database, drop collection, create collection, open file, read lines, insert data. When done reading, open other collection, update word count. When written like this it looks straightforward, but in code, each action is a new level of nested callbacks. There are libraries like flow.js that make this a little easier, but they mostly remove indentation levels while not fundamentally changing anything. I like to write event-driven code when it makes sense, but being forced to use it for everything is a pain.
- Difficulty of event-driven programming. It gets really difficult to tell what gets executed when. For instance, I had a bug where I opened a file and wanted to read it line-by-line in an event-driven way. I knew how many lines the file had, but the code consistently kept reading fewer lines than that. The cause? After verifying that the file existed by having opened it successfully, I opened a database connection. But while doing so, my file object already started streaming out read events, even though no event handler had been hooked up to it yet! This caused the first few thousand lines of the file to be ignored. As the event handler needed an open database, there was no way to hook it up earlier. The “fix” was to open the database first, then open the file and synchronously hook up the data event handler.
- Linting. JSLint insists that I declare var i; at the top of the function if I use i as a counter in a for loop, K&R C-style. Some such nonsense can be turned off, but not all. JSHint is better in this respect, but has other problems. Both struggle with unrecognized global variables that Node.js provides.
- Syntax. It matters. Too many function objects make me go dizzy.
I’m planning to use Sinatra for serving static files, and Cramp (built on top of EventMachine) for the game logic. This will still be event-driven code, but only where it makes sense, and hopefully Ruby syntax will make it more pleasant to use. Cramp supports WebSockets out of the box, assuming one uses Thin or Rainbows as the HTTP server.
For older browsers, I might use the Flash fallback of web-socket-js, or I could use socket.io-ruby. But as I don’t like Socket.IO very much either (feature creep, bad documentation, and new bugs every release), it’ll probably be the former. There won’t be a “polling” fallback for browsers that support neither WebSockets nor Flash, but I think I can live with that.