Featured

What is this about?

Calenhad is GIS for fictional worlds.

It’s a software project which aims to provide tools inspired by terrestrial (real-world) Geographical Information Systems (GIS) for the design and depiction of imaginary, fictional, made-up worlds. Mapping of made-up worlds I suppose began when J R R Tolkien inserted maps of Middle Earth into the front pages of The Hobbit, The Lord of the Rings and The Silmarillion, although clearly the realms of Midgard and Hades described in antiquity had some internal geography. When the real world was still poorly mapped, some places, like Ultima Thule or Camelot, were thought of as “somewhere” on the Earth, but remote enough in space and thought to be given invented or supposed geographies meaningful enough for the tales told about them. Even latterly a fantasy world could be inserted into the real world, like the Rev. W. Awdry’s Sodor, the setting for Thomas the Tank Engine, or the dinosaur-infested plateau of Sir Arthur Conan Doyle’s The Lost World. Finally, an author could take part of the real world and adapt it to his own ends, as in His Dark Materials by Philip Pullman or in Richard Adams’ Watership Down.

Tolkien’s works kicked off a whole new genre of high fantasy, and mapping fantastic geographies was carried on by the host of authors who followed him, practised by a wider church of creators inspired by role-playing games such as Dungeons and Dragons, and is now driven to higher levels of detail and volume by the demands of modern film audiences and computer game players.

There are two rather different problems here. On the one hand, computer games may need to generate large amounts of landscape (many planets) which looks realistic at a distance but does not need to be interacted with closely, looked at for very long or explained in detail. On the other, a setting for a fantasy novel or game may be created and explored over many volumes or hours of play, looked at in parts in some detail, and need to be explained in scientifically plausible terms. It is this latter set of problems we are trying to address by learning from geospatial techniques used in the real world.

An overview of what Calenhad will try to do and how I go about it appears in the About page. Source code in C++ / Qt5, which is presently of a highly experimental nature to say the least, is on Github (for the time being, use only if you enjoy linker errors and segmentation faults).

Qt compute shader without graphics

So I thought I would separate the generation of my content (terrain) from the display so that I could cache the content and use it to feed downstream computations without having to recompute the whole chain of operations every time. It tends to get assumed that if you’re using a compute shader you’re going to send the output to an OpenGL graphics pipeline, and actually only recently has it become easy to put data on the graphics card, process it there in a GLSL compute shader and bring the results back. So here’s an example.

Header file

#include <QString>
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLShader>
#include <QOpenGLShaderProgram>

class Graph;

class ComputeService {
public:
    ComputeService();
    ~ComputeService();

    // computes the graph's values into the supplied buffer;
    // returns the buffer, or nullptr on failure
    float* compute (Graph* g, float* buffer, int size);

private:
    QString _sourceCode;
    QOffscreenSurface _surface;
    QOpenGLContext _context;
    QOpenGLShader* _computeShader;
    QOpenGLShaderProgram* _computeProgram;

    // dispatches the compute shader and reads the results back into the buffer
    void execute (float* buffer, int size);
};

Source file

#include <iostream>
#include <QFile>
#include <QTextStream>
#include <QOpenGLFunctions_4_3_Core>
#include <src/CalenhadServices.h>
#include "ComputeService.h"

ComputeService::ComputeService () : _computeProgram (nullptr), _computeShader (nullptr) {
    // read code from files into memory
    QFile csFile (":/shaders/compute.glsl");
    csFile.open (QIODevice::ReadOnly);
    QTextStream csTextStream (&csFile);
    _sourceCode = csTextStream.readAll();
    
    // Create an OpenGL context - in a graphics pipeline
    // we would inherit this from our QOpenGLWidget
    QSurfaceFormat format;
    format.setMajorVersion(4);
    format.setMinorVersion(3);
    format.setProfile(QSurfaceFormat::CoreProfile);
    _context.setFormat(format);
    if (!_context.create())
        throw std::runtime_error("context creation failed");
    _surface.setFormat (format);
    _surface.create();
    _context.makeCurrent (&_surface);

    // check that OpenGL 4.3 core functions are available on the new context
    QOpenGLFunctions_4_3_Core openglFunctions;
    if (!openglFunctions.initializeOpenGLFunctions())
        throw std::runtime_error("initialization failed");
}

ComputeService::~ComputeService () {
    delete _computeShader;
    delete _computeProgram;
}

// Call this with a pointer to the buffer in which you
// want the results saved
float* ComputeService::compute (Graph* g, float* buffer, int size) {
    _context.makeCurrent (&_surface);
    delete _computeShader;
    delete _computeProgram;
    _computeShader = new QOpenGLShader (QOpenGLShader::Compute);
    _computeProgram = new QOpenGLShaderProgram ();

    QString code = g -> glsl ();
    if (! code.isNull ()) {
        _computeProgram -> removeAllShaders ();
        if (_computeShader -> compileSourceCode (_sourceCode)) {
            _computeProgram -> addShader (_computeShader);
            _computeProgram -> link ();
            _computeProgram -> bind ();

            // launch the compute shader
            execute (buffer, size);
            return buffer;
        } else {
            std::cout << "Compute shader would not compile\n";
        }
    } else {
        std::cout << "No code for compute shader\n";
    }
    return nullptr;
}

// this handles the launch and fetches results
void ComputeService::execute (float* buffer, int size) {
    _context.makeCurrent (&_surface);
    GLuint ssbo;
    int bytes = size * sizeof (GLfloat);

    QOpenGLFunctions_4_3_Core* f = _context.versionFunctions<QOpenGLFunctions_4_3_Core> ();
    f -> glUseProgram (_computeProgram -> programId ());

    // reserve space on the GPU and bind it to binding point 0,
    // which is where the shader expects its output buffer
    f -> glGenBuffers (1, &ssbo);
    f -> glBindBuffer (GL_SHADER_STORAGE_BUFFER, ssbo);
    f -> glBufferData (GL_SHADER_STORAGE_BUFFER, bytes, nullptr, GL_DYNAMIC_READ);
    f -> glBindBufferBase (GL_SHADER_STORAGE_BUFFER, 0, ssbo);

    // you'll want to tweak the geometry to the size of
    // the buffer and its layout
    f -> glDispatchCompute (32, 64, 1);
    f -> glMemoryBarrier (GL_SHADER_STORAGE_BARRIER_BIT);

    // this gets the data from the GPU
    f -> glGetBufferSubData (GL_SHADER_STORAGE_BUFFER, 0, bytes, buffer);
    f -> glDeleteBuffers (1, &ssbo);
}

And the shader code itself will look like this – where value (ivec2 pos) is a function that computes the output value for a given invocation. Of course you will need different geometry!

#version 430
layout (local_size_x = 32, local_size_y = 32) in;
// note: the block can't be named "out", which is a reserved word in GLSL
layout (std430, binding = 0) buffer outputBuffer { float buffer_out []; };

void main() {
    ivec2 pos = ivec2 (gl_GlobalInvocationID.yx);
    float v = value (pos);
    // index matches the 32 x 64 dispatch of 32 x 32 workgroups used above
    buffer_out [pos.y * 32 * 32 * 2 + pos.x] = v;
}
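For completeness, here is roughly how the service gets called. This is an illustrative fragment only: the Graph stands in for whatever object supplies the generated GLSL, and the buffer size matches the 32 x 64 dispatch of 32 x 32 workgroups above.

// Illustrative only: the Graph pointer stands in for the module graph whose
// GLSL we want to evaluate; in real code it comes from elsewhere in Calenhad.
Graph* graph = nullptr;                      // supply a real graph here
int size = 32 * 64 * 32 * 32;                // one float per shader invocation
float* results = new float [size];
ComputeService service;
if (service.compute (graph, results, size) != nullptr) {
    // results now holds one value per invocation, indexed as in the shader
}
delete [] results;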

Now that’s what I call a bug

Well it took me about a week to get to the bottom of that.

Last week I found that if I uploaded a raster to the graphics card and saved it in a buffer, and then set a single uniform float variable to request the zoom level I wanted on rendering, the render was blank. Was the buffer corrupting the uniform variables? Was something overwriting them? Was the shader failing to compile and not bothering to tell me? After about a week of looking at absolutely everything and wondering how one render could possibly bugger up another completely separate render, it turned out that in these circumstances setting the zoom level as a simple uniform float variable was being ignored by the shader altogether, and the zoom was always being treated as zero. Apparently glUniform1f doesn’t always work, and apparently this has been known for years. Well, it wasn’t known to me.

So, if you ever find that a single float uniform variable you are setting is being ignored by your shader, this is what you do. Take a bunch of your float-typed uniforms and pack them into a vector or matrix. As it happens I am setting the datum in a vec2, so I bolted the zoom factor onto this to make a vec3. You set this using glUniform3f, which for some reason behaves.
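In Qt terms the workaround looks something like this; the variable names are mine, and program is whatever QOpenGLShaderProgram you are binding.

// Pack the datum (longitude, latitude) and the zoom factor into a single
// vec3 and upload them together; setUniformValue resolves to glUniform3f.
program -> setUniformValue ("datum", QVector3D (longitude, latitude, zoom));

// ... and in the shader:
// uniform vec3 datum;    // datum.xy is the datum, datum.z is the zoom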

Bonkers.

Workflow

This looks at the tools we need for creating planetary terrain at different levels of detail.

Workflow

In general the user will need to:

  • guide and direct the overall geography of a planet by placing the major features: continents, oceans, mountain belts, or by specifying the means by which these are determined. We can use noise algorithms directed by parameters to place these features; we can simulate the natural processes of plate tectonics; or we can specify a high-level layout by providing a sketch which the system can read and use to generate terrain.
  • design or sculpt specific features. The story might demand a mountain or river in a particular place, and we allow the feature to be placed and shaped within the landscape.
  • let the computer do all the hard work generating terrain at levels of detail between these two extremes.

The diagram shows in darker blue the functionality developed thus far, which revolves chiefly around a port of the libnoise library to GLSL (the shader language that runs on the GPU). The additional requirements are as follows.

Directing sketch and imported data

User input amounts to a low-resolution specification of approximate altitudes which can be read and encoded to an icosphere (height marks at evenly-spaced points over the globe). The problem with many terrain generation algorithms that use fractal methods is that they tend to produce rather homogeneous output, with mountains generally in the centres of continents, coastlines of similar indentation everywhere, and little variation in the fractal pattern across the globe, like this.

Simple Perlin fractal terrain from libnoise

That’s the simple spherical terrain example from the libnoise documentation (http://libnoise.sourceforge.net/tutorials/tutorial8.html). Libnoise goes on to provide a much more sophisticated example comprising over 100 processing modules which, among other things, attempts to remap the generated altitudes in such a way as to move the highest areas nearer the coast (http://libnoise.sourceforge.net/examples/complexplanet/index.html), but that then creates land-locked depressions in the interior.
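For reference, the heart of that simple spherical example is just sampling a Perlin module at points on the unit sphere, something along these lines (frequency, octaves and so on left at their defaults):

#include <noise/noise.h>
#include <cmath>

// Sample a Perlin module on the unit sphere: convert latitude / longitude
// (in radians) to Cartesian coordinates and ask the module for a value.
double sphericalValue (noise::module::Perlin& perlin, double lat, double lon) {
    double x = cos (lat) * cos (lon);
    double y = cos (lat) * sin (lon);
    double z = sin (lat);
    return perlin.GetValue (x, y, z);   // roughly -1 .. 1
}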

As far as specifying the overall layout of a planet is concerned, we can use a directing sketch, prepared by hand in a machine-readable format. An example of landscape generation using a directing sketch is included in Torben Mogensen’s planet generator (http://www.diku.dk/~torbenm/Planet/), although what happens there is that random planets are generated and compared with the sketch to return the one that matches it most closely. What we will do is take low-resolution height mapping, and perhaps feature-type mapping, from the user and elaborate it into finer detail in the second column of the diagram. Mogensen’s version is a grid of symbols that tell the program what sort of general shape is wanted in coarse terms.

With the benefit of the processing power available nowadays we can use a grid like this not as a filter for generated output but as an input to other processing modules, controlling the amplitude, overall height and character of terrain generated in different places. To do this we should provide a component which reads (say) a bitmap of height levels into a data structure and makes that available as the output of a noise module, which could then replace something like a coarse Perlin generator as the source of the terrain’s high-level shape.
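A minimal sketch of such a module might look like the following; the class and the equirectangular mapping are illustrative, not existing Calenhad code.

#include <QImage>
#include <cmath>

// Hypothetical module that serves a hand-drawn directing sketch as if it
// were a noise source: latitude / longitude (in radians) are mapped onto an
// equirectangular bitmap and the grey level is returned as a height.
class SketchModule {
public:
    explicit SketchModule (const QString& filename) : _image (filename) { }

    double value (double lat, double lon) const {
        int x = (int) ((lon + M_PI) / (2.0 * M_PI) * (_image.width () - 1));
        int y = (int) ((M_PI / 2.0 - lat) / M_PI * (_image.height () - 1));
        // map grey level 0 .. 255 onto a height of -1 .. 1
        return qGray (_image.pixel (x, y)) / 127.5 - 1.0;
    }

private:
    QImage _image;
};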

By providing the facility to import raster data from a bitmap file and map it to a geographical extent (which could be the whole world or just a small area), we can support the introduction of a directing sketch and also support the addition of detailed landscaping for local areas where a specific shape is wanted or a particular feature in a particular place. There seem to be a few additional requirements here:

  • it is difficult to draw on an equirectangular projection, especially in the high latitudes, so it will be useful to provide sculpting tools (discussed below) to allow drawing to take place in the projection being viewed.
  • an imported raster must blend properly with any terrain that surrounds it, so we need to transition at the edges from the imported raster to the existing surrounding terrain.
  • moreover, the area to be imported won’t necessarily be a lat-lon rectangle, so we will need to mask out the bits of the raster we don’t want. This point and the one above can be addressed by treating the alpha channel of the raster as a blending coefficient: if it is 0, we ignore the raster altogether and take the existing value at that location; if it is 1, we supplant the existing value with the value in the raster; for values in between, we interpolate using the alpha value as a weight, so that where alpha = 0.7 the output value at that location is 0.3 * the existing value plus 0.7 * the raster value, and so on (see the sketch after this list).
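A minimal sketch of that blend, assuming the raster and the existing terrain have both been sampled at the same location:

// Blend an imported raster height with the existing terrain using the
// raster's alpha channel as the weight: alpha = 0 keeps the existing
// value, alpha = 1 takes the raster value, anything in between interpolates.
double blend (double existing, double raster, double alpha) {
    return (1.0 - alpha) * existing + alpha * raster;
}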

Tectonic simulation

A tectonic model attempts to produce a realistic, Earth-like planet from a description of tectonic plates, generating the high-level layout by simulating the movement, production, collision and subduction of crust, driven by the same processes that shape the Earth.

The tectonic model should work by generating an evenly-distributed network of points covering the globe and attaching simulated crust (lithosphere) to these, then moving them about and tracing the development of continents, ocean beds and mountain ranges over some number of iterations of the simulation. The points need to be evenly distributed on the sphere or else the terrain will be stretched at the poles. This tessellation of the sphere can be done in a number of ways:

  • generate a Fibonacci spiral, with points at equal intervals along a spiral that wraps around the sphere many times and has ends at antipodal points,
  • generate a cube with subdivided faces, and normalise its points so that they sit on the surface of the enclosing sphere (i.e. are equidistant from the centre),
  • generate a large number of random points on the surface of the sphere, and then have them iteratively repel one another so that they move apart until their distribution is more or less even, or
  • generate an icosahedron and progressively subdivide its faces, normalising the vertices at each iteration.

I’ve selected the subdivided icosahedron or “icosphere”, because it gives us not only a tessellation but also a tree-like network structure for the points which lets us very easily “walk” over the points to find the nearest point to any given geolocation (we can perform this operation in O(n), where n is the number of subdivision iterations in the icosphere).

Progressive subdivision of an icosphere’s faces to approximate a sphere
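The subdivision step itself is straightforward. Something like the sketch below (vertex de-duplication and the neighbour tree are omitted for brevity):

#include <QVector>
#include <QVector3D>

struct Triangle { QVector3D a, b, c; };

// One round of icosphere refinement: split each face into four and push
// the new vertices out to the unit sphere, so every point stays equidistant
// from the centre. Repeat until the faces are small enough.
QVector<Triangle> subdivide (const QVector<Triangle>& faces) {
    QVector<Triangle> refined;
    for (const Triangle& t : faces) {
        QVector3D ab = ((t.a + t.b) * 0.5f).normalized ();
        QVector3D bc = ((t.b + t.c) * 0.5f).normalized ();
        QVector3D ca = ((t.c + t.a) * 0.5f).normalized ();
        refined << Triangle { t.a, ab, ca }
                << Triangle { t.b, bc, ab }
                << Triangle { t.c, ca, bc }
                << Triangle { ab, bc, ca };
    }
    return refined;
}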

We then assign quantities of crust (oceanic or continental) to each point, assign each point to a tectonic plate, assign motion to each plate, and iteratively compute the motion and update the terrain according to what happens at each plate boundary. This will necessarily be a massively simplified simulation but it can create a coarse distribution of altitudes that more fairly reflects those created by the tectonic processes that have shaped Earth’s geography at the planetary level of detail. Examples of tectonic simulation in software include:

Those examples produce good results at lowish levels of detail. What we will do is use the output of a tectonic model in the same way as a directing sketch, which is to say we will generate the model on the basis of data points maybe dozens of kilometres apart and then elaborate that coarse geography using other processes to produce a more detailed terrain from it.

Erosion simulation

Simulating the flow of water (and ice) over the surface should elaborate the terrain by adding the effects of erosion but also give us a hydrographic map, showing us where rivers and lakes form. Tricky aspects of that are:

  • we need to know something about the climate, so that we can say how much water falls where;
  • an erosion algorithm must be iterative, so we need to track the altitude of each point across many iterations. This means we need to represent the terrain as discrete points at the outset, and because we are on a sphere, we need to use our icosphere (or a portion of an icosphere) to generate the map of an eroded terrain. Erosion and hydrography will need to be generated for each view we want to create, and it’s not necessarily easy to make sure that one view will be consistent with another.
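Purely to illustrate the iterative shape of such an algorithm, one very crude pass over the icosphere points might look like this; the data structures and the erosion rule are placeholders, not a worked-out model.

#include <QVector>

// Hypothetical icosphere vertex: an altitude, a rainfall figure from the
// climate model, and the indices of the neighbouring vertices.
struct Vertex {
    double altitude;
    double rainfall;
    QVector<int> neighbours;
};

// One crude erosion iteration: each vertex sheds material towards its
// lowest neighbour, in proportion to how much water falls on it.
void erodeOnce (QVector<Vertex>& vertices, double rate) {
    for (Vertex& v : vertices) {
        int lowest = -1;
        for (int n : v.neighbours) {
            if (vertices [n].altitude < v.altitude &&
                (lowest < 0 || vertices [n].altitude < vertices [lowest].altitude)) {
                lowest = n;
            }
        }
        if (lowest >= 0) {
            double moved = rate * v.rainfall * (v.altitude - vertices [lowest].altitude);
            v.altitude -= moved;
            vertices [lowest].altitude += moved;
        }
    }
}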

Sculpting tools

In order to provide detailed features at particular locations, or to arrange geographies to fit the needs of a story, the user will need control over the output terrain before or after the erosion stage (or both). This is especially true in a geography like Tolkien’s, for example, where terrain has been arranged by unnatural forces to serve particular purposes, such as providing ramparts around Mordor. But even natural landscapes will need to be contrived to some extent to provide features demanded by a story. Again in Tolkien, for instance, a “lonely mountain” (Erebor) is needed in a particular place for the plot of The Hobbit to work.

As discussed above, a user can introduce detail provided in a raster and merge it smoothly with the rest of the terrain using the alpha channel. As also noted there, it probably isn’t that easy to edit terrain detail if

  • you can’t see the terrain that’s already there and that you’re trying to blend with;
  • you’re working in a projection where the shapes you draw have to be different from the ones you want, because of the distortion the projection introduces. We want to be able to edit the map in front of us in the projection we choose, not have to do it in equirectangular and hope that our arctic fortress looks right when we view it in orthographic; and
  • if you’re working in a high latitude the editing tools need to work on evenly-spaced data points, so we need to represent the terrain in an icosphere (or part of one), not a raster. Look at what happens if we try to edit terrain in a polar region in orthographic projection using Fractal Terrains (below) and you can see what I mean. Above and to the right of the pole is a region I have raised up, and it shows the familiar wedge shape of polar bunching, because I am editing a raster rather than a grid of evenly-spaced data points. To the left of that, the dotted lines show the shape of the “brush” on the surface, highly elongated rather than round.

Editing a polar region in Fractal Terrains