This looks at the tools we need for creating planetary terrain at different levels of detail.
In general the user will need to:
- guide and direct the overall geography of a planet by placing the major features: continents, oceans, mountain belts, or by specifying the means by which these are determined. We can use noise algorithms directed by parameters to place these features; we can simulate the natural processes of plate tectonics; or we can specify a high-level layout by providing a sketch which the system can read and use to generate terrain.
- design or sculpt specific features. The story might demand a mountain or river in a particular place, and we allow the feature to be placed and shaped within the landscape.
- let the computer do all the hard work generating terrain at levels of detail between these two extremes.
The diagram shows in darker blue the functionality developed thus far, which revolves chiefly around a port of the libnoise library to GLSL (the shader language that runs on the GPU). The additional requirements are as follows.
Directing sketch and imported data
User input amounts to a low-resolution specification of approximate altitudes which can be read and encoded to an icosphere (height marks at evenly-spaced points over the globe). The problem with many terrain generation algorithms that use fractal methods is that they tend to produce rather homogeneous output, with mountains generally in the centres of continents, coastlines of similar indentedness everywhere, and little variation in the fractal pattern across the globe, like this.
That’s the simple spherical terrain example from the libnoise documentation (http://libnoise.sourceforge.net/tutorials/tutorial8.html). Libnoise goes on to provide a much more sophisticated example comprising over 100 processing modules which, among other things, attempts to remap the generated altitudes so as to move the highest areas nearer the coast (http://libnoise.sourceforge.net/examples/complexplanet/index.html), but that then creates land-locked depressions in the interior.
As far as specifying the overall layout of a planet is concerned, we can use a directing sketch, prepared by hand in a machine-readable format. An example of landscape generation using a directing sketch is included in Torben Mogensen’s planet generator (http://www.diku.dk/~torbenm/Planet/), although what happens there is that random planets are generated and compared with the sketch to return the one that matches it most closely. Mogensen’s sketch is a grid of symbols that tells the program what sort of general shape is wanted in coarse terms. What we will do instead is take low-resolution height mapping and perhaps feature-type mapping from the user and elaborate it at finer levels of detail (the second column of the diagram).
With the benefit of the processing power available nowadays we can use a grid like this not as a filter for generated output but as an input to other processing modules to control the amplitude, overall height and character of terrain generated at different places. To do this we should provide a component which reads (say) a bitmap of height levels into a data structure and makes that available as the output of a noise module, which could replace something like a coarse Perlin generator as the high-level shape of the terrain.
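As a sketch of such a component (the class name and interface here are illustrative, not libnoise’s), a minimal Python version might hold the grid in memory and answer height queries by geographic coordinate:

```python
# Hypothetical sketch: a "raster module" that serves a coarse height
# grid through a noise-module-style lookup.  A real version would load
# the grid from a bitmap file (e.g. with Pillow); here it is just a
# list of rows covering the whole globe in equirectangular layout
# (row 0 = north pole, last row = south pole).

class RasterModule:
    def __init__(self, grid):
        self.grid = grid
        self.rows = len(grid)
        self.cols = len(grid[0])

    def get_value(self, lat, lon):
        """Height of the grid cell nearest (lat, lon) in degrees,
        lat in [-90, 90], lon in [-180, 180)."""
        row = int((90.0 - lat) / 180.0 * (self.rows - 1) + 0.5)
        col = int((lon + 180.0) / 360.0 * self.cols) % self.cols
        return self.grid[row][col]

# A tiny 3x4 "directing sketch": a band of high ground in the tropics.
sketch = RasterModule([
    [0.0, 0.0, 0.0, 0.0],
    [0.2, 0.9, 0.7, 0.1],
    [0.0, 0.0, 0.0, 0.0],
])
print(sketch.get_value(0.0, -90.0))   # → 0.9 (equatorial row)
```

A module like this could be slotted in where a coarse Perlin generator would otherwise supply the high-level shape of the terrain.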
By providing the facility to import raster data from a bitmap file and map it to a geographical extent (which could be the whole world or just a small area), we can support the introduction of a directing sketch and also support the addition of detailed landscaping for local areas where a specific shape is wanted or a particular feature in a particular place. There seem to be a few additional requirements here:
- it is difficult to draw on an equirectangular projection, especially in the high latitudes, so it will be useful to provide sculpting tools (discussed below) to allow drawing to take place in the projection being viewed.
- an imported raster must blend properly with any terrain that surrounds it, so we need to transition at the edges from the imported raster to the existing surrounding terrain.
- moreover, the area to be imported won’t necessarily be a lat-lon rectangle, so we will need to mask out the bits of the raster we don’t want. This point and the one above can be addressed by treating the alpha channel of the raster as a blending coefficient: if it is 0, we ignore the raster altogether and take the existing value at that location; if it is 1, we supplant the existing value at that location with the value in the raster; for values in between, we interpolate using the alpha value as a weight, so that where alpha = 0.7 the output value at that location is 0.3 * the existing value plus 0.7 * the raster value, and so on.
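The blending rule in the last point is just a linear interpolation; a minimal sketch in Python:

```python
# Alpha-as-blending-coefficient: alpha 0 keeps the existing terrain,
# alpha 1 takes the imported raster, anything between interpolates.

def blend(existing, imported, alpha):
    """Linear interpolation weighted by the raster's alpha channel."""
    return (1.0 - alpha) * existing + alpha * imported

print(blend(100.0, 500.0, 0.0))   # → 100.0 (raster ignored)
print(blend(100.0, 500.0, 1.0))   # → 500.0 (raster supplants)
print(blend(100.0, 500.0, 0.7))   # ≈ 380.0 (0.3*100 + 0.7*500)
```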
Tectonic model
A tectonic model attempts to produce a realistic Earth-like planet from a description of tectonic plates, generating the high-level layout by simulating the movement, production, collision and subduction of crust driven by convection in the mantle.
The tectonic model should work by generating an evenly-distributed network of points covering the globe and attaching simulated crust (lithosphere) to these, then moving them about and tracing the development of continents, ocean beds and mountain ranges over some number of iterations of the simulation. The points need to be evenly distributed on the sphere, or else the terrain will be stretched at the poles. This tessellation of the sphere can be done in a number of ways:
- generate a Fibonacci spiral, with points at equal intervals along a spiral that wraps around the sphere many times and has ends at antipodal points,
- generate a cube with subdivided faces, and normalise its points so that they sit on the surface of the enclosing sphere (i.e. are equidistant from the centre),
- generate a large number of random points on the surface of the sphere, and then have them iteratively repel one another so that they move apart until their distribution is more or less even, or
- generate an icosahedron and progressively subdivide its faces, normalising the vertices at each iteration.
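The Fibonacci-spiral option, for instance, can be sketched in a few lines of Python; stepping the longitude by the golden angle at each point is what spreads the points evenly:

```python
import math

# Fibonacci sphere: point i sits at a latitude stepping evenly from
# pole to pole, with longitude advanced by the golden angle each step,
# so the spiral's ends are at antipodal points.

def fibonacci_sphere(n):
    """Return n roughly evenly-distributed unit vectors on the sphere."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * i / (n - 1)           # north pole down to south
        r = math.sqrt(max(0.0, 1.0 - y * y))  # radius of that latitude ring
        theta = golden_angle * i
        points.append((math.cos(theta) * r, y, math.sin(theta) * r))
    return points

pts = fibonacci_sphere(1000)
# every point sits on the unit sphere
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts)
```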
I’ve selected the subdivided icosahedron or “icosphere”, because it gives us not only a tessellation but also a tree-like network structure for the points which lets us very easily “walk” over the points to find the nearest point to any given geolocation (we can perform this operation in O(n), where n is the number of subdivision iterations).
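A minimal Python sketch of the construction (the vertex and face tables are the standard icosahedron; this is illustrative, not the actual implementation):

```python
import math

# Icosphere construction: start with an icosahedron and repeatedly
# split each triangular face into four, pushing every new vertex out
# onto the unit sphere.  The tree of faces this produces is what lets
# a nearest-point lookup walk down one branch per iteration.

def normalise(v):
    m = math.sqrt(sum(c * c for c in v))
    return (v[0] / m, v[1] / m, v[2] / m)

def icosahedron():
    t = (1.0 + math.sqrt(5.0)) / 2.0
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return [normalise(v) for v in verts], faces

def subdivide(verts, faces):
    verts = list(verts)
    cache = {}                 # shared edge midpoints, created once each
    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in cache:
            va, vb = verts[a], verts[b]
            verts.append(normalise(((va[0] + vb[0]) / 2,
                                    (va[1] + vb[1]) / 2,
                                    (va[2] + vb[2]) / 2)))
            cache[key] = len(verts) - 1
        return cache[key]
    out = []
    for a, b, c in faces:      # each face becomes four smaller ones
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out

verts, faces = icosahedron()
for _ in range(2):
    verts, faces = subdivide(verts, faces)
# after n subdivisions: 20 * 4**n faces and 10 * 4**n + 2 vertices
assert len(faces) == 320 and len(verts) == 162
```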
We then assign quantities of crust (oceanic or continental) to each point, assign each point to a tectonic plate, assign motion to each plate, and iteratively compute the motion and update the terrain according to what happens at each plate boundary. This will necessarily be a massively simplified simulation but it can create a coarse distribution of altitudes that more fairly reflects those created by the tectonic processes that have shaped Earth’s geography at the planetary level of detail. Examples of tectonic simulation in software include:
- Experilous Worldbuilder, https://experilous.com/1/store/offer/worldbuilder
- Worldengine, http://world-engine.org. This unfortunately works on a flat plane, but produces lovely maps.
- Tectonics.js, http://davidson16807.github.io/tectonics.js/. Lets you see the development of the planet as it proceeds, but seems to produce lots of artefacts.
Those examples produce good results at lowish levels of detail. What we will do is use the output of a tectonic model in the same way as a directing sketch, which is to say we will generate the model on the basis of data points maybe dozens of kilometres apart and then elaborate that coarse geography using other processes to produce a more detailed terrain from it.
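To make the first steps concrete, here is a heavily simplified Python sketch of assigning points to plates by nearest seed (a spherical Voronoi partition, using the dot product as a proxy for angular distance); all names are illustrative and nothing here comes from the tools listed above:

```python
import math
import random

# Scatter plate "seeds" over the sphere and assign every terrain point
# to the nearest one.  A real model would then give each plate a
# rotation axis and speed, and iterate boundary interactions.

random.seed(42)

def random_unit_vector():
    """Rejection-sample a uniformly distributed point on the sphere."""
    while True:
        v = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        m = math.sqrt(sum(c * c for c in v))
        if 0.0 < m <= 1.0:
            return (v[0] / m, v[1] / m, v[2] / m)

def nearest_plate(point, seeds):
    """Index of the seed with the smallest angular distance to point
    (i.e. the largest dot product, since all vectors are unit length)."""
    return max(range(len(seeds)),
               key=lambda i: sum(p * s for p, s in zip(point, seeds[i])))

seeds = [random_unit_vector() for _ in range(7)]      # 7 plates
points = [random_unit_vector() for _ in range(500)]   # terrain points
plate_of = [nearest_plate(p, seeds) for p in points]
```

In the full model the `points` would of course be the icosphere vertices rather than random samples, so that the plate map lines up with the terrain grid.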
Erosion and hydrography
Simulating the flow of water (and ice) over the surface should elaborate the terrain by adding the effects of erosion, and also give us a hydrographic map showing where rivers and lakes form. Tricky aspects of that are:
- we need to know something about the climate, so that we can say how much water falls where;
- an erosion algorithm must be iterative, so we need to track the altitude of each point across many iterations. This means we need to represent the terrain as discrete points at the outset, and because we are on a sphere, we need to use our icosphere (or a portion of an icosphere) to generate the map of an eroded terrain. Erosion and hydrography will need to be generated for each view we want to create, and it’s not necessarily easy to make sure that one view will be consistent with another.
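As a toy illustration of why the representation must be discrete and iterative, here is a deliberately crude Python sketch of repeated downhill material transport on a small neighbour graph standing in for (part of) the icosphere; the constants and update rule are invented for illustration only:

```python
# Each pass, every point sheds a fraction of its height difference to
# its lowest neighbour, and we track how much water accumulates
# downstream (high flow would mark a river).

# A ridge of five points in a row; neighbours are adjacent indices.
heights = [0.0, 4.0, 10.0, 4.0, 0.0]
neighbours = [[1], [0, 2], [1, 3], [2, 4], [3]]
rainfall = 1.0        # uniform rain; a climate model would vary this
erode_rate = 0.1

for _ in range(50):                        # many iterations are required
    flow = [rainfall] * len(heights)       # water arriving at each point
    # visit points from highest to lowest so water moves downstream
    for i in sorted(range(len(heights)), key=lambda i: -heights[i]):
        low = min(neighbours[i], key=lambda j: heights[j])
        drop = heights[i] - heights[low]
        if drop > 0:
            moved = erode_rate * drop      # sediment carried downhill
            heights[i] -= moved
            heights[low] += moved * 0.5    # half deposits, half washes away
            flow[low] += flow[i]           # water continues downstream

# the peak has been worn down and the valleys partly filled
print(heights)
```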
Sculpting tools
In order to provide detailed features in particular locations, or to arrange geographies to fit the needs of a story, the user will need control over the output terrain, before or after the erosion stage (or both). This is especially true in a geography like Tolkien’s, for example, where terrain has been arranged by unnatural forces to serve particular purposes, such as providing ramparts around Mordor. But natural landscapes will also need to be contrived to some extent to provide features demanded by a story. Again in Tolkien, for instance, a “lonely mountain” (Erebor) is needed in a particular place for the plot of The Hobbit to work.
As discussed above, a user can introduce detail provided in a raster and merge it smoothly with the rest of the terrain using the alpha channel. As also noted there, it probably isn’t that easy to edit terrain detail if
- you can’t see the terrain that’s already there and that you’re trying to blend with;
- you’re working in a projection where the shapes you draw have to differ from the ones you want because of the distortion it introduces. We want to be able to edit the map in front of us in the projection we choose, not have to work in equirectangular and hope that our arctic fortress looks right when we view it in orthographic; and
- you’re working in a high latitude, where the editing tools need to operate on evenly-spaced data points, so we need to represent the terrain in an icosphere (or part of one), not a raster. Look at what happens when we try to edit terrain in a polar region in orthographic projection using Fractal Terrains and you can see what I mean. Above and to the right of the pole is a region I have raised up, and it shows the familiar wedge shape of polar bunching, because I am editing a raster rather than a grid of evenly-spaced data points. To the left of that, the dotted lines show the shape of the “brush” on the surface: highly elongated rather than round.