So, being stuck at home for a couple days, I thought I’d try something similar.

First, I had to shape a (nominally) 1×2 inch board into something vaguely sword shaped. For this I used a coping saw, a pocket knife, a rasp, and sandpaper. A table saw, jigsaw, belt sander, and various other power tools would have made the job go a lot easier and quicker, but I don’t have those, so I had to do it by hand. Using the rasp to shape the blade was tiring, to say the least.

Then I glued on some bits of wood I had sawed out with the coping saw to form a cross guard.

Then on with the aluminum tape, spray paint and steel wool.

I used a ball-point pen to do some “elvish” “engraving”.

Came out pretty cool.




There are a lot of different ways of doing it, and I have tried a few, going through a progression which I suspect many others have gone through as well. In the beginning my space game used my own, very simple, and very slow software renderer. I was not concerned even with texture mapping, and required my models to have an extremely low polygon count, and so the merest symbolic suggestion of a sphere was sufficient. Without putting much thought into it, I just created a sphere in OpenSCAD, exported it as an STL file, and called it done.

OpenSCAD sphere

This was fine, until I wanted to try wrapping a texture around the sphere. At the poles, the triangles get smaller and smaller. If you try to use an equirectangular mapping, that is, take an image that is twice as wide as it is high, wrap it into a cylinder, then pinch the top and bottom of the cylinder to fit it onto the sphere, you get quite a lot of distortion at the poles. You can mitigate this somewhat by pre-stretching the image at the poles, but you lose some detail.
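To make the mapping concrete, here is a minimal sketch (not code from my game) of the standard equirectangular lookup: longitude drives u, latitude drives v, and you can see that detail gets stretched as v approaches the poles.

```python
import math

def equirect_uv(x, y, z):
    """Map a point on the unit sphere to equirectangular (u, v) in [0, 1].

    u follows longitude, v follows latitude. Near the poles, many sphere
    points map to a shrinking band of v, which is where the distortion
    described above comes from.
    """
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(z) / math.pi
    return u, v
```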

To avoid this, you can cube map a texture onto the sphere. Cube mapping works with any geometry, and the tessellation of the sphere doesn’t really matter for this purpose, except that the interpolation tends to work a bit better, and the sphere looks “smoother,” when the triangles are more uniformly sized.
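For the curious, here is a sketch of the usual major-axis cube-map projection (this is the generic textbook construction, not necessarily the exact convention my renderer uses): the largest component of the direction vector picks the face, and the other two components become face-local coordinates.

```python
def cubemap_face_uv(x, y, z):
    """Pick the cube-map face a direction vector hits, plus (u, v) on it.

    Faces are labeled '+x', '-x', '+y', '-y', '+z', '-z'; u and v come
    out in [0, 1]. Axis orientation per face is a convention choice.
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, sc, tc, ma = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
    elif ay >= az:
        face, sc, tc, ma = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
    else:
        face, sc, tc, ma = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```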

This leads to the next method of tessellating a sphere, the subdivided icosahedron. This has very nice, almost uniformly sized triangles, and works very well indeed.

Subdivided icosahedron
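The subdivision step itself is simple; here is a generic sketch of one pass (any names here are illustrative, not from my code): split each triangle into four by its edge midpoints, and push the midpoints back out onto the unit sphere.

```python
import math

def subdivide_sphere_triangles(tris):
    """One subdivision pass over a list of spherical triangles.

    tris is a list of ((x,y,z), (x,y,z), (x,y,z)) unit vectors. Each
    triangle becomes four; edge midpoints are re-normalized onto the
    sphere. Starting from an icosahedron and repeating this pass gives
    the near-uniform triangles described above.
    """
    def midpoint(p, q):
        m = tuple((pi + qi) / 2.0 for pi, qi in zip(p, q))
        n = math.sqrt(sum(mi * mi for mi in m))
        return tuple(mi / n for mi in m)

    out = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out
```

(In a real mesh you would share midpoint vertices between neighboring triangles instead of duplicating them, but that bookkeeping obscures the idea.)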

The problem comes when you try to do normal mapping to get nice lighting of detailed features like mountains and craters on the surface of your planets. To do normal mapping, you need per-vertex normal, tangent, and bitangent vectors, which get interpolated across the faces of the triangles in the shader program. This would all work fine on the icosahedron if you could get a continuous field of tangent and bitangent vectors on the surface of a sphere. But you cannot: any such field must have discontinuities (see the Hairy Ball Theorem). Discontinuities aren’t a problem in and of themselves, but they become one when a triangle spans a discontinuity, which throws the interpolation off. The trick is to get the discontinuities to reside between the triangles, and doing that on a subdivided icosahedron would be quite tricky.

This leads us to the next iteration of sphere tessellation, the spherified cube. The idea is you create a cube at the origin, and each of the six faces is divided into a grid, and each square of the grid is cut diagonally into two triangles. Then normalize every vertex. Voila! A sphere! Additionally if you make sure the edges of each of the six faces of the cube do not share vertices, then you can compute the per-vertex normal, tangent, and bitangent vectors independently for each face. This means that the discontinuities reside between the six faces, and each face independently can be free of discontinuities, and since no triangles span the boundary between faces, that means no triangles span discontinuities in the tangent and bitangent vector fields, and the interpolation across triangle faces works everywhere without problems.

Spherified cube
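The construction of one face is just a grid plus a normalize; a minimal sketch (grid size and layout are illustrative):

```python
import math

def spherified_cube_face(n):
    """Vertices for one face (the z = +1 face) of an n x n spherified cube.

    Each point of the flat face grid is normalized onto the unit sphere.
    The other five faces are rotations/reflections of this one. Keeping
    each face's vertices separate (no sharing along edges) is what lets
    each face carry its own discontinuity-free tangent frame.
    """
    verts = []
    for j in range(n + 1):
        for i in range(n + 1):
            x = 2.0 * i / n - 1.0   # flat face spans [-1, 1]
            y = 2.0 * j / n - 1.0
            z = 1.0
            m = math.sqrt(x * x + y * y + z * z)
            verts.append((x / m, y / m, z / m))  # push onto the sphere
    return verts
```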

But, with a naive construction of the cube faces — dividing them into a uniform grid — another minor problem becomes apparent. The triangles near the center of the cube faces are much larger than the triangles near the corners of the cube faces. There is a simple solution to this. Instead of dividing the faces into a regular grid, divide them into a grid based on equal angles between vertices. For example, if each face is to be divided into a 10×10 grid, the naive way would be to divide the length of one edge of each face by ten, and make 100 equal sized squares. Once normalized into a sphere, you get the problematic distortion. Instead, we notice that from one edge of a face to the other we sweep an arc of 90 degrees. So we can start at an angle of -45 degrees and sweep across, placing 11 vertices 9 degrees apart from one another. This leads to more uniformly sized rectangles, and thus, more uniformly sized triangles.

Better spherified cube (more uniformly sized triangles)

There is one more step we can take. If we look at the corners of each face of the cube, we can see that some rectangles get cut into two thin slivers of triangles. If the rectangle had instead been cut by the other diagonal, the resulting triangles would have been fatter, closer to equilateral than to the thin slivers they are now. So when splitting our rectangles into triangles, we can choose to split along the shortest diagonal. This yields even more uniformly sized triangles.

Even better spherified cube (even more uniformly sized triangles)
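The shortest-diagonal choice is a two-line test per quad; a sketch (vertex order and names are illustrative):

```python
def split_quad_shortest_diagonal(a, b, c, d):
    """Split quad (a, b, c, d, given in winding order) into two triangles.

    Cut along whichever diagonal (a-c or b-d) is shorter; this avoids
    the thin sliver triangles that appear near the cube-face corners.
    Points are tuples of coordinates of any dimension.
    """
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    if dist2(a, c) <= dist2(b, d):
        return (a, b, c), (a, c, d)
    return (b, c, d), (b, d, a)
```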

And that is the tessellation I am using lately. Perhaps something even better might come along.


Space Nerds In Space Issue #79: Debug normal maps


The method begins with another method I used for generating cubemap terrain textures. That method involves sampling real terrain elevation data to programmatically create “brushes” on the fly, which are then used to recursively paste and blend elevation patches onto a sphere. The key insight I had was that I could do exactly the same thing with satellite cloud imagery — treat it as if it were elevation data, and paste and blend dynamically created “elevation” patches consisting of slightly processed and sampled grayscale cloud imagery. I didn’t even have to write any code, I just gave my existing program different data to chew on.

This by itself worked ok, but lacked the kind of “swirliness” that you see when you look at pictures of earth taken from space, as you can see below.

For this swirliness, I fell back to my other nifty program gaseous-giganticus, used for creating gas-giant planet textures. I noticed that the “swirliness” of planets like Jupiter is quite different from the swirliness of planets like Earth. I spent some time looking at this super cool real-time earth wind thing and some time playing with the noise-scale parameter of gaseous-giganticus to get a better handle on how to produce a given sort of swirliness. Here are some pictures showing the effect of various noise scale values in gaseous-giganticus.

Once I could produce some satisfactory looking grayscale cloud cubemaps, it was a simple matter to take a white image and use the cloud images as an alpha mask and composite it onto terrain images.
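Per pixel, that composite is just standard alpha blending with the grayscale cloud value serving as both the white cloud color and its own alpha. A sketch of the idea (not my actual tooling, which works on whole cubemap images):

```python
def composite_clouds(terrain, cloud):
    """Alpha-composite white clouds over one terrain pixel.

    terrain is an (r, g, b) tuple with channels in [0, 1]; cloud is a
    grayscale value in [0, 1] used both as the cloud brightness (white)
    and as the alpha mask, exactly as described above.
    """
    return tuple(cloud + (1.0 - cloud) * c for c in terrain)
```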

In my Space Nerds In Space GitHub repository, I describe in some detail exactly how to do all this. See: How to Generate Earth-like Planet Textures.


(You may have to hover the mouse over the little icon, right click, and “View Image” — I think WordPress, or maybe Imgur, doesn’t like hotlinking Jupiter-sized GIFs.)




It’s nice to see someone so prominent and well-spoken present what are essentially the same conclusions I’d privately reached as a college freshman, and which I’d written about before on this very blog back in 2008.
