Making art assets for regular games is hard enough; building full 4D models - things which allow you to look at them from any arbitrary vantage point in 4 dimensions - is just ridiculous. So procedural generation and simple geometric objects will be our friends. In fact, let's just do nothing but try to look at a randomly-generated 4D maze, constructed entirely out of right-angled, unit-sized hypercubes, as viewed through a 3D hyperplane slicing through the maze at an arbitrary angle and position.
Maze generation algorithms are all basically variations on finding spanning trees for the graph of maze cells, and as such are not strongly tied to any particular dimensionality; any of them can be fairly easily converted to work on arbitrary graphs, or on grids of arbitrary dimension. So, we'll set maze generation aside as a separate problem space, and just assume that we can get a good maze map.
One way to render our view of the maze would be to represent the walls of the maze with polygons in 4 dimensions, and calculate the intersection of every edge with the currently-visible hyperplane to generate a 3D model that can be rendered in The Usual Way, using your 3D rendering library of choice.
Calculating all of those intersections, though, seems really complicated, and there's a much simpler option that takes advantage of the fact that our maze is restricted to a hypercubical grid: we can use raycasting!
Raycasting is more typically used to give the illusion of 3D with a 2D map, in games like Wolfenstein 3D. It involves marching a ray from the camera's position through the grid until you find a wall, and then drawing a column of pixels whose height is scaled based on the length of the ray, so that walls appear to recede into the distance. The code for this is fairly simple, and when you only have to cast one ray per pixel of horizontal resolution, it's fast enough that you can even do it in pure JavaScript in a web browser. In order to render a 3D slice of a 4D map, we'll have to instead cast one ray for every pixel on the whole two-dimensional screen. That's significantly slower, but the code for casting each individual ray is not much more complex; it's merely about twice as long as that for a traditional raycaster, due to handling twice as many cases.
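To put that in concrete terms: at a resolution of 800×600, a traditional raycaster casts 800 rays per frame, while rendering a 3D slice requires 480,000 of them, six hundred times as many.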
Casting Rays & Finding Walls
To get reasonable framerates, I implemented my 4D raycaster in GLSL to run on a GPU. The first step is calculating the appropriate ray given a pixel position:
uniform float u_depth;
uniform vec2 u_resolution;
uniform vec4 u_origin;
uniform vec4 u_rgt;
uniform vec4 u_up;
uniform vec4 u_fwd;

void main(){
    vec2 coords = gl_FragCoord.xy - (u_resolution / 2.0);
    vec4 ray = u_fwd*u_depth + u_rgt*coords.x + u_up*coords.y;
    gl_FragColor = cast_vec(u_origin, ray, 10.0);
}
(For the uninitiated, "uniform" variables have their values passed in from outside the shader program.)
This code converts the pixel location into a new coordinate system with the origin at the center of the field of view, instead of one corner. Then it calculates a direction in which to cast a ray based on the camera's egocentric coordinate system; the depth of the virtual camera in front of the screen specifies how much of the forward direction to use, which constrains the viewing angle, while the x and y pixel coordinates indicate how far to go along the right and up axes. Note that, so far, aside from the fact that all of the vectors have four components, this looks entirely like 3D code; there is no mention of the fourth, ana, axis. This is because we only care about casting rays contained in the hyperplane defined by the F, R, and U axes, so we only need to construct rays with components from those basis vectors. If we wanted to view a projection of the world into a 3D hyperplane, rather than a slice, then we could define an arbitrary w-resolution, in addition to the x and y resolutions, and cast additional rays with components from the ana basis vector of the camera's coordinate system. But multiplying the number of rays like that would not do good things to your framerate!
(Note that, although we only use a 3D subset of the camera's egocentric coordinate system, the resulting ray can, and usually will, have non-zero components in all four of the x, y, z, and w axes. This will only fail to be the case if the egocentric coordinate system is exactly parallel to the map grid.)
After calculating the ray corresponding to a particular pixel, we pass the origin point, the ray, and a distance limit into the raycasting algorithm, which returns a pixel color. So, let's look at how that works. First, a useful utility function:
// Find the distance to the next cell boundary
// for a particular vector component
float cast_comp(vec4 v, float o, out int sign, out int m){
    float delta, fm;
    if(v.x > 0.0){
        sign = 1;
        fm = floor(o);
        delta = fm + 1.0 - o;
    }else{
        sign = -1;
        fm = ceil(o - 1.0);
        delta = fm - o;
    }
    m = int(fm);
    return length(vec4(delta, delta*v.yzw/v.x));
}
This does three things:
- Calculates the distance you have to move in the direction of a ray, starting from a particular origin point, before you hit a cell boundary in a particular grid-axis-aligned direction.
- Calculates the sign of the ray's propagation along a particular grid-axis direction.
- Calculates the integer position of the grid cell in which the ray originates along a particular grid-axis.
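As a worked example: for a normalized ray with v.x = 0.8 starting at o.x = 2.3, the positive branch gives sign = 1, fm = 2.0, delta = 0.7, and m = 2, and the returned distance is 0.7 / 0.8 = 0.875; that is, the ray has to travel 0.875 units before it crosses the grid boundary at x = 3.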
// Starting from the player, we find the nearest gridlines
// in each dimension. We move to whichever is closer and
// check for a wall. Then we repeat until we've traced the
// entire length of the ray.
vec4 cast_vec(vec4 o, vec4 v, float range){
    v = normalize(v);

    // Get the initial distances from the starting
    // point to the next cell boundaries.
    ivec4 s, m;
    vec4 dists = vec4(
        cast_comp(v.xyzw, o.x, s.x, m.x),
        cast_comp(v.yxzw, o.y, s.y, m.y),
        cast_comp(v.zxyw, o.z, s.z, m.z),
        cast_comp(v.wxyz, o.w, s.w, m.w)
    );

    // Inverting the elements of a normalized vector
    // gives the distances you have to move along that
    // vector to hit a cell boundary perpendicular
    // to each dimension.
    vec4 deltas = abs(vec4(1.0/v.x, 1.0/v.y, 1.0/v.z, 1.0/v.w));
The deltas give us the last bit of information we need to initialize the algorithm: how big the steps are in the direction of the ray in order to move one full grid unit in a grid-axis-aligned direction. This is different from the initial distance needed to get from an origin point somewhere inside a cell to the cell boundary, calculated above.
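For example, with v = normalize(vec4(0.8, 0.6, 0.0, 0.0)), the deltas work out to (1.25, 1.667, inf, inf): moving one grid unit along x consumes 1.25 units of ray length, moving one unit along y consumes about 1.667, and the infinite entries (from division by zero, on typical GPU float behavior) guarantee that the stepping loop below never chooses the z or w axes.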
    // Keep track of total distance.
    float distance = 0.0;

    // Keep track of the dimension perpendicular
    // to the last cell boundary, and the value of the
    // last cell the ray passed through.
    int dim, value;

    // while loops are not allowed, so we have to use
    // a for loop with a fixed large max number of iterations
    for(int i = 0; i < 1000; i++){

        // Find the next closest cell boundary
        // and increment distances appropriately
        if(dists.x < dists.y && dists.x < dists.z && dists.x < dists.w){
            dim = 1*s.x;
            m.x += s.x;
            distance = dists.x;
            dists.x += deltas.x;
        }else if(dists.y < dists.z && dists.y < dists.w){
            dim = 2*s.y;
            m.y += s.y;
            distance = dists.y;
            dists.y += deltas.y;
        }else if(dists.z < dists.w){
            dim = 3*s.z;
            m.z += s.z;
            distance = dists.z;
            dists.z += deltas.z;
        }else{
            dim = 4*s.w;
            m.w += s.w;
            distance = dists.w;
            dists.w += deltas.w;
        }
In this section, we keep track of how long the ray would have to be to reach the next cell boundary along each of the four grid axes, stored in the dists vector. Whichever one is smallest becomes the new length of the ray. Additionally, we record which axis we stepped across (in dim), update the coordinates of the cell through which the ray will pass next (in the m vector), and then increment the length of the theoretical ray which would hit the next cell boundary along the same grid axis.
After that, we just check to see if we've hit a wall yet. If there were any other objects in the game, this is where we'd add more detailed intersection checks for the objects within a particular cell, but with a simple maze we only have to check the value of the cell to see if it is a wall or not:
        value = get_cell(m.x, m.y, m.z, m.w);

        // If the cell is a wall, or we've gone too far, terminate.
        if(value == 255 || distance >= range){
            break;
        }
    }

    // Calculate the actual intersection coordinates
    // and use them to look up a texture
    vec4 ray = o + distance * v;
    vec3 tex = calc_tex(dim, ray);
    return vec4(tex, 1.0);
}
Once the casting loop terminates, we can use the coordinates of the end of the completed ray to look up the color of the bit of wall we ran into.
The conversion of the vec3 color to a vec4 in the last line is just to add in an alpha channel, which is always 1, and is thus left out of the procedural texture code below for simplicity.
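One piece not shown above is get_cell, which just maps integer cell coordinates to a cell value. As a minimal sketch, assuming the maze map is packed one cell per texel into a 2D texture (a common workaround in WebGL 1, which has no integer or higher-dimensional textures; u_map and u_size are hypothetical uniforms, not part of the original shader), it might look something like this:

uniform sampler2D u_map;
uniform vec4 u_size; // hypothetical: grid dimensions along x, y, z, w

// Hypothetical sketch: read one cell of the 4D grid from a 2D
// texture laid out as (u_size.x * u_size.y) columns by
// (u_size.z * u_size.w) rows, one cell per texel.
int get_cell(int x, int y, int z, int w){
    // Wrap out-of-range coordinates back into the grid, so rays
    // that march past the edge of the map see a tiled maze.
    vec4 c = mod(vec4(float(x), float(y), float(z), float(w)), u_size);
    vec2 dims = vec2(u_size.x * u_size.y, u_size.z * u_size.w);
    // x varies fastest across columns; z varies fastest down rows.
    vec2 uv = (vec2(c.y * u_size.x + c.x, c.w * u_size.z + c.z) + 0.5) / dims;
    // The cell value is stored in the red channel, scaled to 0..1.
    return int(texture2D(u_map, uv).r * 255.0);
}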
3D Textures
Once we've found a wall at the end of a ray, the coordinate for the axis perpendicular to that wall (identified by the value of dim) will have an integer value, while the fractional parts of the remaining three coordinates describe a position inside the cube which forms the boundary of one side of the hypercubical grid cell. This is exactly analogous to the 3D situation, where we would have two non-integer coordinates describing a location inside a square that forms the boundary of a cubical grid cell, or the 2D situation (where raycasting is more typically applied), where we have one non-integer coordinate identifying a position along a line forming the boundary of a square.

In order to figure out the color of a particular spot on the 3D wall of a 4D cell, we'll need three-dimensional textures. Say that we want our walls to be 512 pixels wide. For a 2D wall around a 3D cell, with one byte for each of three color channels, that means a texture taking up 512 × 512 × 3 bytes, or 768KB, in memory. That's entirely reasonable. We could easily have different textures for walls facing in all six directions around a cube, to help orient you in the maze. But for a 3D wall around a 4D cell, 512 × 512 × 512 × 3 bytes comes to 384 megabytes for each texture. Even if I had the digital artistic talent to create full 3D textures like that, that's rather a lot of memory to devote to textures. Once again, we can turn to procedural generation to create interesting wall textures.
There are lots of ways to do procedural texture generation, but the algorithm I picked looks basically like this:
uniform vec3 u_seed;
uniform sampler2D u_colorscale;

const vec3 grey = vec3(0.2);
const vec3 red = vec3(1.0,0.5,0.5);
const vec3 green = vec3(0.5,1.0,0.5);
const vec3 blue = vec3(0.5,0.5,1.0);
const vec3 yellow = vec3(0.71,0.71,0.5);

vec3 calc_tex(int dim, vec4 ray){
    ray = fract(ray);
    vec3 coords, tint;
    if(dim == 1 || dim == -1){
        coords = ray.yzw;
        tint = red;
    }
    else if(dim == 2 || dim == -2){
        coords = ray.xzw;
        tint = green;
    }
    else if(dim == 3 || dim == -3){
        coords = ray.xyw;
        tint = blue;
    }
    else if(dim == 4 || dim == -4){
        coords = ray.xyz;
        tint = yellow;
    }
    float h = julia(coords, u_seed);
    if(h == 0.0){
        return mix(tint/16.0, grey, layered_noise(coords, 3, 3));
    }
    vec3 base = texture2D(u_colorscale, vec2(h, 0.5)).rgb;
    return mix(tint/8.0, base, layered_noise(coords, 5, 3));
}
It starts with a basic case analysis to pick some parameters based on which direction we hit the wall from, and thus which kind of hypercube boundary cell we're calculating the texture for.
Then, the basic texture is drawn from a 3D generalization of a Julia fractal. In order to break up the psychedelic color bands produced at the edges, we layer on top of this a bunch of 3D simplex noise at multiple frequencies; the layered_noise function (implementation not shown) takes a start and a length for the range of octaves of basic noise that should be added together.
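Neither julia nor layered_noise is shown above. As a rough, hypothetical sketch of the general shape such functions could take (not the exact implementations used; snoise is assumed to be a standard 3D simplex noise routine, e.g. from Ashima Arts' webgl-noise):

// Hypothetical sketch: escape-time iteration of a quaternion
// Julia set restricted to a 3D slice; returns 0.0 for points
// that never escape, and a value in (0, 1] otherwise.
float julia(vec3 p, vec3 seed){
    vec4 z = vec4(p, 0.0);
    vec4 c = vec4(seed, 0.0);
    for(int i = 0; i < 32; i++){
        // Quaternion squaring: (a, v)^2 = (a^2 - |v|^2, 2av)
        z = vec4(z.x*z.x - dot(z.yzw, z.yzw), 2.0*z.x*z.yzw) + c;
        if(dot(z, z) > 4.0){
            return float(i + 1) / 32.0;
        }
    }
    return 0.0;
}

// Hypothetical sketch: sum len octaves of simplex noise starting
// at octave start, rescaled into [0, 1] for use with mix().
float layered_noise(vec3 coords, int start, int len){
    float total = 0.0;
    float norm = 0.0;
    // GLSL ES loops need a fixed bound, so iterate over a maximum
    // octave count and skip the octaves that are out of range.
    for(int octave = 0; octave < 8; octave++){
        if(octave >= start && octave < start + len){
            float freq = pow(2.0, float(octave));
            // Higher octaves add finer but fainter detail.
            total += snoise(coords * freq) / freq;
            norm += 1.0 / freq;
        }
    }
    return 0.5 + 0.5 * total / norm;
}

In this sketch, a julia return value of exactly 0.0 marks a point that never escaped the iteration, which matches the h == 0.0 special case that triggers the grey interior coloring in calc_tex above.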
In principle, we could produce a different 3D texture for each of the 8 cubes that form the boundary of each 4D cell. In fact, with the full coordinates of the ray endpoint, we could even produce different textures for every cube in the entire maze. In practice, however, that turns out to be more code for little benefit. Since the appearance of any particular wall depends heavily on your precise 4D position and orientation, revealing a very specific 3D slice of the 4D whole, having unique textures for every cube in the maze doesn't really seem to help that much with identifying your position.
So, to help with orientation, instead of calculating a completely different texture for every 3D face of a hypercube, we just add a tint of a particular color to the basic texture to identify the orientation of each cube with respect to the grid.
That ends up producing views something like this:
Note that not all of the walls in this scene seem to be at right angles! This despite the fact that the world consists of nothing but maze walls aligned with a perfect right-angled hypercubical grid.
The reason for this is fairly straightforward; the viewing volume isn't parallel with the wall grid. Just like an oblique 2D slice through a 3D cube can end up with a hexagonal outline, a 3D slice through a 4D cube can also end up producing an outline with 2D segments meeting at all sorts of weird angles.