Voxel-Based Navmesh
Introduction
In game development, one crucial aspect of modern AI is the ability to navigate through a level. This is often accomplished using a navmesh, short for navigation mesh. Essentially, a navmesh is a graph-like structure generated from a set of meshes and a handful of parameters, and it represents where an AI character can move within the game environment.
Popular game engines like Unity and Unreal Engine have built-in support for generating navmeshes for their integrated AI systems. These engines, along with many others, typically use a modified version of the Recast library, a well-known tool for generating navmeshes.
To simplify my own adaptation of this approach, I narrowed it down to three main steps: creating a voxel volume, rasterising the voxels, and generating a mesh from the remaining voxels.
Creating the volume
To start, we need to establish the boundaries of our volume, depicted in the image below as a red line box. The volume determines which objects are included in the navmesh: an object is included if its mesh bounds intersect the volume's bounds. This is the first crucial step in the process.
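The post does not show code for this check, but it amounts to a standard axis-aligned bounding box (AABB) overlap test. Below is a minimal C++ sketch, assuming boxes stored as min/max corners; the Aabb type and intersects function are illustrative names, not taken from the original project.

```cpp
#include <array>

// Axis-aligned bounding box given by its min/max corners.
struct Aabb {
    std::array<float, 3> min;
    std::array<float, 3> max;
};

// Two AABBs overlap if and only if they overlap on every axis.
bool intersects(const Aabb& a, const Aabb& b) {
    for (int axis = 0; axis < 3; ++axis) {
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;
    }
    return true;
}
```

Any object whose mesh bounds pass this test against the volume's bounds is fed into the voxelisation step.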
Once the bounds were defined, the next step was to create all the necessary voxels and ensure they were contained within the volume. For each axis, I calculated the voxel count by dividing the absolute length of that axis by the voxel size. The offset was then calculated from the difference between that count multiplied by the voxel size and the signed axis length, which ensured that all the voxels were correctly positioned within the defined volume.
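As a rough illustration of that per-axis calculation, here is a small C++ sketch. It splits the leftover space evenly to centre the row of voxels on the axis, which is one reasonable reading of the offset described above; the AxisLayout and layoutAxis names are made up for this example.

```cpp
#include <cmath>

// Number of voxels that fit along one axis, and the offset that positions
// that row of voxels inside the volume (the grid rarely fills the axis
// exactly).
struct AxisLayout {
    int   count;   // how many voxels fit along this axis
    float offset;  // shift applied so the voxels sit inside the volume
};

AxisLayout layoutAxis(float minCoord, float maxCoord, float voxelSize) {
    const float length = std::fabs(maxCoord - minCoord);
    AxisLayout layout;
    layout.count = static_cast<int>(length / voxelSize);
    // Leftover space between the grid and the volume, split evenly here so
    // the voxels end up centred on the axis (an assumption of this sketch).
    layout.offset = (length - layout.count * voxelSize) * 0.5f;
    return layout;
}
```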
The last step is to repeat this calculation for all three axes and then iterate over each axis to create the voxels. With that done, the voxel volume is generated and can be used to rasterise the voxels and build a mesh from those that remain.
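The voxel creation itself then reduces to a triple nested loop over the per-axis counts. The sketch below generates voxel centre points from the volume's minimum corner; all names are illustrative, and using centre points (rather than corners) is an assumption.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate the centre point of every voxel in the volume.
// `origin` is the minimum corner of the volume bounds; `count` and `offset`
// are the per-axis values computed above.
std::vector<Vec3> createVoxels(const Vec3& origin,
                               const int count[3],
                               const float offset[3],
                               float voxelSize) {
    std::vector<Vec3> centres;
    centres.reserve(static_cast<std::size_t>(count[0]) * count[1] * count[2]);
    for (int ix = 0; ix < count[0]; ++ix)
        for (int iy = 0; iy < count[1]; ++iy)
            for (int iz = 0; iz < count[2]; ++iz)
                centres.push_back({
                    origin.x + offset[0] + (ix + 0.5f) * voxelSize,
                    origin.y + offset[1] + (iy + 0.5f) * voxelSize,
                    origin.z + offset[2] + (iz + 0.5f) * voxelSize });
    return centres;
}
```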
Rasterising the volume
The initial stage of rasterisation was determining which voxels to keep by performing voxel-on-triangle collision detection. To do this, I first checked whether any of the voxel's points lay within the triangle's projection along its normal direction. This projection can be pictured as a Toblerone box: if a point is inside the box, it is inside the triangle. I then checked whether the signed distance from the triangle's plane to the point, measured along the normal, was within the voxel size. Together these tests identified which voxels intersected the triangles and needed to be retained for the final mesh generation.
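Here is a hedged C++ sketch of those two tests: a point-in-prism check built from the triangle's edges and normal (the Toblerone test), followed by a signed-distance check against the triangle's plane. For brevity it tests a single point per voxel (for example the centre), whereas the post checks each of the voxel's points; the helper names and the exact threshold are assumptions.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(const Vec3& a, const Vec3& b) { return {a.y * b.z - a.z * b.y,
                                                    a.z * b.x - a.x * b.z,
                                                    a.x * b.y - a.y * b.x}; }

// Does `p` lie inside the infinite triangular prism ("Toblerone") swept along
// the triangle's normal? Each side plane is built from an edge and the normal;
// `p` must lie behind all three.
bool insidePrism(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c) {
    const Vec3 n = cross(sub(b, a), sub(c, a));          // triangle normal (unnormalised)
    const Vec3 verts[3] = {a, b, c};
    for (int i = 0; i < 3; ++i) {
        const Vec3& e0 = verts[i];
        const Vec3& e1 = verts[(i + 1) % 3];
        const Vec3 sideNormal = cross(sub(e1, e0), n);   // points out of the prism
        if (dot(sideNormal, sub(p, e0)) > 0.0f)
            return false;
    }
    return true;
}

// A voxel point `p` is kept if it sits inside the prism and its signed
// distance to the triangle's plane is within the voxel size.
bool voxelTouchesTriangle(const Vec3& p, float voxelSize,
                          const Vec3& a, const Vec3& b, const Vec3& c) {
    if (!insidePrism(p, a, b, c))
        return false;
    Vec3 n = cross(sub(b, a), sub(c, a));
    const float len = std::sqrt(dot(n, n));
    if (len == 0.0f) return false;                        // degenerate triangle
    n = {n.x / len, n.y / len, n.z / len};
    const float signedDistance = dot(n, sub(p, a));
    return std::fabs(signedDistance) <= voxelSize;
}
```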
The navmesh was starting to take shape, but one issue was not immediately apparent: some voxels from the bottom mesh were still present inside the meshes. To address this, I opted to create convex hulls for all the meshes contained within the volume, which let me check whether any voxels lay inside a hull (and therefore inside a mesh). At this point I was slightly behind schedule, so I took a shortcut: all the examples I had worked with so far only involved cubes, which are convex by default. Once these interior voxels were removed, the navmesh looked significantly better, and it also helped to eliminate some of the edges.
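As a sketch of the hull test: a convex hull can be represented by its face planes with outward normals, and a point is inside the hull when it lies behind every plane. For the cube shortcut mentioned above, those planes are simply the six faces of the box. The Plane representation and the function name below are assumptions.

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 normal; float distance; };   // outward normal; dot(normal, p) == distance on the plane

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A point is inside a convex hull when it lies behind every face plane.
bool insideConvexHull(const Vec3& p, const std::vector<Plane>& hullPlanes) {
    for (const Plane& plane : hullPlanes) {
        if (dot(plane.normal, p) - plane.distance > 0.0f)
            return false;    // in front of a face -> outside the hull
    }
    return true;
}
```

Voxels for which this test returns true, and that were not already kept by the triangle intersection test, can then be discarded.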
Turning it into a mesh
The final step was to create a more visual representation, and to accomplish this I used the well-known Marching Cubes algorithm. It turns voxels into a mesh by examining the voxels to the left, right, top, and bottom of each voxel and using that information to decide which triangles to add for that voxel. This produced a detailed mesh that more accurately represented the navmesh.
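Marching Cubes proper derives a case index from the eight corners of each cell and looks the triangles up in a 256-entry table, which is too long to reproduce here. The paragraph above describes something closer to a neighbour check, so the sketch below implements that simpler idea instead: for each solid voxel, emit a quad on every face whose neighbouring voxel is empty. Types and names are illustrative, and triangle winding / normal orientation is not handled.

```cpp
#include <array>
#include <set>
#include <vector>

// A solid voxel is identified by its integer grid coordinate.
using VoxelCoord = std::array<int, 3>;

struct Vertex { float x, y, z; };

// For every solid voxel, emit a quad (as two triangles) on each face whose
// neighbour is empty. This is a neighbour-based face-culling pass, not the
// full Marching Cubes table lookup.
void buildMesh(const std::set<VoxelCoord>& solid, float voxelSize,
               std::vector<Vertex>& vertices, std::vector<int>& indices) {
    static const int neighbours[6][3] = {
        {-1, 0, 0}, {1, 0, 0}, {0, -1, 0}, {0, 1, 0}, {0, 0, -1}, {0, 0, 1}};

    for (const VoxelCoord& v : solid) {
        for (const auto& n : neighbours) {
            const VoxelCoord adjacent = {v[0] + n[0], v[1] + n[1], v[2] + n[2]};
            if (solid.count(adjacent))
                continue;                           // face hidden by a neighbour

            // Axis the face normal points along, and the two axes spanning the face.
            const int axis = (n[0] != 0) ? 0 : (n[1] != 0) ? 1 : 2;
            const int u = (axis + 1) % 3;
            const int w = (axis + 2) % 3;

            float corner[3];                        // face corner in world space
            corner[axis] = (v[axis] + (n[axis] > 0 ? 1 : 0)) * voxelSize;

            const int base = static_cast<int>(vertices.size());
            for (int du = 0; du <= 1; ++du) {
                for (int dw = 0; dw <= 1; ++dw) {
                    corner[u] = (v[u] + du) * voxelSize;
                    corner[w] = (v[w] + dw) * voxelSize;
                    vertices.push_back({corner[0], corner[1], corner[2]});
                }
            }
            // Two triangles covering the exposed face.
            indices.insert(indices.end(), {base, base + 1, base + 2,
                                           base + 2, base + 1, base + 3});
        }
    }
}
```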