Friday, July 31, 2009

Navmesh Height Accuracy


I have spent some time recently researching different methods to improve the height accuracy of the navigation meshes. I had four ideas for how I wanted to approach the problem:
  1. Subdivide the navmesh polygons to better conform to the underlying terrain
  2. Use subdivision based displacement mapping per navmesh polygon
  3. Use a method similar to texture mapping per polygon
  4. A user callback function
Option one would not require any changes to the run-time part, but it would cost both memory and performance, since the A* node count would increase.

Option two was my number one choice for a long time. The idea would be to have some sort of subdivision scheme which is easy to reproduce at runtime, and to store deltas per subdivision. In the end I rejected it since I could not find a proper subdivision scheme that would work on polygons and slivers too.

The texture mapping approach would store a small heightmap texture per polygon, or per couple of polygons, much like lightmappers do. The additional awesomeness of this feature is that it would allow embedding other kinds of gameplay related information into the navmesh, such as how stealthy the current location is, or a stance value for the character, etc.
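
To make the idea concrete, here is a minimal sketch of what a lookup into such a per-polygon height texture could look like, assuming a small quantized grid aligned to the polygon's bounding rectangle on the XZ plane (the struct and names are hypothetical, not the actual implementation):

    // Hypothetical per-polygon height patch: a small grid of quantized height
    // samples covering the polygon's bounding rectangle on the XZ plane.
    struct HeightPatch
    {
        float xmin, zmin;           // patch origin in world space
        float cellSize;             // world size of one texel
        int width, height;          // texel counts (assumed >= 2 here)
        float base, scale;          // dequantization: h = base + sample*scale
        const unsigned char* data;  // width*height quantized samples
    };

    // Bilinearly sample the patch at a world space (x,z) position.
    float samplePatchHeight(const HeightPatch& hp, float x, float z)
    {
        // Convert to texel coordinates and clamp to the valid range.
        float fx = (x - hp.xmin) / hp.cellSize;
        float fz = (z - hp.zmin) / hp.cellSize;
        int ix = (int)fx, iz = (int)fz;
        if (ix > hp.width-2) ix = hp.width-2;
        if (ix < 0) ix = 0;
        if (iz > hp.height-2) iz = hp.height-2;
        if (iz < 0) iz = 0;
        float tx = fx - ix, tz = fz - iz;
        if (tx < 0) tx = 0; if (tx > 1) tx = 1;
        if (tz < 0) tz = 0; if (tz > 1) tz = 1;

        // Fetch the four neighbouring samples and blend.
        const unsigned char* d = &hp.data[iz*hp.width + ix];
        float h00 = d[0],        h10 = d[1];
        float h01 = d[hp.width], h11 = d[hp.width+1];
        float h = h00*(1-tx)*(1-tz) + h10*tx*(1-tz) + h01*(1-tx)*tz + h11*tx*tz;
        return hp.base + h*hp.scale;
    }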

Some pathfinding solutions use a user callback mechanism, and I might add that to Detour at some point too. The rationale behind that solution is that the height data can take a lot of memory and is often already stored somewhere else.

After some failed prototypes with the tessellation approach I tried out the texture mapping, and it seems to work better, though not without its own set of problems. The image above shows the raw navigation mesh on top and the texture mapped version below.

The heightfield texel size in the picture is roughly the width of the thin polygons on the bridge. The image shows less than a quarter of the level, and the textures for that level take 128.3 kB. It should be possible to trim that number with some simple compression and by omitting textures which are not really used.

If you have good pointers to simple texture compression schemes which allow fast random lookup, let me know! Bonus points for anything that allows fast bilerp too.

I have tried to build the textures so that the height is a slight overestimate. That should make it easier to trace a perfect ground alignment. It is possible to add different downsampling schemes later on if necessary.
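
For example, a conservative downsampling pass could just keep the maximum of each block of source texels, so the stored height never dips below the real surface. A minimal sketch, assuming a plain float heightfield whose dimensions are divisible by the factor:

    #include <algorithm>
    #include <cfloat>

    // Downsample a heightfield by an integer factor, keeping the maximum of each
    // block so the result never underestimates the source heights.
    void downsampleMax(const float* src, int sw, int sh, float* dst, int factor)
    {
        const int dw = sw / factor, dh = sh / factor;
        for (int dz = 0; dz < dh; ++dz)
        {
            for (int dx = 0; dx < dw; ++dx)
            {
                float hmax = -FLT_MAX;
                for (int z = 0; z < factor; ++z)
                    for (int x = 0; x < factor; ++x)
                        hmax = std::max(hmax, src[(dz*factor+z)*sw + dx*factor+x]);
                dst[dz*dw + dx] = hmax;
            }
        }
    }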

Currently a small texture is stored per polygon. At first I wanted to store a texture per region, but that turned out to be more complicated than I expected. As the texel size increases, more and more problems appear. Since the regions are non-convex, something like a U-shaped staircase can create a lot of problems when you try to downsample it.

There are still some similar problems with the per polygon textures, but they are reasonable to fix.

The next update of Recast and Detour will include the height texture feature. Hopefully out soonish. The code will be laid out so that the textures are optional.

Tuesday, July 28, 2009

Velocity Obstacles pt. 2

A few more thoughts on velocity obstacles.

Firstly, I'm also fed up with performance being reported as FPS figures. Humus had a good post about it from a graphics programmer's point of view. The ClearPath paper uses frames per second all over the place to describe the performance of their algorithm. Academia, if anyone, should embrace the International System of Units, and the peer review process should require more information on how the performance was measured, just like it requires the author to state the prior art.

Secondly, I forgot to mention another recent VO paper:

Independent Navigation of Multiple Robots with Hybrid Reciprocal Velocity Obstacles
http://gamma.cs.unc.edu/HRVO/

The paper provides a nice trick to avoid the reciprocal dance problem usually associated with VOs.

So, now there are VOs, RVOs, NLVOs, FVOs, HRVOs, NHRVOs, PVOs and whatnot. And with every new VO related abbreviation there is one less reason not to invest in proper crowd control in your game.

Path Following Using Velocity Obstacles


Path following amongst dynamic objects is a tricky topic. Thankfully there are tons of good papers about it, and it seems to be a lively topic in academia, and actually one of those that produces good papers with realistic test cases too.

Over the past year, I have experimented with velocity obstacles. I tried several different sampling schemes, but I never quite got my implementation robust or fast enough. There's a recent paper which solves all the problems I had with the approach so far. The paper is titled:

ClearPath: Highly Parallel Collision Avoidance for Multi-Agent Simulation
http://gamma.cs.unc.edu/CA/

It is simple to implement and seems fast enough. I implemented the FVO union and the new velocity selection mechanism, and I like the results so far, as you can see from the image above. I still need to add handling for static obstacles, and then I'm ready to move on to actually creating a path following example for Detour.
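
This is not ClearPath's geometric construction, but just to illustrate the general shape of velocity selection against velocity obstacles, here is a naive sampling-based sketch (all names made up): sample candidate velocities on a grid, penalize the ones that collide with a neighbour within a time horizon, and pick the remaining one closest to the desired velocity.

    #include <algorithm>
    #include <cfloat>
    #include <cmath>

    struct Agent { float px, py; float vx, vy; float radius; };

    // Time until agent 'a', moving with candidate velocity (cvx,cvy), hits
    // agent 'b' moving with its current velocity; FLT_MAX if they never do.
    static float timeToCollision(const Agent& a, float cvx, float cvy, const Agent& b)
    {
        const float px = b.px - a.px, py = b.py - a.py;   // relative position
        const float vx = cvx - b.vx, vy = cvy - b.vy;     // relative velocity
        const float R = a.radius + b.radius;
        const float c = px*px + py*py - R*R;
        if (c < 0.0f) return 0.0f;                        // already overlapping
        const float A = vx*vx + vy*vy;
        const float B = px*vx + py*vy;
        const float D = B*B - A*c;
        if (A < 1e-6f || B <= 0.0f || D < 0.0f) return FLT_MAX; // moving apart or missing
        return (B - sqrtf(D)) / A;
    }

    // Sample candidate velocities, penalize the ones that lead to a collision
    // within the time horizon, and keep the one closest to the desired velocity.
    void selectVelocity(const Agent& self, const Agent* others, int nothers,
                        float dvx, float dvy, float maxSpeed,
                        float& outVx, float& outVy)
    {
        const int GRID = 16;            // sampling resolution
        const float HORIZON = 5.0f;     // how far ahead we care, in seconds
        float bestScore = FLT_MAX;
        outVx = 0.0f; outVy = 0.0f;
        for (int y = 0; y < GRID; ++y)
        {
            for (int x = 0; x < GRID; ++x)
            {
                const float cvx = (x/(float)(GRID-1)*2.0f - 1.0f) * maxSpeed;
                const float cvy = (y/(float)(GRID-1)*2.0f - 1.0f) * maxSpeed;
                if (cvx*cvx + cvy*cvy > maxSpeed*maxSpeed) continue;
                float tmin = FLT_MAX;
                for (int i = 0; i < nothers; ++i)
                    tmin = std::min(tmin, timeToCollision(self, cvx, cvy, others[i]));
                float score = sqrtf((cvx-dvx)*(cvx-dvx) + (cvy-dvy)*(cvy-dvy));
                if (tmin < HORIZON) score += HORIZON - tmin;  // collision penalty
                if (score < bestScore) { bestScore = score; outVx = cvx; outVy = cvy; }
            }
        }
    }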

Friday, July 24, 2009

Libtess2


I have been sitting on this code for way too long. First there was the crappy license (now it is a little less crappy, but still horrible), and then I thought I had to finish nanosvg first to get some data in. It is a great piece of code for prototyping and tools.

Libtess2 is a refactored version of the original libtess which comes with the GLU reference implementation. The code is a good quality polygon tessellator and triangulator. The original code comes with a rather horrible interface, and its performance suffers from lots of small memory allocations. The main focus of the refactoring has been the interface and the memory allocation scheme.

A simple bucketed memory allocator (see Graphics Gems III for reference) was added, which speeds up the code by an order of magnitude (tests showed 15 to 50 times improvement depending on the data). The API allows the user to pass their own allocator to the library, and it is possible to configure the library so that it runs only on a predefined chunk of memory.
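
The allocator itself is nothing fancy; roughly this kind of scheme (a simplified sketch of the idea, not the actual libtess2 code): allocations are bumped out of large blocks and only ever freed all at once, which matches the tessellator's usage pattern.

    #include <stdlib.h>

    // Simplified bucket allocator. Assumes no single allocation is larger
    // than one bucket; everything is released in one go with freeAll().
    struct BucketAlloc
    {
        struct Bucket { Bucket* next; size_t used; unsigned char data[8192]; };
        Bucket* head;

        BucketAlloc() : head(0) {}

        void* alloc(size_t size)
        {
            size = (size + 7) & ~(size_t)7;                   // keep 8-byte alignment
            if (!head || head->used + size > sizeof(head->data))
            {
                Bucket* b = (Bucket*)malloc(sizeof(Bucket));  // grab a new block
                b->next = head; b->used = 0; head = b;
            }
            void* ptr = head->data + head->used;
            head->used += size;
            return ptr;
        }

        void freeAll()
        {
            while (head) { Bucket* next = head->next; free(head); head = next; }
        }
    };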

The API was changed to loosely resemble the OpenGL vertex array API. The processed data can be accessed via getter functions. The code is able to output contours, polygons and connected polygons. The output of the tessellator can also be used as input for a new run, i.e. the user may first want to calculate a union of all the input contours and then triangulate the result.
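
Roughly, the intended usage looks something like the sketch below (simplified; see the headers in the repository for the exact signatures):

    // Rough usage sketch; consult tesselator.h for the authoritative API.
    TESStesselator* tess = tessNewTess(0);        // 0 = use the default allocator

    // Feed in contours as flat arrays of 2D vertices.
    const float contour[] = { 0,0,  10,0,  10,10,  0,10 };
    tessAddContour(tess, 2, contour, sizeof(float)*2, 4);

    // Tessellate into triangles using the non-zero winding rule.
    tessTesselate(tess, TESS_WINDING_NONZERO, TESS_POLYGONS, 3, 2, 0);

    // Read the results back via getters, much like OpenGL vertex arrays.
    const float* verts = tessGetVertices(tess);
    const int* elems = tessGetElements(tess);     // 3 indices per triangle
    const int ntris = tessGetElementCount(tess);

    tessDeleteTess(tess);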

Libtess2 lives at http://code.google.com/p/libtess2/

[Edit] Added download zip for convenience.

Thursday, July 23, 2009

Recast and Detour 1.31 Released

This is a small incremental release. It improves the pathfinding cost and heuristic functions as discussed earlier. The code is available on Google Code as usual.

Recast and Detour 1.31

Monday, July 20, 2009

Recast and Detour Roadmap

While preparing for the 1.31 release, I noticed that I have not updated my roadmap for a while.

The 1.3 release proved to me that the tile based method will be my choice for dynamic navigation meshes. It allows a wide range of different ways to locally adjust the mesh, either by completely rebuilding tiles, or maybe even just cutting holes into it.

It looks like it is possible to speed up the per tile creation process so that a completely dynamic approach will be feasible. Future Recast and Detour development will be aligned towards that. I will probably experiment with the cookie cutter method at some point.


The next push with Recast will be to add more of the functionality that is required for a game. The first two items on my todo list are currently jump links and area annotations.

As the name implies, jump links allow off-mesh navigation links, such as jumping down from a ledge or climbing ladders. My plan is to port the tile navmesh link structure to the static mesh too and extend it to allow user specified extra links. At first the links will need to be generated manually, but I have a couple of ideas how they could be created automatically too. Any ideas regarding that are welcome! Especially how you would like to define the jump trajectory, how the links should be placed, etc.
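
Nothing is set in stone yet, but as a strawman, a user specified link could be little more than a pair of endpoints plus some annotation, something like this hypothetical struct (not an actual Detour type):

    // Hypothetical user specified off-mesh link description.
    struct JumpLink
    {
        float startPos[3];      // where the agent leaves the navmesh
        float endPos[3];        // where the agent lands back on the navmesh
        float radius;           // how close to the start point the agent must get
        unsigned short flags;   // e.g. jump down, ladder, one-way vs. two-way
        unsigned short userId;  // id for matching the link to game-side animation data
    };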

Area annotations are first painted into the voxelized walkable surface and then carried over to the final navigation mesh. The annotations may come from filters, such as dividing the walkable space into areas based on the height of the agent. Another source of annotations could be the level designers; they could mark certain areas as spawn areas, hazard locations, or places where the agent cannot stop, etc. That's very important stuff for getting your gameplay done, and it will be very challenging to get the query API to work with all the new possibilities.
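
As a hedged sketch of how the query side could look (hypothetical types, not the current Detour API): each polygon carries a small set of area flags, and the queries take a filter which masks polygons in or out.

    // Hypothetical area flags carried from the voxel annotation pass to the navmesh.
    enum AreaFlags
    {
        AREA_WALK    = 1 << 0,
        AREA_WATER   = 1 << 1,
        AREA_HAZARD  = 1 << 2,
        AREA_NO_STOP = 1 << 3
    };

    // Hypothetical query filter: a polygon passes if it has at least one of the
    // included flags and none of the excluded ones.
    struct QueryFilter
    {
        unsigned short includeFlags;
        unsigned short excludeFlags;

        QueryFilter() : includeFlags(0xffff), excludeFlags(0) {}

        bool passes(unsigned short polyFlags) const
        {
            return (polyFlags & includeFlags) != 0 && (polyFlags & excludeFlags) == 0;
        }
    };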

Continuing on the gameplay track, I might provide some sample code on how to embed extra data per polygon, such as cover points.

I have also received a lot of feedback that the navigation mesh does not conform to the underlying surface. I have toyed around with some ideas, and it looks like the best solution is to create an optional filter which will tessellate the navigation mesh to conform to the heightfield where there is a lot of height variance. This will come at the cost of extra polygons, but I think in certain situations it will be really useful.


That is the near future. In the longer run I want to add path following code to Detour as an additional layer which uses the services of the navigation mesh code. It would show how to handle multiple agents and dynamic obstacles moving around in the game.

I also want to expand the tile navigation mesh demo to contain a more dynamic environment. That might spin off another layer of code which could be a good starting point for implementing a destruction and physics based layer which controls the tiled navmesh.

In order to make fully dynamic navigation possible I want to take a look at the whole Recast library and see if I can cut some corners when dealing with small tile sizes. It looks like there is a lot of room for optimization there. I have even been brainstorming about a different kind of space partitioning and rasterization which should be both simpler and take less memory. I will work on this item alongside pushing the gameplay features further.

Movement Cost Along Polygons

Somehow this topic keeps on haunting me. While doing some home improvements over the weekend, I remembered a paper from a few years ago:

Theta*: Any-Angle Path Planning on Grids
http://idm-lab.org/bib/abstracts/papers/aaai07a.pdf

It looks like there is a more recent paper on the topic too:

Incremental Phi*: Incremental Any-Angle Path Planning on Grids
http://ijcai.org/papers09/Papers/IJCAI09-303.pdf

The rough takeaway from both papers is that they incorporate line of sight tests into the cost calculation. So instead of accumulating the movement cost as you follow wiggly edge midpoints or polygon centroids, you use the shortest path to the most distant visible parent to calculate the cost.
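
The core of Theta* is just a small change to the relaxation step of A*: when expanding a neighbour, first try to reach it straight from the current node's parent if there is line of sight, and only fall back to the normal edge relaxation if there is not. Roughly (a sketch, with the visibility test and open list left abstract):

    struct Node { float pos[3]; float g; Node* parent; };

    // Provided elsewhere: visibility test, distance, heuristic and the open list.
    bool lineOfSight(const float* a, const float* b);
    float dist(const float* a, const float* b);
    float heuristic(const float* p);
    void pushOrUpdateOpen(Node* n, float f);

    // Theta*-style relaxation: connect the neighbour straight to the current
    // node's parent whenever the parent can see it, otherwise relax as usual.
    void relax(Node* cur, Node* neighbour)
    {
        if (cur->parent && lineOfSight(cur->parent->pos, neighbour->pos))
        {
            const float g = cur->parent->g + dist(cur->parent->pos, neighbour->pos);
            if (g < neighbour->g)
            {
                neighbour->g = g;
                neighbour->parent = cur->parent;
                pushOrUpdateOpen(neighbour, g + heuristic(neighbour->pos));
            }
        }
        else
        {
            const float g = cur->g + dist(cur->pos, neighbour->pos);
            if (g < neighbour->g)
            {
                neighbour->g = g;
                neighbour->parent = cur;
                pushOrUpdateOpen(neighbour, g + heuristic(neighbour->pos));
            }
        }
    }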

It looks like the string pulling method used in Detour could be used in a similar way. That is, it should be possible to create a partial result of the shortest path up to the current polygon in the open list and use that to make a better guess of the cost up to a certain edge. The downside is that the state takes too much memory per node (around 28 bytes), so I don't think the trade-off is good. I might give it a spin at some point just to see if it is indeed possible.

Thursday, July 16, 2009

Cost and Heuristics Take 2


I had a little time today to take another look at the travel costs and heuristics inspired by the discussion with Steve in the previous post.

Detour now uses the polygon edge midpoints instead of polygon centroids to calculate the cost and heuristic. The start and end points are also taken into account, so there is a slight API change.
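
In other words, the cost is accumulated roughly as in the sketch below (illustrative, not the exact Detour code): each leg runs from the previous crossing point to the midpoint of the next portal edge, and the first and last legs use the actual start and end positions.

    #include <math.h>

    static float dist(const float* a, const float* b)
    {
        const float dx = b[0]-a[0], dy = b[1]-a[1], dz = b[2]-a[2];
        return sqrtf(dx*dx + dy*dy + dz*dz);
    }

    // Accumulate the cost of a path that crosses a series of portal edges,
    // given the left/right vertices of each crossed edge in order.
    float pathCost(const float* startPos, const float* endPos,
                   const float (*portalLeft)[3], const float (*portalRight)[3],
                   int nportals)
    {
        float prev[3] = { startPos[0], startPos[1], startPos[2] };
        float cost = 0.0f;
        for (int i = 0; i < nportals; ++i)
        {
            // Midpoint of the portal edge between two adjacent polygons.
            float mid[3];
            for (int j = 0; j < 3; ++j)
                mid[j] = (portalLeft[i][j] + portalRight[i][j]) * 0.5f;
            cost += dist(prev, mid);
            prev[0] = mid[0]; prev[1] = mid[1]; prev[2] = mid[2];
        }
        cost += dist(prev, endPos);   // final leg to the actual end position
        return cost;
    }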

The above screenshot shows the comparison between the old and the new. In the process I also found out that Detour was not handling the A* open list properly (the nodes never got the closed flag, argh!); that is now fixed too. Overestimating the heuristic helps quite a bit in finding a straighter path, but making it too large will cause "wall hugging" when steering around obstacles.

The stuff is in SVN now; I will optimize it a bit and make a new release soonish.

Tuesday, July 14, 2009

Recast 1.3 Released


Recast 1.3 is here! This is a bit of a black-triangle release. The largest chunk of work has gone into bringing the tile based navigation mesh into Detour. The demo is not very extensive yet.

I also went and changed the Detour names a bit. The tiny migration guide is that the static mesh types are now prefixed with dtStat instead of dt; for example, dtPoly is now dtStatPoly. You also need to include DetourNode.h/cpp as well as DetourCommon.h/cpp. The first contains the node pool and queue, and the second contains common utilities shared between the two implementations.

The demo currently works so that when you are using the Create Tile tool (it's the default tool), you can create (click) or remove (shift-click) navmesh tiles on the fly. Do not forget to press Build once before you start clicking! The unexciting trick there is that the tiles are generated on the fly. It looks kinda obvious if you just click around, but it is magic, trust me ;).

The magic the tile navmesh does is that it links adjacent tiles together on the fly in quite an efficient way, without much calculation or memory allocation. Currently the tiles in the demo come from the Recast build process, but it is possible to create them beforehand and stream them in as you need 'em. It is also possible to calculate tiles for certain world states (bridge intact and destroyed) and swap them, or calculate new tiles on the fly as your world changes. I will write more about this in the next post.

I also did some revamping of the GUI; the OpenGL specific stuff now lives in a separate file, and maybe one day it will be possible to put that GUI code into a library too. I also switched to Sean Barrett's awesome stb_truetype.

Download Recast 1.3

Wednesday, July 8, 2009

Tiled Mesh Progress Pt. 2


First successful path query on the tiled mesh got done today!

I initially thought that I would use a similar structure of simply connected polygons as I used for the static meshes, and make the connections between the tiles a special case. It turned out that with my target tile size (32-64 voxels) the opposite was actually true: there are more tile-to-tile connections than intra-tile connections. So I went and changed my data structure a bit, and I got rid of a few levels of indirection, got cleaner code and fewer conditionals, at the expense of a bit more memory.

The tiled mesh is going to be a bit slower than the static mesh, but I expect it to be quite swift still. Once I get the initial full version done, I will experiment a little with different packed data layouts, maybe even limiting the vertices to the range 0-255 (the height will need a few more bits), to get the data size as low as possible. A lot of the calculations can be done in integer coordinates anyway, so it should not be a problem.

One design decision that is constantly biting me in the ass is that I wanted the changes to tiles to be as local as possible. For that reason I allow axis aligned T-junctions at the connections along the tile edges. That adds a bit more work at runtime, as I need to clamp some coordinates when an edge at a tile border is processed, but I think in the end it is worth it. This also forced me to add support for more links per polygon than there are edges, so this work should be useful later when I add support for non-polygonal links (jump/animation links).

Tuesday, July 7, 2009

Tiled Mesh Progress



Looks like the tiled mesh is getting quite a bit more complicated than the regular static one. Especially the polygon IDs are getting a bit tricky. I'm using salted handles to reference the polygons, so the system is able to delete a tile and the user then needs to react to that when using the data. In general I prefer this method, as it does not kill the cache with random callbacks when something changes.
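
The idea behind the salted handles is simple; a rough sketch (illustrative bit counts and names, the real ones may differ): pack a tile index, a polygon index and a salt into one integer, bump the tile's salt whenever the tile is deleted or reused, and reject any handle whose salt no longer matches.

    typedef unsigned int PolyRef;

    // Illustrative layout: 10 bits of salt, 12 bits of tile index and 10 bits
    // of polygon index packed into one 32-bit reference.
    inline PolyRef encodeRef(unsigned int salt, unsigned int tile, unsigned int poly)
    {
        return (salt << 22) | (tile << 10) | poly;
    }

    inline void decodeRef(PolyRef ref, unsigned int& salt, unsigned int& tile, unsigned int& poly)
    {
        salt = (ref >> 22) & 0x3ff;
        tile = (ref >> 10) & 0xfff;
        poly = ref & 0x3ff;
    }

    struct Poly { /* vertices, links, ... */ };
    struct TileData { Poly* polys; int npolys; };
    struct Tile { unsigned int salt; TileData* data; };
    struct NavMesh { Tile* tiles; unsigned int ntiles; };

    // Dereferencing checks that the tile still exists and that the salt matches,
    // so handles into deleted tiles resolve to null instead of stale data.
    const Poly* getPolyByRef(const NavMesh& mesh, PolyRef ref)
    {
        unsigned int salt, it, ip;
        decodeRef(ref, salt, it, ip);
        if (it >= mesh.ntiles) return 0;
        const Tile& tile = mesh.tiles[it];
        if (!tile.data || tile.salt != salt || (int)ip >= tile.data->npolys) return 0;
        return &tile.data->polys[ip];
    }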

So far I have implemented dynamic tile generation, tile stitching and polygon queries. In the demo the user can create, delete and recreate navmesh tiles by clicking around the level, and then use the usual tools to find paths or locate the nearest polygons.

There is a bit too much indirection in the data when following neighbour polygons. The code currently accesses first the 'from' polygon, then a portal, then another portal, and then the 'to' polygon. With some clever misuse of the salted handles I might be able to leave the portals out of the equation completely. One thing that will always cause indirect accesses is that when following a salted handle, I need to check whether the tile the handle points to still exists. Well, once I get the first implementation ready it is time to fire up the profiler.

I will try to get a rough version into SVN as soon as possible, so I can start polishing it and finally make a release.

Thursday, July 2, 2009

Costs and Heuristics


Not good enough yet, but an improvement.

I did a small update to Detour yesterday. The heuristics of the Detour pathfinder have been really bad, so I set about improving them a bit. The pathfinder should now perform much better, especially with meshes generated using the tiled process. I'm pretty sure Mr. Ericson would not approve of my changes, though. I have to admit that I do not understand the problem well enough yet.

I settled on something that does not over-expand the open list and that is visually pleasing as well as coherent.

I noticed that the problem is not only the heuristic, but also the cost function. It is a pain in the ass to create a good cost function which works with many types of polygons. Additionally, the tiled processing sometimes generates regular squares, which further messes up any cost function you come up with.

The cost function I ended up with actually measures the distance from the previous polygon to the next polygon instead of from the current one to the next. It sort of emulates 8-connected links in a 4-connected environment. I think using the polygon midpoints for anything related to polygons and pathfinding is super bad, especially trying to follow a path by following the midpoints! Edge midpoints may be one notch better, but they are a bad estimate too.

Visually, looking at the results Dijkstra produces, the problem looks a lot like the problems in distance field building. I wonder if one of those algorithms could spark some ideas. The CDA algorithms are close to my prev-to-next distance calculation in the sense that they use a larger neighbourhood to make better guesses.


Image from the paper: George J. Grevera, "The 'dead reckoning' signed distance transform", 2004.

Getting to Know My Data


In order to move forward with the tiled navigation, I decided to take a bit more detailed look at what kind of data Recast creates per tile. I knew roughly how much time it takes per tile, and I could tell that there were not that many polygons per tile. I like this part of the job; it always gives me such insights.

I ran my tests with 32x32 tiles (which means 40x40 rasterization when the padding is added, resulting in roughly 50% over-rasterization), with a max of 6 vertices per polygon. The above scenario was generated in 13.5 ms and it touched 11750 triangles during rasterization. I'm sure a lot of unnecessary triangles are being processed.

My current input data is a huge mesh (the above one contains a few million triangles), and I divide it into chunks of less than 512 polygons using an AABB-tree-like algorithm. This is a sort of balance between memory and speed. In a real world example the data would usually be a database of objects, which creates a much nicer partitioning in practice.

After running the tests I noticed that almost half of the tiles in my test data contain just one polygon! The most complex tiles I got had around 32 polygons. This means that if I chose to use HPA* for pathfinding, I could just store a small lookup table inside each tile to find paths within that tile.

Most of the time the technique is able to create a tile in about 15 ms, and the most complex cases do not exceed 50 ms. The process is currently rasterization bound; rasterization takes over 50% of the build time. This is actually good news, since I think there is a lot of room for improvement there.

The results from different levels show similar trends in build speed and polygon counts.


Thanks to John Ratcliff for the test data!