Sunday, August 29, 2010

My Summer of Code 2010

This is where all the magic happened!

You may have noticed the increased traffic in the commit log and on the blog. But now Mikko's Summer of Code has come to an end. I used a few weeks of downtime between projects to flush my TODO queue. I'm starting a new project this week, and I will be back to my one-day-a-week schedule with RC&DT.

Here are some thoughts on what is likely to happen in the near future.

Multi-Agent Navigation

I got the multi-agent navigation to a much further state than I expected. The code is currently a huge pile of spaghetti, and the next step is to put it all together. I have been trying to think about the API, and it seems to be quite a big task. I think it is good that my progress is slowing down so that I can let things simmer a bit.
If you have thoughts on how you would like to integrate the multi-agent navigation into your project, please let me know!
Would you like to use it as a component? Do you want a manager you can tick, or do you prefer to do that yourself? Do you use physics for collision detection? Where in your agent update cycle are you planning to put the path following code? Any input is welcome. If you don't want to share your secrets here, you can always send me an email too.

Everyone integrates path following slightly differently, and I hope to support as many people as possible. On one extreme are the folks who would just like something plug'n'play, and on the other are the folks who want maximum performance, which means I need to split the API so that you can tear everything apart as necessary.

My current idea is to separate the code into roughly four pieces:
  • mover
  • steering
  • obstacle avoidance
  • collision detection
The mover takes care of handling the path corridor and movement along the navmesh surface. In theory you could use it for NPC or even player movement. It will also keep track of whether a new path should be queried.
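
To make the split concrete, here is a rough sketch of what the mover's interface might look like. The class name, methods, and corridor size below are hypothetical illustrations of the responsibilities, not the final API:

#include "DetourNavMesh.h"   // for dtPolyRef

// Hypothetical mover sketch (illustration only, not the final API).
class Mover
{
public:
    // Install the corridor produced by a path query towards 'target'.
    void setCorridor(const float* target, const dtPolyRef* path, const int npath);
    // Move along the navmesh surface towards 'desiredPos', pruning visited
    // polygons from the front of the corridor as the agent advances.
    void updatePosition(const float* desiredPos);
    // True when the corridor has been invalidated or has degraded enough
    // that the owner should schedule a new path query.
    bool needsReplan() const;
    const float* getPosition() const { return m_pos; }

private:
    float m_pos[3];
    float m_target[3];
    dtPolyRef m_corridor[256];
    int m_ncorridor;
};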

Steering is game and behavior specific, so I think I will add some helper functions to the mover class that allow you to build your own. I will provide some examples of how to do it too.

Obstacle avoidance will be a separate class which can be used even without any of the mover stuff. Basically you feed in obstacles and the agent state (position, radius, velocity, desired velocity), and it will spit out a new velocity which should result in fewer collisions than the desired one. The first implementation will be sampled VO; I'm looking into adding ORCA later too.
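
To give an idea of what I mean, here is a rough sketch of that interface; the names are hypothetical, not the final API:

// Hypothetical obstacle avoidance sketch (illustration only, not the final API).
// The caller gathers the obstacles; this class only picks a velocity.
class ObstacleAvoidance
{
public:
    // Clear the obstacle list before each query.
    void reset();
    // Obstacles are fed in by the caller, e.g. from the collision detection step.
    void addCircle(const float* pos, const float rad, const float* vel);
    // Samples candidate velocities around the desired velocity and returns the one
    // with the fewest predicted collisions (sampled velocity obstacles).
    void sampleVelocity(const float* pos, const float rad,
                        const float* vel, const float* desiredVel,
                        float* newVel) const;
};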

Collision detection will be responsible for finding the neighbors to avoid and collide with. This is potentially something that your engine does already, so I might include this only in the example code.

Many of the tasks required for multi-agent navigation are embarrassingly parallel (see all the for loops in the current CrowdManager::update()). My idea is to tailor the APIs so that, when possible, you just pass in the data that is absolutely necessary. For example, the obstacle avoidance code will not figure out the neighbors itself; you need to feed them in.
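
The toy sketch below is my own illustration (not the RecastDemo code) of that kind of batched update: each pass only reads what the previous pass produced, so each for loop can be parallelized on its own, and the avoidance pass only sees the neighbors that were gathered for it.

// Toy illustration of the batched update idea (not RecastDemo's CrowdManager).
struct ToyAgent
{
    float pos[2], vel[2], dvel[2];   // position, velocity, desired velocity (2D for brevity)
    int neis[8];                     // neighbor indices, filled by the collision pass
    int nneis;
};

static float distSqr(const float* a, const float* b)
{
    const float dx = b[0]-a[0], dy = b[1]-a[1];
    return dx*dx + dy*dy;
}

void updateCrowd(ToyAgent* ag, const int nag, const float dt)
{
    // 1) Collision detection pass: gather nearby agents for each agent.
    for (int i = 0; i < nag; ++i)
    {
        ag[i].nneis = 0;
        for (int j = 0; j < nag && ag[i].nneis < 8; ++j)
            if (i != j && distSqr(ag[i].pos, ag[j].pos) < 4.0f*4.0f)
                ag[i].neis[ag[i].nneis++] = j;
    }

    // 2) Velocity pass: only the neighbors gathered above are used. A real
    //    implementation would run sampled VO here; this placeholder just adds a
    //    little separation away from neighbors to keep the sketch self-contained.
    for (int i = 0; i < nag; ++i)
    {
        float vx = ag[i].dvel[0], vy = ag[i].dvel[1];
        for (int j = 0; j < ag[i].nneis; ++j)
        {
            const ToyAgent& nei = ag[ag[i].neis[j]];
            vx += (ag[i].pos[0] - nei.pos[0]) * 0.1f;
            vy += (ag[i].pos[1] - nei.pos[1]) * 0.1f;
        }
        ag[i].vel[0] = vx;
        ag[i].vel[1] = vy;
    }

    // 3) Integration pass: the mover would also clamp the result to the navmesh here.
    for (int i = 0; i < nag; ++i)
    {
        ag[i].pos[0] += ag[i].vel[0]*dt;
        ag[i].pos[1] += ag[i].vel[1]*dt;
    }
}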

I will create an example which can be used for learning, and also something that can be integrated as is, for those who would just like a quick and dirty integration.

Detour Error Handling

There are some cases where Detour should be more verbose about errors. My work on the multi-agent navigation has revealed some shortcomings in the error handling code, and the sliced pathfinder work surfaced some ideas too.

I'm planning to improve the Detour API at some point so that it reports errors better. The change will most likely be that every method returns a status instead. This allows me to report invalid input data and give a more elaborate reason why pathfinding may have failed, or to report that pathfinding succeeded but actually returned a partial path. There are some discrepancies in the raycast() function return values too.
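
Just to illustrate the direction (the names below are mine, not the final Detour interface), a query could report what happened instead of only returning a path length:

#include "DetourNavMesh.h"   // for dtPolyRef

// Hypothetical status values, illustration only.
enum NavStatus
{
    NAV_SUCCESS,          // full path found
    NAV_PARTIAL_RESULT,   // search ran out of nodes or buffer; the path ends short of the target
    NAV_INVALID_PARAM,    // bad input, e.g. an invalid polygon reference
    NAV_FAILURE           // no path exists or some other failure
};

// Hypothetical status-returning query: the path length goes to an out parameter
// and the return value tells you what actually happened.
NavStatus findPath(dtPolyRef startRef, dtPolyRef endRef,
                   const float* startPos, const float* endPos,
                   dtPolyRef* path, int* pathCount, const int maxPath);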

Temporary Obstacles

I got quite excited about the heightfield compression findings and got some really good feedback on them too. It is definitely something I wish to prototype in more detail.

This also ties into better Detour error handling. I will need to finally figure out how to handle interrupted and invalid paths. Detour will not replan automatically, but it will let you know that something went wrong, or that something will go wrong in the future, so that you can schedule a new path query yourself.

7 comments:

  1. Mikko,

    In our application we have a manager providing the API based on the underlying navmesh. This manager would deal with paging tiles in a large world, and every agent representation which wants to know its path will call something like "mgr.getPath(start, end)".
    The manager provides a path object as the result of the query which, depending on the implementation, is updated internally when the navmesh changes. Besides that, the manager also provides information about the accessibility of places, region types (based on your flags), and distance to walls.
    We use PhysX to actually collide our agents with collision objects. Since Detour does not yet consider blocking objects, this is an unsolved issue at the moment (we are glancing at your recent updates ;-) ).
    Coming from a strong OOP background, I suggest using interfaces for the different computation parts, providing example implementations and allowing custom implementations where necessary. If these are only called a few times per second, this should not lead to performance drops at all.

    For future plans, we would like to be able to support large crowds of hundreds of agents, so we would need to be able to switch between RVO and e.g. potential maps. It's not yet clear to me how this should/could fit into the general API.

    Mirko

  2. Mirko,

    Does the path object also handle steering and sending the velocity command to physics, or does it just contain the spatial data?

    OOP designs do not do well in practice when you have many similar things to update [1]. My plan is to divide the code so that you can call it in nice chunks to (hopefully) benefit from caches and parallelism.

    Potential maps might work at the steering level, although implementing 2.5D potential maps/fields may be a bit of a challenge.

    One solution for crowd path planning would be to store density per polygon and use that as an extra weight during A*, a bit like [2] or [3].

    Maybe the agent would regenerate its path if the density of some polygons ahead along the path has changed a lot.

    The point of this method is that the agents would avoid crowded areas, and it would fix some of the problems you saw earlier in the current demo.
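
    As a rough illustration of the density idea (hypothetical code, not anything in Detour), the traversal cost of an edge could simply be scaled by the density recorded for the polygon being entered:

    #include <math.h>

    // Hypothetical: scale the A* traversal cost by the crowd density of the polygon
    // being entered. 'polyDensity' could be e.g. summed agent area divided by polygon
    // area, updated once per frame; 'densityWeight' tunes how strongly crowds are avoided.
    float getTraversalCost(const float* from, const float* to,
                           const float polyDensity, const float densityWeight)
    {
        const float dx = to[0]-from[0], dy = to[1]-from[1], dz = to[2]-from[2];
        const float dist = sqrtf(dx*dx + dy*dy + dz*dz);
        // Crowded polygons look "longer" to the planner, so A* prefers routes around them.
        return dist * (1.0f + densityWeight * polyDensity);
    }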

    I hope ORCA will allow me to handle more agents in the future (I'm foolishly hoping for 4x more for the same budget).


    [1] http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf
    [2] http://gamma.cs.unc.edu/PLE/
    [3] http://gamma.cs.unc.edu/DenseCrowds/

I think you're right Mikko, I don't much like having individual agents doing their own updates, which is what I've currently got. Right now I use PhysX for the collision check, but it's far too slow for larger crowds. Ideally, I suppose, I'd feed in a list of agent positions and desired velocities and get back a list of updated positions and velocities. That's, I guess, the "black box" interface. But the downside is that in order to get locomotion to feel right I really need more information (as I might actually want my agent to look towards the nearest collision, or sidestep, etc.).

    I think adding potential fields and such should definitely be possible in the API. But I'm happy to hack that into the API myself; as long as the API is reasonably clear, I can figure the rest out.

    I guess I'm not helping much, am I :) but it's a really personal thing, I think. My main wish is to have ways of querying data outside of just the pathing, preferably during the update, so as long as the update loop is reasonably simple to understand, I can live with the rest.

The other part of that is that there's a great deal of interplay between physics and locomotion, which concerns me. Right now I use a character controller in PhysX, which is basically a kinematic moving capsule that doesn't really react to physics. So I feed it a vector, and it moves the capsule along the vector and gives me a resulting position (i.e. after collision). In a way, the locomotion system should really do that, so that the interface for the code basically fills in a "desired position" and the rest is handled by the locomotion code, which handles collision, avoidance, animation, etc. I guess that's the model that Christian Gyrling was talking about at GDC 2009. So at the high level you basically have a "character" and then everything below that is opaque to the AI. But at the lower level that's not a great interface, because you want to batch operations up.

    I'd hesitate to say there's really no ideal here, so go with your gut and then we'll whine at you when we find something that doesn't feel right?

Phil, that gut thing is vaguely my current plan. But I feel that I'm entering the very muddy deep end of the whole navigation deal :)

    The physics round trip is very problematic. In the common case (80% of the time) there is no problem with it. But when things go wrong, the code that tries to resolve it always ends up being a huge pile of hacks. Not to mention that syncing physics and the point on the navmesh becomes a nasty business.

    The road from 80% to 90% is twice as long as it was to get to that 80%. Getting from 90% to 99% is equally challenging. And we need to aim for 99.99%.

    Ideally I would like the navmesh to be the dominant constraint. It makes the navigation heaps more error proof (something like the 80% to 99% thing). In that scenario I envision physics being used as a sensor (something hit me, I'd better react! or there's a moving box nearby, I might want to steer around it) or to adjust the character height for better ground alignment.

    I'm sure many people are against that because they have always used the capsule. For characters, I don't really see any good reason other than legacy (and early protos) to use physics capsules.

    Animation is another beast. I have to reread Christian's slides. I've yet to dive into the animation problem (and I'm not sure if/when I want to), but even in that case the navmesh should be the final constraint, or you could allow the locomotion location to deviate from the actual navigation location (which may or may not be a good idea).

    When navigation hits the fan, I think it is better to be able to detect and resolve the situation over time than to look good. So better prepare that "ouch, I hurt me toe!" locomotion animation set ;)

I know what you mean. I've been using separate components for nav following, physics, animation, etc. But it feels more and more like they are one big beast rather than separate concerns.

    One other thing I've been working on is squad control, and I'm still not sure whether the squad should have individual movement or some overall movement plus local steering. Seems to me that even the local steering is just a very short path plan, so all you're saving is that the paths are shorter. I guess there's also the fact that you don't get too many agents pushing each other around if you use local steering + overall path follow.

I'm not sure the usual component architecture will lend itself very well to the whole navigation thing. But you can structure the code quite nicely if you think of it as a series of small tasks which have dependencies. If you look at CrowdManager::update() in RecastDemo, most of the for loops are embarrassingly parallel (well, they need a few tweaks to get there), but there are a couple of spots where you need to sync.

    Squad movement is an interesting problem, more so if you wish the squad to move in formation. If it is OK to just move them as a bunch, I would structure the code so that you find a path from one NPC's (the leader's) location to the new goal location, the other agents find paths to the leader, and then you concatenate the paths.

    This usually results in one long query and many smaller ones. The paths may not be optimal, but the obvious kinks will be smoothed out by the runtime path optimization code (currently I use a walkability raycast), and the behavior will read as a group [1].
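
    To illustrate what I mean by concatenating (a hypothetical helper, not an existing Detour function): the follower's short path to the leader is simply followed by the leader's long path to the goal.

    #include "DetourNavMesh.h"   // for dtPolyRef

    // Hypothetical helper: assumes the follower's path ends on the same polygon
    // the leader's path starts from.
    int concatSquadPath(const dtPolyRef* toLeader, const int ntoLeader,
                        const dtPolyRef* leaderToGoal, const int nleaderToGoal,
                        dtPolyRef* out, const int maxOut)
    {
        int n = 0;
        for (int i = 0; i < ntoLeader && n < maxOut; ++i)
            out[n++] = toLeader[i];
        // Skip the first polygon of the leader's path to avoid duplicating the shared one.
        for (int i = 1; i < nleaderToGoal && n < maxOut; ++i)
            out[n++] = leaderToGoal[i];
        return n;
    }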

    Once you get close to the final destination, you find paths to the individuals' formation locations. You can potentially do this using findPolysAround() (Dijkstra).

    Reactive movement is indeed short pathfinds, but very special ones, and that can be exploited to make them really fast. Take a look at my earlier constrained movement post [2].

    Maintaining an elastic structure as the path representation, and updating it as you go, frees you up to create all kinds of styles of movement. Most of the AI programmers out there do not get this yet, though :) The navigation loop as a whole looks like a complicated cluster of code, but the problem is not simple either! I have a feeling that I will need to spend quite a bit of time advertising and explaining the method in more detail.

    In the case of formations it is also possible to adjust the path corridor as the goal location moves, which means that you do not need to replan the path when the goal moves.

    [1] http://en.wikipedia.org/wiki/Gestalt_psychology - see Prägnanz.
    [2] http://digestingduck.blogspot.com/2010/07/constrained-movement-along-navmesh-pt-3.html
