## Friday, November 27, 2009

### Obstacle Avoidance Experiments

I had a little time today to do some more experiments with obstacle avoidance. It is starting to shape up nicely, and I hope that in the next update I can share some interactive examples too.

Today's experiment was to solve an unfortunate side effect of the sampling method I chose--namely initial collision. If two agents are overlapping, the sampling will always return blocked and the two agents will get stuck. I previously sidestepped this problem by just ignoring the initial collision.

The collision most likely happens because of floating point inaccuracy, or because the agent movement code did something a bit different from what the local navigation told it to do. Some situations in games can make that happen, so it is good to be able to handle it.

First I tried to adjust the time-of-impact values based on the initial collision, but that did not work out as well as I hoped. Eventually my solution was to calculate both the time-of-impact and time-of-exit values per sampling direction. The same code is actually able to return both values easily (the calculation is a ray-circle intersection; the TOI is the first hit and the TOE is the second).
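To make the ray-circle idea concrete, here is a minimal sketch of computing both values for two moving circles. The actual code in my prototype is C++; the function and parameter names below are my own for illustration:

```python
import math

def toi_toe(rel_pos, rel_vel, combined_radius):
    """Time-of-impact and time-of-exit for two moving circles.

    The relative motion is treated as a ray-vs-circle test: solve
    |rel_pos + rel_vel * t| = combined_radius for t. The smaller root
    is the TOI, the larger one the TOE. Returns None if they never touch.
    """
    px, py = rel_pos
    vx, vy = rel_vel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - combined_radius * combined_radius
    if a < 1e-9:
        # No relative motion: either permanently overlapping or never touching.
        return (0.0, float('inf')) if c < 0.0 else None
    d = b * b - 4.0 * a * c
    if d < 0.0:
        return None  # The circles never touch along this direction.
    sd = math.sqrt(d)
    t0 = (-b - sd) / (2.0 * a)
    t1 = (-b + sd) / (2.0 * a)
    if t1 < 0.0:
        return None  # Both hits lie in the past.
    # An initial overlap shows up as t0 < 0: clamp the TOI to "now".
    return (max(t0, 0.0), t1)
```

Note how an initial collision falls out for free: when the circles already overlap, the first root is negative, the TOI clamps to zero, and the second root is the time-of-exit.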

My final solution was to add yet another component to the sample scoring calculation (remember, the sample direction with the smallest score is selected). Previously the score, or penalty, was calculated as follows:

score = Wa*da + Wt*1/toi

Where da is the angle difference between the desired direction and the sample direction, and toi is the time of impact in that direction. Wa and Wt are weights; I have used the values 2 and 1 respectively in my tests.

If there is a collision, I'd like the agent to move in the direction which resolves the collision as quickly as possible. The best direction is the one where the time-to-exit is smallest. TOE is zero if there is no initial collision. For each sample, the largest TOE among all hits is selected. The modified score calculation looks as follows. I used We=100 in my tests.

score = Wa*da + Wt*1/toi + We*toe^2

The result is that the agent prefers to move away from the collision, but does not select an overly stupid direction because of that unfortunate state.
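The scoring above can be sketched in a few lines. This is a hedged illustration, not the actual prototype code: the function name, the angle wrapping, and the clamping epsilon are my own choices; the per-direction TOI/TOE values are assumed to come from the sampling step.

```python
import math

def best_direction(desired_angle, samples, wa=2.0, wt=1.0, we=100.0,
                   max_horizon=2.0, eps=0.01):
    """Pick the sample direction with the smallest penalty score.

    `samples` is a list of (angle, toi, toe) tuples from the sampling
    step. score = Wa*da + Wt*(1/toi) + We*toe^2, as in the post.
    """
    best = None
    best_score = float('inf')
    for angle, toi, toe in samples:
        # Angle difference to the desired direction, wrapped to [0, pi].
        da = abs(math.atan2(math.sin(angle - desired_angle),
                            math.cos(angle - desired_angle)))
        # Clamp the TOI so far-off collisions all score the same,
        # and to avoid dividing by zero on initial overlap.
        t = min(max(toi, eps), max_horizon)
        score = wa * da + wt * (1.0 / t) + we * toe * toe
        if score < best_score:
            best_score = score
            best = angle
    return best
```

With no nearby collisions (large TOI, zero TOE) the desired direction wins; an initial overlap (TOE > 0) pushes the choice toward whichever direction exits the collision fastest.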

Another thing I tested was how the min and max time horizons affect the behavior of the agents. You may remember from the earlier blog posts that the max time horizon specifies how early the agent starts to change the movement direction, and the min time horizon specifies how early the agent starts to change the movement speed.

Setting the min time horizon close to the max time horizon results in much more varied and smoother movement, while a smaller min time horizon results in faster and more aggressive movement. It is pretty much as simple as that. I just had not played around with it earlier.

As expected, more aggressive settings make the crowd spread out much more in congested situations, while less aggressive settings make the crowd settle into a more swirl-like solution.

I'm quite happy with how few parameters there are to tune in this local navigation solution. I also like how easy their effects are to read. The overall behavior is mostly controlled by how the min and max time horizons are set. The weights allow further tuning of how much the agent should favor moving towards the goal (Wa higher) versus moving fast (Wt higher).

## Friday, November 20, 2009

### Recast SVN Version in Flux

I'm currently merging the functionality of dtStatNavMesh and dtTiledNavMesh into one class called dtNavMesh. A first iteration is already in SVN, coexisting with the old classes. I will remove the old classes once I'm done with the merging. If you want to sync to the most recent "stable" version, please sync to revision 72. I hope to get the merging finished next week.

I apologize for the upcoming migration trouble; I think it will be for the better in the long run. If you guys have the time, please take a look at the new dtNavMesh, and let me know if you have any concerns regarding functionality lost in the merge.

## Wednesday, November 18, 2009

### A Couple of Recent Crowd Papers

The guys at UNC have been busy with crowd simulation recently. Here are a couple of recent papers.

Aggregate Dynamics for Dense Crowd Simulation
This paper describes a method which is similar to boids, but instead of the boids rules of cohesion and alignment, they use incompressible fluid simulation. It scales well to thousands of agents.

Directing Crowd Simulations using Navigation Fields
Directing crowd movement using a mixture of RVOs and potential fields. I can imagine this method could easily help with certain scenarios appearing in games, although it might be too expensive. It provides some interesting ideas nonetheless.

## Sunday, November 15, 2009

### Recast Debug Draw Interface

I added a simple abstract interface for debug drawing. Currently only the Recast debug draw code is converted over; I will do Detour later too. I hope it helps a bit in reusing the debug draw code in different projects.

The interface is pretty much as simple as it gets. It has begin(), vertex(), and end() methods. The begin method gets the primitive type, number of vertices, and primitive size as parameters. Sequential calls to vertex are assumed to fill in the primitives to be drawn, and all vertices are sent when end is called. This should allow both buffered and immediate rendering to be implemented easily.
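The shape of the interface, sketched in Python for brevity (the real interface is C++ in the Recast sources; the class names, constants, and buffered implementation below are my own illustration, not the actual API):

```python
# Primitive-type constants mimic the idea; the real Recast names differ.
DRAW_POINTS, DRAW_LINES, DRAW_TRIS = 0, 1, 2

class DebugDraw:
    """Abstract debug draw interface: begin(), vertex(), end()."""
    def begin(self, prim_type, nverts, prim_size=1.0):
        raise NotImplementedError
    def vertex(self, x, y, z, color):
        raise NotImplementedError
    def end(self):
        raise NotImplementedError

class BufferedDraw(DebugDraw):
    """Buffered implementation: collect vertices, flush them on end().

    An immediate-mode renderer would instead issue draw calls directly;
    it could use `nverts` in begin() to preallocate a vertex buffer.
    """
    def __init__(self):
        self.batches = []
        self._current = None
    def begin(self, prim_type, nverts, prim_size=1.0):
        self._current = {'type': prim_type, 'size': prim_size, 'verts': []}
    def vertex(self, x, y, z, color):
        self._current['verts'].append((x, y, z, color))
    def end(self):
        # All vertices of the batch are "sent" to the renderer here.
        self.batches.append(self._current)
        self._current = None
```

A caller draws one triangle with a begin/three-vertex/end sequence; the buffered backend ends up holding one batch of three vertices.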

Changes are available in the SVN.

I also started to add some preparation to get the area stuff working again. Last time I was working on it, I made too big a change in one jump. This time around I will try to make more gradual changes.

I will be merging the Detour tiled and stat navmeshes into one at some point in the near future. The impact for stat mesh users should be really small. I like the simplicity of the stat mesh, but the beauty will be gone soon anyway once the areas and jump-links get implemented.

## Monday, November 9, 2009

### Obstacle Avoidance Video

Here's a video of the previously discussed obstacle avoidance scheme.

The obstacle avoidance method presented in this video uses time-of-impact sampling around the agent to adjust the agent's desired movement velocity (black arrow) to a direction of less potential collision (yellow arrow).

The figure around the agent displays the time of collision in each sampling direction (64 directions in the video). TOI up to a max time horizon (yellow circle) is considered while avoiding the other agents. If the TOI is less than the min time horizon (red dashed circle, the time it takes to stop the agent), the steering velocity is scaled down.

## Saturday, November 7, 2009

### Rediscovering Obstacle Avoidance

EDIT: Added picture describing the new method, video coming soon.

After fiddling around with the different methods to implement velocity obstacles (especially the directive circle style approaches), I noticed that there are two parameters which can be tuned to get out of the cases where the agent is completely surrounded by the velocity obstacle. The parameters are: desired movement speed and time horizon. Time horizon means the latest potential collision we are interested in avoiding (or conversely, how early something should be avoided).

The directive circle or BOA style velocity obstacle implementations assume that the agent wants to travel at constant speed. This allows some of the calculations to be simplified, since the admissible velocities can be found on a circle whose radius is the desired speed. In practice it means that instead of choosing amongst all admissible velocities, we choose an admissible direction. That simplifies things a lot.

What should the system do when all the directions are blocked? There are two options, and it is quite hard to choose which one is better.

The first option is to slow down, that is, to reduce the size of the circle. This usually causes the obstructed areas on the directive circle to become smaller, and it may reveal new openings to follow.

The second option is to reduce the time horizon. The higher the time horizon value is set, the earlier the obstacles are avoided. By reducing the time horizon value, the velocity obstacle cones are truncated more, which gives the agent some extra room to maneuver again.

The directive circle (and ND, and VFH) approach loses its charm once you start trying to solve those two problems. You end up breaking their simple and intuitive premise, building a directive circle for each velocity you might find useful, and writing ever more complex spaghetti to choose amongst them. One of the biggest problems is that you need to make decisions based on binary data, which leads to jittering when you try to chase the correct value.

This led me to another idea, which combines several things I discovered while prototyping the different methods.

Recursively Reciprocal Directive Velocity Obstacle Circle

By exercising the age old AI truth: "In case of doubt, discretize and brute force!" I managed to make yet another sampling based obstacle avoidance, which is super simple to implement, easy to understand, intuitive to tune and seems to work well in practice too.

The basic idea is:
1. Sample the first time of collision at different directions around the moving agent
2. Score each sample based on time to collision and how much it deviates from the desired movement direction
3. The direction with best score is chosen
4. That's it!
Calculating the time of impact for two moving circles is simple; it is featured in books and there is probably some code lying around the web too. The devil is of course in the details, as follows.

To get it working better, I introduced minimum and maximum time horizons. The max time horizon is used to clamp the time-to-collision values, so that every direction where a collision is not likely to happen really soon gets the same time-to-impact score. This makes sure that the nearest direction is chosen instead. A larger max time horizon will make the agent avoid things earlier. This can be thought of as a sort of shyness factor.

The minimum time horizon is used to scale down the velocity. If the time-of-impact towards the best direction is less than the min time horizon, the velocity is scaled down proportionally. That is, if the agent is standing next to an obstacle, the velocity will be scaled to zero. Or if the agent tries to run towards a wall, the system will scale the actual velocity down.
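The two horizons boil down to a clamp and a proportional scale. A minimal sketch, with function names of my own choosing:

```python
def clamp_toi(toi, t_max):
    """Max horizon: collisions beyond t_max all score the same."""
    return min(toi, t_max)

def scale_speed(desired_speed, toi, t_min):
    """Min horizon: scale the speed down when the collision in the
    chosen direction is nearer than t_min. At TOI zero (standing next
    to an obstacle) the agent stops entirely."""
    if toi >= t_min:
        return desired_speed
    return desired_speed * (toi / t_min)
```

So an agent running straight at a wall slows smoothly as the TOI drops below the min horizon, and comes to rest exactly at contact.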

Another thing that can be adjusted is the weights used for the score calculation. If the time-of-impact is weighted more, you tend to get straighter segments of movement, while if the angle difference is weighted more, the agent tends to approach the goal more. Usually that means a more wiggly trajectory at the micro scale, but more goal-directed movement in general.

That pretty much solved the most annoying problems of the directive circle type of methods, namely speed selection and approaching a goal in front of an obstacle.

On top of all that I use multiple iterations. This adds the needed reciprocal, recursive, reflective, or any other R-starting property into the mix. What it means in practice is that the velocity selection does not flicker as much, nor does the system have problems choosing sides in head-to-head cases. It should be possible to extend the system to use the reciprocal rule too.

All in all, it is quite similar to the reciprocal velocity obstacles implementation, except that I formulated the change in speed in different terms. In practice this means crap loads fewer samples (although you can have them back by using more iterations).

I'm so happy about this discovery that I think I'm going to go and have a beer instead of digesting yet another heap of more papers for supper :)

## Friday, November 6, 2009

### Recent Interesting Paper Findings

I have gone through numerous papers recently. My search terms are starting to return the same papers all the time, but every now and then there are some good ones worth sharing. Here are some which are relevant to my recent prototyping results.

A very similar approach to the directive circle method. It discusses the reasoning behind using only the maximum velocity when selecting the velocity.

I had the gut feeling that multiple iterations would help the velocity selection. These papers provide some more information on the subject.

This paper discusses how VOs can be used in formation type of situations.

This paper discusses how the time horizon (how much the cone is clamped) can be adjusted.

Zvi Shiller was a co-author of the well known velocity obstacle paper and has papers and videos on his page regarding the subject. Those non-linear velocity obstacles look pretty cool!

## Thursday, November 5, 2009

### Hybrid Reciprocal Directive Circle

Or HRDC for short. That is my homage to the obstacle avoidance scene. The method combines the good stuff from HRVO with the ease of implementation of DC.

I got my implementation of the directive circle working better, and in many cases it works as nicely as my geometric implementation of HRVOs. I like how it makes it easy to combine different obstacles, and it looks like implementing static segment obstacles will be pretty easy.

DCs are faster to generate, so some form of look-ahead planning could be feasible (it could be turned off as a LOD).

## Wednesday, November 4, 2009

### Noodle Soup Considered Harmful

I just saw a report of the enhancements they are planning for Blender 2.5. One of the enhancements was a node graph to create game logic in the Blender Game Engine. While the proposal is definitely an improvement over what they have in there, I think that in general boxes and arrows are a super horrible way to create game logic. Let me elaborate.

This idea has been bubbling under for quite a few years now. I've been in two projects where the game logic was built using some sort of flow graph. While it is an intuitive model for creating something a bit more complicated than "when this, do that", the final noodle soups look really horrible and are impossible for anyone to debug.

I have been trying to put the frustration of using the boxes-and-arrows editors into words, and finally, when I read a book called The Back of the Napkin, I realised what the problem was. The problem is that the designers are forced to answer all the questions with one kind of answer.

The Back of the Napkin is about visual problem solving. One of its ideas is that certain types of pictures answer certain types of questions best. If the only means to create game mechanics is flowgraphs, then the designer is most of the time answering the question How. If your level editor also supports some sort of timeline, you may be able to answer the When question too. Trigger volumes give opportunities to answer a few Where questions as well.

In all the systems I've seen so far, these questions are answered using separate tools. This breaks the big picture. You do something in the timeline editor and then something somewhere else, even though technically you could be editing the game in one unified view, zooming in when necessary, and building the best picture (and representation) to describe the required task.

Mashing up these different representations is really important for letting the designers get their work done. Trapcode Soundkeys is a good example of such a mash-up. It kind of goes against all programmer reason. Draw boxes to get the output of an audio analyzer? That's a lot of bullnoodle right there! But it works, it's intuitive, and you immediately get it. It answers the questions Where and How Much really well, and you can adjust it easily based on what you see coming in.

That is why the idea of logic blocks and flowgraphs as the main means of creating gameplay logic must die. Creating gameplay logic is not only about logic, it is about answering complex questions with understandable "pictures"!

P.S. Yes, the MAX/MSP/PD crowd gets some of these things right, but not all.

## Tuesday, November 3, 2009

### Making Velocity Selection More Stable

I have done a couple of tests to get my velocity obstacle prototype to reduce the flicker in the velocity selection. Here are a couple of things that seem to work well.
1. Iterations: Just like you can improve collision detection with iterations, you can help the velocity selection with iterations too. Adjusting the sensed velocity slowly towards the selected velocity allows the other agents to react to the gradual change. Even after this change I often get jitter when two agents are about to start avoiding each other.
2. Sample Selection: I originally used the sample selection presented in the HRVO code, but I noticed that I get more stable results if I just use the samples on the max velocity circle. This sometimes results in full stops, but that might not be a bad solution at all. It should be possible to use a bit of higher level logic, which could choose between several states based on how crowded the situation is. This should help the animation play too.
3. Finite Time Threshold: Truncating the tip of the VO cone helps a lot in certain situations, especially when approaching the goal location. There seems to be some correlation between how well the avoidance works and how far out the time threshold is, but it varies quite a lot from situation to situation. I think it is possible to adjust the threshold and get better results.
4. Velocity Obstacle Shape: The shape of the velocity cone has a big impact on how smooth the velocity selection is. It is a no-brainer, but it definitely was not the first thing on my list of things to try to get the system running smoother. Making the tip of the truncated cone sharper helps the first-contact velocity selection a lot (the problem that was left unsolved by the iterations).
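The iteration trick in item 1 is essentially an exponential blend per step. A tiny sketch, assuming a 2D velocity as a tuple (names and the blend factor are my own):

```python
def blend_velocity(sensed, selected, t=0.25):
    """Move the sensed velocity a fraction of the way towards the
    newly selected velocity instead of snapping to it.

    Running the avoidance several times with this gradual update lets
    the other agents react to the change, which damps the flicker in
    the velocity selection.
    """
    return tuple(s + (n - s) * t for s, n in zip(sensed, selected))
```

Repeated calls converge on the selected velocity, so the steering still gets where it wants to go, just without the frame-to-frame snapping.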
I think my further prototyping will concentrate on investigating how the shape of the velocity obstacle affects the avoidance. For example what happens if the shape of the VO is more like a parabola, or what if it is more like a trumpet. Another thing to prototype is how different avoidance region types affect the movement.

I also did a quick prototype of the directive circle approach based on this paper: Robot Motion Planning in Dynamic Environments with Moving Obstacles and Target. It uses a similar max velocity selection method, which I found more stable, and has an interesting trick to combine the VOs. Sampling artifacts aside, it seems to have some problems when approaching the goal location. I will test it a little more. The way the obstacles are combined in that approach allows me to test whether adding more clearance might help the steering too.