DOTS Navigation with Auto-Jumping Agents and Movable Surfaces

Unity DOTS—ECS, Jobs, and Burst—integrated with NavMesh components. That's right, multi-threaded navigation.

Last updated on June 11, 2020. Created on January 25, 2020.

[Image: Depiction of a dinosaur planning a jump to another surface.]

Introduction

You can access my Unity DOTS navigation package via:

  1. OpenUPM, which is replete with usage instructions.
  2. GitHub, where you'll discover the package is actually part of a monorepo.

Be aware, the folks at Unity Technologies are working on an official DOTS navigation solution that will undoubtedly eclipse my unofficial implementation, but the timeline is, as of this writing, unknown. In other words, you're stuck with Reese and his oddball design philosophy, one that emphasizes parent transformations.

Essentially, for reasons of artistic expression, I needed a navigation solution where surfaces are uniformly transformed by their respective parents, and likewise for agents parented to surfaces. In other words, surfaces should be able to move together and independently of one another—with agents navigating them all the same. What's more, those agents should be able to automatically jump from surface to surface. These are qualities I feel any sufficiently high-level, user-facing navigation solution should have, and yet I've never seen anything like it available in open source.

Since nothing open source was available, I made my own thing.

I went with DOTS because of self-imposed, asinine performance requirements, but I don't regret it. By the time I started development on the navigation package, the semantics of ECS and jobs were reasonably mature, with fundamental API changes occurring at a steadily decreasing and manageable pace for a maintainer. Unity Technologies had just released their stateless physics package, which I would ultimately require for ray- and collider-casting. Furthermore, in the midst of development, I discovered OpenUPM and published my packages to it, including "Reese's DOTS Navigation."

Thoughts

While you can read the navigation package's user guide and my source code, I figured I'd record some explanatory thoughts here from my trials and tribulations getting this thing working.

Treating Destinations Like Sticky Grenades

In the Halo franchise, you can punt your enemies with plasma grenades, colloquially known as "sticky" grenades. Obviously, they stick. They start in space local to the thrower. Then they move through world space and, once stuck, have their position updated, frame by frame, in space local to the unfortunate soul who just got "pwned," soon to be the recipient of a gesture where the thrower repeatedly "victory crouches" above their ragdoll corpse.

[Image: Depiction of a sticky grenade about to be thrown.]

I'm sure you're familiar with it.

My agents are kind of like sticky grenades. Not in the sense that they'll repeatedly squat on your dead body—I did not code them to do that—but rather in the sense that they can move through world space toward a point that is stuck, as in local, to a surface. More specifically, when they jump, they leverage world-space information, and they're governed by the same projectile motion math that applies to Halo and countless other video games.

So, how do agents stay stuck to surfaces? This is ensured by three ordered steps:

  1. Agent surface detection
  2. Destination surface detection
  3. Destination localization

Agent surface detection. When agents spawn or jump to another surface, raycasting is performed to detect a parent surface below. This allows agents to move through space local to a surface, so if that surface moves, child agents interpolate smoothly relative to it. When a new destination is set, the NavDestinationSystem immediately detects this with a change filter. It's as simple as calling Entities.WithChangeFilter<NavNeedsDestination>, featuring an aptly named component with a single member, Value, of type float3. That component is all the user code needs to update to set a new destination.
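To make that concrete, here's a minimal sketch of setting a destination from user code, assuming you have an EntityManager reference and a spawned agent entity (agentEntity and the coordinates are illustrative):

entityManager.AddComponentData(agentEntity, new NavNeedsDestination
{
    Value = new float3(10, 0, 10) // World-space point; the systems localize it to a surface.
});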

Destination surface detection. Next, there are collider casts for a destination surface, using an invisible sphere that physically encapsulates the destination point. Raycasting isn't used here, since raycasting is only effective if the normal of the surface is known beforehand. (Normal just means the direction a surface is facing.) In sum, if, in your game, agents can traverse an upside-down surface, then this strategy enables that conception of physics. The direction of "down" is relative to the agent, not the world.
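The invisible sphere might be created along these lines, a sketch using Unity.Physics' SphereCollider.Create, where the radius and collision filter are illustrative rather than the package's actual values:

var collider = SphereCollider.Create(
    new SphereGeometry
    {
        Center = destination.Value, // The requested world-space destination.
        Radius = 1 // Illustrative radius encapsulating the destination point.
    },
    CollisionFilter.Default
);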

Now, recall that I said surface detection must happen immediately. Here's why: initially, destinations are set in world coordinates for the user's convenience. Yet destinations need to be in local coordinates, in terms of the surface they're on—as soon as they're requested—or else they can significantly deviate from the original, intended destination. If surfaces are moving, then even if one tries to convert the world destination to a local destination later, it may still be wrong, because it will have been localized from a world coordinate that is potentially no longer on any surface.

Destination localization. The key here is immediacy. To "stick" a destination to a surface, it's advantageous to collider-cast for a destination surface as soon as the destination is requested, extracting the hit surface's transform. That transform can then be used to do the matrix math that converts the world destination into a local one, which is saved for later. Here's a code snippet doing just that:

var castInput = new ColliderCastInput()
{
    Collider = (Collider*)collider.GetUnsafePtr(), // The sphere encapsulating the destination.
    Orientation = quaternion.identity
};

if (!physicsWorld.CastCollider(castInput, out ColliderCastHit hit))
{
    commandBuffer.RemoveComponent<NavNeedsDestination>(entityInQueryIndex, entity); // Ignore invalid destinations.
    return;
}

agent.DestinationSurface = physicsWorld.Bodies[hit.RigidBodyIndex].Entity; // The surface that was hit.

var localDestination = NavUtil.MultiplyPoint3x4( // World => local, in terms of the destination surface.
    math.inverse(localToWorldFromEntity[agent.DestinationSurface].Value),
    destination.Value
) + agent.Offset;

... // Code handling the special case of teleportation here. Yes, teleportation is inherently supported.

agent.LocalDestination = localDestination;

When an agent jumps from one surface to another, world coordinates must be used again, since the agent is moving between surfaces. No problem. The world destination can be calculated from the destination surface at a given point in time (which is probably accurate as long as surfaces aren't moving absurdly fast), and then converted back into a local destination, albeit in terms of the surface being jumped from, so it's always offset by the gap between surfaces. The following snippet from the NavInterpolationSystem expresses this in code:

waypoint = NavUtil.MultiplyPoint3x4( // To world (from local in terms of destination surface).
    localToWorldFromEntity[agent.DestinationSurface].Value,
    jumpBuffer[0].Value
);

waypoint = NavUtil.MultiplyPoint3x4( // To local (in terms of agent's current surface).
    math.inverse(localToWorldFromEntity[surface.Value].Value),
    waypoint
);

Now I'll cover how jumping gets automated in terms of pathfinding.

Automating Jumping

There are some questions one ponders when it comes to jumping automation:

  1. What is jumpable?
  2. Where should the agent jump?
  3. How does the agent jump?

What is jumpable? As in, which surfaces are jumpable from one another? Automating this is currently out of scope. Without diving down a rabbit hole, it's easier to say it's the user's responsibility to decide which surfaces are jumpable from one another; that said, the NavSurfaceAuthoring component proffers a publicly exposed list. Users can drag and drop other GameObjects with NavSurfaceAuthoring components into said list to associate jumpable surfaces. They can later look these up directly from the NavJumpableBufferElement buffers associated with NavSurfaces (see the sketch below). It's a convenience utility.
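For example, a runtime lookup might resemble this minimal sketch, which assumes each NavJumpableBufferElement wraps an Entity in a Value field and that you have an EntityManager reference:

var jumpableBuffer = entityManager.GetBuffer<NavJumpableBufferElement>(surfaceEntity);

for (var i = 0; i < jumpableBuffer.Length; ++i)
{
    var jumpableSurface = jumpableBuffer[i].Value; // An entity with a NavSurface component.
}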

Note that it is possible, given a user-furnished graph of all surfaces jumpable from one another, to add meta-navigation between surfaces using a search algorithm, such that an agent might automatically route from surface A to B to C (see the sketch below). Presently the agent will jump straight from A to C if told, regardless of what's considered jumpable. This is a feature I would consider implementing with more thought.
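To illustrate what such meta-navigation could look like (again, this is not implemented in the package), here's a hypothetical breadth-first search over the jumpable graph; the graph representation and names are illustrative:

// Requires System.Collections.Generic and Unity.Entities.
static List<Entity> FindSurfaceRoute(Entity start, Entity goal, Dictionary<Entity, List<Entity>> jumpableGraph)
{
    var cameFrom = new Dictionary<Entity, Entity> { [start] = start };
    var frontier = new Queue<Entity>();
    frontier.Enqueue(start);

    while (frontier.Count > 0) // Standard BFS over the surface graph.
    {
        var surface = frontier.Dequeue();

        if (surface == goal) break;

        foreach (var neighbor in jumpableGraph[surface])
        {
            if (cameFrom.ContainsKey(neighbor)) continue;
            cameFrom[neighbor] = surface;
            frontier.Enqueue(neighbor);
        }
    }

    if (!cameFrom.ContainsKey(goal)) return null; // The surfaces aren't connected.

    var route = new List<Entity> { goal }; // Walk back from the goal to the start.
    while (route[route.Count - 1] != start)
        route.Add(cameFrom[route[route.Count - 1]]);
    route.Reverse();

    return route; // Surface A => B => C, in order.
}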

Where should the agent jump? This question hinges on the technical peculiarities of an API called Experimental.AI.NavMeshQuery. The documentation clearly states that "this API is experimental and might be changed or removed in the future." Such warnings will not deter the likes of me, so I used this API anyway. For what it's worth, it's interoperable with Unity's existing NavMesh support and NavMeshComponents. (Yes, it's the old way, but no new way exists at present, and moreover, converting GameObjects into entities during authoring is a widely accepted pattern regardless.) Anyway, as to where agents should jump, the answer requires trickery when using the experimental API. Two queries end up being run:

  1. A query to find the path from the agent's present location to the destination.
  2. Another query to find the path from the destination to the agent's present location.

Now, if you're thinking that the result of the first query is necessarily the reverse of the second, you'd be wrong. Actually, each query will only return the set of points that are possibly traversable. For example, imagine that there are two surfaces, Surface A and Surface B, with a yawning chasm between them. For an agent to jump from Surface A to Surface B, a query is first run to find the remaining path on Surface A, whatever that is. There are no points from Surface B in the result, because the NavMeshQuery API does not know how to get to Surface B. It can, however, provide the information needed to let the agent "walk" the remainder of Surface A.

Before the agent jumps, a second query is run to find the path from the destination to the agent, which yields the, erm, well, remaining remaining path. That is not a typo. We want to reverse the final path for the sake of pathing, yes, but more pressingly, for jumping we first need to consider that the last element returned by the second query is the ideal jump point. It's easier to picture this if you imagine that the destination and agent position are swapped—the last point found on Surface B, coming from the destination, must be a good jump point, right? Indeed it is, hence why my NavPlanSystem literally swaps the destination and position in the case of a jump.
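In code, the endpoint swap might look something like the following sketch against the NavMeshQuery API, assuming a query created from the default NavMeshWorld, a jumping flag, and illustrative extents and iteration counts:

var from = query.MapLocation(agentPosition, Vector3.one * 1000, 0); // Snap endpoints to the NavMesh.
var to = query.MapLocation(destination, Vector3.one * 1000, 0);

var status = jumping
    ? query.BeginFindPath(to, from) // Swapped: path from the destination back to the agent.
    : query.BeginFindPath(from, to);

while ((status & PathQueryStatus.InProgress) != 0)
    status = query.UpdateFindPath(128, out _);

if ((status & PathQueryStatus.Success) != 0)
    query.EndFindPath(out var pathNodeCount); // For jumps, the last node approximates the jump point.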

How does the agent jump? The answer to this question is simple: projectile motion math. Wikipedia explains it well. Note that I assume "artificial" gravity and do not even use Unity.Physics for this, namely because, again, I'm not making assumptions about the normal of a surface. As previously discussed, agents stick to surfaces in my navigation implementation. They can still have colliders attached to them if the user so chooses, though. The NavAgent and PhysicsCollider components are able to coexist.
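For the curious, the core of it is the standard formula. Here's a generic sketch rather than the package's exact code, assuming agent-local start and end positions of type float3 and an illustrative 45-degree launch angle:

// Solve for the initial speed needed to hit the target at a fixed launch angle,
// using "artificial" gravity along the agent's local up axis.
var gravity = 9.81f;
var angle = math.radians(45f);
var distance = math.distance(start.xz, end.xz); // Horizontal gap in agent-local space.
var yOffset = end.y - start.y;

// From y = x * tan(angle) - g * x^2 / (2 * v0^2 * cos^2(angle)), solved for v0.
// (The denominator must be positive, or the target is unreachable at this angle.)
var v0 = math.sqrt(
    gravity * distance * distance /
    (2 * math.cos(angle) * math.cos(angle) * (distance * math.tan(angle) - yOffset))
);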

Abusing the NavMeshQuery

I've already talked about the NavMeshQuery API quite a bit. I'm very impressed with it, to be candid, as I feel it was ahead of its time when it was written. It still works great, but the challenge I encountered was DOTS-ifying it—ideally making path planning Burst-compilable. Eventually I figured out I could abuse the NavMeshQuery into submission with unsafe pointer usage in what I called the NavMeshQuerySystem, which updates in the InitializationSystemGroup. The lengthy comment I left in the code explaining the custom system is worth quoting here:

This system exists because the NavMeshQuery type is inherently evil. It's a NativeContainer, so it can't be placed in another such as a NativeArray. You can't instantiate one within a job using Allocator.Temp because the default NavMeshWorld is needed to create it, which includes unsafe code lacking [NativeDisableUnsafePtrRestriction], meaning that's a no-go inside a job. So, long story short, this system hacks the queries into a public NativeArray via pointers to them. The NavPlanSystem can then access them via the UnsafeUtility. It would be overkill to include a pointer in each NavAgent, since the number of threads is limited at any given time anyway, so the solution here is similar to what you'll find in Reese.Random.RandomSystem, which was inspired by how the PhysicsWorld exposes a NativeSlice of bodies. But here we index each query by native thread, taking thread safety into our own hands.

And so there is a NativeArray of structs serving as glorified pointers, of size JobsUtility.MaxJobThreadCount, indexed by thread number. The NavPlanSystem can read and write to them in a thread-safe manner via the magic nativeThreadIndex. The time and attempts it took me to reach this strategy dwarf any other perceived creativity on my part in designing this navigation package.
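The upshot resembles this rough sketch of the technique, not the package's literal code; the wrapper struct name and path node pool size are assumptions:

// Hypothetical wrapper so raw pointers can live in a NativeArray:
public unsafe struct NavMeshQueryPointer
{
    [NativeDisableUnsafePtrRestriction]
    public void* Value;
}

// In the system's OnCreate: one persistent query per potential worker thread.
var pointers = new NativeArray<NavMeshQueryPointer>(JobsUtility.MaxJobThreadCount, Allocator.Persistent);

for (var i = 0; i < JobsUtility.MaxJobThreadCount; ++i)
{
    var query = new NavMeshQuery(NavMeshWorld.GetDefaultWorld(), Allocator.Persistent, 1000);

    unsafe
    {
        var ptr = UnsafeUtility.Malloc(
            UnsafeUtility.SizeOf<NavMeshQuery>(),
            UnsafeUtility.AlignOf<NavMeshQuery>(),
            Allocator.Persistent
        );

        UnsafeUtility.CopyStructureToPtr(ref query, ptr); // Stash the query behind the pointer.

        pointers[i] = new NavMeshQueryPointer { Value = ptr };
    }
}

// Inside a job in the NavPlanSystem, indexed by the magic nativeThreadIndex:
// UnsafeUtility.CopyPtrToStructure(pointers[nativeThreadIndex].Value, out NavMeshQuery query);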

Conclusion

I'm still maintaining and updating the DOTS navigation package in my Unity monorepo, ensuring it uses, and is compatible with, the latest and greatest Unity has to offer. It includes demos and other packages you may find useful. I've also been updating this page as the project develops. If you like what I'm doing, please consider buying me a coffee on Ko-fi. I have Patreon if you're into that as well. You're also welcome to message me on Twitter with questions or feedback. Issues on GitHub are fine too. But pull requests are welcome most of all!

Thanks for your time.

© Reese Schultz

My code is released under the MIT license.