PathPlanning/AA-SIPP-m
Algorithm for prioritized multi-agent path finding (MAPF) in grid worlds. Moves in arbitrary directions are allowed, i.e. each agent may follow an any-angle path on the grid. The timeline is continuous: action durations are not discretized into timesteps. Agents of different sizes and movement speeds are supported. Planning is carried out in the (x, y, θ) configuration space, so agents' orientations are taken into account.
Built on the SIPP (Safe Interval Path Planning) framework, AA-SIPP(m) uses a prioritized planning approach in which agents plan sequentially while respecting collision-free safe intervals computed from static and dynamic obstacles. The algorithm supports failure recovery through deterministic and random agent re-ordering heuristics, plus a Start Safe Intervals optimization that improves solution quality. Input/output is handled via XML files describing grid maps, agent configurations (size, speed, rotation), and start/goal positions; the self-contained C++11 implementation relies only on the STL and the bundled TinyXML.
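The core idea of SIPP is that each cell's timeline is split into "safe intervals" — maximal stretches of continuous time during which the cell is not occupied by any dynamic obstacle — and the search expands (cell, interval) pairs instead of (cell, timestep) pairs. A minimal C++ sketch of that decomposition step is shown below; the function name, types, and INF sentinel are illustrative and not taken from the repository's code:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Sketch: given the time intervals during which one grid cell is blocked by
// dynamic obstacles, compute the complementary safe intervals in which an
// agent may occupy that cell. Time is continuous (double); INF marks an
// open-ended final interval. All names here are illustrative.
constexpr double INF = 1e9;
using Interval = std::pair<double, double>;  // [start, end)

std::vector<Interval> safeIntervals(std::vector<Interval> blocked) {
    // Sort blocked intervals by start time, then sweep forward in time,
    // emitting every gap between consecutive blocked stretches.
    std::sort(blocked.begin(), blocked.end());
    std::vector<Interval> safe;
    double t = 0.0;  // earliest time not yet covered by a blocked interval
    for (const auto& b : blocked) {
        if (b.first > t)
            safe.push_back({t, b.first});  // gap before this obstacle
        t = std::max(t, b.second);         // skip past the obstacle
    }
    if (t < INF)
        safe.push_back({t, INF});          // cell is free forever after
    return safe;
}
```

For example, a cell blocked during [2, 4] and [6, 8] yields the safe intervals [0, 2], [4, 6], and [8, INF); the planner then only needs one search node per such interval rather than one per timestep.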
124 stars. No commits in the last 6 months.
Stars: 124
Forks: 37
Language: C++
License: MIT
Category:
Last pushed: Dec 05, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/PathPlanning/AA-SIPP-m"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
facebookresearch/BenchMARL
BenchMARL is a library for benchmarking Multi-Agent Reinforcement Learning (MARL). BenchMARL...
datamllab/rlcard
Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.
Toni-SM/skrl
Modular Reinforcement Learning (RL) library (implemented in PyTorch, JAX, and NVIDIA Warp) with...
utiasDSL/gym-pybullet-drones
PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
koulanurag/ma-gym
A collection of multi agent environments based on OpenAI gym.