Multi-Agent Collaboration for Wildfire Management Resource Distribution

Most real-world domains can be formulated as multi-agent (MA) systems. Intentionality-sharing agents can solve more complex tasks by collaborating, possibly in less time. Truly cooperative actions benefit agents both egoistically and collectively. However, teaching individual agents to sacrifice egoistic benefits for better collective performance is challenging. We build on a recently proposed Multi-Agent Reinforcement Learning (MARL) mechanism with a Graph Neural Network (GNN) communication layer, in which rarely chosen communication actions were only marginally beneficial. Here we propose a MARL system in which agents can help collaborators perform better while risking low individual performance. We conduct our study in the context of resource distribution for wildfire management. Communicating environmental features and partially observable fire occurrences helps the agent collective pre-emptively distribute resources. Furthermore, we introduce a procedural training environment that accommodates auto-curricula and open-endedness towards better generalizability. Our MA communication proposal outperforms a greedy heuristic baseline and a Single-Agent (SA) setup. We further demonstrate how auto-curricula and open-endedness improve the generalizability of our MA proposal.
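The GNN communication layer can be pictured as each agent aggregating messages from its graph neighbours before selecting an action. The following is a minimal sketch of one such message-passing round; the mean-aggregation rule, the ReLU update, and all names are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def message_passing_round(features, adjacency, w_self, w_neigh):
    """One round of GNN message passing: each agent combines its own
    feature vector with the mean of its neighbours' messages.
    features:  (n_agents, d) local observations
    adjacency: (n_agents, n_agents) 0/1 neighbourhood graph
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated agents
    neigh_mean = (adjacency @ features) / deg
    # linear update followed by a ReLU nonlinearity (illustrative choice)
    return np.maximum(0.0, features @ w_self + neigh_mean @ w_neigh)

# toy example: three agents on a line graph (n=3 neighbourhood)
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
w_s = rng.normal(size=(4, 4))
w_n = rng.normal(size=(4, 4))
out = message_passing_round(feats, adj, w_s, w_n)
print(out.shape)  # (3, 4)
```

Stacking several such rounds lets information about distant fires propagate through the neighbourhood graph, which is what enables pre-emptive resource distribution.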

Dashboard of multi-agent wildfire environment in inference mode.
Dynamic environment features.
GNN Message Passing Communication Diagram. Neighbourhood graph (n=3). Observations: inbox environmental data, inbox help requests, and local environmental data. Agent: PPO. Actions: send support, request support back, send a help request to others, support self. Rewards: individual and collective reward for preparedness in case of fire.

Environment Walkthrough

Wildfire Management Resource Distribution Environment