Multi-Agent Collaboration for Wild Fire Management Resource Distribution

Most real-world domains can be formulated as multi-agent (MA) systems. Agents that share their intentions can collaborate to solve more complex tasks, possibly in less time. Truly cooperative actions benefit both the individual and the collective, yet teaching individual agents to sacrifice egoistic gains for better collective performance remains challenging. We build on a recently proposed Multi-Agent Reinforcement Learning (MARL) mechanism with a Graph Neural Network (GNN) communication layer, in which communication actions were rarely chosen and only marginally beneficial. Here we propose a MARL system in which agents can help collaborators perform better while risking low individual performance. We conduct our study in the context of resource distribution for wildfire management. Communicating environmental features and partially observable fire occurrences helps the agent collective pre-emptively distribute resources. Furthermore, we introduce a procedural training environment that accommodates auto-curricula and open-endedness for better generalizability. Our MA communication proposal outperforms a Greedy Heuristic Baseline and a Single-Agent (SA) setup. We further demonstrate how auto-curricula and open-endedness improve the generalizability of our MA proposal.
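To make the communication mechanism more concrete, below is a minimal sketch of a GNN-style message-passing layer between agents, written in PyTorch. The class name `CommLayer`, the mean aggregation, and all dimensions are illustrative assumptions rather than the architecture used in the published system: each agent encodes its local observation, broadcasts a message along the communication graph, and updates its hidden state from its own encoding plus the aggregated incoming messages.

```python
# Minimal sketch of a GNN-style communication layer between agents.
# Names, shapes, and aggregation choices are hypothetical, not the paper's exact architecture.
import torch
import torch.nn as nn


class CommLayer(nn.Module):
    """One round of message passing: each agent aggregates messages from its
    neighbours (given by an adjacency matrix) and updates its hidden state."""

    def __init__(self, obs_dim: int, hidden_dim: int):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden_dim)          # per-agent observation encoder
        self.message = nn.Linear(hidden_dim, hidden_dim)      # message each agent broadcasts
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)   # combine own state + aggregated messages

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim); adj: (n_agents, n_agents), 1 where agents communicate
        h = torch.relu(self.encode(obs))
        msgs = torch.relu(self.message(h))
        # Mean-aggregate incoming messages over communicating neighbours.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ msgs) / deg
        return torch.relu(self.update(torch.cat([h, agg], dim=-1)))


# Usage: 4 agents, 8-dim observations, fully connected communication graph.
layer = CommLayer(obs_dim=8, hidden_dim=16)
obs = torch.randn(4, 8)
adj = torch.ones(4, 4) - torch.eye(4)
h = layer(obs, adj)  # (4, 16) hidden states, fed to each agent's policy head
```

In this sketch the resulting hidden states would feed each agent's policy head, so a decision to communicate trades some of the agent's own acting capacity for information that may help its collaborators.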

This work has been published as part of the Gamification and Multiagent Solutions Workshop at ICLR 2022: https://arxiv.org/abs/2204.11350

The Wild-fire Management Resource Distribution environment can also be explored as a WebApp here: https://philippds-pages.github.io/RL-Wild-Fire_WebApp/

Environment Walkthrough

Wild-fire Management Resource Distribution Environment