9th North American Conference on Industrial Engineering and Operations Management

Resilience-Driven Scheduling in Critical Infrastructure Network - A Multi-Agent Reinforcement Learning Approach

Pavithra Sripathanallur Murali & Shima Mohebbi
Track: Poster Competition
Abstract

Climate change is a critical issue for communities, as natural disasters are becoming larger and more intense. Responding effectively to these disruptions requires optimal scheduling of restoration efforts. Despite the growing importance of bolstering the security and resilience of Critical Infrastructure (CI), current studies lack a comprehensive framework for optimizing restoration across multiple subsystems while accounting for organizational factors, resource constraints, and complex network interdependencies. This study presents a dynamic scheduling model for restoration under resource constraints. A hybrid simulation model comprehensively represents CI, including organizational dynamics and network evolution. Crew agents learn to schedule restoration tasks through a reinforcement learning algorithm tailored for decentralized scheduling. Additionally, a novel memory-based reward shaping mechanism enables crews to learn from past experience and motivates them toward better policies. Through this integration of reinforcement learning and intrinsic motivation via reward shaping, crews learn to perform restoration that yields a long-term resilient network. The proposed model is applied to the water and transportation networks of the City of Tampa, FL.
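The abstract does not give implementation details, so the following is only a minimal illustrative sketch of how memory-based reward shaping might be layered on top of independent learning crew agents. The class name, tabular Q-learning choice, parameters, and the running-average shaping rule are all assumptions for illustration, not the authors' method.

```python
import random
from collections import deque

class CrewAgent:
    """Hypothetical crew agent: tabular Q-learning over (state, task) pairs
    with a memory-based intrinsic bonus added to the extrinsic reward."""

    def __init__(self, memory_size=50, alpha=0.1, gamma=0.95, epsilon=0.1, beta=0.5):
        self.q = {}                               # Q-values keyed by (state, task)
        self.memory = deque(maxlen=memory_size)   # recent extrinsic restoration gains
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.beta = epsilon, beta

    def select_task(self, state, available_tasks):
        # Epsilon-greedy choice among tasks still awaiting restoration.
        if random.random() < self.epsilon:
            return random.choice(available_tasks)
        return max(available_tasks, key=lambda t: self.q.get((state, t), 0.0))

    def shaped_reward(self, extrinsic):
        # Intrinsic bonus (assumed form): outcomes exceeding the running average
        # of remembered rewards nudge the crew toward schedules that beat its
        # own past experience.
        baseline = sum(self.memory) / len(self.memory) if self.memory else 0.0
        self.memory.append(extrinsic)
        return extrinsic + self.beta * (extrinsic - baseline)

    def update(self, state, task, extrinsic, next_state, next_available):
        # Standard Q-learning update, but using the shaped reward.
        r = self.shaped_reward(extrinsic)
        best_next = max((self.q.get((next_state, t), 0.0) for t in next_available),
                        default=0.0)
        old = self.q.get((state, task), 0.0)
        self.q[(state, task)] = old + self.alpha * (r + self.gamma * best_next - old)
```

In a decentralized setting, each crew would hold its own memory and Q-table and observe only its local portion of the interdependent networks; the environment (the hybrid simulation) would supply the extrinsic restoration rewards.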

Published in: 9th North American Conference on Industrial Engineering and Operations Management, Washington D.C., United States

Publisher: IEOM Society International
Date of Conference: June 4-6, 2024

ISBN: 979-8-3507-1736-5
ISSN/E-ISSN: 2169-8767