Track: Poster Competition
Abstract
Climate change is a critical issue for communities, as natural disasters grow larger and more intense. Responding effectively to these disruptions requires optimal scheduling of restoration efforts. Despite the growing importance of bolstering the security and resilience of Critical Infrastructure (CI), current studies lack a comprehensive framework for optimizing restoration across multiple subsystems while accounting for organizational factors, resource constraints, and complex network interdependencies. This study presents a dynamic scheduling model for restoration under resource constraints. A hybrid simulation model comprehensively represents CI, including organizational dynamics and network evolution. Crew agents learn to schedule tasks through a reinforcement learning algorithm tailored for decentralized scheduling. Additionally, a novel memory-based reward shaping enables crews to learn from past experience and motivates them to adopt better policies. Through this integration of reinforcement learning and intrinsic motivation via reward shaping, crews learn restoration policies that yield a long-term resilient network. The proposed model is applied to the water and transportation networks of the City of Tampa, FL.
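The memory-based reward shaping idea can be illustrated with a minimal sketch. This is an assumed toy formulation, not the paper's model: a single crew agent uses tabular Q-learning to order restoration tasks, and an intrinsic bonus (scaled by a hypothetical coefficient `beta`) is paid whenever an episode beats the best return the crew remembers. The task benefits in `BENEFIT` and the bonus rule are illustrative assumptions.

```python
import random

random.seed(0)

BENEFIT = [5.0, 1.0, 3.0]  # hypothetical restoration benefit per damaged component

class Crew:
    """Crew agent: tabular Q-learning plus a memory-based intrinsic bonus
    (an illustrative formulation, not the paper's exact scheme)."""

    def __init__(self, alpha=0.2, gamma=0.9, eps=0.1, beta=0.3):
        self.q = {}                       # Q-table: (state, action) -> value
        self.alpha, self.gamma, self.eps, self.beta = alpha, gamma, eps, beta
        self.best = None                  # memory of best past episode return

    def qval(self, s, a):
        return self.q.get((s, a), 0.0)

    def act(self, remaining):
        s = frozenset(remaining)
        if random.random() < self.eps:    # epsilon-greedy exploration
            return random.choice(sorted(remaining))
        return max(sorted(remaining), key=lambda a: self.qval(s, a))

    def learn(self, s, a, r, s2):
        nxt = max((self.qval(s2, a2) for a2 in s2), default=0.0)
        self.q[(s, a)] = self.qval(s, a) + self.alpha * (r + self.gamma * nxt - self.qval(s, a))

    def intrinsic(self, ep_return):
        # memory-based shaping: bonus proportional to improvement over the
        # remembered best episode return
        bonus = 0.0 if self.best is None else self.beta * max(0.0, ep_return - self.best)
        self.best = ep_return if self.best is None else max(self.best, ep_return)
        return bonus

def episode(crew):
    remaining, traj, t = set(range(len(BENEFIT))), [], 0
    while remaining:
        s = frozenset(remaining)
        a = crew.act(remaining)
        r = BENEFIT[a] * 0.9 ** t         # earlier restoration yields more value
        remaining.discard(a)
        traj.append((s, a, r, frozenset(remaining)))
        t += 1
    ep_return = sum(r for _, _, r, _ in traj)
    s, a, r, s2 = traj[-1]                # fold intrinsic bonus into final reward
    traj[-1] = (s, a, r + crew.intrinsic(ep_return), s2)
    for s, a, r, s2 in reversed(traj):    # backward sweep speeds propagation
        crew.learn(s, a, r, s2)
    return ep_return

crew = Crew()
for _ in range(500):
    episode(crew)

crew.eps = 0.0                            # evaluate the learned greedy schedule
remaining, order = set(range(len(BENEFIT))), []
while remaining:
    a = crew.act(remaining)
    order.append(a)
    remaining.discard(a)
print(order)                              # learned schedule: highest-benefit tasks first
```

Because the bonus vanishes once the remembered best matches the optimal return, the shaping fades out as learning converges, leaving the crew's policy driven by the underlying restoration benefits.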