Track: Artificial Intelligence
Abstract
Project management is essential for achieving objectives within specified timeframes, but real-world projects are exposed to risks that can impede progress. Robust risk management systems reduce associated costs and enable proactive risk mitigation. Yet current risk analysis methods struggle with the complexity of modern projects, motivating the exploration of artificial intelligence (AI) techniques. AI, particularly machine learning, builds predictive models from historical data for precise risk assessment, but the opacity of these models raises transparency and fairness concerns. eXplainable AI (XAI) addresses these concerns by enhancing model interpretability. XAI is especially important in project risk management given each project's socio-economic impact and ethical implications: stakeholders must ensure that decision-making processes follow sound reasoning and satisfy ethical standards. However, the popular XAI technique Local Interpretable Model-agnostic Explanations (LIME) has limitations in capturing complex feature interactions and handling class imbalance. To address these shortcomings, the proposed approach integrates a Variational Autoencoder (VAE), which generates meaningful neighborhood samples, with a rule-based decision tree model that captures complex interactions and nonlinearity. The expected contributions of this study include improved interpretability and accuracy, support for complex models, and the advancement of XAI techniques in project risk management. To accomplish these goals, the method first selects an instance to explain, then generates a synthetic neighborhood around it using the VAE, and finally fits a rule-based decision tree to that neighborhood. This integrated approach aims to enhance interpretability, generate meaningful samples, and provide transparent explanations that support informed risk strategies in project risk management.
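To make the three-step pipeline concrete (instance selection, VAE-based neighborhood generation, surrogate tree fitting), the following is a minimal sketch, not the authors' implementation. It assumes tabular, numerically encoded project-risk data, an already-trained black-box classifier exposed as a callable `black_box`, PyTorch for the VAE, and scikit-learn for the rule-based surrogate; the names `VAE`, `train_vae`, and `explain_instance` are illustrative placeholders.

```python
# Hypothetical sketch of VAE-neighborhood LIME with a decision-tree surrogate.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier, export_text

class VAE(nn.Module):
    """Small VAE that learns the manifold of the tabular training data."""
    def __init__(self, n_features, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def train_vae(vae, X, epochs=200, lr=1e-3):
    """Fit the VAE on the full training matrix X (numpy, shape [n, d])."""
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    X_t = torch.tensor(X, dtype=torch.float32)
    for _ in range(epochs):
        recon, mu, logvar = vae(X_t)
        # Reconstruction loss plus KL divergence to the unit-Gaussian prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, X_t, reduction="sum") + kl
        opt.zero_grad()
        loss.backward()
        opt.step()

def explain_instance(x, vae, black_box, n_samples=500, scale=0.5):
    """LIME-style local explanation around instance x (numpy, shape [d])."""
    with torch.no_grad():
        # Encode the instance, then perturb in latent space and decode,
        # so that neighbors stay on the learned data manifold rather
        # than being raw Gaussian noise around x (standard LIME).
        h = vae.encoder(torch.tensor(x, dtype=torch.float32).unsqueeze(0))
        mu = vae.mu(h)
        z = mu + scale * torch.randn(n_samples, mu.shape[1])
        neighborhood = vae.decoder(z).numpy()
    labels = black_box(neighborhood)      # query the opaque risk model
    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(neighborhood, labels)   # rule-based local surrogate
    return export_text(surrogate)         # human-readable if-then rules
```

The design choice worth noting is the perturbation step: sampling in the VAE's latent space and decoding keeps synthetic neighbors close to the data distribution, which is what the abstract credits with producing more meaningful samples than LIME's feature-space perturbations, while the depth-limited tree turns the black box's local behavior into a handful of inspectable if-then rules.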