Artificial Intelligence (AI) is increasingly used to improve operational efficiency, forecast demand, support logistics, and evaluate supplier risk as supply chains grow more complex and globalized. However, many AI models are black boxes, making it difficult to explain to decision-makers why a particular recommendation was made. This lack of interpretability limits trust and adoption, and it creates compliance burdens in regulated industries. Explainable Artificial Intelligence (XAI) addresses this problem by bringing transparency and interpretability to AI-powered supply chains. This paper presents a systematic review of XAI applications in Supply Chain Management (SCM), covering demand forecasting, inventory optimization, supplier risk evaluation, and logistics planning. Following established systematic-review guidelines, we ask which XAI approaches are most frequently implemented, how they are applied, and whether their trade-offs among accuracy, interpretability, and computational performance are adequate. The findings show that SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) are the most widely used post-hoc interpretability methods, and that decision-tree-based and rule-based models can provide intrinsic transparency in SCM decision-making. Open challenges include balancing interpretability against predictive accuracy, the computational cost of real-time explanation, and data privacy concerns involving supplier and customer information. The paper concludes by proposing hybrid XAI techniques, benchmark performance criteria, and privacy-preserving mechanisms specific to the SCM domain, as steps toward more transparent, efficient, and trusted supply chains.
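To make the post-hoc attribution idea behind SHAP concrete, the sketch below computes exact Shapley values for a toy demand model. The feature names, effect sizes, and `predict` function are purely illustrative assumptions, not from the reviewed literature; practical SHAP implementations approximate this exponential-time computation for tractability.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's attribution is its average
    marginal contribution over all coalitions of the other features.
    Feasible only for a handful of features (2^n coalitions)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                s = frozenset(coalition)
                phi[f] += weight * (value_fn(s | {f}) - value_fn(s))
    return phi

# Hypothetical additive demand model: predicted weekly demand given
# which feature groups are active (names and numbers are made up).
baseline = 100.0
effects = {"promotion": 30.0, "season": 20.0, "price_cut": 10.0}

def predict(active):
    return baseline + sum(effects[f] for f in active)

attributions = shapley_values(list(effects), predict)
```

Because this toy model is additive, each Shapley value coincides with the feature's standalone effect, and the attributions plus the baseline sum exactly to the full prediction; that efficiency property is what lets a planner decompose a forecast into per-driver contributions.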