Semantics-based Explainable Artificial Intelligence (XAI) Approach for Sustainable Supply Chain Management
In recent years, companies have faced societal, governmental and competitive pressures to focus on the social and environmental consequences of their supply chain (SC) processes. Businesses globally are increasingly held accountable for sustainability and are expected to adapt their practices and to administer their resources better. The SC has become a source of competitive advantage for sustainable business models rather than merely a delivery function (Naz et al., 2021). Moreover, as Shi et al. (2019) concluded, around 90% of carbon emissions occur in SCs, which necessitates an increased emphasis on Green or Sustainable Supply Chain (SSC) activities to curb environmental and social threats. Over the past two decades, SSC management (SSCM) has emerged as an approach for improving sustainability outcomes in supply chains. Sustainability outcomes encompass the adoption of environmentally and socially responsible practices as well as the achievement of environmental, social or economic performance. One of the main issues in SSCM concerns decision making to evaluate SSC alternatives and define an SSC configuration. Usually, this issue is tackled using optimization techniques (Awasthi and Kannan, 2016; Qin et al., 2017). However, with recent advances in technology, artificial intelligence (AI) techniques promise to address more complex decision-making problems. As the role of AI in decision making expands, so does the need to exploit AI to add value to SSCM. Many initiatives have explored the potential of AI for SSCM problems (Fallahpour et al., 2016; Sari, 2017; dos Santos et al., 2019; de la Torre et al., 2021). The future of SSCM lies in the adoption of AI to better cross-analyze sustainability drivers and to predict SSC behavior and performance.
However, AI techniques are black boxes that do not allow humans to understand how a decision was made. Moreover, in an SC, a multitude of stakeholders is involved in decision-making processes; some take a more quantitative approach to decision making, whereas others take a more intuitive one. It is therefore important to have a transparent and trustworthy AI approach.
To understand and appropriately trust AI results, Explainable AI (XAI) is considered the key solution (Páez, 2019). XAI, also known as transparent AI or interpretable AI, is about explaining the results generated by AI systems to make them more understandable and trustworthy for users. To do so, XAI requires a thorough understanding of the processes involved as well as a high level of domain knowledge. This can be addressed by modelling the corresponding semantics using ontologies. An ontology is an explicit, formal semantic representation of knowledge expressed through logical axioms. Ontologies and knowledge graphs have the potential to add highly predictive features to learning models based on the semantic correlation of data. Accordingly, XAI, and specifically semantics-based XAI (S-XAI), can facilitate the adoption of AI-based tools in complex domains like the SSC by providing more context-rich explanations to the user. In the literature, XAI remains underdeveloped and only superficially addressed in the SCM domain, and there is a need for the development of such an approach (Mugurusi and Oluka, 2021).
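To illustrate how ontology axioms can contribute semantic features to a learning model, the following is a minimal sketch in Python using the rdflib library; the ssc namespace, the Supplier and GreenSupplier classes, and the carbonFootprint property are hypothetical, introduced only for this example.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    # Hypothetical SSC vocabulary, introduced only for illustration
    SSC = Namespace("http://example.org/ssc#")

    g = Graph()
    g.bind("ssc", SSC)

    # Domain axiom: GreenSupplier is a subclass of Supplier
    g.add((SSC.GreenSupplier, RDFS.subClassOf, SSC.Supplier))

    # Instance data: supplierA is asserted only as a GreenSupplier
    g.add((SSC.supplierA, RDF.type, SSC.GreenSupplier))
    g.add((SSC.supplierA, SSC.carbonFootprint, Literal(12.5)))

    # A property-path query retrieves supplierA as a Supplier via the
    # subclass axiom, a fact not present in the raw instance data
    results = g.query("""
        PREFIX ssc: <http://example.org/ssc#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?supplier ?footprint WHERE {
            ?supplier a/rdfs:subClassOf* ssc:Supplier .
            ?supplier ssc:carbonFootprint ?footprint .
        }
    """)
    for supplier, footprint in results:
        print(supplier, footprint)  # http://example.org/ssc#supplierA 12.5

Here, supplierA is never directly asserted to be a Supplier; that fact follows from the subclass axiom and is surfaced by the query. Semantic features derived in this way could complement raw attributes in a learning model, and the supporting axioms could later be cited back to the user as part of an explanation.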
The goal of this PhD thesis is to propose an S-XAI approach that investigates whether a current SC process is sustainable and, if not, how to improve it, and that provides human experts with meaningful explanations of how the decision process is conducted. With such an approach, all stakeholders can participate in the collaborative decision-making process while having confidence in the AI's reasoning.