DOI: 10.17587/prin.16.470-479
A Method for Explaining the Results of Artificial Intelligence Models based on the Shapley Algorithm and a Generative Language Model
P. V. Matrenin, PhD, Leading Researcher of the Scientific Laboratory, p.v.matrenin@urfu.ru,
Ural Federal University named after the first President of Russia B. N. Yeltsin, Ekaterinburg, 620062, Russian Federation
Corresponding author: Pavel V. Matrenin, PhD, Leading Researcher of the Scientific Laboratory, Ural Federal University named after the first President of Russia B. N. Yeltsin, Ekaterinburg, 620062, Russian Federation, E-Mail: p.v.matrenin@urfu.ru
Received on May 05, 2025
Accepted on June 11, 2025
Improving the human-machine interfaces of intelligent systems is a pressing issue that involves creating user-friendly explanations of the results of artificial intelligence models. This is especially important for deploying intelligent systems in domains where decisions carry high responsibility, such as the power industry. The article describes a method based on the Shapley additive explanation algorithm. The proposed modification combines: normalization of the feature contribution vector; semantic grouping of features; a visualization of contributions that differs from the currently accepted one; and generation of textual explanations by a language model. The method aims to reduce the cognitive load on the user when analyzing the results of intelligent decision support systems and to increase the confidence of industry experts in such systems. As an example, the problem of short-term forecasting of the electricity consumption of an industrial enterprise is considered.
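The pipeline outlined in the abstract (normalize the feature contribution vector, group features semantically, then pass the grouped shares to a language model for a textual explanation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names, group labels, and contribution values are hypothetical, and in the actual method the contributions would come from SHAP values of a trained forecasting model.

```python
# Minimal sketch of the modified SHAP explanation pipeline.
# Feature names, groups, and values are hypothetical illustrations.

def normalize_contributions(shap_values):
    """Normalize absolute feature contributions to percentages (sum = 100)."""
    total = sum(abs(v) for v in shap_values.values())
    return {f: 100.0 * abs(v) / total for f, v in shap_values.items()}

def group_contributions(normalized, groups):
    """Sum normalized contributions over semantic feature groups."""
    grouped = {}
    for feature, share in normalized.items():
        grouped[groups[feature]] = grouped.get(groups[feature], 0.0) + share
    return grouped

def explanation_prompt(grouped):
    """Build a prompt asking a language model for a textual explanation."""
    lines = [f"{g}: {share:.1f}%"
             for g, share in sorted(grouped.items(), key=lambda kv: -kv[1])]
    return ("Explain to a power-industry expert which factor groups drove "
            "the electricity consumption forecast:\n" + "\n".join(lines))

# Hypothetical per-feature SHAP values for a single forecast
shap_values = {"lag_24h": 0.8, "lag_168h": 0.4, "temperature": 0.5,
               "day_of_week": 0.2, "production_plan": 0.6}
groups = {"lag_24h": "consumption history", "lag_168h": "consumption history",
          "temperature": "weather", "day_of_week": "calendar",
          "production_plan": "technological factors"}

grouped = group_contributions(normalize_contributions(shap_values), groups)
print(explanation_prompt(grouped))
```

In this sketch the grouped shares would read, e.g., "consumption history: 48.0%", and the resulting prompt would be sent to a generative language model to produce the user-facing explanation.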
Keywords: explainable artificial intelligence, human-machine interface, decision support system, language model, time series forecasting
pp. 470—479
For citation:
Matrenin P. V. A Method for Explaining the Results of Artificial Intelligence Models based on the Shapley Algorithm and a Generative Language Model, Programmnaya Ingeneria, 2025, vol. 16, no. 9, pp. 470—479. DOI: 10.17587/prin.16.470-479. (in Russian).
References:
- Adadi A., Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), IEEE Access, 2018, vol. 6, pp. 52138—52160. DOI: 10.1109/ACCESS.2018.2870052.
- Ali S., Abuhmed T., El-Sappagh S. et al. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence, Information Fusion, 2023, vol. 99, article 101805. DOI: 10.1016/j.inffus.2023.101805.
- GOST R 71476—2024. Artificial intelligence. Artificial intelligence concepts and terminology. Moscow, Rossiyskiy institut standartizatsii, 2024 (in Russian).
- Khalyasmaa A. I., Stepanova A. I., Zinovieva E. L. Legal aspects of the use of intelligent systems and methods of their interpretation, Novosibirsk, Novosibirskiy gosudarstvennyy tekhnicheskiy universitet, 2024, 191 p. (in Russian).
- Molnar C. Interpretable Machine Learning. A Guide for Making Black Box Models Explainable, available at: https://christophm.github.io/interpretable-ml-book (date of access 11.04.2025).
- Kivchun O. R., Gnatuk V. I. Methodology for forecasting the power consumption of objects based on the values of rank phase angles, Morskiye intellektual'nyye tekhnologii, 2022, no. 4—3 (58), pp. 105—109. DOI: 10.37220/MIT.2022.58.4.070 (in Russian).
- Glazyrin A. S., Bolovin E. V., Arkhipova O. V. et al. Adaptive short-term forecasting of electricity consumption by autonomous power systems of small northern settlements based on retrospective regression analysis methods, Bulletin of Tomsk Polytechnic University. Engineering of Georesources, 2023, vol. 334, no. 4, pp. 231—248. DOI: 10.18799/24131830/2023/4/4213 (in Russian).
- Khalyasmaa A. I., Revenkov I. S., Sidorova A. V. Application of digital twin technology for analysis and forecasting of transformer equipment condition, Vestnik Kazanskogo gosudarstvennogo energeticheskogo universiteta, 2022, vol. 14, no. 3 (55), pp. 99—113 (in Russian).
- Slack D., Hilgard S., Jia E. et al. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods, AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 180—186.
- Kuzlu M., Cali U., Sharma V., Guler O. Gaining Insight into Solar Photovoltaic Power Generation Forecasting Utilizing Explainable Artificial Intelligence Tools, IEEE Access, 2020, vol. 8, pp. 187814—187823. DOI: 10.1109/ACCESS.2020.3031477.
- Matrenin P. V., Gamaley V. V., Khalyasmaa A. I., Stepanova A. I. Solar Irradiance Forecasting with Natural Language Processing of Cloud Observations and Interpretation of Results with Modified Shapley Additive Explanations, Algorithms, 2024, vol. 17, no. 4, article 150. DOI: 10.3390/a17040150.
- Stepanova A. I., Khalyasmaa A. I., Matrenin P. V., Eroshenko S. A. Application of SHAP and Multi-Agent Approach for Short-Term Forecast of Power Consumption of Gas Industry Enterprises, Algorithms, 2024, vol. 17, no. 10, article 447. DOI: 10.3390/a17100447.
- Baur L., Ditschuneit K., Schambach M. et al. Explainability and Interpretability in Electric Load Forecasting Using Machine Learning Techniques — A Review, Energy and AI, 2024, vol. 16, article 100358. DOI: 10.1016/j.egyai.2024.100358.
- Neubauer A., Brandt S., Kriegel M. Explainable multi-step heating load forecasting: Using SHAP values and temporal attention mechanisms for enhanced interpretability, Energy and AI, 2025, vol. 20, article 100480. DOI: 10.1016/j.egyai.2025.100480.
- Ribeiro M. T., Singh S., Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics, 2016, pp. 97—101. DOI: 10.48550/arXiv.1602.04938.
- Lundberg S. M., Su-In Lee. A unified approach to interpreting model predictions, NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 4768—4777. DOI: 10.48550/arXiv.1705.07874.
- Matrenin P. V., Stepanova A. I. Enhancing the interpretability of electricity consumption forecasting models for mining enterprises using SHapley Additive exPlanations, Journal of Mining Institute, 2025, vol. 271, pp. 154—167.
- Stepanova A. I., Khalyasmaa A. I., Matrenin P. V. Short-term forecasting of the load of an oil and gas industry enterprise using technological factors and the additive Shapley explanation, Izvestiya vysshikh uchebnykh zavedeniy. Problemy energetiki, 2024, vol. 26, no. 4, pp. 75—88. DOI: 10.30724/1998-9903-2024-26-4-7588 (in Russian).
- Gorshenin A. Yu. Formation of a sample of initial data for machine learning of a short-term forecasting model of electricity consumption, Avtomatizatsiya v promyshlennosti, 2023, vol. 10, pp. 37—41. DOI: 10.25728/avtprom.2023.10.08 (in Russian).
- Sergeev N. N., Matrenin P. V. Improving the accuracy of forecasting the electricity consumption of an industrial enterprise using machine learning methods by selecting significant features from a time series, iPolytech Journal, 2022, vol. 26, no. 3, pp. 487— 498. DOI: 10.21285/1814-3520-2022-3-487-498 (in Russian).
- Blokhin A. V. Development and verification of a short-term forecasting system for electricity consumption of a resource supplying enterprise, Izvestiya Tul'skogo gosudarstvennogo universiteta. Tekhnicheskiye nauki, 2024, vol. 10, pp. 261—267 (in Russian).
- Blokhin A. V., Gritsai A. S., Gorshenin A. Yu. Study of factors influencing electricity consumption by a commercial enterprise, Matematicheskiye struktury i modelirovaniye, 2022, no. 3 (63), pp. 39—47. DOI: 10.24147/2222-8772.2022.3.39-47 (in Russian).
- Wang J., Chen Y., Giudici P. Group Shapley with Robust Significance Testing and Its Application to Bond Recovery Rate Prediction, ArXiv, 2025. DOI: 10.48550/arXiv.2501.03041.
- Jullum M., Redelmeier A., Aas K. groupShapley: Efficient prediction explanation with Shapley values for feature groups. ArXiv. 2021. DOI: 10.48550/arXiv.2106.12228.
- Zeng X. Enhancing the Interpretability of SHAP Values Using Large Language Models, ArXiv, 2024. DOI: 10.48550/arXiv.2409.00079.
- Khediri A., Slimi H., Yahiaoui A. et al. Enhancing Machine Learning Model Interpretability in Intrusion Detection Systems through SHAP Explanations and LLM-Generated Descriptions, 2024 6th International Conference on Pattern Analysis and Intelligent Systems (PAIS), El Oued, Algeria, 2024, pp. 1—6. DOI: 10.1109/PAIS62114.2024.10541168.
- Lim B., Huerta R., Sotelo A. et al. EXPLICATE: Enhancing Phishing Detection through Explainable AI and LLM-Powered Interpretability, ArXiv, 2025. DOI: 10.48550/arXiv.2503.20796.
- Haghshenas Y., Wong W. P., Gunawan D. et al. Predicting the rates of photocatalytic hydrogen evolution over cocatalyst-deposited TiO2 using machine learning with active photon flux as a unifying feature, EES Catalysis, 2024, vol. 2, pp. 612—623. DOI: 10.1039/d3ey00246b.
- Smith A. H., Gray G. M., Ashfaq A. et al. Using machine learning to predict five-year transplant-free survival among infants with hypoplastic left heart syndrome, Scientific reports, 2024, vol. 14, article 4512. DOI: 10.1038/s41598-024-55285-1.
- Duan H., Okten G. Derivative-based Shapley value for global sensitivity analysis and machine learning explainability. ArXiv, 2023. DOI: 10.48550/arXiv.2303.15183.
- Blasio A. J., Bisantz A. M. A comparison of the effects of data-ink ratio on performance with dynamic displays in a monitoring task, International Journal of Industrial Ergonomics, 2002, vol. 30, no. 2, pp. 89—101. DOI: 10.1016/S0169-8141(02)00074-4.
- Xue L., Constant N., Roberts A. et al. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021, pp. 483—498. DOI: 10.18653/v1/2021.naacl-main.41.
- Lin Ch.-Y. ROUGE: A Package for Automatic Evaluation of Summaries, Text Summarization Branches Out, Barcelona, Spain, Association for Computational Linguistics, 2004, pp. 74—81.