A4 Article in conference proceedings
Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models (2022)
Terziyan, V., & Vitko, O. (2022). Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models. In F. Longo, M. Affenzeller, & A. Padovano (Eds.), 3rd International Conference on Industry 4.0 and Smart Manufacturing (pp. 216-226). Elsevier. Procedia Computer Science, 200. https://doi.org/10.1016/j.procs.2022.01.220
JYU authors or editors
Publication details
All authors or editors: Terziyan, Vagan; Vitko, Oleksandra
Parent publication: 3rd International Conference on Industry 4.0 and Smart Manufacturing
Parent publication editors: Longo, Francesco; Affenzeller, Michael; Padovano, Antonio
Conference: International Conference on Industry 4.0 and Smart Manufacturing
Place and date of conference: Linz, Austria, 17.-19.11.2021
Journal or series: Procedia Computer Science
ISSN: 1877-0509
eISSN: 1877-0509
Publication year: 2022
Number in series: 200
Pages range: 216-226
Number of pages in the book: 1918
Publisher: Elsevier
Publication country: Netherlands
Publication language: English
DOI: https://doi.org/10.1016/j.procs.2022.01.220
Publication open access: Openly available
Publication channel open access: Open Access channel
Publication is parallel published (JYX): https://jyx.jyu.fi/handle/123456789/80213
Abstract
Artificial Intelligence is an important asset of Industry 4.0. Current advances in machine learning, and particularly in deep learning, enable qualitative change within industrial processes, applications, systems and products. However, an important challenge remains: the decisions made by deep learning models (aka black-boxes) are hard to explain (and, therefore, to trust), and such models have poor capacity for integration with each other. Explainable artificial intelligence is needed instead, but without losing the effectiveness of the deep learning models. In this paper, we present a technique for transforming black-box models into explainable (as well as interoperable) classifiers based on semantic rules, via automatic recreation of the training datasets and retraining of intermediate decision trees (explainable models). Our transformation technique results in explainable rule-based classifiers with good performance and an efficient training process, due to embedded algorithms for incremental ignorance discovery and adversarial sample ("corner case") generation. We also present a use-case scenario for such explainable and interoperable classifiers: collaborative condition monitoring, diagnostics and predictive maintenance of distributed (and isolated) smart industrial assets, while preserving the data and knowledge privacy of the users.
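The core transformation described in the abstract (querying a trained black-box model to recreate a labelled dataset, retraining a decision tree on it, and reading the tree back as rules) can be illustrated with a minimal sketch. The code below is not the authors' implementation: the black-box is a stand-in MLP, the sampling strategy is plain random sampling, and the paper's incremental ignorance discovery and adversarial "corner case" generation algorithms are not reproduced; the mapping of the extracted rules to semantic (e.g. RDF/OWL-based) representations is only indicated in a comment.

```python
# Sketch: distill a black-box classifier into an explainable decision-tree
# surrogate by recreating a training dataset from the black-box's predictions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# 1. Train a black-box model (stand-in for any deep learning classifier).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(X, y)

# 2. Recreate a training dataset by sampling the input space and
#    labelling the samples with the black-box's own predictions.
rng = np.random.default_rng(0)
X_sampled = rng.normal(size=(5000, X.shape[1]))
y_sampled = black_box.predict(X_sampled)

# 3. Retrain an explainable surrogate (decision tree) on the recreated data.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_sampled, y_sampled)

# 4. Export the tree as human-readable if-then rules; these could then be
#    expressed as semantic rules for interoperability between classifiers.
feature_names = [f"x{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))

# Measure fidelity of the surrogate to the black-box on fresh samples.
X_test = rng.normal(size=(1000, X.shape[1]))
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print("fidelity to black-box:", fidelity)
```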
Keywords: industry; production technology; condition monitoring; maintenance; artificial intelligence; machine learning; deep learning; semantic web
Free keywords: Explainable Artificial Intelligence; Industry 4.0; semantic web; predictive maintenance
Contributing organizations
Ministry reporting: Yes
Reporting Year: 2022
JUFO rating: 1