A4 Article in conference proceedings
Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models (2022)


Terziyan, V., & Vitko, O. (2022). Explainable AI for Industry 4.0: Semantic Representation of Deep Learning Models. In F. Longo, M. Affenzeller, & A. Padovano (Eds.), 3rd International Conference on Industry 4.0 and Smart Manufacturing (Vol. 200, pp. 216-226). Elsevier. Procedia Computer Science. https://doi.org/10.1016/j.procs.2022.01.220


JYU authors or editors


Publication details

All authors or editors: Terziyan, Vagan; Vitko, Oleksandra

Parent publication: 3rd International Conference on Industry 4.0 and Smart Manufacturing

Parent publication editors: Longo, Francesco; Affenzeller, Michael; Padovano, Antonio

Conference:

  • International Conference on Industry 4.0 and Smart Manufacturing

Place and date of conference: Linz, Austria, 17.-19.11.2021

Journal or series: Procedia Computer Science

ISSN: 1877-0509

eISSN: 1877-0509

Publication year: 2022

Volume: 200

Pages range: 216-226

Number of pages in the book: 1918

Publisher: Elsevier

Publication country: Netherlands

Publication language: English

DOI: https://doi.org/10.1016/j.procs.2022.01.220

Publication open access: Openly available

Publication channel open access: Open Access channel

Publication is parallel published (JYX): https://jyx.jyu.fi/handle/123456789/80213


Abstract

Artificial Intelligence is an important asset of Industry 4.0. Recent advances in machine learning, and particularly in deep learning, enable qualitative change within industrial processes, applications, systems and products. However, an important challenge remains: the decisions made by deep learning models (aka black boxes) are difficult to explain (and, therefore, to trust), and such models have poor capacity for being integrated with each other. Explainable artificial intelligence is needed instead, but without loss of the deep learning models' effectiveness. In this paper we present a transformation technique between black-box models and explainable (as well as interoperable) classifiers based on semantic rules, via automatic recreation of the training datasets and retraining of decision trees (explainable models) in between. Our transformation technique results in explainable rule-based classifiers with good performance and an efficient training process, owing to embedded algorithms for incremental ignorance discovery and for generation of adversarial samples ("corner cases"). We also show a use-case scenario for such explainable and interoperable classifiers: collaborative condition monitoring, diagnostics and predictive maintenance of distributed (and isolated) smart industrial assets while preserving the data and knowledge privacy of the users.
See presentation slides: https://ai.it.jyu.fi/ISM-2021-XAI.pptx
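The core of the transformation described in the abstract (recreating a training dataset, labelling it with the black-box model, and retraining a decision tree on those labels so its paths can be read as rules) is a surrogate-distillation step. Below is a minimal sketch of that idea, assuming scikit-learn; the synthetic dataset, the MLP stand-in for a deep model, and the naive noise-based sample recreation are illustrative assumptions only, not the authors' implementation, which additionally includes incremental ignorance discovery and adversarial "corner case" generation.

```python
# Minimal sketch (not the authors' code): distill a black-box classifier into
# a decision tree by re-labelling a recreated dataset with the black box,
# then export the tree as human-readable IF-THEN rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# 1. A "black-box" model (stand-in for any opaque deep learning classifier).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# 2. Recreate a training set: sample points around the original data
#    (a crude stand-in for the paper's dataset-recreation step) and label
#    them with the black box's predictions, not the ground truth.
rng = np.random.default_rng(0)
X_recreated = np.vstack([X, X + rng.normal(scale=0.3, size=X.shape)])
y_recreated = black_box.predict(X_recreated)

# 3. Retrain an explainable surrogate on the black box's own decisions.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_recreated, y_recreated)

# 4. Fidelity: how often the tree reproduces the black box's decisions,
#    plus the tree's rules, which could be serialized as semantic rules.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate,
                  feature_names=[f"x{i}" for i in range(X.shape[1])]))
```

In this sketch the surrogate's fidelity is measured against the black box rather than the ground truth, since the point of the transformation is to reproduce (and thereby explain) the black box's decision behaviour in rule form.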


Keywords: industry; production technology; condition monitoring; maintenance; artificial intelligence; machine learning; deep learning; semantic web

Free keywords: Explainable Artificial Intelligence; Industry 4.0; semantic web; predictive maintenance


Contributing organizations


Ministry reporting: Yes

Reporting Year: 2022

JUFO rating: 1


Last updated on 2024-03-04 at 19:05