TY - JOUR
T1 - TranDRL
T2 - A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework
AU - Zhao, Yang
AU - Yang, Jiaxi
AU - Wang, Wenbo
AU - Yang, Helin
AU - Niyato, Dusit
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Industrial systems require reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime. Existing studies rely on heuristic models, which may struggle to capture complex temporal dependencies. This article introduces an integrated framework that leverages the capabilities of the Transformer and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions. Our approach employs the Transformer model to effectively capture complex temporal patterns in IoT sensor data, thus accurately predicting the remaining useful life (RUL) of equipment. Additionally, the DRL component of our framework provides cost-effective and timely maintenance recommendations. Extensive experiments conducted on the NASA C-MAPSS data set demonstrate that our approach closely matches the ground-truth results and clearly outperforms the baseline methods in RUL prediction accuracy as the time cycle increases. The experimental results also demonstrate the effectiveness of the optimized maintenance actions.
AB - Industrial systems require reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime. Existing studies rely on heuristic models, which may struggle to capture complex temporal dependencies. This article introduces an integrated framework that leverages the capabilities of the Transformer and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions. Our approach employs the Transformer model to effectively capture complex temporal patterns in IoT sensor data, thus accurately predicting the remaining useful life (RUL) of equipment. Additionally, the DRL component of our framework provides cost-effective and timely maintenance recommendations. Extensive experiments conducted on the NASA C-MAPSS data set demonstrate that our approach closely matches the ground-truth results and clearly outperforms the baseline methods in RUL prediction accuracy as the time cycle increases. The experimental results also demonstrate the effectiveness of the optimized maintenance actions.
KW - Deep reinforcement learning (DRL)
KW - prescriptive maintenance
KW - transformer
UR - https://www.scopus.com/pages/publications/85200240920
U2 - 10.1109/jiot.2024.3436110
DO - 10.1109/jiot.2024.3436110
M3 - Article
AN - SCOPUS:85200240920
SN - 2327-4662
VL - 11
SP - 35432
EP - 35444
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 21
ER -