Institutional Repository of UMBB > Publications Scientifiques > Publications Internationales
Please use this address to cite this document:
http://dlibrary.univ-boumerdes.dz:8080/handle/123456789/14154
Title: | LSTM-Autoencoder Deep Learning Model for Anomaly Detection in Electric Motor |
Author(s): | Lachekhab, Fadhila; Benzaoui, Messouada; Tadjer, Sid Ahmed; Bensmaine, Abdelkrim; Hamma, Hichem |
Keywords: | Anomaly detection; Autoencoder model; Deep learning model; Electrical machine; Long short-term memory algorithm |
Publication date: | 2024 |
Publisher: | Multidisciplinary Digital Publishing Institute (MDPI) |
Series/Issue: | Energies / Vol. 17, No. 10 (2024), Art. No. 2340; pp. 1-18 |
Abstract: | Anomaly detection is the process of detecting unusual or unforeseen patterns or events in data. Many factors, such as malfunctioning hardware, malevolent activities, or shifts in the data's underlying distribution, can cause anomalies. A key factor in anomaly detection is balancing the trade-off between sensitivity and specificity, which requires careful tuning of the detection algorithm and consideration of the specific domain and application. Applications of deep learning techniques, such as LSTM (long short-term memory) autoencoders for anomaly detection, have garnered increasing attention in recent years. The main goal of this work was to develop an anomaly detection solution for an electrical machine using an LSTM-autoencoder deep learning model. The work focused on detecting anomalies in the vibration variations of an electric motor along three axes: axial (X), radial (Y), and tangential (Z), which are indicative of potential faults or failures. The presented model combines the two architectures: LSTM layers were added to the autoencoder to leverage the LSTM's capacity for handling large amounts of temporal data. To assess the LSTM's contribution, a regular autoencoder model was built using the Python programming language and the TensorFlow machine learning framework, and its performance was compared with that of the main LSTM-based autoencoder model. The two models were trained on the same database and evaluated on three primary criteria: training time, loss function, and MSE of anomalies. The obtained results show that the LSTM-autoencoder achieves significantly smaller loss values and anomaly MSEs than the regular autoencoder, whereas the regular autoencoder trains faster. 
It appears, then, that the LSTM-autoencoder offers superior performance overall, although it is slower than the standard autoencoder due to the complexity of the added LSTM layers. |
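The reconstruction-error criterion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function names, the synthetic data, and the mean-plus-3-sigma threshold rule are all illustrative assumptions; in the paper the reconstructions would come from the trained LSTM-autoencoder applied to windows of 3-axis (X, Y, Z) vibration data.

```python
import numpy as np

def window_mse(original: np.ndarray, reconstruction: np.ndarray) -> np.ndarray:
    """Per-window mean squared error, averaged over time steps and the 3 axes."""
    return ((original - reconstruction) ** 2).mean(axis=(1, 2))

def anomaly_mask(train_errors: np.ndarray, test_errors: np.ndarray,
                 k: float = 3.0) -> np.ndarray:
    """Flag windows whose MSE exceeds mean + k*std of the training errors.
    (The k-sigma rule is an assumed convention, not taken from the paper.)"""
    threshold = train_errors.mean() + k * train_errors.std()
    return test_errors > threshold

# Tiny synthetic example: 100 windows of 32 time steps over 3 vibration axes.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 32, 3))
x_hat = x + rng.normal(scale=0.05, size=x.shape)   # healthy: small residual
train_err = window_mse(x, x_hat)

y = rng.normal(size=(10, 32, 3))
y_hat = y + rng.normal(scale=0.5, size=y.shape)    # faulty: large residual
test_err = window_mse(y, y_hat)

print(anomaly_mask(train_err, test_err))           # large-residual windows flagged
```

Windows the model reconstructs poorly (high MSE relative to the healthy-data baseline) are the ones reported as anomalies; the same comparison underlies the "MSE of anomalies" criterion used to contrast the two models.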
URI/URL: | https://www.mdpi.com/1996-1073/17/10/2340 https://doi.org/10.3390/en17102340 http://dlibrary.univ-boumerdes.dz:8080/handle/123456789/14154 |
ISSN: | 1996-1073 |
Collection(s): | Publications Internationales |
File(s) in this document:
All documents in DSpace are protected by copyright, with all rights reserved.