Multi-Model AI-Based Mechanical Anomaly Detector w/ BLE
Mechanical anomaly detection is critical in autonomous manufacturing processes to prevent equipment failure, mitigate the impact of expensive overhaul procedures on the production line, reduce arduous diagnostic workloads, and improve workplace safety. In light of recent developments toward the fourth industrial revolution (Industry 4.0), many renowned companies have focused on enhancing manufacturing and production processes by applying artificial intelligence in tandem with the Internet of Things to detect anomalous behavior. Although companies take different approaches, and each technique has specific strengths and weaknesses depending on the underlying manufacturing mechanisms, autonomous anomaly detection enables businesses to preclude detrimental mechanical malfunctions that are challenging for operators to detect manually.
Nevertheless, a few grand challenges remain when applying mechanical anomaly detection to mass production operations, such as scarce data sources leading to false positives (or negatives) and time-consuming (or computationally expensive) machine learning methods. Since every manufacturing setup produces distinctive mechanical deviations, the optimal anomaly detection method should be deliberately specialized for the targeted setup so that it minimizes false negatives and maintains high precision. If a mechanical anomaly detection method is applied to interconnected manufacturing processes without proper tuning, it cannot pinpoint the potential root cause of a detected anomaly. In that regard, inefficient anomaly detection methods still require operators to conduct manual inspections to diagnose the crux of a system failure.
After inspecting recent research papers on autonomous anomalous behavior detection, I noticed that very few devices focus on both detecting mechanical deviations and diagnosing the root cause of the detected anomaly so as to provide operators with a precise maintenance analysis that expedites the overhaul process. Therefore, I decided to develop a device that detects mechanical anomalies based on sound (via an onboard microphone), diagnoses the root cause of the detected deviation via object detection, and then informs the user of the diagnosed root cause via SMS.
To detect mechanical anomalies and diagnose their root cause efficiently, I decided to build two different neural network models — audio classification and object detection — and run them on separate development boards to avoid memory allocation issues, latency, and reduced model accuracy due to multi-sensor conflicts.
Since the FireBeetle 2 ESP32-S3 is a high-performance, budget-friendly IoT development board providing a built-in OV2640 camera, 16 MB of Flash, and 8 MB of PSRAM, I decided to utilize it to run the object detection model. To run the neural network model for audio classification, I chose the Beetle ESP32-C3, an ultra-small IoT development board based on a RISC-V single-core processor. Then, I connected a Fermion 2.0'' IPS TFT display to the FireBeetle 2 ESP32-S3 in order to benefit from its built-in microSD card module while saving image samples and to notify the user of the device status by showing feature-associated icons. To perform on-device audio classification, I connected a Fermion I2S MEMS microphone to the Beetle ESP32-C3.
Even though this mechanical anomaly detector is composed of two separate development boards, I focused on enabling the user to access all interconnected device features (mostly via serial communication) within a single interface and to get notified of the root cause predicted by the two neural network models — sound-based and image-based. Since I wanted to capitalize on smartphone features (e.g., Wi-Fi, BLE, microphone) to build a capable mechanical anomaly detector, I decided to develop an Android application from scratch with MIT App Inventor. As the user interface of the anomaly detector, the Android application utilizes the Wi-Fi network connection to obtain object detection results with the resulting images from a web application, saves audio samples via the built-in phone microphone, and communicates with the Beetle ESP32-C3 over BLE to receive audio-based detection results and to transmit commands for image sample collection.
As explained earlier, each manufacturing setup requires a unique approach to mechanical anomaly detection, especially for interconnected processes. Hence, I decided to build a basic frequency-controlled apparatus based on an Arduino Mega to replicate mechanical anomalous behavior. I designed 3D-printed parts housing servo motors that drive a timing belt system consisting of a GT2 60T pulley, a GT2 20T pulley, and a 6 mm belt. Since I utilized potentiometers to adjust the servo motors, I was able to produce accurate audio samples of mechanical anomalies. Although I could generate anomalies by shifting the belt manually, I decided to design diverse 3D-printed components restricting the belt movement in order to demonstrate the root cause of the inflicted mechanical anomaly. In other words, these color-coded components represent the defective parts engendering mechanical anomalies in a production line. Since I did not connect a secondary SD card module to the Beetle ESP32-C3, I utilized the Android application to record audio samples of the inflicted anomalies via the phone microphone instead of the onboard I2S microphone. To collect image samples of the 3D-printed components, I utilized the built-in OV2640 camera on the FireBeetle 2 ESP32-S3 and saved them via the integrated microSD card module on the Fermion TFT display. In that regard, I was able to construct notable data sets for sound-based mechanical anomaly detection and image-based component (part) recognition.
After constructing the two data sets, I built my artificial neural network model (audio-based anomaly detection) and my object detection model (image-based component detection) with Edge Impulse to detect sound-based mechanical anomalies and to diagnose the root cause of the detected anomaly — the restricting component (part). I utilized the Edge Impulse FOMO (Faster Objects, More Objects) algorithm to train my object detection model; FOMO is a novel machine learning algorithm that brings object detection to highly constrained devices. Since Edge Impulse is compatible with nearly all microcontrollers and development boards, I did not encounter any issues while uploading and running both models on the FireBeetle 2 ESP32-S3 and the Beetle ESP32-C3. As the labels of the object detection model, I utilized the color-coded names of the 3D-printed components (parts):
For the neural network model, I simply differentiated the audio samples with labels denoting the operation status:
After training and testing my object detection (FOMO) model and my neural network model, I deployed both as Arduino libraries and uploaded them to the FireBeetle 2 ESP32-S3 and the Beetle ESP32-C3, respectively. Therefore, this mechanical anomaly detector can detect sound-based mechanical deviations and diagnose the root cause of the detected anomaly by running both models independently, without any additional procedures or latency.
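What makes FOMO feasible on constrained boards like these is that, unlike conventional object detectors, it predicts object centroids on a coarse confidence grid instead of regressing full bounding boxes. A toy Python sketch of that idea follows — the grid values, class, and threshold are made up for illustration and are not taken from the actual model:

```python
def fomo_centroids(grid, threshold=0.5):
    """Return the (row, col) cells of a FOMO-style per-class confidence
    grid whose score exceeds the threshold -- i.e., the detected object
    centroids for that class (toy illustration only)."""
    return [(r, c)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if v >= threshold]

# A hypothetical 3x3 confidence grid for one component class:
grid = [[0.1, 0.2, 0.0],
        [0.1, 0.9, 0.3],
        [0.0, 0.2, 0.8]]
print(fomo_centroids(grid))  # [(1, 1), (2, 2)]
```

In the real deployment, the Edge Impulse Arduino library performs this step internally and reports each detection with its grid-derived position and confidence.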
Since I focused on building a full-fledged AIoT mechanical anomaly detector, supporting only BLE data transmission to display the detection results from both models was not sufficient. Therefore, I decided to develop a versatile web application from scratch in order to obtain the object detection model predictions with the resulting images (including bounding box measurements) via HTTP POST requests from the FireBeetle 2 ESP32-S3, save the received information to a MySQL database table, and inform the user of the detected anomaly and the diagnosed root cause via SMS through Twilio's SMS API. Since the FireBeetle 2 ESP32-S3 cannot modify the resulting images to draw bounding boxes directly, the web application executes a Python script to convert the received raw image buffer (RGB565) to a JPG file and draw the bounding box on the generated image, with the measurements passed as command-line arguments. Furthermore, I employed the web application to transfer the latest model detection results, the prediction dates, and the modified resulting images (as URLs) to the Android application as a list via an HTTP GET request.
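The first step of that server-side Python script is unpacking the raw RGB565 buffer — 16 bits per pixel, with 5/6/5 bits for red/green/blue — into 8-bit RGB channels before a JPG can be encoded. A minimal sketch of that unpacking step is below; the byte order and the scaling-by-bit-replication are my assumptions for illustration, not details taken from the project's actual script:

```python
import struct

def rgb565_to_rgb888(buf, big_endian=True):
    """Unpack a raw RGB565 frame buffer into a list of (R, G, B) tuples.

    Each pixel is a 16-bit value: RRRRRGGG GGGBBBBB. The 5- and 6-bit
    channels are widened to 8 bits by bit replication so that full-scale
    values map to 255 (assumed convention)."""
    fmt = ">H" if big_endian else "<H"
    pixels = []
    for i in range(0, len(buf), 2):
        (v,) = struct.unpack_from(fmt, buf, i)
        r = (v >> 11) & 0x1F  # upper 5 bits: red
        g = (v >> 5) & 0x3F   # middle 6 bits: green
        b = v & 0x1F          # lower 5 bits: blue
        pixels.append(((r << 3) | (r >> 2),
                       (g << 2) | (g >> 4),
                       (b << 3) | (b >> 2)))
    return pixels

# Pure red (0xF800), green (0x07E0), and blue (0x001F) pixels:
print(rgb565_to_rgb888(b"\xF8\x00\x07\xE0\x00\x1F"))
# [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
```

From there, a library such as Pillow can assemble the pixel list into an image, draw the bounding box from the passed measurements, and save the result as a JPG.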
Considering the harsh operating conditions in industrial plants and the dual development board setup, I decided to design a unique PCB after completing the prototype wiring on a breadboard. Since I wanted my PCB design to symbolize a unique and captivating large-scale industrial plant infrastructure, I decided to design an Iron Giant-inspired PCB. Thanks to the unique matte black solder mask and yellow silkscreen combination, the Iron Giant theme shines through the PCB.
Lastly, to make the device as robust and compact as possible, I designed a complementary Iron Giant-inspired case with a removable top cover and a modular, 3D-printable PCB holder, providing a cable-free assembly with the external battery.
So, this is my project in a nutshell 😃