Fire Situation Monitoring System Through Connection with Digital Twin and Unmanned Smart Mobility Vehicles

Things used in this project

 

Hardware components

1 × Scout 2
1 × ZED 2 camera
1 × F450 drone frame with Pixhawk
1 × Raspberry Pi 4 Model B
1 × Raspberry Pi NoIR Camera V2
1 × RGB LED
1 × Buzzer
1 × DFRobot Gravity: Analog Flame Sensor for Arduino
1 × Seeed Studio Grove - Gas Sensor (MQ2)

Software apps and online services

Story

 

TEAM NAME: Blossom (Blossom sounds similar to the Korean pronunciation of "불났어!", meaning "A fire broke out!")

*Service Introduction*

 

Current problems in the event of a fire:

 

In the event of a fire in a building, fire authorities cannot actively deal with it until fire trucks and firefighters arrive.

The causes of this situation are as follows:

 

Information from IoT devices cannot be aggregated to share the overall fire situation, and there is no platform that coordinates IoT devices, robots, drones, and so on.

Features provided by the service

 

Fire detection and real-time sharing of the fire situation using IoT devices.
Detection of the number of people in need of rescue and localization of the robot using Mobius-based AIaaS.
Selection of evacuation sites based on fire conditions and calculation of optimal evacuation routes.
Implementation of a digital twin with Mobius, visualized on the Omniverse platform.

Scenario

 

The scenario is shown in the image above.

 

System Diagram

 

The system diagram is shown in the image above.

 

oneM2M resource design (Blossom)

 

The following is about oneM2M resource structure design.

 

The oneM2M resource design is divided into two parts.

 

The first is the structure for the control tower's control commands and the various sensors needed to monitor fire occurrence, and the second is the structure of the AIHub side for AIaaS.

 

The first structure has four containers under an AE called Blossom.

 

The controlTower container has a target container that indicates whether a fire has occurred, a priority container that assigns priority according to the severity of the fire, and an escapePath container that provides the escape route.

 

4WD is a container related to the four-wheel-drive robot, and a container holding its image data, called 4WDcam, was designed beneath it.

 

Drone is a container related to the drone; beneath it are a container holding image data, called dronecam, and a container holding the drone's GPS data.

 

The edgeDevice container has two sub-containers, flamesensor and gassensor, and each of them is divided by room.

 

For each room, the group function was used to obtain the latest values of the flame sensor and the gas sensor at the same time.
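As a rough illustration, the sketch below creates part of this tree on a Mobius CSE using oneM2M HTTP primitives. The CSE address, the originator value, and the names droneGPS, gassensor, and room529 are assumptions for illustration, not the project's exact identifiers.

```python
import uuid
import requests

CSE = "http://127.0.0.1:7579/Mobius"   # assumed Mobius base URL

def create_container(parent_path: str, name: str) -> None:
    """Create a oneM2M <container> named `name` under `parent_path`."""
    headers = {
        "X-M2M-Origin": "SBlossom",                 # assumed originator (AE credential)
        "X-M2M-RI": str(uuid.uuid4()),              # unique request identifier
        "Content-Type": "application/json;ty=3",    # ty=3: container
    }
    body = {"m2m:cnt": {"rn": name}}
    requests.post(f"{CSE}/{parent_path}", headers=headers, json=body).raise_for_status()

# Top-level containers under the Blossom AE
for name in ["controlTower", "4WD", "Drone", "edgeDevice"]:
    create_container("Blossom", name)

# controlTower: fire flag, severity priority, escape route
for name in ["target", "priority", "escapePath"]:
    create_container("Blossom/controlTower", name)

# Robot and drone data containers
create_container("Blossom/4WD", "4WDcam")
create_container("Blossom/Drone", "dronecam")
create_container("Blossom/Drone", "droneGPS")          # name assumed for the GPS container

# Per-room sensor containers, e.g. room 529
for sensor in ["flamesensor", "gassensor"]:
    create_container("Blossom/edgeDevice", sensor)
    create_container(f"Blossom/edgeDevice/{sensor}", "room529")
```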

 

Sequence Diagram (Fire detection algorithm)

 

The flame sensor and gas sensor data come from the oneM2M standard IoT platform. When both sensor values are higher than the reference value, the counting algorithm starts.

 

If data lower than the reference value does not come in consecutively and data higher than the reference value comes in two or more times, the count is increased by one. When the count reaches 3, the Req_ID and a sensor-cf connection request are sent to the standard IoT platform. The Req_ID becomes 0, which means a fire has been detected.

 

If the count falls below 3 because data lower than the reference value comes in consecutively, the Req_ID and a request to stop the connection between the sensor and the cf are sent to the standard IoT platform. The Req_ID changes to 1, which means the end of the fire.
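A minimal sketch of one possible reading of this counting rule; the threshold values and the two-in-a-row increment/decrement conditions are assumptions, since the exact tuning is not specified above:

```python
FLAME_THRESHOLD = 500    # assumed reference values; real thresholds depend on the sensors
GAS_THRESHOLD = 300
FIRE_COUNT = 3           # count at which a fire is declared

def fire_events(samples):
    """Yield Req_ID events from a stream of (flame, gas) samples.

    Yields 0 when the count reaches FIRE_COUNT (fire detected, request the
    sensor-cf connection) and 1 when consecutive low readings pull the count
    back below FIRE_COUNT (fire ended, request the connection to stop).
    """
    count = 0
    consecutive_high = 0
    consecutive_low = 0
    fire_active = False

    for flame, gas in samples:
        if flame > FLAME_THRESHOLD and gas > GAS_THRESHOLD:
            consecutive_high += 1
            consecutive_low = 0
            if consecutive_high >= 2:                 # two or more high readings in a row
                count += 1
                consecutive_high = 0
        else:
            consecutive_low += 1
            consecutive_high = 0
            if consecutive_low >= 2 and count > 0:    # consecutive low readings wind the count down
                count -= 1
                consecutive_low = 0

        if not fire_active and count >= FIRE_COUNT:
            fire_active = True
            yield 0
        elif fire_active and count < FIRE_COUNT:
            fire_active = False
            yield 1

# e.g. six consecutive high readings trigger the fire event:
# list(fire_events([(900, 800)] * 6)) -> [0]
```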

 

oneM2M resource design (AIHub)

 

Sequence Diagram (AI as a Service)

 

First, using the oneM2M standard IoT platform, AI as a Service was developed by linking the IoT hub, which is in charge of interfacing with the physical world, and the AI Hub, which is in charge of AI as a Service.

 

The AI Hub has an AI service manager that oversees the management of AI services. AI service enablers inside the AI Hub register artificial intelligence models, and AI service brokers link those models to service APIs requested from the outside.

 

Kafka brokers were used to support high-speed data exchange between the IoT hub and the AI Hub.

 

The following figure shows the AI Hub resource structure linked to the standard IoT platform. Under the AE called AIHub, there are several model containers and a user-request container called target.

 

When an AI service enabler registers its artificial intelligence model in the AI Hub, a model container is also created on the standard IoT platform, and under each model container is a report container that returns the results of that model when the AI service is used from the outside.

 

The target container receives requests when an AI service is used from the outside. If the request has Req_ID = 0, it asks for the AI service to be connected, together with the paths identifying the desired AI model and test set. If Req_ID = 1, it asks for the AI service to be deleted.
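As an illustration, a request of this shape could be posted to the target container as a oneM2M contentInstance. The field names inside the payload, the CSE address, and the originator are assumptions, not the project's actual schema:

```python
import json
import uuid
import requests

CSE = "http://127.0.0.1:7579/Mobius"    # assumed Mobius base URL

headers = {
    "X-M2M-Origin": "SControlTower",              # assumed originator
    "X-M2M-RI": str(uuid.uuid4()),                # unique request identifier
    "Content-Type": "application/json;ty=4",      # ty=4: contentInstance
}

# Req_ID = 0: request a connection between sensor data and an AI model
connect_request = {
    "Req_ID": 0,
    "test_set": "Mobius/Blossom/Drone/dronecam",  # path of the input data (assumed field name)
    "model": "human_detection",                   # desired AI model (assumed field name)
}

body = {"m2m:cin": {"con": json.dumps(connect_request)}}
requests.post(f"{CSE}/AIHub/target", headers=headers, json=body).raise_for_status()

# A request with Req_ID = 1 would instead ask for the AI service to be deleted.
```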

 

Sequence Diagram (The process of sending image data from a drone)

 

When the IoT devices detect a fire, they share the fire situation in real time on the standard IoT platform.

 

When a request that a fire has occurred enters the standard IoT platform, the drone among the unmanned smart mobility vehicles moves to the fire site and takes indoor images of it.

 

In addition, the control tower automatically requests a connection between the camera sensor mounted on the drone and a model in the AI Hub.

 

When the connection request enters the standard IoT platform, the indoor image data taken by the drone enters the AI Hub through the Kafka broker; after inference with the model requested by the user, the result is transmitted back to the standard IoT platform through the Kafka broker.

 

Based on the inferred result, the number of people in need of rescue in the room is synchronized to the digital twin.
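A rough sketch of the Kafka hop between the IoT hub and the AI Hub, using kafka-python; the broker address and the topic names drone-frames and inference-results are illustrative assumptions:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"            # assumed Kafka broker address

# IoT-hub side: forward one drone camera frame to the AI Hub
producer = KafkaProducer(bootstrap_servers=BROKER)
with open("frame_0001.jpg", "rb") as f:          # assumed file name of a captured frame
    producer.send("drone-frames", key=b"room529", value=f.read())
producer.flush()

# IoT-hub side: wait for the inference result coming back from the AI Hub
consumer = KafkaConsumer(
    "inference-results",
    bootstrap_servers=BROKER,
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
for msg in consumer:
    result = msg.value               # e.g. {"room": "529", "person_count": 3}
    print("inference result:", result)
    # In the real system this result would be posted to the model's
    # report container on the standard IoT platform.
    break
```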

 

oneM2M resource design (AIHub)

 

In the case of a fire, the control tower automatically selects the camera sensor mounted on the drone and the model within the AI Hub, and enters them into the target container of the standard IoT platform.

 

Since Req_ID = 0, it is a connection request; the path of the sensor data on the standard IoT platform is written in, and the desired AI model is human detection.

 

 

An image of the results predicted by the Human Detection Model

 

The Original Image

 

For the human detection model, a YOLOv5-based human detection model was used. The model's performance is very good, and the results are as shown in the figure above.
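A minimal sketch of counting people with YOLOv5 through torch.hub; the yolov5s weights, the confidence threshold, and the file name are assumptions rather than the exact checkpoint the team used:

```python
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub (yolov5s assumed here)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                      # assumed confidence threshold

def count_people(image_path: str) -> int:
    """Run detection and count boxes whose class name is 'person'."""
    results = model(image_path)
    detections = results.pandas().xyxy[0]     # detections for the first (only) image
    return int((detections["name"] == "person").sum())

print(count_people("room529_drone.jpg"))      # assumed file name; e.g. prints 3
```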

 

Link results inferred from the human detection model to the standard IoT platform

 

Based on the user's request, the IoT sensor and the AI model in the AI Hub are connected, and after inference the result is posted in the report container under the human detection model container selected by the user. The room where the photograph was taken is room 529, and it can be confirmed that there are three people.

 

Real-time interworking of standard IoT platforms with digital twins

 

The building environment is built in the digital twin in advance, and the digital twin takes data from the standard IoT platform and synchronizes it in real time. As soon as the result that there are three people in room 529 is posted, the three people can be shown in the digital twin.

 

HTTP and MQTT were used as oneM2M protocols.

HTTP was used for POST and GET, and MQTT was used for subscriptions.

POST was used to upload data, and GET was used to retrieve data periodically. Subscriptions were used to receive data asynchronously.

The group resource was created with an HTTP POST, and when importing the data generated through the group, an HTTP GET on the fan-out point (fopt) was used to receive the data.
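As a sketch, the latest flame and gas values for a room could be fetched in a single call through the group's fan-out point; the group name, CSE address, and originator below are assumptions:

```python
import uuid
import requests

CSE = "http://127.0.0.1:7579/Mobius"     # assumed Mobius base URL

headers = {
    "X-M2M-Origin": "SControlTower",     # assumed originator
    "X-M2M-RI": str(uuid.uuid4()),       # unique request identifier
    "Accept": "application/json",
}

# One GET through the group's fan-out point (fopt) with "la" retrieves the
# latest contentInstance of every member container (e.g. the flame sensor
# and gas sensor containers of room 529) in a single aggregated response.
resp = requests.get(f"{CSE}/room529_group/fopt/la", headers=headers)
resp.raise_for_status()
print(resp.json())                       # aggregated responses from all group members
```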

 

 

 

Query image for visual localization

 

 Database image for matching with query

 

Unlike outdoor unmanned vehicles, whose localization can rely on GPS data, indoor (non-GPS-based) visual localization must be performed to link the indoor coordinates of the unmanned vehicles with the digital twin in real time.

 

The indoor visual localization method uses an image-based place recognition technique, and the inference model is a CNN-based feature extraction model with a NetVLAD pooling layer, which measures the similarity between images by aggregating local descriptors of image pixels into one global vector.

 

First, the inference server holds database photos captured at nodes spaced a certain distance apart inside the building, and each photo is tagged with the corresponding node information (the information of the nearest room). This tagged room information is used as the reference coordinate for localization in the digital twin.

 

As shown in the two pictures above (the query and database pictures), when a fire occurs, the unmanned vehicle moves, the database picture most similar to the captured query picture is retrieved, and the node information tagged to that picture is extracted.
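A condensed sketch of this retrieval step. It assumes a netvlad_model callable that maps an image to a global descriptor and a precomputed database of (descriptor, node_info) pairs; the NetVLAD model itself and the image loading pipeline are not shown:

```python
import numpy as np

def global_descriptor(image, netvlad_model):
    """Return an L2-normalized global descriptor for one image.

    netvlad_model is assumed to be a CNN backbone with a NetVLAD pooling
    layer that aggregates local features into a single vector.
    """
    vec = np.asarray(netvlad_model(image), dtype=np.float32)
    return vec / np.linalg.norm(vec)

def localize(query_image, database, netvlad_model):
    """Return the node/room tag of the most similar database image.

    database is a list of (descriptor, node_info) pairs precomputed from
    photos taken at each node in the building.
    """
    q = global_descriptor(query_image, netvlad_model)
    # With normalized descriptors, cosine similarity is just a dot product.
    best_descriptor, best_node = max(database, key=lambda item: float(np.dot(q, item[0])))
    return best_node          # e.g. {"node": 12, "room": "529"}
```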

 

Our service aims to make it easier for firefighters to grasp the fire situation. It should be easy for firefighters to use, and it was designed to prevent confusion. So we added a key button to turn the visual effects on and off, and visualized the place where the fire broke out and where it is safe to evacuate before the firefighters arrive.

The video shows the function of turning the visual effect on and off.

The video above shows how we represent the optimal evacuation path.

Since the unmanned vehicle is used to identify human movements and conditions, it uses the ZED 2 depth camera to obtain joint values from a human pose estimation API and transmit them to Mobius. This allows the digital twin to take the joint values and reproduce the movement as it is.
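A sketch of how the joint values could be pushed to Mobius as contentInstances. get_skeleton() is a hypothetical stand-in for the ZED 2 pose-estimation output, and the container path Blossom/4WD/pose, the CSE address, and the originator are assumptions:

```python
import json
import time
import uuid
import requests

CSE = "http://127.0.0.1:7579/Mobius"    # assumed Mobius base URL

def get_skeleton():
    """Hypothetical stand-in for the ZED 2 pose-estimation API.

    A real implementation would return the tracked person's joints; here a
    fixed example pose is returned so the loop is runnable.
    """
    return [("head", 0.0, 1.6, 2.0),
            ("left_hand", -0.4, 1.1, 2.0),
            ("right_hand", 0.4, 1.1, 2.0)]

def post_joints(joints):
    """Post one frame of joint values as a oneM2M contentInstance."""
    headers = {
        "X-M2M-Origin": "S4WD",                      # assumed originator
        "X-M2M-RI": str(uuid.uuid4()),               # unique request identifier
        "Content-Type": "application/json;ty=4",     # ty=4: contentInstance
    }
    body = {"m2m:cin": {"con": json.dumps({"joints": joints})}}
    requests.post(f"{CSE}/Blossom/4WD/pose", headers=headers, json=body)  # assumed container path

for _ in range(10):               # stream a few frames as a demo
    post_joints(get_skeleton())   # the digital twin subscribes to this container and replays the pose
    time.sleep(0.1)
```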

The video shows the movement being reflected in the digital twin.

This article was first published on Hackster on November 4, 2022.

cr: https://www.hackster.io/blossom/fire-situation-monitoring-system-through-connection-with-dt-365f98

author: Team Blossom: 김유진, 김세중, Jeong Hyeryeong, Juyeon, ohssapsu, SeungMyeong Jeong, Bob Flynn, Andreas Kraft, Miguel Angel Reina Ortega, Wonbae Son

License
All Rights Reserved