AI Navigate

Intelligent Spatial Estimation for Fire Hazards in Engineering Sites: An Enhanced YOLOv8-Powered Proximity Analysis Framework

arXiv cs.CV / 3/11/2026

Tools & Practical Usage · Industry & Market Moves · Models & Research

Key Points

  • The study introduces an enhanced dual-model YOLOv8 framework combining instance segmentation for fire and smoke detection with object detection for identifying nearby entities such as people, vehicles, and infrastructure.
  • By calculating pixel-based distances between detected fire and surrounding objects and converting these to real-world measurements, the system enables proximity-aware risk assessment for fire hazards.
  • The framework integrates fire evidence, object vulnerability, and exposure distance to generate quantitative risk scores and alert levels, supporting actionable hazard prioritization.
  • Achieving precision, recall, and F1 scores above 90% and mAP@0.5 over 91%, the system demonstrates high accuracy and reliability in complex environments.
  • Implemented with open-source tools on Google Colab, the framework is lightweight and suitable for deployment in industrial sites and resource-constrained environments to enhance situational awareness and fire hazard management.
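The proximity step above can be sketched in a few lines. The paper does not spell out its exact distance formula, so this is a minimal sketch assuming centroid-to-centroid Euclidean distance and a fixed pixel-to-meter calibration factor (both assumptions; the function names and the 0.02 m/px value are illustrative, not from the paper):

```python
import math

def pixel_distance(fire_center, obj_center):
    """Euclidean distance in pixels between a fire-region centroid
    and a detected object's bounding-box centroid."""
    dx = fire_center[0] - obj_center[0]
    dy = fire_center[1] - obj_center[1]
    return math.hypot(dx, dy)

def to_meters(dist_px, meters_per_pixel):
    """Convert a pixel distance to an approximate real-world distance
    using a pixel-to-meter scaling factor."""
    return dist_px * meters_per_pixel

# Hypothetical example: fire centroid at (320, 240), person at (480, 360),
# with an assumed calibration of 0.02 m per pixel.
d_px = pixel_distance((320, 240), (480, 360))
d_m = to_meters(d_px, 0.02)
print(round(d_px, 1), round(d_m, 2))  # → 200.0 4.0
```

In practice the scaling factor depends on camera placement and scene geometry, which is why the paper describes the resulting measurements as approximate.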

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09069 (cs)
[Submitted on 10 Mar 2026]

Title: Intelligent Spatial Estimation for Fire Hazards in Engineering Sites: An Enhanced YOLOv8-Powered Proximity Analysis Framework

View a PDF of the paper titled Intelligent Spatial Estimation for Fire Hazards in Engineering Sites: An Enhanced YOLOv8-Powered Proximity Analysis Framework, by Ammar K. AlMhdawi and Nonso Nnamoko and Alaa Mashan Ubaid
Abstract: This study proposes an enhanced dual-model YOLOv8 framework for intelligent fire detection and proximity-aware risk assessment, extending conventional vision-based monitoring beyond simple detection to actionable hazard prioritization. The system is trained on a dataset of 9,860 annotated images to segment fire and smoke across complex environments. The framework combines a primary YOLOv8 instance segmentation model for fire and smoke detection with a secondary object detection model pretrained on the COCO dataset to identify surrounding entities such as people, vehicles, and infrastructure. By integrating the outputs of both models, the system computes pixel-based distances between detected fire regions and nearby objects and converts these values into approximate real-world measurements using a pixel-to-meter scaling approach. This proximity information is incorporated into a risk assessment mechanism that combines fire evidence, object vulnerability, and distance-based exposure to produce a quantitative risk score and alert level. The proposed framework achieves strong performance, with precision, recall, and F1 scores exceeding 90% and mAP@0.5 above 91%. The system generates annotated visual outputs showing fire locations, detected objects, estimated distances, and contextual risk information to support situational awareness. Implemented using open-source tools within the Google Colab environment, the framework is lightweight and suitable for deployment in industrial and resource-constrained settings.
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09069 [cs.CV]
  (or arXiv:2603.09069v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09069

Submission history

From: Nonso Nnamoko [view email]
[v1] Tue, 10 Mar 2026 01:27:46 UTC (3,708 KB)