Classifying Fires, Interpreting Decisions: An Explainable AI Framework for Architectural Image Analysis
Abstract
The rapid and accurate detection of fire in architectural imagery is critical for safeguarding built heritage and ensuring public safety, presenting a compelling challenge at the intersection of computer vision and architectural studies. This paper introduces a robust deep-learning framework tailored to this task, with principal contributions spanning data, methodology, and model interpretability. First, we construct and publicly release a novel dataset of building exterior images encompassing fire, smoke, and normal scenes, providing a dedicated benchmark for scholarly and applied research. Second, we fine-tune a ResNet-50 model, enhanced by strategic data augmentation and class-balancing techniques, which achieves perfect classification performance on a balanced test set. Finally, and most significantly for fostering trust, we employ Class Activation Mapping (CAM) to generate visual explanations. These heatmaps empirically verify that the model's decisions are grounded in semantically relevant visual features—specifically, flames and smoke—rather than spurious correlations, thereby validating its reliability. Our work demonstrates the potent synergy of data-centric Artificial Intelligence (AI) and explainable AI for architectural image analysis. The findings offer substantial implications for interdisciplinary studies in architectural imagery, visual cognition, and the development of intelligent, reliable monitoring systems for the built environment.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
How to Cite
Classifying Fires, Interpreting Decisions: An Explainable AI Framework for Architectural Image Analysis. (2025). Architecture Image Studies, 6(3), 1542-1549. https://doi.org/10.62754/ais.v6i3.483