Efficient Semantic Image Communication for Traffic Monitoring at the Edge

arXiv cs.CV / 4/15/2026


Key Points

  • The paper proposes two edge-to-server semantic image communication pipelines (MMSD and SAMR) to transmit traffic-monitoring visuals under strict communication constraints without sending full-resolution pixel data.
  • MMSD decomposes images into compact semantic artifacts (segmentation maps, edge maps, and text) so sensitive pixel content is not transmitted, then reconstructs scenes at the receiver using a diffusion-based generative model.
  • SAMR improves the quality–compression trade-off by semantically suppressing non-critical regions prior to standard JPEG encoding and reconstructing the missing parts via generative inpainting.
  • The system uses an asymmetric architecture where lightweight semantic processing runs on edge devices (e.g., Raspberry Pi 5) while computationally intensive generative reconstruction is performed on the server.
  • Reported results show very large transmission savings (about 99% reduction for MMSD and 99.1% for SAMR), with MMSD achieving a lower payload than the SPIC baseline and SAMR offering a better quality–compression trade-off than standard JPEG and SQ-GAN.
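Why suppressing non-critical regions before a standard codec saves bandwidth (the SAMR idea) can be sketched with the standard library alone. The sketch below is illustrative, not the paper's implementation: `zlib` stands in for JPEG (both spend fewer bits on low-entropy regions), and the frame, the `critical` region, and the fill value are all invented for the demo.

```python
import zlib

# Toy 64x64 grayscale "frame" as a flat bytearray with synthetic texture.
# Hypothetical stand-in for a camera image; zlib stands in for JPEG here,
# since both spend fewer bits on flattened, low-entropy regions.
W = H = 64
frame = bytearray(((x * 31 + y * 17) ^ (x * y)) & 0xFF
                  for y in range(H) for x in range(W))

# Hypothetical semantic importance mask: keep only a central "vehicle" box.
def critical(x, y):
    return 24 <= x < 40 and 24 <= y < 40

masked = bytearray(frame)
for y in range(H):
    for x in range(W):
        if not critical(x, y):
            masked[y * W + x] = 128  # suppress non-critical pixels to a constant

full_size = len(zlib.compress(bytes(frame), 9))
masked_size = len(zlib.compress(bytes(masked), 9))
print(full_size, masked_size)  # the masked payload compresses far smaller
```

At the receiver, the suppressed regions would then be filled back in by generative inpainting, which is the server-side half of the pipeline and is not modeled here.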

Abstract

Many visual monitoring systems operate under strict communication constraints, where transmitting full-resolution images is impractical and often unnecessary. In such settings, visual data is often used for object presence, spatial relationships, and scene context rather than exact pixel fidelity. This paper presents two semantic image communication pipelines for traffic monitoring, MMSD and SAMR, that reduce transmission cost while preserving meaningful visual information. MMSD (Multi-Modal Semantic Decomposition) targets very high compression together with data confidentiality, since sensitive pixel content is not transmitted. It replaces the original image with compact semantic representations, namely segmentation maps, edge maps, and textual descriptions, and reconstructs the scene at the receiver using a diffusion-based generative model. SAMR (Semantic-Aware Masking Reconstruction) targets higher visual quality while maintaining strong compression. It selectively suppresses non-critical image regions according to semantic importance before standard JPEG encoding and restores the missing content at the receiver through generative inpainting. Both designs follow an asymmetric sender-receiver architecture, where lightweight processing is performed at the edge and computationally intensive reconstruction is offloaded to the server. On a Raspberry Pi 5, the edge-side processing time is about 15 s for MMSD and 9 s for SAMR. Experimental results show average transmitted-data reductions of 99% for MMSD and 99.1% for SAMR. In addition, MMSD achieves lower payload size than the recent SPIC baseline while preserving strong semantic consistency, whereas SAMR provides a better quality–compression trade-off than standard JPEG and SQ-GAN under comparable operating conditions.
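To see why MMSD's decomposition shrinks the payload so dramatically, compare the compressed size of raw pixels against the compressed size of the three semantic artifacts the abstract names (segmentation map, edge map, textual description). The sketch below is a toy illustration under invented data, again using stdlib `zlib` as a generic compressor: the segmentation map has only a few class labels, the edge map is sparse, and the caption is short, so all three compress to almost nothing relative to textured pixels.

```python
import zlib

W = H = 64
# Raw pixels: textured synthetic frame (hypothetical camera image).
raw = bytes(((x * 13 + y * 7) ^ (x + 3 * y)) & 0xFF
            for y in range(H) for x in range(W))

# Compact semantic artifacts the sender would transmit instead of pixels.
# All shapes, labels, and the caption text are illustrative assumptions.
seg = bytes((0 if y < H // 2 else 1 if x < W // 2 else 2)
            for y in range(H) for x in range(W))          # 3 region labels
edges = bytes(255 if (x == W // 2 or y == H // 2) else 0
              for y in range(H) for x in range(W))        # sparse boundaries
caption = b"two cars waiting at an intersection, daytime"

pixel_payload = len(zlib.compress(raw, 9))
semantic_payload = sum(len(zlib.compress(a, 9)) for a in (seg, edges, caption))
print(pixel_payload, semantic_payload)  # semantic artifacts are far smaller
```

In the actual pipeline the receiver would feed these artifacts to a diffusion model to regenerate a plausible scene; the point of the sketch is only the payload asymmetry that makes the ~99% reduction plausible.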