DualStrip-Net: A Strip-Based Unified Framework for Weakly- and Semi-Supervised Road Segmentation From Satellite Images

Jingtao Hu, Qiang Li, Qi Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Automated road segmentation from remote sensing imagery remains a fundamental challenge in Earth observation systems. The primary bottleneck lies in acquiring dense pixel-wise annotations, which is both labor-intensive and prohibitively time-consuming. This article presents DualStrip-Net, a novel deep learning framework for weakly supervised and semi-supervised road segmentation that effectively handles both sparse annotations and limited labeled data. Unlike conventional convolutional neural network (CNN)-based segmentation methods, which lack explicit road topology modeling, DualStrip-Net exploits the inherent linear topology of road networks through a dual-stream architecture that combines a patch-level annotation strategy with strip-based feature learning. The framework captures road characteristics through orthogonal strip processing in the horizontal and vertical orientations, and the proposed DualStrip Learning mechanism yields robust feature representations of road structures from these complementary views. Extensive evaluations on the DeepGlobe, Massachusetts, and CHN6-CUG benchmark datasets demonstrate that DualStrip-Net achieves superior performance in both weakly supervised and semi-supervised settings. Notably, with only 20% of the labeled training data, our method outperforms supervised-only baselines on both the Massachusetts and CHN6-CUG datasets.
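To make the idea of orthogonal strip processing concrete, the sketch below shows a generic horizontal/vertical strip-pooling module in PyTorch. This is not the published DualStrip-Net code; the module name, layer sizes, and gating scheme are illustrative assumptions intended only to convey how row-wise and column-wise strip views of a feature map can be extracted and fused.

```python
import torch
import torch.nn as nn


class StripPooling2D(nn.Module):
    """Minimal sketch of orthogonal strip processing (not the authors' implementation).

    Features are pooled along each row (horizontal strips) and each column
    (vertical strips), refined by small convolutions, and fused into a gate
    that modulates the input features.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Convolve pooled horizontal strips along the height dimension.
        self.h_conv = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        # Convolve pooled vertical strips along the width dimension.
        self.v_conv = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        # 1x1 convolution to fuse the two complementary strip views.
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Horizontal strips: average over width -> (N, C, H, 1), refine, expand back.
        h_feat = self.h_conv(x.mean(dim=3, keepdim=True)).expand(n, c, h, w)
        # Vertical strips: average over height -> (N, C, 1, W), refine, expand back.
        v_feat = self.v_conv(x.mean(dim=2, keepdim=True)).expand(n, c, h, w)
        # Fuse both views into a sigmoid gate applied to the input features.
        return x * torch.sigmoid(self.fuse(h_feat + v_feat))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 128, 128)   # dummy backbone features
    out = StripPooling2D(64)(feats)
    print(out.shape)                        # torch.Size([2, 64, 128, 128])
```

Such strip views emphasize elongated, linear structures that ordinary square receptive fields tend to fragment, which is the intuition behind modeling road topology with two orthogonal strip streams.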

Original language: English
Article number: 5617514
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
DOIs
State: Published - 2025

Keywords

  • DualStrip
  • patch-level
  • remote sensing imagery
  • road segmentation
  • semi-supervised
  • weakly supervised
