DST-Net: A Dual-Stream Transformer with Illumination-Independent Feature Guidance and Multi-Scale Spatial Convolution for Low-Light Image Enhancement
arXiv cs.CV / 3/18/2026
📰 News · Models & Research
Key Points
- DST-Net presents a Dual-Stream Transformer for low-light image enhancement that leverages illumination-agnostic priors and a multi-scale spatial fusion mechanism to improve quality while preserving details.
- A feature extraction module combines Difference of Gaussians (DoG), LAB color space transformations, and VGG-16 to obtain texture priors that guide the enhancement without destroying intrinsic signal information.
- The dual-stream architecture uses a cross-modal attention mechanism to dynamically rectify degraded signal representations and perform iterative enhancement via differentiable curve estimation.
- The Multi-Scale Spatial Fusion Block (MSFB) employs pseudo-3D and 3D gradient operator convolutions to recover high-frequency edges and capture inter-channel spatial correlations, achieving a PSNR of 25.64 dB on the LOL dataset and robust cross-scene generalization on LSRW.
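The Difference-of-Gaussians prior mentioned above can be sketched in a few lines. The paper's exact module is not specified here, so this is a minimal NumPy illustration, with hypothetical helper names (`blur`, `dog_prior`), of why a DoG on the LAB lightness channel is largely illumination-independent: the two normalized blurs cancel any uniform brightness component, leaving only band-pass texture.

```python
import numpy as np

def gaussian_kernel(sigma):
    # Normalized 1D Gaussian; truncated at 3 sigma.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: rows then columns. Edge padding
    # guarantees a constant image stays exactly constant.
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), k, "valid"), 1, img)
    out = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), k, "valid"), 0, out)
    return out

def dog_prior(l_channel, sigma_fine=1.0, sigma_coarse=2.0):
    """Difference of Gaussians on the LAB L channel.

    A band-pass response: edges and texture survive, while the
    absolute illumination level (the DC component) cancels out,
    which is what makes the prior illumination-agnostic.
    """
    return blur(l_channel, sigma_fine) - blur(l_channel, sigma_coarse)
```

A uniformly dark patch and a uniformly bright patch both produce a near-zero prior, while a step edge produces a strong response regardless of how dark the scene is.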
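The summary does not state which curve family DST-Net estimates; a common choice for "iterative enhancement via differentiable curve estimation" in low-light work is a per-pixel quadratic curve applied repeatedly, so the sketch below (function name `iterative_curve_enhance` and the sample `alphas` are assumptions, not the paper's values) shows that pattern:

```python
import numpy as np

def iterative_curve_enhance(x, alphas):
    """Apply a pixel-wise quadratic enhancement curve iteratively.

    Each step computes x <- x + alpha * x * (1 - x). For inputs in
    [0, 1] and alpha in [-1, 1] the output stays in [0, 1], the map
    is monotonic (so pixel ordering is preserved), and it is
    differentiable, so a network predicting the alpha maps can be
    trained end to end.
    """
    for alpha in alphas:
        x = x + alpha * x * (1.0 - x)
    return x

dark = np.array([0.05, 0.2, 0.5])
# Four iterations with a positive alpha brighten dark pixels most,
# while already-bright pixels saturate gently below 1.0.
bright = iterative_curve_enhance(dark, alphas=[0.8] * 4)
```

Stacking several small curve steps, rather than one aggressive one, is what lets the enhancement stay smooth and avoid clipping.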
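Finally, the "pseudo-3D" gradient convolution in the MSFB can be illustrated by factorizing a full 3D gradient over a C x H x W tensor into a cheap 1x3x3 spatial step followed by a 3x1x1 channel step. This is a generic sketch of that factorization, not DST-Net's actual kernels; `conv2d` and `pseudo3d_gradient` are illustrative names.

```python
import numpy as np

# Standard 3x3 horizontal Sobel operator (high-frequency edge detector).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d(img, kernel):
    # Naive "valid" 2D correlation; adequate for a 3x3 kernel demo.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pseudo3d_gradient(vol):
    """Factorized (pseudo-3D) gradient over a C x H x W volume.

    Step 1 (1x3x3): a spatial Sobel applied per channel recovers
    high-frequency edges. Step 2 (3x1x1): a central difference along
    the channel axis captures inter-channel correlation. Together they
    approximate a full 3D gradient operator at far lower cost.
    """
    spatial = np.stack([conv2d(c, SOBEL_X) for c in vol])
    return spatial[2:] - spatial[:-2]  # channel-axis central difference
```

For C input channels the output has C - 2 channels: the factorization shrinks the channel dimension exactly as a "valid" 3-tap channel convolution would.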
Related Articles

Report: Observing "Self-Referential Recursion" and "Stateful Emulation" in LLMs
note

Dialogues with Zhuge Liang, Master Kongming (a ChatGPT roleplay), Part 45: "Galactic Civilization and the Dark Matter Engine"
note

GPT-5.4 mini/nano Arrive! Compact, High-Performance Models That Are 2x Faster and Available Even on the Free Plan
note
Why a Perfect-Memory AI Agent Without Persona Drift is Architecturally Impossible
Dev.to
Learning to Reason with Curriculum I: Provable Benefits of Autocurriculum
arXiv cs.LG