Abstract
Prior dual-stream methods with feature interaction mechanisms have achieved remarkable performance in single image reflection removal (SIRR). However, they often struggle with (1) the semantic understanding gap between the features of pre-trained models and those of image restoration models, and (2) reflection label inconsistencies between synthetic and real-world training data. In this paper, we first adopt a parameter-efficient fine-tuning strategy, integrating several learnable Mona layers into the pre-trained model to align the training directions. Then, a label generator is designed to unify the labels for both synthetic and real-world data with an optimized reflection label. In addition, a Gaussian-based Adaptive Frequency Learning Block (G-AFLB) is proposed to adaptively learn and fuse frequency priors, and dynamic agent attention (DAA) is employed as an alternative to window-based attention by dynamically modeling the significance levels across windows (inter-window) and within individual windows (intra-window). These improvements collectively constitute our proposed Gap-Free Reflection Removal Network (GFRRN). Extensive experiments demonstrate the effectiveness of GFRRN, which achieves superior performance against state-of-the-art SIRR methods.
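To give a rough intuition for the frequency-prior idea behind G-AFLB, the sketch below splits an image into low- and high-frequency bands with a Gaussian mask in the Fourier domain. This is only an illustration of the underlying decomposition: the function name, the fixed `sigma`, and the NumPy implementation are our assumptions here, whereas the actual G-AFLB learns and fuses its frequency components adaptively inside the network.

```python
import numpy as np

def gaussian_frequency_split(img, sigma=10.0):
    """Split a 2-D image into low/high-frequency parts using a
    Gaussian low-pass mask in the Fourier domain.

    Illustrative only: G-AFLB learns its frequency weighting;
    this mask is fixed and hand-chosen.
    """
    h, w = img.shape
    # Frequency coordinates (cycles across the image), centred at 0.
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    # Radial Gaussian low-pass mask; value 1 at the DC component.
    mask = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))
    spec = np.fft.fft2(img)
    low = np.real(np.fft.ifft2(spec * mask))
    high = np.real(np.fft.ifft2(spec * (1.0 - mask)))
    return low, high

# The two bands are complementary and sum back to the input.
img = np.random.default_rng(0).random((64, 64))
low, high = gaussian_frequency_split(img)
assert np.allclose(low + high, img)
```

Because the low- and high-pass masks sum to one everywhere, the two bands reconstruct the input exactly; a learnable variant would replace the fixed mask with predicted per-frequency weights before fusion.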
Visual Comparison
Drag the slider to compare the input image with our result.
Visual Effects on Open Datasets
The inputs are shown above and the corresponding outputs below. These images are drawn from the 'Nature', 'Real', and 'SIR2' datasets.
BibTeX
@article{chen2026gfrrn,
title={GFRRN: Explore the Gaps in Single Image Reflection Removal},
author={Yu Chen and Zewei He and Xingyu Liu and Zixuan Chen and Zheming Lu},
year={2026},
eprint={2602.22695},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.22695},
}