Retrieval Compensated Group Structured Sparsity for Image Super-Resolution
Fig. 1. Flow diagram of the proposed two-stage similarity-based retrieval of external information and the subsequent dictionary learning based on the refined external similar patches and internal patches.
Abstract
Sparse representation-based image super-resolution is a well-studied topic; however, a general sparse framework that can utilize both internal and external dependencies remains unexplored. In this paper, we propose a group-structured sparse representation (GSSR) approach that makes full use of both internal and external dependencies to facilitate image super-resolution. Compensatory correlated external information is introduced through a two-stage retrieval and refinement process. First, in the global stage, content-based features are exploited to select correlated external images. Then, in the local stage, patch similarity, measured by a combination of content and high-frequency patch features, is used to refine the selected external data. To better learn priors from the compensated external data based on the distribution of the internal data, and to further complement their advantages, nonlocal redundancy is incorporated into the sparse representation model to form a group-sparsity framework based on an adaptive structured dictionary. The proposed adaptive structured dictionary consists of two parts: one trained on internal data and the other trained on compensated external data, both organized in a cluster-based form. To provide the desired overcompleteness, when sparsely coding a given LR patch, the structured dictionary is generated dynamically by combining the several internal and external orthogonal sub-dictionaries nearest to the patch, instead of selecting only the single nearest one as in previous methods. Extensive experiments on image super-resolution validate the effectiveness and state-of-the-art performance of the proposed method. Additional experiments on contaminated and uncorrelated external data further demonstrate its robustness.
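To make the dynamic dictionary-generation step concrete, the following is a minimal Python sketch, assuming cluster-wise PCA bases serve as the orthogonal sub-dictionaries and Euclidean distance to cluster centroids selects the nearest ones. All names (learn_sub_dictionaries, generate_dictionary, k) are illustrative assumptions, not the paper's code; the actual clustering scheme, feature space, and the subsequent group-sparse coding step are simplified away.

import numpy as np

def learn_sub_dictionaries(patches, n_clusters, n_iters=10, seed=0):
    """Cluster training patches (naive k-means) and fit one orthogonal
    PCA sub-dictionary per cluster. patches: (N, d) array."""
    rng = np.random.default_rng(seed)
    centroids = patches[rng.choice(len(patches), n_clusters, replace=False)].copy()
    for _ in range(n_iters):
        # assign each patch to its nearest centroid
        labels = np.argmin(((patches[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            members = patches[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    bases = []
    for c in range(n_clusters):
        members = patches[labels == c]
        if len(members) == 0:
            # empty cluster: fall back to identity atoms so indexing stays aligned
            bases.append(np.eye(patches.shape[1]))
            continue
        # PCA basis of the cluster: columns form an orthonormal sub-dictionary
        _, _, vt = np.linalg.svd(members - members.mean(axis=0), full_matrices=False)
        bases.append(vt.T)
    return centroids, bases

def generate_dictionary(patch, centroids_int, bases_int, centroids_ext, bases_ext, k=3):
    """Dynamically build the per-patch dictionary by concatenating the k nearest
    internal and k nearest external sub-dictionaries."""
    def k_nearest(centroids):
        dists = ((centroids - patch) ** 2).sum(axis=1)
        return np.argsort(dists)[:k]
    atoms = [bases_int[i] for i in k_nearest(centroids_int)]
    atoms += [bases_ext[i] for i in k_nearest(centroids_ext)]
    return np.hstack(atoms)  # (d, sum of sub-dictionary sizes): overcomplete

For each LR patch, the returned matrix would act as the per-patch overcomplete dictionary in the group-sparse coding step; drawing k sub-dictionaries from each of the two sources, rather than the single nearest one, is what supplies the overcompleteness the abstract refers to.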

Fig. 2. The group-structured dictionary learning and online dictionary generation process in GSSR.

Fig. 3. Flow chart of the proposed super-resolution algorithm, including the iterative SR framework with patch enhancement based on both internal and external similar patches, and the group-sparse reconstruction based on the structured dictionary.
Results
We conducted both qualitative and quantitative evaluations of our method, comparing it with bicubic interpolation, ScSR [1], BPJDL [2], ASDS [3], NCSR [4], Landmark [5], SRCNN [6], ANR [7], A+ [8], SelfEx [9], and JSR [10].
Fig. 4. Visual comparisons between different algorithms for the image Leaves (×2 upscaling).
Fig. 5. Visual comparisons between different algorithms for the image Leaves (×3 upscaling).
Citation
@ARTICLE{7579175,
  author={J. Liu and W. Yang and X. Zhang and Z. Guo},
  journal={IEEE Transactions on Multimedia},
  title={Retrieval Compensated Group Structured Sparsity for Image Super-Resolution},
  year={2017},
  volume={19},
  number={2},
  pages={302-316},
  month={Feb},
}
References
[1] J. Yang, J. Wright, T. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, Nov 2010.
[2] L. He, H. Qi, and R. Zaretzki, “Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution,” in Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, June 2013, pp. 345–352.
[3] W. Dong, D. Zhang, G. Shi, and X. Wu, “Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization,” IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1838–1857, July 2011.
[4] W. Dong, L. Zhang, G. Shi, and X. Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620–1630, April 2013.
[5] H. Yue, X. Sun, J. Yang, and F. Wu, “Landmark image super-resolution by retrieving web images,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 4865–4878, Dec 2013.
[6] C. Dong, C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proc. European Conference on Computer Vision, 2014, vol. 8692, pp. 184–199.
[7] R. Timofte, V. De Smet, and L. Van Gool, “Anchored neighborhood regression for fast example-based super-resolution,” in Proc. IEEE Int’l Conf. Computer Vision, Dec 2013, pp. 1920–1927.
[8] R. Timofte, V. De Smet, and L. Van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Proc. Asian Conf. Computer Vision, 2014, pp. 111–126.
[9] J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[10] Z. Wang, Y. Yang, Z. Wang, S. Chang, J. Yang, and T. Huang, “Learning super-resolution jointly from external and internal examples,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4359–4371, Nov 2015.