Journal Publications


  • Unimodal Model-Based Inter Mode Decision for High Efficiency Video Coding
    Huanqiang Zeng, Wenjie Xiang, Jing Chen, Canhui Cai, Zhangkai Ni, and Kai-Kuang Ma.
    IEEE Access, vol. 7, pp. 27936-27947, February 2019.
    Abstract: In this paper, a fast inter mode decision algorithm, called the unimodal model-based inter mode decision (UMIMD), is proposed for the latest video coding standard, High Efficiency Video Coding (HEVC). Through extensive simulations, it has been observed that a unimodal model (i.e., with only one global minimum) can be established between the sizes of the different prediction unit (PU) modes and their resulting rate-distortion (RD) costs for each quad-tree-partitioned coding tree unit (CTU). To guarantee unimodality and to search for the optimal operating point of this function for each CTU, all the PU modes are first classified into 11 mode classes according to their sizes. These classes are then properly ordered and checked sequentially by class index, from small to large, so that the optimal mode can be identified early, at the point where the RD cost starts to rise. In addition, an effective instant SKIP-mode termination scheme is developed, which simply checks the SKIP mode against a pre-determined threshold to further reduce the computational complexity. Extensive simulation results have shown that, compared with the exhaustive mode decision in HEVC, the proposed UMIMD algorithm reduces the computational complexity at the encoder by 61.9% and 64.2% on average while incurring only 1.7% and 2.1% increases in the total Bjontegaard delta bit rate (BDBR) under the low-delay and random-access test conditions, respectively. Moreover, the experimental results have further demonstrated that the proposed UMIMD algorithm outperforms multiple state-of-the-art methods. (A Python sketch of this mode-decision loop follows the BibTeX entry below.)
    @article{zeng2019unimodal,
    	title={Unimodal Model-Based Inter Mode Decision for High Efficiency Video Coding},
    	author={Zeng, Huanqiang and Xiang, Wenjie and Chen, Jing and Cai, Canhui and Ni, Zhangkai and Ma, Kai-Kuang},
    	journal={IEEE Access},
    	volume={7},
    	pages={27936--27947},
    	year={2019},
    	publisher={IEEE}
    }
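    A minimal Python sketch of the mode-decision loop described in the abstract (not the authors' implementation): rd_cost and SKIP_THRESHOLD are hypothetical stand-ins for the encoder's RD evaluation and the pre-determined SKIP threshold, and the 11 mode classes are assumed to be passed in already ordered from small to large.
    # Hedged sketch of the UMIMD control flow; rd_cost and SKIP_THRESHOLD are assumptions.
    SKIP_THRESHOLD = 1.0e4  # placeholder value; in practice it would be tuned per sequence/QP

    def umimd_mode_decision(ctu, ordered_mode_classes, rd_cost):
        """Check PU mode classes from small to large and stop once the RD cost rises,
        relying on the unimodal relation between PU mode size and RD cost."""
        # Instant SKIP termination: accept SKIP immediately if it is already cheap enough.
        skip_cost = rd_cost(ctu, "SKIP")
        if skip_cost < SKIP_THRESHOLD:
            return "SKIP", skip_cost

        best_mode, best_cost = "SKIP", skip_cost
        for mode in ordered_mode_classes:
            cost = rd_cost(ctu, mode)
            if cost < best_cost:
                best_mode, best_cost = mode, cost
            else:
                break  # RD cost started to rise: by unimodality, the optimum has been found
        return best_mode, best_cost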
  • A Gabor Feature-Based Quality Assessment Model for the Screen Content Images
    Zhangkai Ni, Huanqiang Zeng, Lin Ma, Junhui Hou, Jing Chen, and Kai-Kuang Ma.
    IEEE Transactions on Image Processing (T-IP), vol. 27, no. 9, pp. 4516-4528, September 2018.
    Abstract: In this paper, an accurate and efficient full-reference image quality assessment (IQA) model using extracted Gabor features, called the Gabor feature-based model (GFM), is proposed for the objective evaluation of screen content images (SCIs). It is well known that Gabor filters are highly consistent with the response of the human visual system (HVS), and that the HVS is highly sensitive to edge information. Based on these facts, the imaginary part of the Gabor filter, which has odd symmetry and acts as an edge detector, is applied to the luminance components of the reference and distorted SCIs to extract their Gabor features. The local similarities of the extracted Gabor features and of the two chrominance components, recorded in the LMN color space, are then measured independently. Finally, a Gabor-feature pooling strategy is employed to combine these measurements and generate the final evaluation score. Experimental results obtained on two large SCI databases have shown that the proposed GFM model not only yields higher consistency with human perception in the assessment of SCIs but also requires lower computational complexity, compared with classical and state-of-the-art IQA models. (A hedged Python sketch of this pipeline follows the BibTeX entry below.)
    @article{ni2018gabor,
    	title={A Gabor feature-based quality assessment model for the screen content images},
    	author={Ni, Zhangkai and Zeng, Huanqiang and Ma, Lin and Hou, Junhui and Chen, Jing and Ma, Kai-Kuang},
    	journal={IEEE Transactions on Image Processing},
    	volume={27},
    	number={9},
    	pages={4516--4528},
    	year={2018},
    	publisher={IEEE}
    }
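    A hedged Python/NumPy sketch of the pipeline outlined in the abstract: an odd-symmetric (imaginary-part) Gabor kernel, its responses on the reference and distorted luminance planes, an SSIM-style local similarity map, and a simplified pooling that emphasizes strong edges. The kernel parameters, the constant C, and the pooling rule are illustrative assumptions rather than the published settings, and the chrominance terms are omitted.
    import numpy as np
    from scipy.signal import convolve2d

    def odd_gabor_kernel(size=11, sigma=2.0, theta=0.0, wavelength=6.0):
        """Imaginary (odd-symmetric) part of a Gabor filter: a sine-modulated Gaussian,
        which behaves like an edge detector. Parameter values are illustrative only."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

    def gabor_feature_similarity(lum_ref, lum_dist, C=200.0):
        """Compare Gabor edge responses of the reference and distorted luminance planes."""
        k = odd_gabor_kernel()
        g_ref = np.abs(convolve2d(lum_ref, k, mode="same", boundary="symm"))
        g_dist = np.abs(convolve2d(lum_dist, k, mode="same", boundary="symm"))
        sim = (2 * g_ref * g_dist + C) / (g_ref**2 + g_dist**2 + C)   # local similarity map
        weight = np.maximum(g_ref, g_dist)                            # emphasize strong edges
        return float((sim * weight).sum() / (weight.sum() + 1e-12))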
  • Screen Content Image Quality Assessment Using Multi-Scale Difference of Gaussian
    Ying Fu, Huanqiang Zeng, Lin Ma, Zhangkai Ni, Jianqing Zhu, and Kai-Kuang Ma.
    IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), vol. 28, no. 9, pp. 2428-2432, September 2018.
    Abstract: In this paper, a novel image quality assessment (IQA) model for screen content images (SCIs) is proposed using the multi-scale difference of Gaussian (MDOG). Motivated by the observation that the human visual system (HVS) is sensitive to edges and that image details can be better explored at different scales, the proposed model exploits MDOG to effectively characterize the edge information of the reference and distorted SCIs at two different scales. The degree of edge similarity is then measured in terms of the smaller-scale edge map. Finally, the edge strength computed from the larger-scale edge map is used as the weighting factor to generate the final SCI quality score. Experimental results have shown that the proposed IQA model for SCIs produces high consistency with human perception of SCI quality and outperforms state-of-the-art quality models. (A short Python sketch of this scheme follows the BibTeX entry below.)
    @article{fu2018screen,
    	title={Screen content image quality assessment using multi-scale difference of gaussian},
    	author={Fu, Ying and Zeng, Huanqiang and Ma, Lin and Ni, Zhangkai and Zhu, Jianqing and Ma, Kai-Kuang},
    	journal={IEEE Transactions on Circuits and Systems for Video Technology},
    	volume={28},
    	number={9},
    	pages={2428--2432},
    	year={2018},
    	publisher={IEEE}
    }
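    A short Python sketch of the scheme, assuming illustrative sigmas and constant C rather than the published settings: difference-of-Gaussian edge maps at two scales, edge similarity measured on the smaller-scale maps, and the larger-scale edge strength used as the pooling weight.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_edge_map(lum, sigma):
        """Difference-of-Gaussian edge response at one scale (sigma vs. 1.6 * sigma)."""
        lum = np.asarray(lum, dtype=float)
        return np.abs(gaussian_filter(lum, sigma) - gaussian_filter(lum, 1.6 * sigma))

    def mdog_score(lum_ref, lum_dist, sigma_small=1.0, sigma_large=2.0, C=200.0):
        """Similarity on the smaller-scale edge maps, weighted by larger-scale edge strength."""
        e_ref_s = dog_edge_map(lum_ref, sigma_small)
        e_dist_s = dog_edge_map(lum_dist, sigma_small)
        e_ref_l = dog_edge_map(lum_ref, sigma_large)
        e_dist_l = dog_edge_map(lum_dist, sigma_large)
        sim = (2 * e_ref_s * e_dist_s + C) / (e_ref_s**2 + e_dist_s**2 + C)
        weight = np.maximum(e_ref_l, e_dist_l)
        return float((sim * weight).sum() / (weight.sum() + 1e-12))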
  • ESIM: Edge Similarity for Screen Content Image Quality Assessment
    Zhangkai Ni, Lin Ma, Huanqiang Zeng, Jing Chen, Canhui Cai, and Kai-Kuang Ma.
    IEEE Transactions on Image Processing (T-IP), vol. 26, no. 10, pp. 4818-4831, October 2017.
    Abstract: In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM) model, is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges, which are frequently encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA of the SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features, i.e., edge contrast, edge width, and edge direction. The first two attributes are generated simultaneously from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. These three features are extracted from the reference SCI and the distorted SCI individually. The degree of similarity measured for each of the above edge attributes is then computed independently, and the results are combined using our proposed edge-width pooling strategy to generate the final ESIM score. To evaluate the performance of the proposed ESIM model, a new and the largest SCI database to date (denoted SCID) was established in our work and made publicly available for download. The database contains 1,800 distorted SCIs generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results have clearly shown that the proposed ESIM model is more consistent with HVS perception in the evaluation of distorted SCIs than multiple state-of-the-art IQA methods. (A sketch of the similarity-combination and pooling stage follows the BibTeX entry below.)
    @article{ni2017esim,
    	title={ESIM: Edge similarity for screen content image quality assessment},
    	author={Ni, Zhangkai and Ma, Lin and Zeng, Huanqiang and Chen, Jing and Cai, Canhui and Ma, Kai-Kuang},
    	journal={IEEE Transactions on Image Processing},
    	volume={26},
    	number={10},
    	pages={4818--4831},
    	year={2017},
    	publisher={IEEE}
    }
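    A hedged sketch of the final stage only, assuming the edge-contrast, edge-width, and edge-direction maps have already been extracted for the reference and distorted SCIs; the parametric edge model that produces them is not reproduced here, and all constants are placeholders rather than the published values.
    import numpy as np

    def similarity_map(f_ref, f_dist, C):
        """SSIM-style local similarity between two feature maps."""
        return (2 * f_ref * f_dist + C) / (f_ref**2 + f_dist**2 + C)

    def esim_like_score(contrast_ref, contrast_dist, width_ref, width_dist,
                        direction_ref, direction_dist, C1=0.1, C2=0.1, C3=0.1):
        """Combine the three edge-feature similarities and pool them with the edge width."""
        s = (similarity_map(contrast_ref, contrast_dist, C1)
             * similarity_map(width_ref, width_dist, C2)
             * similarity_map(direction_ref, direction_dist, C3))
        weight = np.maximum(width_ref, width_dist)   # edge-width pooling: wide edges dominate
        return float((s * weight).sum() / (weight.sum() + 1e-12))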
  • Gradient Direction for Screen Content Image Quality Assessment
    Zhangkai Ni, Lin Ma, Huanqiang Zeng, Canhui Cai, and Kai-Kuang Ma.
    IEEE Signal Processing Letters (SPL), vol. 23, no. 10, pp. 1394–1398, August 2016.
    Abstract: In this letter, we make the first attempt to explore the use of the gradient direction for perceptual quality assessment of screen content images (SCIs). Specifically, the proposed approach first extracts the gradient direction based on the local information of the image gradient magnitude, which not only preserves gradient-direction consistency in local regions but is also sensitive to the distortions introduced to the SCI. A deviation-based pooling strategy is subsequently utilized to generate the corresponding image quality index. Moreover, we investigate and demonstrate the complementary behaviors of the gradient direction and magnitude for SCI quality assessment. By considering them jointly, our proposed SCI quality metric outperforms state-of-the-art quality metrics in terms of correlation with human visual perception. (A simplified Python sketch follows the BibTeX entry below.)
    @article{ni2016gradient,
    	title={Gradient direction for screen content image quality assessment},
    	author={Ni, Zhangkai and Ma, Lin and Zeng, Huanqiang and Cai, Canhui and Ma, Kai-Kuang},
    	journal={IEEE Signal Processing Letters},
    	volume={23},
    	number={10},
    	pages={1394--1398},
    	year={2016},
    	publisher={IEEE}
    }
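    A simplified Python sketch of the overall pipeline. The paper derives the direction from local gradient-magnitude information; this sketch falls back to a plain per-pixel Prewitt direction, and the angular-similarity measure and deviation pooling below are illustrative choices, so it only shows the shape of the computation.
    import numpy as np
    from scipy.ndimage import prewitt

    def gradient_direction_index(lum_ref, lum_dist):
        """Direction maps -> per-pixel direction agreement -> deviation-based pooling."""
        lum_ref = np.asarray(lum_ref, dtype=float)
        lum_dist = np.asarray(lum_dist, dtype=float)
        dir_ref = np.arctan2(prewitt(lum_ref, axis=0), prewitt(lum_ref, axis=1))
        dir_dist = np.arctan2(prewitt(lum_dist, axis=0), prewitt(lum_dist, axis=1))
        agreement = np.abs(np.cos(dir_ref - dir_dist))  # 1 where directions agree, 0 where orthogonal
        return float(agreement.std())                   # deviation pooling: larger spread => more distortion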


Conference Publications


  • SCID: A database for screen content images quality assessment
    Zhangkai Ni, Lin Ma, Huanqiang Zeng, Ying Fu, Lu Xing, and Kai-Kuang Ma.
    International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 774-779, November 2017.
    Abstract: Perceptual quality assessment of screen content images (SCIs) has become a new and challenging topic in recent image quality assessment (IQA) research. In this work, we construct a new SCI database (called SCID) for the subjective quality evaluation of SCIs and investigate whether existing IQA models can effectively assess the perceptual quality of distorted SCIs. The proposed SCID, currently the largest of its kind, contains 1,800 distorted SCIs generated from 40 reference SCIs, with 9 types of distortions and 5 degradation levels for each distortion type. The double-stimulus impairment scale (DSIS) method is employed to rate the perceptual quality, with each image evaluated by at least 40 assessors. After processing, each distorted SCI is accompanied by one mean opinion score (MOS) value that indicates its perceptual quality as ground truth. Based on the constructed SCID, we evaluate the performance of 14 state-of-the-art IQA metrics. Experimental results show that the existing IQA metrics are not able to evaluate the perceptual quality of SCIs well, and an IQA metric designed specifically for SCIs is thus desirable. The proposed SCID will be made publicly available to the research community for further investigation of the perceptual processing of SCIs. (A small sketch of the MOS computation follows the BibTeX entry below.)
    @inproceedings{ni2017scid,
    	title={SCID: A database for screen content images quality assessment},
    	author={Ni, Zhangkai and Ma, Lin and Zeng, Huanqiang and Fu, Ying and Xing, Lu and Ma, Kai-Kuang},
    	booktitle={2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)},
    	pages={774--779},
    	year={2017},
    	organization={IEEE}
    }
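    A small sketch of the MOS step described in the abstract: averaging the DSIS ratings that each distorted SCI received from its assessors (at least 40 per image in SCID). The image identifiers and scores below are made up, and any subject/outlier screening applied before averaging is omitted.
    import numpy as np

    def mean_opinion_scores(ratings):
        """Map each distorted image id to the mean of its raw DSIS ratings."""
        return {image_id: float(np.mean(scores)) for image_id, scores in ratings.items()}

    # Hypothetical usage with made-up ratings for two distorted SCIs:
    mos = mean_opinion_scores({"SCI01_dist1_level3": [7, 8, 6, 7], "SCI01_dist2_level5": [2, 3, 2, 2]})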
  • Screen content image quality assessment using Euclidean distance
    Ying Fu, Huanqiang Zeng, Zhangkai Ni, Jing Chen, Canhui Cai, and Kai-Kuang Ma.
    International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 44-49, November 2017.
    Abstract: Considering that the human visual system (HVS) is highly sensitive to edges, in this study we design a new full-reference objective quality assessment method for screen content images (SCIs). The key novelty lies in extracting the edge information by computing the Euclidean distance of luminance in the SCIs. Since the HVS is well suited to extracting structural information, structural information is also incorporated into our proposed model. The extracted information is then used to compute the similarity maps between the reference SCI and its distorted version. Finally, we combine the obtained maps using our designed pooling strategy. Experimental results have shown that the designed method achieves higher correlation with subjective quality scores than state-of-the-art quality assessment models.
    @inproceedings{fu2017screen,
    	title={Screen content image quality assessment using Euclidean distance},
    	author={Fu, Ying and Zeng, Huanqiang and Ni, Zhangkai and Chen, Jing and Cai, Canhui and Ma, Kai-Kuang},
    	booktitle={2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)},
    	pages={44--49},
    	year={2017},
    	organization={IEEE}
    }
  • Screen content image quality assessment using edge model
    Zhangkai Ni, Lin Ma, Huanqiang Zeng, Canhui Cai, and Kai-Kuang Ma.
    IEEE International Conference on Image Processing (ICIP), pp. 81–85, August 2016.
    Abstract: Since the human visual system (HVS) is highly sensitive to edges, a novel image quality assessment (IQA) metric for assessing screen content images (SCIs) is proposed in this paper. The key novelty lies in the use of an existing parametric edge model to extract two types of salient attributes, namely edge contrast and edge width, from the distorted SCI under assessment and from its original SCI, respectively. The extracted information is then used to conduct similarity measurements on each attribute independently. The obtained similarity scores are combined using our proposed edge-width pooling strategy to generate the final IQA score, which is intended to be consistent with the judgment made by the HVS. Experimental results have shown that the proposed IQA metric produces higher consistency with the HVS in the evaluation of the image quality of distorted SCIs than other state-of-the-art IQA metrics.
    @inproceedings{ni2016screen,
    	title={Screen content image quality assessment using edge model},
    	author={Ni, Zhangkai and Ma, Lin and Zeng, Huanqiang and Cai, Canhui and Ma, Kai-Kuang},
    	booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
    	pages={81--85},
    	year={2016},
    	organization={IEEE}
    }