Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/18154
| Title: | A No-Reference Framework for Animated Video Quality Prediction in Consumer Multimedia Applications |
| Authors: | Mogasati, Ramya Saisree; Ganavdiya, Sunny; Poreddy, A. K. R.; Appina, Balasubramanyam; Pachori, Ram Bilas |
| Issue Date: | 2026 |
| Publisher: | Institute of Electrical and Electronics Engineers Inc. |
| Citation: | Mogasati, R. S., Ganavdiya, S., Poreddy, A. K. R., Appina, B., & Pachori, R. B. (2026). A No-Reference Framework for Animated Video Quality Prediction in Consumer Multimedia Applications. IEEE Transactions on Consumer Electronics. https://doi.org/10.1109/TCE.2026.3671988 |
| Abstract: | Animated video (AV) content is widely consumed on electronic devices such as televisions, smartphones, virtual and augmented reality headsets, and streaming platforms, where maintaining perceptual quality is essential for user experience. We propose a reference-free AV quality assessment framework that integrates local patch structural and textural attributes with global spatial and temporal features for perceptual quality estimation. We first decompose an AV sequence into non-overlapping 3D local patches to assess the influence of artefacts on spatiotemporal information and process them using separable 3D Gaussian-derivative filters (S-3DGDF). We compute the entropy score of each subband of the S-3DGDF decomposition to measure the uncertainty due to perceptual alterations, and then formulate a 3 × 3 matrix from the estimated entropy values of the individual second-order derivative and cross-derivative subbands. The maximum eigenvalue of this matrix is computed to capture the dominant structural variation. For texture attributes, we perform mean-subtracted contrast normalization across multiple directions of each 3D local patch and measure entropy scores to represent the variation in textural uncertainty due to perceptual corruption. We then compute global temporal quality as the mean SSIM between successive frames and global spatial quality as the mean NIQE across frames. Finally, we pool local structural and textural measurements with global spatial and temporal scores to estimate the overall quality score of a test video. Experimental evaluations on the distorted AV dataset show that the proposed model correlates highly with human ratings and demonstrates superior performance compared to popular unsupervised quality assessment methods. |
| URI: | https://dx.doi.org/10.1109/TCE.2026.3671988 https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18154 |
| ISSN: | 0098-3063 |
| Type of Material: | Journal Article |
| Appears in Collections: | Department of Electrical Engineering |
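The abstract's local-feature steps (entropy scores of subbands pooled into a 3 × 3 matrix whose maximum eigenvalue captures dominant structural variation, plus MSCN-based textural entropy) can be sketched roughly in NumPy. This is a minimal illustration, not the authors' implementation: the histogram-based entropy, the global per-patch normalization, and all function names are assumptions, and the S-3DGDF filtering and pooling details of the published method are not reproduced.

```python
import numpy as np

def shannon_entropy(x, bins=64):
    """Shannon entropy (in bits) of the value histogram of an array.

    The paper measures subband/patch uncertainty via entropy; the bin
    count here is an arbitrary choice for illustration."""
    hist, _ = np.histogram(np.ravel(x), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mscn(patch, eps=1e-6):
    """Mean-subtracted contrast-normalized coefficients of a 3D patch.

    Simplification: global per-patch mean/std are used here instead of
    the windowed local statistics typically used for MSCN."""
    return (patch - patch.mean()) / (patch.std() + eps)

def dominant_structural_score(entropy_matrix):
    """Maximum eigenvalue of the 3x3 matrix of second-order derivative
    and cross-derivative subband entropies (assumed symmetric)."""
    return float(np.linalg.eigvalsh(np.asarray(entropy_matrix)).max())

# Toy usage on a random 3D local patch (a stand-in for a video patch).
rng = np.random.default_rng(0)
patch = rng.random((8, 8, 8))
texture_entropy = shannon_entropy(mscn(patch))

# Stand-in entropy matrix; in the paper, each entry would come from an
# S-3DGDF subband of the patch.
E = np.array([[1.0, 0.2, 0.1],
              [0.2, 0.9, 0.3],
              [0.1, 0.3, 1.1]])
structure_score = dominant_structural_score(E)
```

The eigendecomposition step is why the entropy matrix is formed at all: the largest eigenvalue summarizes the strongest joint variation across the derivative subbands in a single scalar that can be pooled with the textural and global scores.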
Files in This Item:
There are no files associated with this item.