Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/18154
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Mogasati, Ramya Saisree (en_US)
dc.contributor.author: Ganavdiya, Sunny (en_US)
dc.contributor.author: Appina, Balasubramanyam (en_US)
dc.contributor.author: Pachori, Ram Bilas (en_US)
dc.date.accessioned: 2026-05-14T12:28:14Z
dc.date.available: 2026-05-14T12:28:14Z
dc.date.issued: 2026
dc.identifier.citation: Mogasati, R. S., Ganavdiya, S., Poreddy, A. K. R., Appina, B., & Pachori, R. B. (2026). A No-Reference Framework for Animated Video Quality Prediction in Consumer Multimedia Applications. IEEE Transactions on Consumer Electronics. https://doi.org/10.1109/TCE.2026.3671988 (en_US)
dc.identifier.issn: 0098-3063
dc.identifier.other: EID(2-s2.0-105032806095)
dc.identifier.uri: https://dx.doi.org/10.1109/TCE.2026.3671988
dc.identifier.uri: https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18154
dc.description.abstract: Animated video (AV) content is widely consumed on electronic devices such as televisions, smartphones, virtual and augmented reality headsets, and streaming platforms, where maintaining perceptual quality is essential for user experience. We propose a reference-free AV quality assessment framework that integrates local patch structural and textural attributes with global spatial and temporal features for perceptual quality estimation. We first decompose an AV sequence into non-overlapping 3D local patches to assess the influence of artefacts on spatiotemporal information and process them using separable 3D Gaussian-derivative filters (S-3DGDF). We compute the entropy score of each subband of the S-3DGDF decomposition to measure the uncertainty due to perceptual alterations, and then formulate a 3 × 3 matrix from the estimated entropy values of the individual second-order derivative and cross-derivative subbands. The maximum eigenvalue is computed from this matrix to capture the dominant structural variation. For texture attributes, we perform mean-subtracted contrast normalization across multiple directions of each 3D local patch and measure entropy scores to represent the variation in textural uncertainty due to perceptual corruption. We then compute global temporal quality as the mean SSIM between successive frames and global spatial quality as the mean NIQE across frames. Finally, we pool the local structural and textural measurements with the global spatial and temporal scores to estimate the overall quality score of a test video. Experimental evaluations on the distorted AV dataset show that the proposed model correlates highly with human ratings and outperforms popular unsupervised quality assessment methods. © 1975-2011 IEEE. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Institute of Electrical and Electronics Engineers Inc. (en_US)
dc.source: IEEE Transactions on Consumer Electronics (en_US)
dc.title: A No-Reference Framework for Animated Video Quality Prediction in Consumer Multimedia Applications (en_US)
dc.type: Journal Article (en_US)
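The abstract describes two local measurements: the maximum eigenvalue of a 3 × 3 matrix of S-3DGDF subband entropies (structure), and the entropy of mean-subtracted contrast-normalized (MSCN) coefficients (texture). The sketch below, in NumPy, is a minimal illustration of those two computations only; the histogram-based entropy, the matrix layout of the nine subband entropies, and the patch-global MSCN normalization are all simplifying assumptions, since the abstract does not give the paper's exact filters or parameters.

```python
import numpy as np

def entropy(x, bins=64):
    """Shannon entropy (bits) of a coefficient array via a histogram.
    Histogram binning is an assumption; the paper's entropy estimator
    is not specified in the abstract."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def max_eigen_structural_score(subband_entropies):
    """Maximum eigenvalue of a 3x3 matrix of subband entropies.
    `subband_entropies` holds nine entropy values of the second-order
    derivative and cross-derivative subbands; the row/column layout
    here is an assumption."""
    M = np.asarray(subband_entropies, dtype=float).reshape(3, 3)
    M = (M + M.T) / 2  # symmetrize so the eigenvalues are real
    return float(np.linalg.eigvalsh(M).max())

def mscn(patch, eps=1.0):
    """Mean-subtracted contrast normalization of a 3D patch, using a
    patch-global mean and std for simplicity (the paper applies MSCN
    across multiple directions of each patch)."""
    mu = patch.mean()
    sigma = patch.std()
    return (patch - mu) / (sigma + eps)

# Example: texture entropy of a random 3D patch (frames x height x width).
rng = np.random.default_rng(0)
patch = rng.normal(size=(8, 32, 32))
texture_score = entropy(mscn(patch))
```

In this sketch a stronger dominant structural variation shows up as a larger maximum eigenvalue, and heavier perceptual corruption of texture shows up as a shift in the MSCN entropy relative to pristine content.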
Appears in Collections: Department of Electrical Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
