Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/18209
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Ganavdiya, Sunny (en_US)
dc.contributor.author: Mogasati, Ramya Saisree (en_US)
dc.contributor.author: Appina, Balasubramanyam (en_US)
dc.contributor.author: Pachori, Ram Bilas (en_US)
dc.date.accessioned: 2026-05-14T12:28:17Z
dc.date.available: 2026-05-14T12:28:17Z
dc.date.issued: 2026
dc.identifier.citation: Ganavdiya, S., Mogasati, R. S., Appina, B., & Pachori, R. B. (2026). A supervised no-reference quality assessment model for animated videos using 3D Gaussian derivative steerable filtering and statistical feature modeling. Displays, 94. https://doi.org/10.1016/j.displa.2026.103468 (en_US)
dc.identifier.issn: 0141-9382
dc.identifier.other: EID(2-s2.0-105035669552)
dc.identifier.uri: https://dx.doi.org/10.1016/j.displa.2026.103468
dc.identifier.uri: https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18209
dc.description.abstract: Animated video (AV) content is increasingly consumed across diverse digital platforms and display devices, where ensuring high perceptual fidelity is crucial for delivering a satisfactory user experience. In this work, we propose a supervised no-reference quality assessment model for animated videos that integrates statistically driven subband modeling with uncertainty-based feature characterization for perceptual quality estimation. We first decompose each AV sequence into non-overlapping 3D local patches to evaluate the influence of distortions on spatiotemporal structures. These patches are processed using 3D Gaussian-derivative steerable filters configured over multiple azimuth and elevation orientations to extract directionally sensitive subband responses. Each subband is then empirically modeled using a Weibull distribution, and the corresponding shape and scale parameters are estimated to quantify distortion-sensitive statistical variations. To further capture perceptual uncertainty, we compute entropy scores from every subband response, generating a comprehensive representation of the ambiguity introduced by visual artefacts. We then pool all subband-level descriptors by weighting them according to the energy strength of their respective subbands. Finally, we consolidate all feature vectors along with the associated human-assessed labels, and a support vector regression model is subsequently trained to learn the mapping between these features and subjective scores, enabling automatic assessment of perceptual quality. We utilize a benchmark AV dataset with multiple levels of software- and hardware-rendered distortions. Experimental evaluations on this dataset demonstrate that the proposed framework achieves strong correlation with human perception and significantly outperforms existing models in predicting the quality of distorted animated videos. © 2026 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Elsevier B.V. (en_US)
dc.source: Displays (en_US)
dc.title: A supervised no-reference quality assessment model for animated videos using 3D Gaussian derivative steerable filtering and statistical feature modeling (en_US)
dc.type: Journal Article (en_US)
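The abstract's feature pipeline (Weibull shape/scale fitting per subband, entropy of subband responses, energy-weighted pooling) can be sketched as follows. This is a minimal illustration of those statistical steps, not the authors' implementation: the function names, histogram bin count, and fitting choices (`scipy.stats.weibull_min` with location fixed at zero) are assumptions, and the 3D steerable filtering stage that produces the subbands is omitted.

```python
import numpy as np
from scipy.stats import weibull_min, entropy

def subband_features(subband, bins=64):
    """Weibull shape/scale and Shannon entropy for one subband response.

    Illustrative sketch: bin count and the zero-location Weibull fit
    are assumptions, not taken from the paper.
    """
    mag = np.abs(subband).ravel()
    # Clip away exact zeros so the Weibull fit is well defined.
    mag = np.clip(mag, 1e-12, None)
    # Fit a two-parameter Weibull (location fixed at 0, since
    # magnitudes are non-negative); returns (shape, loc, scale).
    shape, _, scale = weibull_min.fit(mag, floc=0)
    # Shannon entropy of the normalized response histogram,
    # used here as a proxy for perceptual uncertainty.
    hist, _ = np.histogram(mag, bins=bins, density=True)
    ent = entropy(hist + 1e-12)
    return shape, scale, ent

def pooled_features(subbands):
    """Pool per-subband descriptors, weighted by subband energy."""
    feats, energies = [], []
    for sb in subbands:
        feats.append(subband_features(sb))
        energies.append(np.sum(np.abs(sb) ** 2))
    w = np.asarray(energies) / np.sum(energies)
    # Energy-weighted sum of (shape, scale, entropy) descriptors.
    return (np.asarray(feats) * w[:, None]).sum(axis=0)
```

The pooled descriptors from all subbands would then be concatenated and regressed against subjective scores (e.g. with a support vector regressor) as described in the abstract.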
Appears in Collections: Department of Electrical Engineering

Files in This Item:
There are no files associated with this item.


