Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/17237
Full metadata record
DC Field | Value | Language
dc.contributor.author | Akhtar, Mushir | en_US
dc.contributor.author | Kumari, Anuradha | en_US
dc.contributor.author | Sajid, M. | en_US
dc.contributor.author | Quadir, A. | en_US
dc.contributor.author | Arshad, Mohd | en_US
dc.contributor.author | Tanveer, M. Sayed | en_US
dc.date.accessioned | 2025-11-27T13:46:15Z | -
dc.date.available | 2025-11-27T13:46:15Z | -
dc.date.issued | 2026 | -
dc.identifier.citation | Akhtar, M., Kumari, A., Sajid, M., Quadir, A., Arshad, M., Suganthan, P. N., & Tanveer, M. S. (2026). Towards robust and inversion-free randomized neural networks: The XG-RVFL framework. Pattern Recognition, 172. https://doi.org/10.1016/j.patcog.2025.112711 | en_US
dc.identifier.isbn | 9781597492720 | -
dc.identifier.isbn | 9780123695314 | -
dc.identifier.issn | 0031-3203 | -
dc.identifier.other | EID(2-s2.0-105021848076) | -
dc.identifier.uri | https://dx.doi.org/10.1016/j.patcog.2025.112711 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17237 | -
dc.description.abstract | Random vector functional link (RVFL) networks offer a computationally efficient alternative to conventional neural networks by leveraging fixed random parameters and closed-form solutions. However, standard RVFL models suffer from two critical limitations: (i) vulnerability to noise and outliers due to their reliance on the squared error loss, and (ii) computational inefficiencies arising from matrix inversion. To address these challenges, we propose XG-RVFL, an enhanced RVFL framework that integrates the novel fleXi guardian (XG) loss function. The proposed XG loss extends the guardian loss function by introducing dynamic asymmetry and boundedness, enabling adaptive penalization of positive and negative deviations. This flexibility enhances robustness to noise, reduces sensitivity to outliers, and improves generalization. In addition, we reformulate the training process to avoid matrix inversion, significantly boosting scalability and efficiency. Beyond empirical performance, we provide a comprehensive theoretical analysis of the XG loss, establishing its key properties, including asymmetry, boundedness, smoothness, Lipschitz continuity, and robustness. Furthermore, we derive a generalization error bound for the XG-RVFL model using Rademacher complexity theory, offering formal guarantees on its expected performance. Extensive experiments on 86 benchmark UCI and KEEL datasets show that XG-RVFL consistently outperforms baseline models. Statistical significance is validated through the Friedman test and Nemenyi post-hoc analysis. Overall, XG-RVFL presents a unified, theoretically grounded, and computationally efficient solution for robust classification, effectively overcoming longstanding limitations of standard RVFL networks. The source code of the proposed XG-RVFL model is accessible at https://github.com/mtanveer1/XG-RVFL. © 2025 Elsevier B.V. All rights reserved. | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Ltd | en_US
dc.source | Pattern Recognition | en_US
dc.subject | FleXi guardian loss | en_US
dc.subject | Guardian loss | en_US
dc.subject | Inverse free optimization | en_US
dc.subject | Rademacher complexity | en_US
dc.subject | Random vector functional link networks | en_US
dc.subject | Randomized neural networks | en_US
dc.subject | Robust classification | en_US
dc.title | Towards robust and inversion-free randomized neural networks: The XG-RVFL framework | en_US
dc.type | Journal Article | en_US
dc.rights.license | All Open Access | -
dc.rights.license | Gold Open Access | -
dc.rights.license | Green Accepted Open Access | -
dc.rights.license | Green Open Access | -
Appears in Collections: Department of Mathematics

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.