Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/17237
Title: Towards robust and inversion-free randomized neural networks: The XG-RVFL framework
Authors: Akhtar, Mushir
Kumari, Anuradha
Sajid, M.
Quadir, A.
Arshad, Mohd
Suganthan, P. N.
Tanveer, M. Sayed
Keywords: FleXi guardian loss;Guardian loss;Inverse free optimization;Rademacher complexity;Random vector functional link networks;Randomized neural networks;Robust classification
Issue Date: 2026
Publisher: Elsevier Ltd
Citation: Akhtar, M., Kumari, A., Sajid, M., Quadir, A., Arshad, M., Suganthan, P. N., & Tanveer, M. S. (2026). Towards robust and inversion-free randomized neural networks: The XG-RVFL framework. Pattern Recognition, 172. https://doi.org/10.1016/j.patcog.2025.112711
Abstract: Random vector functional link (RVFL) networks offer a computationally efficient alternative to conventional neural networks by leveraging some fixed random parameters and closed-form solutions. However, standard RVFL models suffer from two critical limitations: (i) vulnerability to noise and outliers due to their reliance on the squared error loss, and (ii) computational inefficiencies arising from matrix inversion. To address these challenges, we propose XG-RVFL, an enhanced RVFL framework that integrates the novel fleXi guardian (XG) loss function. The proposed XG loss extends the guardian loss function by introducing dynamic asymmetry and boundedness, enabling adaptive penalization of positive and negative deviations. This flexibility enhances robustness to noise, reduces sensitivity to outliers, and improves generalization. In addition, we reformulate the training process to avoid matrix inversion, significantly boosting scalability and efficiency. Beyond empirical performance, we provide a comprehensive theoretical analysis of the XG loss, establishing its key properties, including asymmetry, boundedness, smoothness, Lipschitz continuity, and robustness. Furthermore, we derive a generalization error bound for the XG-RVFL model using Rademacher complexity theory, offering formal guarantees on its expected performance. Extensive experiments on 86 benchmark UCI and KEEL datasets show that XG-RVFL consistently outperforms baseline models. Statistical significance is validated through the Friedman test and Nemenyi post-hoc analysis. Overall, XG-RVFL presents a unified, theoretically grounded, and computationally efficient solution for robust classification, effectively overcoming longstanding limitations of standard RVFL networks. The source code of the proposed XG-RVFL model is accessible at https://github.com/mtanveer1/XG-RVFL.
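Note: For readers unfamiliar with the base model the abstract builds on, the following is a minimal sketch of a *standard* RVFL classifier, the baseline the paper improves upon. It is not the proposed XG-RVFL (the XG loss and the inversion-free reformulation are detailed in the paper itself); function names and hyperparameter values here are illustrative assumptions. It shows the two traits the abstract references: fixed random hidden parameters with direct input-to-output links, and a closed-form ridge solution for the output weights (computed here via a linear solve rather than an explicit matrix inverse).

```python
import numpy as np

def rvfl_train(X, Y, n_hidden=100, C=1.0, seed=None):
    """Train a baseline RVFL classifier (NOT the paper's XG-RVFL).

    Hidden-layer weights and biases are drawn at random and kept fixed;
    only the output weights `beta` are learned, in closed form, by
    ridge regression over the concatenation of the raw inputs (direct
    links) and the random hidden features.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))  # fixed random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)       # fixed random biases
    H = np.tanh(X @ W + b)                          # random hidden features
    D = np.hstack([X, H])                           # direct links: [inputs | hidden]
    # Closed-form ridge solution; np.linalg.solve factors the system
    # instead of forming an explicit inverse of (D^T D + I/C).
    beta = np.linalg.solve(D.T @ D + np.eye(D.shape[1]) / C, D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    """Score new samples with the fixed random layer and learned beta."""
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta
```

The squared-error objective solved here is exactly the sensitivity to outliers the abstract points out: every residual is penalized quadratically and symmetrically, which is what the bounded, asymmetric XG loss is designed to replace.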
URI: https://dx.doi.org/10.1016/j.patcog.2025.112711
https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17237
ISSN: 0031-3203
Type of Material: Journal Article
Appears in Collections:Department of Mathematics

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
