Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/16501
Title: Advancing RVFL Networks: Robust Classification with the HawkEye Loss Function
Authors: Akhtar, Mushir
Mishra, Ritik
Tanveer, M.
Arshad, Mohd.
Keywords: HawkEye loss function;Nesterov accelerated gradient (NAG) algorithm;Random vector functional link (RVFL) network;Robust classification;Squared error loss function
Issue Date: 2025
Publisher: Springer Science and Business Media Deutschland GmbH
Citation: Akhtar, M., Mishra, R., Tanveer, M., & Arshad, M. (2025). Advancing RVFL Networks: Robust Classification with the HawkEye Loss Function. In Lecture Notes in Computer Science: Vol. 15288 LNCS. https://doi.org/10.1007/978-981-96-6582-2_16
Abstract: Random vector functional link (RVFL), a variant of the single-layer feedforward neural network (SLFN), has garnered significant attention due to its lower computational cost and robustness to overfitting. Despite these advantages, the RVFL network's reliance on the squared error loss function makes it highly sensitive to outliers and noise, degrading model performance in real-world applications. To remedy this, we propose incorporating the HawkEye loss (H-loss) function into the RVFL framework. The H-loss function possesses desirable mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone. Each characteristic brings its own advantage: (1) boundedness limits the impact of extreme errors, enhancing robustness against outliers; (2) smoothness facilitates the use of gradient-based optimization algorithms, ensuring stable and efficient convergence; and (3) the insensitive zone mitigates the effect of minor discrepancies and noise. Leveraging the H-loss function, we embed it into the RVFL framework and develop a novel robust RVFL model termed H-RVFL. Notably, this work addresses a significant gap, as no bounded loss function has been incorporated into RVFL to date. The non-convex optimization of the proposed H-RVFL is effectively handled by the Nesterov accelerated gradient (NAG) algorithm, whose computational complexity is also discussed. The proposed H-RVFL model's effectiveness is validated through extensive experiments on 40 benchmark datasets from the UCI and KEEL repositories, with and without label noise. The results show significant improvements in robustness and efficiency, establishing H-RVFL as a powerful tool for applications in noisy and outlier-prone environments. The supplementary material and source code for the proposed H-RVFL are publicly available at https://github.com/mtanveer1/H-RVFL. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
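To make the abstract's recipe concrete, the sketch below trains RVFL output weights with Nesterov accelerated gradient under a bounded, smooth loss that has an insensitive zone. The loss here is an illustrative stand-in with those three properties, not the paper's actual H-loss formula, and all names, data, and hyperparameters (eps, mu, eta) are assumptions for the toy example; see the authors' repository for the real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary data (labels in {-1, +1})
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# RVFL-style features: inputs concatenated with a random hidden layer.
# Random weights W, b stay fixed; only the output weights beta are trained.
W = rng.normal(size=(2, 16))
b = rng.normal(size=16)
D = np.hstack([X, np.tanh(X @ W + b)])  # direct links + enhancement nodes

eps = 0.1  # half-width of the insensitive zone (assumed value)

def loss(u):
    """Illustrative bounded, smooth loss with an insensitive zone
    (a stand-in for the H-loss, NOT the paper's actual formula)."""
    s = np.maximum(0.0, np.abs(u) - eps)
    return 1.0 - np.exp(-s**2)  # bounded above by 1, flat for |u| <= eps

def grad_beta(beta):
    """Gradient of the summed loss w.r.t. the output weights beta."""
    u = y - D @ beta
    s = np.maximum(0.0, np.abs(u) - eps)
    dL_du = 2.0 * s * np.exp(-s**2) * np.sign(u)
    return -D.T @ dL_du  # chain rule: du/dbeta = -D

# Nesterov accelerated gradient (NAG): evaluate the gradient at the
# look-ahead point beta + mu * v before taking the momentum step.
beta = np.zeros(D.shape[1])
v = np.zeros_like(beta)
mu, eta = 0.9, 1e-3  # momentum and step size (assumed values)
for _ in range(300):
    g = grad_beta(beta + mu * v)
    v = mu * v - eta * g
    beta = beta + v

acc = np.mean(np.sign(D @ beta) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the loss gradient vanishes both inside the insensitive zone and for very large residuals, single outliers cannot dominate the update, which is the robustness mechanism the abstract attributes to boundedness.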
URI: https://dx.doi.org/10.1007/978-981-96-6582-2_16
https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16501
ISSN: 0302-9743
Type of Material: Conference Paper
Appears in Collections:Department of Mathematics

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
