Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/18227
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ahuja, Kapil | en_US
dc.date.accessioned | 2026-05-14T12:28:18Z | -
dc.date.available | 2026-05-14T12:28:18Z | -
dc.date.issued | 2026 | -
dc.identifier.citation | Upadhyay, A. K., Ahuja, K., & Dharavath, R. (2026). CI+KL: Confidence and divergence-guided aggregation with mixed-precision training for robust federated learning. Information Sciences, 748. https://doi.org/10.1016/j.ins.2026.123496 | en_US
dc.identifier.issn | 0020-0255 | -
dc.identifier.other | EID(2-s2.0-105035814637) | -
dc.identifier.uri | https://dx.doi.org/10.1016/j.ins.2026.123496 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18227 | -
dc.description.abstract | Federated learning (FL) performance often degrades under non-independent and non-identically distributed (non-IID) client data, noisy updates, and unstable optimization dynamics. In this work, we propose CI+KL, a federated aggregation method that combines two complementary weighting signals: confidence interval (CI) widths, which reflect the statistical reliability of client updates, and Kullback–Leibler (KL) divergence, which captures distributional alignment with a global reference. By jointly accounting for update uncertainty and distributional mismatch, CI+KL adaptively modulates client contributions during aggregation. To improve computational efficiency, CI+KL is integrated with mixed-precision training, using FP16 for local computation and FP32 for global aggregation, reducing memory usage while preserving model accuracy. Experiments on standard benchmarks (CIFAR-10, CIFAR-100, SVHN, MNIST, and Shakespeare) demonstrate that CI+KL achieves stable convergence and competitive accuracy compared to established FL baselines under heterogeneous data distributions. Theoretical analysis supports the use of CI width as a proxy for variance, motivating the variance-reduction effect observed empirically. Overall, CI+KL provides a statistically grounded aggregation framework that demonstrates robustness and efficiency improvements on standard federated learning benchmarks. © 2026 Elsevier Inc. | en_US
dc.language.iso | en | en_US
dc.publisher | Elsevier Inc. | en_US
dc.source | Information Sciences | en_US
dc.title | CI+KL: Confidence and divergence-guided aggregation with mixed-precision training for robust federated learning | en_US
dc.type | Journal Article | en_US
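
The aggregation idea summarized in the abstract lends itself to a short illustration. The Python sketch below is a minimal, hypothetical rendering: it assumes per-client CI widths and client/reference output distributions are already available, and the inverse-then-normalize weighting as well as the helper names kl_divergence, aggregation_weights, and aggregate are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions, smoothed for stability.
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def aggregation_weights(ci_widths, kl_divs, eps=1e-8):
    # A narrow CI (statistically reliable update) and a small KL divergence
    # (good alignment with the global reference) both earn a larger weight.
    # Inverse-then-normalize is an illustrative combination rule, not the
    # paper's exact formula.
    ci_scores = 1.0 / (np.asarray(ci_widths, dtype=np.float64) + eps)
    kl_scores = 1.0 / (np.asarray(kl_divs, dtype=np.float64) + eps)
    raw = ci_scores * kl_scores
    return raw / raw.sum()

def aggregate(client_updates_fp16, ci_widths, kl_divs):
    # Clients train locally in FP16 (as the abstract describes); the server
    # casts to FP32 before the weighted average so accumulation error does
    # not erode the aggregated model.
    w = aggregation_weights(ci_widths, kl_divs)
    agg = np.zeros_like(client_updates_fp16[0], dtype=np.float32)
    for w_i, u in zip(w, client_updates_fp16):
        agg += np.float32(w_i) * u.astype(np.float32)
    return agg

# Toy usage: three clients, 4-parameter "model" updates kept in FP16.
updates = [np.random.randn(4).astype(np.float16) for _ in range(3)]
ci = [0.05, 0.20, 0.10]          # per-client CI widths (hypothetical)
ref = [0.25, 0.25, 0.25, 0.25]   # global reference distribution
kls = [kl_divergence(np.random.dirichlet(np.ones(4)), ref) for _ in range(3)]
print(aggregate(updates, ci, kls))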
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.