Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/17516
Full metadata record
DC Field | Value | Language
dc.contributor.author | Hegde, Suhas | en_US
dc.contributor.author | Kaur, Shilpy | en_US
dc.contributor.author | Tiwari, Aruna | en_US
dc.date.accessioned | 2025-12-25T10:56:43Z | -
dc.date.available | 2025-12-25T10:56:43Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | Hegde, S., Kaur, S., & Tiwari, A. (2025). VectorFit: Adaptive Singular & Bias Vector Fine-Tuning of Pre-Trained Foundation Models. In I. Lynce, N. Murano, M. Vallati, S. Villata, F. Chesani, M. Milano, A. Omicini, & M. Dastani (Eds.), Front. Artif. Intell. Appl. (Vol. 413, pp. 4522–4529). IOS Press BV | en_US
dc.identifier.citation | Scopus. https://doi.org/10.3233/FAIA251353 | en_US
dc.identifier.isbn | 978-1614993605 | -
dc.identifier.isbn | 9781643685830 | -
dc.identifier.isbn | 9781586038311 | -
dc.identifier.isbn | 9781614994183 | -
dc.identifier.isbn | 9781614999409 | -
dc.identifier.isbn | 9781607507987 | -
dc.identifier.isbn | 1586035770 | -
dc.identifier.isbn | 9781643685694 | -
dc.identifier.isbn | 9781643685427 | -
dc.identifier.isbn | 9781607500490 | -
dc.identifier.issn | 09226389 | -
dc.identifier.issn | 1879-8314 | -
dc.identifier.other | EID(2-s2.0-105024458765) | -
dc.identifier.uri | https://dx.doi.org/10.3233/FAIA251353 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17516 | -
dc.description.abstract | Popular PEFT methods reduce trainable parameter count for fine-tuning by parameterizing new low-rank or sparse trainable weights in parallel to the frozen pre-trained weights W. However, these weights are trained from scratch, and there exists a performance gap between these methods and full fine-tuning, especially in low-budget settings. We introduce VectorFit, a new way of parameterization that efficiently utilizes the existing knowledge embedded in W by adaptively training their singular vectors and biases. We show that utilizing the structural and transformational properties of W in this way can lead to high-rank incremental weight matrices ΔW, comparable to that of full fine-tuning. VectorFit delivers superior results with 9× fewer trainable parameters than the leading PEFT methods. Through comprehensive experiments across 19 datasets covering a wide range of language and vision tasks such as natural language understanding and generation, question answering, image classification, and image generation, we demonstrate that VectorFit surpasses baselines in terms of performance as a function of parameter-efficiency. © 2025 The Authors. | en_US
dc.language.iso | en | en_US
dc.publisher | IOS Press BV | en_US
dc.source | Frontiers in Artificial Intelligence and Applications | en_US
dc.title | VectorFit: Adaptive Singular & Bias Vector Fine-Tuning of Pre-Trained Foundation Models | en_US
dc.type | Conference Paper | en_US
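
Note: the abstract above describes fine-tuning a frozen pre-trained weight W by adaptively training its singular and bias vectors. The following is a minimal, hypothetical PyTorch sketch (not the authors' released code) of one such parameterization, assuming W is decomposed once by SVD and only the vector of singular values plus the bias are left trainable while U and Vᵀ stay frozen; the paper's exact adaptive scheme may differ, and all class and variable names are illustrative.

import torch
import torch.nn as nn

class SVDFineTuneLinear(nn.Module):
    """Wraps a pre-trained linear layer so that only the singular-value
    vector (sigma) and the bias are trainable; U and Vh stay frozen."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        # Decompose the frozen pre-trained weight once: W = U @ diag(sigma) @ Vh
        U, S, Vh = torch.linalg.svd(pretrained.weight.data, full_matrices=False)
        self.register_buffer("U", U)            # frozen left singular vectors
        self.register_buffer("Vh", Vh)          # frozen right singular vectors
        self.sigma = nn.Parameter(S.clone())    # trainable singular-value vector
        bias = (pretrained.bias.data.clone() if pretrained.bias is not None
                else torch.zeros(pretrained.out_features))
        self.bias = nn.Parameter(bias)          # trainable bias vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the adapted weight from frozen U, Vh and trainable sigma.
        W = self.U @ torch.diag(self.sigma) @ self.Vh
        return x @ W.T + self.bias

if __name__ == "__main__":
    layer = SVDFineTuneLinear(nn.Linear(64, 64))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")  # 64 (sigma) + 64 (bias) = 128

Because only two vectors per layer are updated, the trainable-parameter count stays far below that of low-rank adapter methods, while updates to sigma can still induce a high-rank change ΔW in the reconstructed weight.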
Appears in Collections: Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
