Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/17516
Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hegde, Suhas | en_US |
| dc.contributor.author | Kaur, Shilpy | en_US |
| dc.contributor.author | Tiwari, Aruna | en_US |
| dc.date.accessioned | 2025-12-25T10:56:43Z | - |
| dc.date.available | 2025-12-25T10:56:43Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.citation | Hegde, S., Kaur, S., & Tiwari, A. (2025). VectorFit: Adaptive Singular & Bias Vector Fine-Tuning of Pre-Trained Foundation Models. In I. Lynce, N. Murano, M. Vallati, S. Villata, F. Chesani, M. Milano, A. Omicini, & M. Dastani (Eds.), Front. Artif. Intell. Appl. (Vol. 413, pp. 4522–4529). IOS Press BV | en_US |
| dc.identifier.citation | Scopus. https://doi.org/10.3233/FAIA251353 | en_US |
| dc.identifier.isbn | 9781614993605 | - |
| dc.identifier.isbn | 9781643685830 | - |
| dc.identifier.isbn | 9781586038311 | - |
| dc.identifier.isbn | 9781614994183 | - |
| dc.identifier.isbn | 9781614999409 | - |
| dc.identifier.isbn | 9781607507987 | - |
| dc.identifier.isbn | 1586035770 | - |
| dc.identifier.isbn | 9781643685694 | - |
| dc.identifier.isbn | 9781643685427 | - |
| dc.identifier.isbn | 9781607500490 | - |
| dc.identifier.issn | 09226389 | - |
| dc.identifier.issn | 1879-8314 | - |
| dc.identifier.other | EID(2-s2.0-105024458765) | - |
| dc.identifier.uri | https://dx.doi.org/10.3233/FAIA251353 | - |
| dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17516 | - |
| dc.description.abstract | Popular PEFT methods reduce trainable parameter count for fine-tuning by parameterizing new low-rank or sparse trainable weights in parallel to the frozen pre-trained weights W. However, these weights are trained from scratch, and there exists a performance gap between these methods and full fine-tuning, especially in low-budget settings. We introduce VectorFit, a new way of parameterization that efficiently utilizes the existing knowledge embedded in W by adaptively training their singular vectors and biases. We show that utilizing the structural and transformational properties of W in this way can lead to high-rank incremental weight matrices ΔW, comparable to that of full fine-tuning. VectorFit delivers superior results with 9× fewer trainable parameters than the leading PEFT methods. Through comprehensive experiments across 19 datasets covering a wide range of language and vision tasks such as natural language understanding and generation, question answering, image classification, and image generation, we demonstrate that VectorFit surpasses baselines in terms of performance as a function of parameter efficiency. © 2025 The Authors. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IOS Press BV | en_US |
| dc.source | Frontiers in Artificial Intelligence and Applications | en_US |
| dc.title | VectorFit: Adaptive Singular & Bias Vector Fine-Tuning of Pre-Trained Foundation Models | en_US |
| dc.type | Conference Paper | en_US |
| Appears in Collections: | Department of Computer Science and Engineering | |
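
The abstract above describes fine-tuning only the singular and bias vectors of the frozen pre-trained weights W rather than adding new low-rank adapters. The snippet below is a minimal sketch of one plausible reading of that idea, assuming the vector of singular values from an SVD of each frozen weight matrix is trained together with the bias; the class name, layer sizes, and structure are illustrative and not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fine-tune only the singular-value
# vector s and the bias of a frozen pre-trained linear layer, keeping the
# singular bases U and Vh fixed.
import torch
import torch.nn as nn


class SVDVectorLinear(nn.Module):
    """Linear layer whose weight is reconstructed as U @ diag(s) @ Vh,
    with U and Vh frozen and only s (and the bias) trainable."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        # Decompose the frozen pre-trained weight once.
        U, s, Vh = torch.linalg.svd(pretrained.weight.data, full_matrices=False)
        self.register_buffer("U", U)      # frozen left singular vectors
        self.register_buffer("Vh", Vh)    # frozen right singular vectors
        self.s = nn.Parameter(s.clone())  # trainable singular-value vector
        bias = pretrained.bias
        self.bias = nn.Parameter(bias.data.clone()) if bias is not None else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the weight from the trained vector s at each forward pass.
        weight = self.U @ torch.diag(self.s) @ self.Vh
        return nn.functional.linear(x, weight, self.bias)


# Usage: wrap a pre-trained projection layer and fine-tune only s and the bias.
layer = SVDVectorLinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # ~768 + 768 vs. 768*768 + 768 for full fine-tuning
```

Because U and Vh span the full row and column spaces of W, updating the singular-value vector can change many directions of the weight at once, which is consistent with the abstract's claim that the resulting incremental matrices ΔW can be high-rank, unlike low-rank adapter updates.
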
Files in This Item:
There are no files associated with this item.