Investigation on Quantization of LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models

Author(s)
Yu, Yifan
Abstract
Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning (Dettmers et al., 2023). In this work we focus on the scenario where quantization and LoRA fine-tuning are applied together on a pretrained model. In such cases it is common to observe a consistent gap on downstream tasks between full fine-tuning and the quantization plus LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization on downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed precision regimes. The code is available at https://github.com/yxli2123/LoftQ.
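A minimal Python sketch of the idea the abstract describes: jointly finding a quantized weight Q and rank-r LoRA factors A, B so that Q + A Bᵀ stays close to the pretrained weight W, by alternating between quantizing the residual and fitting a truncated-SVD correction. The `uniform_quantize` stand-in and all function names here are illustrative assumptions, not the released implementation; the actual 2-bit/NF4 quantizers live in the repository linked above.

```python
# Sketch of a LoftQ-style initialization: alternate between quantizing
# the residual and fitting a rank-r low-rank correction via SVD, so that
# Q + A @ B.T approximates the full-precision weight W.
import torch


def uniform_quantize(w: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Hypothetical stand-in quantizer (uniform per-tensor rounding);
    the released code uses NF4/2-bit quantizers instead."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return torch.round((w - lo) / scale) * scale + lo


def loftq_init(w: torch.Tensor, rank: int = 16, bits: int = 2, steps: int = 5):
    """Alternating optimization: quantize the residual W - A B^T, then
    refit A, B from the new quantization error via truncated SVD."""
    a = torch.zeros(w.shape[0], rank)
    b = torch.zeros(w.shape[1], rank)
    for _ in range(steps):
        q = uniform_quantize(w - a @ b.T, bits)           # quantize residual
        u, s, vh = torch.linalg.svd(w - q, full_matrices=False)
        a = u[:, :rank] * s[:rank]                        # LoRA "A" factor
        b = vh[:rank, :].T                                # LoRA "B" factor
    return q, a, b


if __name__ == "__main__":
    w = torch.randn(256, 256)
    q, a, b = loftq_init(w, rank=16, bits=2)
    err = (torch.norm(w - (q + a @ b.T)) / torch.norm(w)).item()
    print(f"relative approximation error: {err:.4f}")
```

The returned Q would serve as the frozen quantized backbone and (A, B) as the LoRA initialization, in contrast to the usual zero/Gaussian LoRA init that ignores the quantization error.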
Resource Type
Text
Resource Subtype
Undergraduate Research Option Thesis