Yes. Fine-tuning techniques such as LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation) were developed primarily to make fine-tuning of large language models more efficient, especially transformer-based architectures. However, their core principle, adapting a frozen pre-trained model through a small number of additional trainable parameters, can in principle be extended to other neural network architectures, including Convolutional Neural Networks (CNNs).
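To make that principle concrete, here is a minimal sketch of the standard LoRA idea applied to a dense layer, assuming PyTorch; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, not part of any particular library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: freeze a pre-trained linear layer and learn a
    low-rank update W + (alpha / r) * B @ A, where A and B have rank r."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # original weights stay frozen
        # low-rank factors: A projects down to r, B projects back up
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path plus trainable low-rank path (starts as a no-op
        # because lora_B is initialized to zero)
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only `lora_A` and `lora_B` receive gradients, which is where the parameter savings come from.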
To my knowledge, there is not yet a substantial body of literature or well-documented cases explicitly applying LoRA or QLoRA to pre-trained CNN architectures. Most research and applications of these techniques have focused on transformers because of their prominence in natural language processing.
Nevertheless, adapting such techniques for CNNs would involve:
- Modifying the LoRA/QLoRA framework to accommodate convolutional layers, since convolutional filters are structured differently from the dense projection matrices in transformers (see the sketch after this list).
- Ensuring that parameter compression or quantization does not severely impact the performance of the CNN on tasks like image classification, object detection, or image segmentation.
- Conducting extensive experimentation and evaluation on standard datasets to validate the effectiveness and robustness of these techniques in the context of CNNs.
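As a rough illustration of the first point, one could factor the convolutional update into a pair of low-rank convolutions, analogous to the B·A decomposition used for dense layers. This is only a sketch under the assumption of a standard PyTorch `Conv2d` base layer; `LoRAConv2d` and its hyperparameters are hypothetical names for illustration, and details such as grouped or dilated convolutions are not handled:

```python
import torch
import torch.nn as nn

class LoRAConv2d(nn.Module):
    """Hypothetical LoRA-style adapter for a frozen pre-trained Conv2d:
    the update is factored into a k x k conv down to r channels followed
    by a 1 x 1 conv back up to the original number of output channels."""
    def __init__(self, base: nn.Conv2d, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained filters frozen
        # "down" conv mirrors the base layer's kernel/stride/padding so the
        # adapter output has the same spatial size as the frozen path
        self.lora_down = nn.Conv2d(base.in_channels, r,
                                   kernel_size=base.kernel_size,
                                   stride=base.stride,
                                   padding=base.padding,
                                   bias=False)
        self.lora_up = nn.Conv2d(r, base.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.lora_up.weight)  # adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.lora_up(self.lora_down(x)) * self.scaling
```

Wrapping selected convolutional layers of a pre-trained CNN this way and training only the adapter weights would be one plausible starting point for the kind of experimentation described above.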