AI models can now be customized with far less data and computing power

An overview of BiDoRA. BiDoRA performs parameter-efficient fine-tuning (PEFT) using a bi-level optimization (BLO) framework. At the lower level, BiDoRA learns the direction component ∆V of the update matrices using the training split of the downstream dataset. At the upper level, BiDoRA optimizes the magnitude component m with the optimized ∆V from the lower level, using the validation split of the dataset. After determining the optimal magnitude, the direction component undergoes further fine-tuning on the combined training and validation splits to maximize overall performance. Credit: Transactions on Machine Learning Research (2025).
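For readers who want to see what the decomposition in the caption looks like in code, below is a minimal PyTorch sketch of a weight-decomposed low-rank layer: a frozen pretrained weight, a low-rank direction update ∆V, and a learnable magnitude m. It is an illustration under stated assumptions, not the authors' released implementation (linked at the end of this article); the class name, rank, initialization, and normalization axis are choices made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDecomposedLinear(nn.Module):
    """Illustrative weight-decomposed low-rank linear layer, in the spirit of
    the decomposition in the caption: a frozen pretrained weight plus a
    low-rank direction update (Delta V = B @ A), rescaled per output feature
    by a learnable magnitude vector m."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        # Frozen pretrained weight (the base model is not updated).
        self.weight0 = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias  # bias kept from the base layer
        # Low-rank direction update Delta V = B @ A (trained at the lower level).
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Magnitude vector m, one scale per output feature (trained at the upper level).
        self.m = nn.Parameter(self.weight0.norm(dim=1, keepdim=True).clone())

    def forward(self, x):
        direction = self.weight0 + self.B @ self.A
        # Normalize per output feature; the exact normalization axis is an
        # assumption of this sketch.
        direction = direction / direction.norm(dim=1, keepdim=True)
        return F.linear(x, self.m * direction, self.bias)
```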

Engineers at the University of California San Diego have created a new method to make large language models (LLMs)—such as the ones that power chatbots and protein sequencing tools—learn new tasks using significantly less data and computing power.

LLMs are made up of billions of parameters that determine how they process information. Traditional fine-tuning methods adjust all of these parameters, which can be costly and prone to overfitting—when a model memorizes patterns instead of truly understanding them, causing it to perform poorly on new examples.

The new method, called BiDoRA and developed by UC San Diego engineers, takes a smarter approach. Instead of adjusting all of a model's parameters, it updates only a small set that matters most: low-rank weight updates split into a direction, learned on the training data, and a magnitude, tuned on a held-out validation split. As a result, the new method cuts costs, is more flexible, and generalizes better than existing fine-tuning methods.
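To make the two training levels concrete, here is a simplified PyTorch training-loop sketch. It assumes the adapter layer sketched above (parameters named A and B for the direction, m for the magnitude) and replaces the paper's actual bi-level optimization with a plain alternating schedule, so it illustrates the idea rather than reproducing the released algorithm linked below.

```python
import itertools
import torch

def fit_bidora_style(model, train_loader, val_loader, loss_fn, steps=1000, lr=1e-4):
    """Simplified alternating sketch of the two-level schedule in the figure.
    The published method solves a true bi-level problem (differentiating
    through the lower-level solution); alternating the two updates here is
    only an approximation of that procedure."""
    # Group the adapter parameters as in the layer sketch above:
    # A, B form the low-rank direction update; m is the magnitude vector.
    dir_params = [p for n, p in model.named_parameters() if n.endswith((".A", ".B"))]
    mag_params = [p for n, p in model.named_parameters() if n.endswith(".m")]
    opt_dir = torch.optim.AdamW(dir_params, lr=lr)
    opt_mag = torch.optim.AdamW(mag_params, lr=lr)

    train_iter = itertools.cycle(train_loader)
    val_iter = itertools.cycle(val_loader)
    for _ in range(steps):
        # Lower level: learn the direction Delta V on the training split.
        x, y = next(train_iter)
        opt_dir.zero_grad()
        loss_fn(model(x), y).backward()
        opt_dir.step()

        # Upper level: tune the magnitude m on the validation split.
        x, y = next(val_iter)
        opt_mag.zero_grad()
        loss_fn(model(x), y).backward()
        opt_mag.step()

    # Final stage: freeze m and further fine-tune the direction on the
    # combined training and validation data (one illustrative pass).
    for p in mag_params:
        p.requires_grad_(False)
    for x, y in itertools.chain(train_loader, val_loader):
        opt_dir.zero_grad()
        loss_fn(model(x), y).backward()
        opt_dir.step()
```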

The researchers showed that their method can fine-tune protein language models—which are used to study and predict the properties of proteins—even when very little training data are available. For example, in predicting whether certain peptides can cross the blood-brain barrier, the new method achieved higher accuracy than conventional methods while using 326 times fewer parameters. In predicting protein thermostability, it matched the performance of full fine-tuning while using 408 times fewer parameters.

“With our method, even small labs and startups without huge budgets, supercomputer-level resources or large datasets can adapt large AI models for their own needs,” said Pengtao Xie, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering. “This work represents a step toward democratizing AI.”

The new method for fine-tuning and adapting LLMs is published in Transactions on Machine Learning Research.

More information:
BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation, Transactions on Machine Learning Research (2025). openreview.net/forum?id=v2xCm3VYl4

Code: github.com/t2ance/BiDoRA

Provided by
University of California – San Diego


Citation:
AI models can now be customized with far less data and computing power (2025, October 21)
retrieved 21 October 2025
from https://techxplore.com/news/2025-10-ai-customized-power.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.