Two popular approaches for customizing large language models (LLMs) for downstream tasks are fine-tuning and in-context learning (ICL). In a recent study, researchers at Google DeepMind and Stanford University explored the generalization capabilities of these two methods. They found that ICL generalizes better to new tasks, though it comes at a higher computational cost at inference time. They also propose a novel approach that combines the strengths of both.
The findings can help developers make crucial decisions when building LLM applications for their bespoke enterprise data.
Testing how language models learn new tricks
Fine-tuning involves taking a pre-trained LLM and further training it on a smaller, specialized dataset. This adjusts the model’s internal parameters to teach it new knowledge or skills. In-context learning (ICL), on the other hand, doesn’t change the model’s underlying parameters. Instead, it guides the LLM by providing examples of the desired task directly within the input prompt. The model then uses these examples to figure out how to handle a new, similar query.
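A rough sketch of the contrast, in Python, may help make it concrete. The data format, the prompt wording and the commented-out train_on_examples() and generate() calls are illustrative assumptions, not the setup used in the study:

```python
# Illustrative sketch only: the record format and the hypothetical
# train_on_examples() / generate() helpers are assumptions, not the
# paper's actual pipeline.

# Fine-tuning: the new facts become training examples that update the
# model's weights.
finetune_examples = [
    {"prompt": "Are femp more dangerous than glon?", "completion": "Yes."},
    {"prompt": "All glon are yomp. True or false?", "completion": "True."},
]
# train_on_examples(base_model, finetune_examples)  # weights change here

# In-context learning: the same facts are placed in the prompt at
# inference time; the model's weights stay fixed.
icl_prompt = (
    "femp are more dangerous than glon.\n"
    "All glon are yomp.\n"
    "Question: Are glon less dangerous than femp?\n"
    "Answer:"
)
# response = base_model.generate(icl_prompt)  # no weight updates
```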
The researchers set out to rigorously compare how well models generalize to new tasks using these two methods. They constructed “controlled synthetic datasets of factual knowledge” with complex, self-consistent structures, like imaginary family trees or hierarchies of fictional concepts.
To ensure they were testing the model’s ability to learn new information, they replaced all nouns, adjectives, and verbs with nonsense terms, avoiding any overlap with the data the LLMs might have encountered during pre-training.
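The substitution step itself is simple in principle. Here is a minimal sketch of the idea; the word lists and mapping are invented for illustration and are not drawn from the paper:

```python
import random

# Swap each content word for a made-up token so the facts cannot
# overlap with anything seen during pre-training.
nonsense_vocab = ["femp", "glon", "yomp", "troff", "blicket", "dax"]
random.shuffle(nonsense_vocab)

content_words = ["sharks", "dolphins", "mammals"]  # example real terms
substitution = dict(zip(content_words, nonsense_vocab))

fact = "sharks are more dangerous than dolphins"
scrubbed = " ".join(substitution.get(w, w) for w in fact.split())
print(scrubbed)  # e.g. "glon are more dangerous than femp"
```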
The models were then tested on various generalization challenges. For instance, one test involved simple reversals. If a model was trained on the fact that “femp are more dangerous than glon,” could it correctly infer that “glon are less dangerous than femp”? Another test focused on simple syllogisms, a form of logical deduction. If told “All glon are yomp” and “All troff are glon,” could the model deduce that “All troff are yomp”? They also used a more complex “semantic structure benchmark” with a richer hierarchy of these made-up facts to test more nuanced understanding.
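The two checks can be expressed as simple test templates. The functions below are illustrative stand-ins, not the researchers’ actual evaluation code:

```python
def reversal_test(a: str, b: str) -> tuple[str, str]:
    """Given a trained fact 'a are more dangerous than b', the held-out
    query asks about the reversed relation."""
    trained_fact = f"{a} are more dangerous than {b}"
    held_out_query = f"Are {b} less dangerous than {a}?"  # expected answer: yes
    return trained_fact, held_out_query


def syllogism_test(x: str, y: str, z: str) -> tuple[list[str], str]:
    """Given 'All x are y' and 'All z are x', the model should deduce
    'All z are y'."""
    premises = [f"All {x} are {y}", f"All {z} are {x}"]
    conclusion = f"All {z} are {y}"  # expected deduction
    return premises, conclusion


print(reversal_test("femp", "glon"))
print(syllogism_test("glon", "yomp", "troff"))
```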
“Our results are focused primarily on settings about how models generalize to deductions and reversals from fine-tuning on novel knowledge structures, with clear implications for situations when fine-tuning is used to adapt a model to company …