
Fine-tuning

Continuing to train a base LLM on your own data so it learns your domain or style. Less common in 2026 than in 2023 because frontier models are good enough out of the box for most use cases. Useful for specialised tasks where prompting is not enough.

How it works

You collect labeled examples (input → desired output) and run additional training on them. Modern fine-tuning is usually parameter-efficient (LoRA, QLoRA) — you train a small adapter rather than the full model. OpenAI, Anthropic (selectively), and most open-source providers offer fine-tuning.
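The low-rank adapter idea behind LoRA can be sketched in a few lines. This is a minimal, illustrative sketch (no actual training loop, sizes and hyperparameters are made up): the frozen base weight W is augmented with a low-rank product B @ A, and only A and B are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8              # hypothetical layer size and rank

W = rng.standard_normal((d_out, d_in))    # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection (zero init)
alpha = 16                                # LoRA scaling hyperparameter

def forward(x):
    # Base path plus scaled low-rank adapter path: W x + (alpha/r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialised to zero, the adapter is a no-op at the start of
# training, so the adapted model begins identical to the base model.
assert np.allclose(forward(x), W @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3%}")
```

For these sizes the adapter trains about 3% of the parameters of the full matrix, which is why parameter-efficient fine-tuning is so much cheaper than full fine-tuning.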

Example

A legal-tech company fine-tunes Llama 4 Maverick on 10,000 contract clauses paired with their standard markup, producing a specialised model that outperforms prompting alone for that specific task.
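A training set like the one above is typically a file of input → desired-output pairs, often JSONL. The snippet below is a hypothetical sketch of what two such examples might look like; the field names and markup schema are invented for illustration, and real providers each specify their own format.

```python
import json

# Hypothetical JSONL training data: each record pairs a raw contract
# clause (input) with the firm's standard markup (desired output).
examples = [
    {"input": "The Supplier shall indemnify the Customer against all losses.",
     "output": "<clause type='indemnity' party='supplier'/>"},
    {"input": "Either party may terminate on 30 days' written notice.",
     "output": "<clause type='termination' notice_days='30'/>"},
]

# Write one JSON object per line, the usual JSONL convention.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Verify the file round-trips cleanly.
with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]
assert loaded == examples
```

At a scale of 10,000 such pairs, consistency of the output markup matters more than raw volume: noisy or contradictory labels are trained into the model just as readily as good ones.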
