What Is Fine-Tuning in AI?

Fine-tuning is the process of taking a pre-trained large language model and training it further on a smaller, specialized dataset to improve its performance on a specific task.

How Fine-Tuning Works

  1. Start with a pre-trained base model (e.g., Llama 3, GPT-4)
  2. Prepare a dataset of input-output examples for your task
  3. Train the model on this data, updating its weights
  4. The resulting model retains general knowledge but excels at your specific use case
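Step 2 above — preparing input-output examples — is commonly done in JSONL format, one prompt/completion pair per line. A minimal sketch; the field names `prompt` and `completion` are one common convention, and the sentiment examples are made up for illustration:

```python
import json

# Hypothetical training examples: each pair maps an input to the
# desired output for the target task (here, sentiment classification).
examples = [
    {"prompt": "Classify the sentiment: 'Great product!'",
     "completion": "positive"},
    {"prompt": "Classify the sentiment: 'Broke after a day.'",
     "completion": "negative"},
]

def to_jsonl(records):
    """Serialize records as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

In practice you would write this string to a file and pass it to your training tool; check your provider's docs for the exact field names it expects.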

When to Fine-Tune

Use Case                         | Fine-Tuning? | Alternative
Match a specific writing style   | Yes          | Detailed prompting
Answer questions about your docs | No           | RAG is better
Classify support tickets         | Yes          | Few-shot prompting
Generate domain-specific code    | Maybe        | RAG + good prompts

Fine-Tuning vs. Prompting vs. RAG

  • Prompting: No training needed. Add instructions and examples to the prompt. Good for most tasks.
  • RAG: Retrieve relevant docs at query time. Best for factual, knowledge-heavy tasks.
  • Fine-tuning: Permanently bakes behavior into the model. Best for style, tone, and classification.
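The trade-offs above can be condensed into a rough decision heuristic. This is an illustrative sketch, not a rule — the inputs and decision order are simplifying assumptions based on the bullets above:

```python
def choose_approach(needs_fresh_knowledge: bool,
                    needs_consistent_style: bool,
                    has_labeled_examples: bool) -> str:
    """Illustrative heuristic for picking prompting, RAG, or fine-tuning.

    Mirrors the bullets above: factual, knowledge-heavy tasks point to
    RAG; style or classification tasks with training data point to
    fine-tuning; everything else starts with plain prompting.
    """
    if needs_fresh_knowledge:
        return "RAG"
    if needs_consistent_style and has_labeled_examples:
        return "fine-tuning"
    return "prompting"

print(choose_approach(True, False, False))  # knowledge-heavy task -> RAG
```

Real decisions also weigh cost, latency, and data volume, so treat this as a starting point rather than a flowchart to follow blindly.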

Methods

  • Full fine-tuning: Updates all model parameters. Expensive, requires significant GPU resources.
  • LoRA (Low-Rank Adaptation): Freezes the original weights and trains small low-rank adapter matrices instead. Much cheaper and faster.
  • RLHF (Reinforcement Learning from Human Feedback): Fine-tuning on human preference signals to align the model's behavior with human expectations.
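LoRA's savings come from replacing a full d×k weight update with two low-rank factors, B (d×r) and A (r×k), so only r·(d+k) parameters train instead of d·k. A back-of-the-envelope sketch; the layer size and rank below are made-up but typical assumptions:

```python
def lora_param_counts(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full_update_params, lora_params) for one d-by-k weight matrix.

    LoRA learns W + B @ A, where B is d-by-r and A is r-by-k,
    while the original weights W stay frozen.
    """
    full = d * k          # parameters in a full fine-tuning update
    lora = r * (d + k)    # parameters in the two low-rank factors
    return full, lora

# Hypothetical transformer layer: 4096 x 4096 weights, LoRA rank 8.
full, lora = lora_param_counts(4096, 4096, 8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
# -> full: 16,777,216  lora: 65,536  ratio: 256x
```

This 256x reduction per layer is why LoRA fine-tuning fits on a single consumer GPU where full fine-tuning would not.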

Elvean brings all these concepts together in one native Mac app — local models, cloud APIs, agentic tools, and more.

Learn more about Elvean