Training & Fine-tuning

Catastrophic Forgetting

When a model loses previously learned knowledge after being fine-tuned on new data.

Definition

When you fine-tune a model on new data, training on the new material can overwrite or disrupt the knowledge learned during pre-training; this is called catastrophic forgetting. For example, a model fine-tuned aggressively on legal documents may lose the general reasoning ability it previously had. Mitigations include parameter-efficient methods such as LoRA, which freeze the base weights and train only small adapter matrices; careful learning rate scheduling; and mixing a sample of the original training data into the fine-tuning set, often called rehearsal or replay. A sketch of two of these follows below.
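The following is a minimal sketch of two of these mitigations, LoRA adapters and rehearsal, written in PyTorch with the Hugging Face peft and transformers libraries. The model choice (gpt2), the random stand-in datasets, and all hyperparameters are illustrative assumptions rather than recommendations.

import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small pre-trained base model (gpt2 used purely for illustration).
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: freeze the base weights and train only small low-rank adapter
# matrices, leaving the pre-trained knowledge largely intact.
lora_config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Rehearsal: mix the new domain data with a sample of general-purpose
# data so gradient updates keep reinforcing earlier behaviour.
# Random token IDs stand in for real tokenised examples here.
legal_dataset = TensorDataset(torch.randint(0, 50257, (100, 128)))  # new domain
general_sample = TensorDataset(torch.randint(0, 50257, (25, 128)))  # replayed data
loader = DataLoader(ConcatDataset([legal_dataset, general_sample]),
                    batch_size=8, shuffle=True)

# A deliberately small learning rate limits how far the optimiser
# drifts from the pre-trained weights.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

Because only the adapter matrices receive gradient updates in this setup, the base weights keep what was learned in pre-training, and the fine-tuned behaviour can later be merged in or switched off.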



Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.


Last modified: 29 March 2026
