Training & Fine-tuning

Overfitting

When a model memorises training data rather than learning general patterns.

Definition

Overfitting occurs when a model performs excellently on the data it was trained on but poorly on new, unseen examples. Instead of learning the underlying pattern, the model has essentially memorised the training examples. It's analogous to a student who memorises past exam answers rather than understanding the subject — they'll score well on a test with familiar questions but fail on unfamiliar ones. The telltale sign is a widening gap between performance on the training data and performance on held-out data. Regularisation, dropout, and careful data management are used to prevent overfitting.
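The idea can be seen in a minimal sketch using NumPy polynomial fitting. The dataset, noise level, and polynomial degrees below are illustrative assumptions, not a real training pipeline: a flexible degree-9 polynomial can pass through every one of ten noisy training points (near-zero training error) yet track the true underlying curve far worse than a simpler degree-3 fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small, noisy training set drawn from an underlying pattern: y = sin(x) + noise
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.2, size=x_train.shape)

# Unseen evaluation data from the same underlying pattern (no noise, for clarity)
x_test = np.linspace(0, 3, 50)
y_test = np.sin(x_test)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Simple model: degree-3 polynomial learns the general shape
simple = np.polyfit(x_train, y_train, deg=3)

# Over-complex model: degree-9 polynomial has enough capacity
# to effectively memorise all ten training points
complex_ = np.polyfit(x_train, y_train, deg=9)

print(f"simple:  train MSE {mse(simple, x_train, y_train):.4f}, "
      f"test MSE {mse(simple, x_test, y_test):.4f}")
print(f"complex: train MSE {mse(complex_, x_train, y_train):.4f}, "
      f"test MSE {mse(complex_, x_test, y_test):.4f}")
```

The over-complex model's training error is close to zero while its error on unseen data balloons — the gap described above. Regularisation techniques work by penalising exactly this kind of excess flexibility.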

Why this matters for your business

Overfitting is a key risk when fine-tuning AI on small, narrow datasets. If you fine-tune on too few examples, the model may perform well in testing but poorly in production on slightly different inputs. Holding back a validation set the model never trains on is the simplest safeguard: if training performance keeps improving while validation performance stalls or degrades, the model is memorising rather than learning.

Heard enough terminology — ready to talk outcomes?

We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.


Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.

Document reference: ISO_webpage_knowledge-base_glossary_v1

Last modified: 29 March 2026
