Infrastructure & Deployment

Quantisation

Reducing a model's numerical precision to make it smaller and faster, with minimal quality loss.

Definition

Neural networks store their weights as floating-point numbers, typically at 32-bit or 16-bit precision. Quantisation reduces this precision — for example, to 8-bit or 4-bit values — making the model smaller and faster to run. The quality loss from quantisation is often surprisingly small, and quantised models can run on hardware that would otherwise be insufficient. Quantisation is the primary technique for deploying large models in resource-constrained environments.
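As a minimal illustrative sketch (not the scheme used by any particular framework, which typically applies more sophisticated per-channel or block-wise methods), the snippet below quantises a randomly generated float32 weight matrix to 8-bit integers with a single scale factor, then dequantises it to show the size reduction and the average error introduced. All names and values here are hypothetical.

```python
import numpy as np

# Hypothetical weight matrix, stored in full 32-bit precision.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

# Symmetric 8-bit quantisation: one scale factor maps the observed
# float range onto the signed integer range [-127, 127].
scale = np.abs(weights).max() / 127.0
quantised = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantise to measure how much precision was lost.
restored = quantised.astype(np.float32) * scale
mean_error = np.abs(weights - restored).mean()

print(f"Original size:   {weights.nbytes / 1e6:.1f} MB")   # ~67 MB at 32-bit
print(f"Quantised size:  {quantised.nbytes / 1e6:.1f} MB")  # ~17 MB at 8-bit
print(f"Mean abs error:  {mean_error:.6f}")
```

The 4x size reduction comes directly from storing 8 bits per weight instead of 32; the error stays small because the scale factor preserves the relative magnitudes of the original values.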

Heard enough terminology — ready to talk outcomes?

We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.


Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.

Document reference: ISO_webpage_knowledge-base_glossary_v1

Last modified: 29 March 2026
