Infrastructure & Deployment
Model Serving
The infrastructure that makes a trained model available to receive and respond to requests.
Definition
Model serving is the engineering layer that takes a trained model and makes it available at scale: handling incoming requests, managing load, keeping latency low, and returning responses. It includes inference servers, load balancers, scaling mechanisms, and monitoring. Serving is distinct from training: you train a model once, but you serve it continuously. The reliability, cost, and performance of model serving directly determine the user experience of any AI-powered product.
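To make the idea concrete, here is a minimal sketch of a serving endpoint, assuming Python with FastAPI and a scikit-learn-style model saved as "model.pkl" (the framework choice, filename, and request shape are all illustrative assumptions, not a prescribed stack). The model is loaded once at startup and then serves inference requests continuously.

```python
# Minimal model-serving sketch (illustrative; real stacks add batching,
# autoscaling, and monitoring on top of this pattern).
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained model once at startup, not per request:
# training happens once, serving runs continuously.
with open("model.pkl", "rb") as f:  # assumed artifact name
    model = pickle.load(f)

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Inference: the trained model runs on the incoming request.
    y = model.predict([req.features])[0]
    return PredictResponse(prediction=float(y))

# Run with: uvicorn serve:app --workers 4
# Multiple workers behind a load balancer raise throughput; monitoring
# and autoscaling sit in front of this process in production.
```

In this pattern the inference server is the process above, while load balancing, scaling, and monitoring are separate layers wrapped around it.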
Related Terms
Inference
The process of a trained AI model running and producing outputs, as opposed to being trained.
Latency
The time between sending a prompt and receiving a response.
Throughput
How many requests an AI system can handle per unit of time (a rough way to measure both latency and throughput is sketched after this list).
MLOps
The practice of managing AI models in production: deployment, monitoring, and updating.
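As a rough illustration of latency and throughput, the sketch below times repeated calls against the hypothetical /predict endpoint from the earlier example. The URL, payload, and request count are assumptions for illustration, and a single sequential client understates the throughput a load-balanced deployment could sustain.

```python
# Rough latency/throughput measurement against the illustrative /predict
# endpoint above. URL, payload, and request count are assumed values.
import time

import requests

URL = "http://localhost:8000/predict"    # assumed local server
PAYLOAD = {"features": [1.0, 2.0, 3.0]}  # assumed feature vector
N = 100                                  # number of timed requests

latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    requests.post(URL, json=PAYLOAD, timeout=10)
    latencies.append(time.perf_counter() - t0)  # latency: send -> response
elapsed = time.perf_counter() - start

latencies.sort()
print(f"p50 latency: {latencies[N // 2] * 1000:.1f} ms")        # typical case
print(f"p95 latency: {latencies[int(N * 0.95)] * 1000:.1f} ms") # tail latency
print(f"throughput:  {N / elapsed:.1f} req/s")  # requests per unit of time
```

Production measurements would typically use many concurrent clients and report tail percentiles over longer windows, since tail latency under load is what users actually experience.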
Heard enough terminology — ready to talk outcomes?
We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026