Performance & Evaluation

Latency

The time between sending a prompt and receiving a response.

Definition

Latency is the delay between submitting a request to an AI model and receiving its response. For conversational applications, high latency creates a poor user experience. For real-time applications — live customer service, fraud detection, safety systems — even small delays may be unacceptable. Latency is determined by model size, server capacity, network conditions, and response length. Smaller, quantised models generally have lower latency than large, full-precision models.
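To make the definition concrete, the sketch below times a single request end to end. It is a minimal illustration only: the endpoint URL and JSON payload shape are placeholders, not any specific provider's API.

```python
import json
import time
import urllib.request

# Hypothetical endpoint; substitute your model provider's actual API URL.
ENDPOINT = "https://api.example.com/v1/generate"

def measure_latency(prompt: str) -> float:
    """Return end-to-end latency in seconds for a single request."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request) as response:
        response.read()  # block until the full response body has arrived
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Latency: {measure_latency('Hello'):.3f} s")
```

Note that this measures total latency, including response length and network conditions as well as model speed, which is why the same model can show very different figures in different deployments.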

Why this matters for your business

When selecting an AI model for customer-facing applications, latency should be tested under realistic load conditions — vendor-quoted figures often reflect single-user benchmarks, not peak concurrent usage.
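One way to run such a test is sketched below: a minimal concurrent load test that reports median (p50) and 95th-percentile (p95) latency. The `send_request` argument is a placeholder for whatever call your application actually makes, and the concurrency and request counts are illustrative defaults, not recommendations.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def timed_call(send_request) -> float:
    """Time one request made by the caller-supplied function."""
    start = time.perf_counter()
    send_request()
    return time.perf_counter() - start

def load_test(send_request, concurrency: int = 20, total: int = 200) -> None:
    """Fire `total` requests across `concurrency` workers and report p50/p95."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: timed_call(send_request), range(total)))
    p50 = statistics.median(latencies)
    p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
    print(f"p50: {p50:.3f} s   p95: {p95:.3f} s "
          f"({total} requests, {concurrency} concurrent)")
```

For example, `load_test(lambda: measure_latency("Hello"))` reuses the single-request sketch above as the unit of work. Reporting percentiles rather than an average matters here: p95 exposes the tail latency that concurrent load inflates first, which is typically what users experience at peak.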

Heard enough terminology — ready to talk outcomes?

We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.

Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.

Document reference: ISO_webpage_knowledge-base_glossary_v1

Last modified: 29 March 2026
