Training & Fine-tuning
RLHF (Reinforcement Learning from Human Feedback)
Teaching an AI model to improve its responses by using human ratings, so that its behaviour aligns with human preferences.
Definition
RLHF is a training technique in which human evaluators rate AI-generated responses, for example by preferring one response over another, flagging harmful content, or scoring helpfulness. These human preferences are used to train a 'reward model' that can automatically judge response quality. The main AI model is then further trained, using reinforcement learning, to produce responses the reward model scores highly. RLHF is a key technique behind the usefulness and safety of assistants such as ChatGPT and Claude.
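To make the first stage concrete, here is a minimal, illustrative sketch of training a reward model from pairwise human preferences. It uses plain PyTorch with random tensors standing in for real response embeddings; the class name, dimensions, and the Bradley-Terry-style loss are illustrative assumptions rather than the pipeline of any particular system.

```python
# Toy sketch: training a reward model from pairwise human preferences.
# All names, sizes and data here are illustrative stand-ins.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy stand-in for a reward model: maps a fixed-size response
    embedding to a single scalar quality score."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style objective: push the score of the human-preferred
    response above the score of the rejected one."""
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy training loop: random tensors stand in for embeddings of real responses.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    chosen = torch.randn(32, 128)    # embeddings of responses humans preferred
    rejected = torch.randn(32, 128)  # embeddings of responses humans rejected
    loss = preference_loss(model(chosen), model(rejected))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real pipeline the reward model is usually a full language model scoring complete prompt-response pairs, but the preference-based objective is the same idea.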
Related Terms
Reinforcement Learning
A training paradigm in which a model receives rewards or penalties based on the quality of its outputs and updates its behaviour to earn higher rewards (a simplified sketch appears after this list).
Reward Model
A model trained on human preference data to score AI outputs automatically; its scores stand in for human judgement during RLHF training.
AI Alignment
The challenge of ensuring AI systems pursue goals that match human values and intentions.
Constitutional AI
Anthropic's approach to training AI using a set of principles rather than only human feedback.
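The reinforcement-learning step mentioned above can be sketched in a similarly simplified form. The toy loop below uses a REINFORCE-style update to nudge a small 'policy' towards outputs that a frozen reward model scores highly, with a KL penalty keeping it close to a frozen reference copy of the original model. Every module, dimension, and coefficient here is a stand-in chosen for illustration; production RLHF typically applies PPO or a related algorithm to full language models.

```python
# Toy sketch of the RL step in RLHF: maximise the learned reward while
# staying close to the original (reference) model. All components are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 50, 16

policy = nn.Linear(DIM, VOCAB)                  # trainable "policy" head
reference = nn.Linear(DIM, VOCAB)               # frozen copy of the original model
reference.load_state_dict(policy.state_dict())
token_embed = nn.Embedding(VOCAB, DIM)          # frozen token embeddings
reward_model = nn.Linear(DIM, 1)                # frozen, already-trained reward model
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    prompt = torch.randn(32, DIM)               # stand-in for prompt representations
    dist = torch.distributions.Categorical(logits=policy(prompt))
    action = dist.sample()                      # "response" token chosen by the policy

    with torch.no_grad():
        reward = reward_model(token_embed(action)).squeeze(-1)   # reward-model score
        ref_dist = torch.distributions.Categorical(logits=reference(prompt))

    # KL term keeps the policy close to the reference model, a standard RLHF safeguard.
    kl = torch.distributions.kl_divergence(dist, ref_dist)
    shaped_reward = reward - 0.1 * kl.detach()  # fold the KL penalty into the reward

    # REINFORCE-style update: raise the probability of highly rewarded outputs.
    loss = -(shaped_reward * dist.log_prob(action)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The KL penalty matters because a policy trained purely to maximise the reward model's score can drift into degenerate outputs that exploit the reward model's blind spots; anchoring to the reference model limits that drift.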
Heard enough terminology — ready to talk outcomes?
We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026