Safety, Alignment & Ethics
AI Alignment
The challenge of ensuring AI systems pursue goals that match human values and intentions.
Definition
AI alignment is the broad challenge of building AI systems that reliably do what their developers and users actually want, rather than pursuing narrow objectives in unexpected or harmful ways. A system optimised purely to maximise user engagement might spread misinformation; a system optimised to complete tasks efficiently might take harmful shortcuts. Alignment research seeks to understand and solve this problem before AI systems become powerful enough for misalignment to cause serious harm.
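To make the engagement example concrete, here is a minimal, purely illustrative Python sketch (all field names and numbers are hypothetical): a system that scores content by a click-style proxy ends up preferring clickbait over accurate, useful material.

# Illustrative only: optimising a proxy metric can diverge from the true goal.

def true_value(article):
    """What we actually want: accurate, useful content."""
    return article["accuracy"] * article["usefulness"]

def engagement_proxy(article):
    """What the system is told to maximise: a click-style score."""
    # Sensational content attracts clicks regardless of accuracy.
    return article["sensationalism"] + 0.1 * article["usefulness"]

candidates = [
    {"accuracy": 0.9, "usefulness": 0.8, "sensationalism": 0.2},  # solid reporting
    {"accuracy": 0.2, "usefulness": 0.1, "sensationalism": 0.9},  # clickbait
]

best_by_proxy = max(candidates, key=engagement_proxy)
best_by_value = max(candidates, key=true_value)
print(best_by_proxy == best_by_value)  # False: the proxy rewards the clickbait

The point is not the specific numbers but the structure: whenever the optimised proxy and the true goal diverge, a capable optimiser will exploit the gap.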
Related Terms
AI Safety
The broader field concerned with preventing AI systems from causing harm, whether intended or accidental.
RLHF (Reinforcement Learning from Human Feedback)
A technique that fine-tunes a model on human preference ratings so its responses better match what people want (a minimal sketch follows this list).
Constitutional AI
Anthropic's approach to training AI using a set of principles rather than only human feedback.
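To ground the RLHF entry above, the sketch below shows the pairwise preference loss commonly used to train a reward model from human comparisons, assuming a Bradley-Terry formulation; the scores are hypothetical stand-ins for a learned reward model's outputs.

import math

def preference_loss(score_preferred, score_rejected):
    """Negative log-probability that the human-preferred response wins."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Small loss when the reward model already agrees with the human rating;
# large loss, and hence a strong training signal, when it disagrees.
print(round(preference_loss(2.0, 0.5), 2))  # 0.2
print(round(preference_loss(0.5, 2.0), 2))  # 1.7

Trained this way, the reward model learns to score responses as human raters would, and the main model is then optimised against that learned reward.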
Heard enough terminology — ready to talk outcomes?
We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026