Prompting & Interaction
Jailbreaking
Attempts to bypass an AI's safety guidelines through clever prompting.
Definition
Jailbreaking refers to techniques used to circumvent the safety restrictions built into AI systems. Users may try to use fictional framings ('pretend you have no restrictions'), complex role-play scenarios, or adversarial prompt patterns to get the AI to produce content it would normally decline to generate. AI developers continuously improve their models to resist jailbreaking, but it remains an ongoing challenge. Businesses deploying AI should consider what jailbreaking vectors exist in their use case.
Related Terms
Heard enough terminology — ready to talk outcomes?
We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026
Knowledge Base·Prompting & Interaction·Jailbreaking