Model Architecture
Transformer
The core architecture that powers most modern LLMs, introduced by Google researchers in the 2017 paper "Attention Is All You Need".
Definition
The transformer is the architectural breakthrough behind modern AI. Before 2017, language models processed text sequentially, one word at a time, which made them slow to train and poor at linking words that sit far apart. The transformer's key innovation was self-attention: the model considers all words in a piece of text at once and weighs how relevant each word is to every other word. Because the whole sequence is processed in parallel, training can be spread efficiently across hardware, which made far larger datasets practical and produced dramatically more capable models. Almost every major AI system today, including ChatGPT, Claude, and Gemini, is built on this architecture.
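For readers who want to see the mechanism concretely, the sketch below implements the core attention calculation in plain NumPy. It is a simplified illustration only: real transformers add learned query, key, and value projections, multiple attention heads, and many stacked layers. The function name and toy data are ours, not taken from any particular library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores: how relevant each position (row) finds every other position (column).
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into weights that sum to 1 across positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each position's output is a relevance-weighted mix of all value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))     # a toy 4-word "sentence", each word an 8-dim vector
output, attn = scaled_dot_product_attention(words, words, words)  # self-attention
print(attn.round(2))                # 4x4 matrix of word-to-word relevance weights
print(output.shape)                 # (4, 8): one updated vector per word
```

The printed 4x4 matrix is the "weigh how relevant each word is to every other word" step from the definition above, written out as arithmetic.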
Related Terms
Attention Mechanism
The part of a transformer that lets the model focus on the most relevant words regardless of where they appear in a sentence.
Large Language Model (LLM)
A type of AI trained on vast amounts of text that can read, write, summarise, and reason with language.
Encoder
The part of a model that reads the input and converts it into an internal representation the rest of the model can work with.
Decoder
The part of a model that generates the output, typically one token at a time; the sketch below shows how the encoder and decoder fit together.
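As a rough illustration of that split, the sketch below uses PyTorch's built-in nn.Transformer module. The layer counts, dimensions, and random tensors are placeholders chosen for the example; a real model would add token embeddings, positional information, and an output layer.

```python
import torch
import torch.nn as nn

# A small transformer with both an encoder and a decoder (sizes are illustrative).
model = nn.Transformer(d_model=32, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)

src = torch.randn(1, 10, 32)   # the input sequence the encoder reads (10 positions)
tgt = torch.randn(1, 7, 32)    # the output sequence the decoder is producing (7 positions)

# The encoder turns the input into representations; the decoder attends to those
# representations while generating each output position.
out = model(src, tgt)
print(out.shape)               # torch.Size([1, 7, 32])
```

Note that many modern LLMs, including GPT-style chat models, use only the decoder half of this design.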
Heard enough terminology — ready to talk outcomes?
We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.
Disclaimer
This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.
Document reference: ISO_webpage_knowledge-base_glossary_v1
Last modified: 29 March 2026