Model Architecture

Positional Encoding

How a model keeps track of word order, since transformers don't process text sequentially on their own.

Definition

Transformers process all the tokens in a sequence simultaneously, which is a large part of what makes them fast and powerful. The trade-off is that they have no built-in sense of word order. Positional encoding adds information about each token's position in the sequence, so the model can tell that 'the dog bit the man' means something different from 'the man bit the dog', even though both sentences contain exactly the same words.
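One common way to supply this information, introduced in the original transformer paper, is to add a fixed pattern of sine and cosine waves of different frequencies to the token embeddings, so that every position gets a unique signature. The sketch below illustrates that sinusoidal scheme in Python; the function name and the toy sizes are illustrative only, and many recent models use learned or rotary position encodings instead.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings."""
    positions = np.arange(seq_len)[:, np.newaxis]   # (seq_len, 1) token positions
    dims = np.arange(d_model)[np.newaxis, :]        # (1, d_model) embedding dimensions
    # Each pair of dimensions uses a different wavelength, from 2*pi up to 10000*2*pi.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])     # even dimensions use sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])     # odd dimensions use cosine
    return encoding

# The encoding is simply added to the token embeddings before the first layer.
token_embeddings = np.random.randn(8, 16)           # 8 tokens, 16-dim embeddings (toy sizes)
inputs_with_position = token_embeddings + sinusoidal_positional_encoding(8, 16)
print(inputs_with_position.shape)                    # (8, 16)
```

Because the positional signal is simply added to the embeddings, the rest of the model needs no structural changes to become order-aware.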

Heard enough terminology — ready to talk outcomes?

We translate AI concepts into measurable business results. No upfront fees — you pay only when independently verified results are delivered.


Disclaimer

This definition is provided for educational and informational purposes only. It represents a general explanation of a technical concept and does not constitute professional, technical, or investment advice. Artificial intelligence is a rapidly evolving field; terminology, techniques, and capabilities change frequently. Coaley Peak Ltd makes no warranty as to the accuracy, completeness, or currency of the information provided. Nothing on this page should be relied upon as the sole basis for commercial, technical, legal, or investment decisions without independent professional advice.

Document reference: ISO_webpage_knowledge-base_glossary_v1

Last modified: 29 March 2026
