wikipedia:Generative Pre-trained Transformer
- GPT-4 (Mar 2023)
- GPT-3 (Jun 2020, beta): a decoder-only transformer network with a 2048-token context window and 175 billion parameters, requiring about 800 GB to store (see the rough arithmetic after this list).
- GPT-2 (Feb 2019)
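As a rough sanity check on the storage figure, the short Python sketch below (illustrative only, not from the article) multiplies the 175 billion parameter count by common bytes-per-parameter values; the precision labels and the ~800 GB interpretation are assumptions.

 # Illustrative arithmetic: approximate raw weight storage for GPT-3's
 # 175 billion parameters at common numeric precisions. The ~800 GB
 # figure cited above is consistent with full-precision weights plus
 # additional checkpoint state; exact size depends on the format used.
 N_PARAMS = 175e9  # reported GPT-3 parameter count
 BYTES_PER_PARAM = {
     "fp32": 4,        # single precision
     "fp16/bf16": 2,   # half precision
 }
 for precision, nbytes in BYTES_PER_PARAM.items():
     gigabytes = N_PARAMS * nbytes / 1e9
     print(f"{precision}: ~{gigabytes:.0f} GB of raw weights")

Running this prints roughly 700 GB for fp32 and 350 GB for fp16/bf16 weights alone, which is why the full stored model lands in the hundreds-of-gigabytes range.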
See also
- Transformer, GPT, Transformer 8, Etched, Megatron-Core
- GPT, GPT-2, GPT-3, GPT-4, GPT-4o, Tiktoken, Bigram, Transformer, PaLM, ChatGPT