LLaMA2
[[wikipedia:LLaMA2]] pretrained models are trained on 2 [[trillion]] [[tokens]].
* https://ai.meta.com/llama/
== Related ==
* q4
* Ollama uses 4-bit (q4) quantization; a rough size estimate is sketched below.
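Since "q4" here means 4-bit weights, a back-of-the-envelope calculation makes the claim concrete. The Python sketch below is illustrative only: the parameter counts are the published LLaMA 2 model sizes, while the bytes-per-weight figures are simplifications (real 4-bit formats such as llama.cpp's q4_0 store extra per-block scaling factors, so actual files are somewhat larger).

<syntaxhighlight lang="python">
# Weight-only memory estimate for LLaMA 2 checkpoints at different precisions.
# Figures are approximations; real 4-bit formats add per-block scale overhead.

PARAM_COUNTS = {          # published LLaMA 2 model sizes (parameters)
    "llama2-7b": 7e9,
    "llama2-13b": 13e9,
    "llama2-70b": 70e9,
}

BYTES_PER_WEIGHT = {
    "fp16": 2.0,  # 16-bit floats, as the models are released
    "q8":   1.0,  # 8-bit quantization
    "q4":   0.5,  # 4-bit quantization (the "q4" mentioned above)
}

def approx_size_gb(n_params: float, bytes_per_weight: float) -> float:
    """Approximate size of the weights alone, in gigabytes."""
    return n_params * bytes_per_weight / 1e9

for model, n_params in PARAM_COUNTS.items():
    row = ", ".join(
        f"{fmt}: {approx_size_gb(n_params, b):.1f} GB"
        for fmt, b in BYTES_PER_WEIGHT.items()
    )
    print(f"{model}: {row}")
# llama2-7b:  fp16: 14.0 GB, q8: 7.0 GB,  q4: 3.5 GB
# llama2-13b: fp16: 26.0 GB, q8: 13.0 GB, q4: 6.5 GB
# llama2-70b: fp16: 140.0 GB, q8: 70.0 GB, q4: 35.0 GB
</syntaxhighlight>

The 7B row (roughly 3.5 GB of weights at q4 versus about 14 GB at fp16) is the main reason 4-bit quantization is attractive for running LLaMA 2 on consumer hardware.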
== See also ==