LLaMA2
[[wikipedia:LLaMA2]] pretrained models are trained on 2 [[trillion]] [[tokens]].
* https://ai.meta.com/llama/
* <code>[[ollama run llama2]]</code> (see the API sketch below)
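
Once the model has been pulled, it can also be queried programmatically. Below is a minimal sketch using Ollama's local HTTP API; it assumes <code>ollama serve</code> is running on the default port 11434 and that the <code>llama2</code> model has already been pulled.

<syntaxhighlight lang="python">
# Minimal sketch: query a local Llama 2 model through Ollama's HTTP API.
# Assumes `ollama serve` is listening on the default port 11434 and the
# `llama2` model has already been pulled (e.g. via `ollama pull llama2`).
import json
import urllib.request

def generate(prompt: str, model: str = "llama2") -> str:
    """Send a single non-streaming generation request to Ollama."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
</syntaxhighlight>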
== Related ==
* q4
* Ollama uses 4-bit quantization
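
To illustrate what "q4" 4-bit quantization means, here is a minimal sketch of q4_0-style block quantization, the scheme popularized by llama.cpp that Ollama builds on. The function names and block size here are illustrative assumptions, not Ollama's actual code: each block of 32 float weights is stored as one float32 scale plus 32 signed 4-bit integers, shrinking the weights to roughly a quarter of their 16-bit size.

<syntaxhighlight lang="python">
# Illustrative sketch of q4_0-style block quantization; not Ollama's or
# llama.cpp's actual implementation. Each block of 32 float weights is
# stored as one float32 scale plus 32 signed 4-bit integers in [-8, 7].
import numpy as np

BLOCK = 32  # llama.cpp's q4_0 also groups weights into blocks of 32

def quantize_q4(w: np.ndarray):
    """Quantize a 1-D float array to (per-block scales, 4-bit codes)."""
    w = w.reshape(-1, BLOCK)
    # One scale per block, so values land in the signed 4-bit range.
    scale = np.abs(w).max(axis=1, keepdims=True) / 8.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return scale.astype(np.float32), q

def dequantize_q4(scale, q):
    """Reconstruct approximate float weights from the quantized form."""
    return (q.astype(np.float32) * scale).reshape(-1)

if __name__ == "__main__":
    w = np.random.randn(4 * BLOCK).astype(np.float32)
    scale, q = quantize_q4(w)
    print("max abs error:", np.abs(w - dequantize_q4(scale, q)).max())
</syntaxhighlight>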
== See also ==
* {{ollama}}
* {{llama}}
* {{Meta AI}}
* {{LLM}}

[[Category:AI]]