LLaMA2

From wikieduonline
[[wikipedia:LLaMA2]] pretrained models are trained on 2 [[trillion]] [[tokens]].

* https://ai.meta.com/llama/
* <code>[[ollama run llama2]]</code>
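The command above can be tried directly once Ollama is installed; a minimal shell session might look like this (the default <code>llama2</code> tag pulls a quantized variant of the model, so the download is a few gigabytes):

```shell
# Download the llama2 model weights (one-time download),
# then start an interactive chat session.
ollama pull llama2
ollama run llama2

# Or answer a single prompt non-interactively:
ollama run llama2 "Summarize what LLaMA2 is in one sentence."
```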
 
== Related ==

* [[q4]]
* Ollama uses 4-bit [[quantization]]
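To illustrate the idea behind 4-bit quantization, here is a simplified, hypothetical sketch in Python: symmetric per-block quantization, where each block of weights shares one scale and every weight is rounded to an integer in [-7, 7]. The real q4 formats used by Ollama/GGUF differ in detail (for example, they pack two 4-bit values into each byte, while this sketch stores them in int8 for clarity):

```python
import numpy as np

def quantize_q4(weights: np.ndarray, block_size: int = 32):
    """Symmetric 4-bit block quantization (simplified sketch).

    Each block of `block_size` weights gets one float scale; weights
    are rounded to integers in [-7, 7]. Real q4 formats pack two
    4-bit values per byte; int8 storage here is for readability.
    """
    w = weights.reshape(-1, block_size)
    # One scale per block: map the largest magnitude to 7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from quantized blocks."""
    return (q * scale).astype(np.float32).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, s = quantize_q4(w)
w_hat = dequantize_q4(q, s)
print(f"max reconstruction error: {np.abs(w - w_hat).max():.3f}")
```

The trade-off is visible here: storage drops roughly 8x versus float32 (4 bits per weight plus one scale per block), at the cost of a small per-weight reconstruction error bounded by half a quantization step.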
  
 
== See also ==

* {{ollama}}
* {{llama}}
* {{Meta AI}}
* {{LLM}}

[[Category:AI]]

Latest revision as of 19:24, 22 December 2023
