LLaMA2
From wikieduonline
Revision as of 19:21, 22 December 2023 by Welcome (talk | contribs)
wikipedia:LLaMa
https://ai.meta.com/llama/
ollama run llama2
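Besides the `ollama run llama2` CLI shown above, a running Ollama server also exposes an HTTP API. A minimal sketch, assuming a local Ollama server on its default port 11434 and the `/api/generate` endpoint (the model name `llama2` matches the CLI command above):

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: server running on this port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Send the request to the locally running Ollama server and
    # return the generated text from the JSON response.
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage: `generate("llama2", "Why is the sky blue?")` returns the model's completion, equivalent to typing the prompt into `ollama run llama2`.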
Related
q4: Ollama uses 4-bit quantization by default
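To illustrate what 4-bit quantization means, here is a minimal sketch of symmetric 4-bit weight quantization in pure Python. This is an illustration of the general idea only, not Ollama's exact scheme (Ollama's backends use block-wise formats such as q4_0 with additional packing); the function names and the block size are assumptions for the example:

```python
def quantize_q4(weights: list[float]) -> tuple[list[int], float]:
    # Symmetric 4-bit quantization: map each float weight to one of
    # 16 signed integer levels in [-8, 7], sharing one scale factor
    # across the whole block of weights.
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_q4(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights from the 4-bit codes.
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07, -0.21]
q, scale = quantize_q4(weights)
restored = dequantize_q4(q, scale)
```

Each original weight is recovered to within half a quantization step (`scale / 2`), while storing only 4 bits per weight plus one scale per block; that size reduction is why a quantized llama2 fits in far less memory than the fp16 original.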
See also
Ollama [show | server | run], LLaMA, LLaMA2, Llama 3-V 405B
LLM, MLLM, LoRA, LLaMA, LLaMA3, QLoRA, Falcon, PaLM 2, Gemini, Mixtral 8x7B, BitNet, Measuring Massive Multitask Language Understanding (MMLU), NVLM

Category: AI