Mistral AI
[[wikipedia:Mistral AI]] ([https://www.crunchbase.com/organization/mistral-ai Crunchbase])
* Founders: [[Arthur Mensch]], [[Timothée Lacroix]] and [[Guillaume Lample]]
* Products:
** [[Mistral 7B]], [[Mixtral 8x7B]]
** [[Mistral Small]], [[Mistral Medium]], [[Mistral Large]]

[[Mistral 7B]] uses [[Grouped-query attention (GQA)]], intended for faster inference, and [[Sliding Window Attention (SWA)]], intended to handle longer sequences.
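
With GQA, several query heads share one key/value head, which shrinks the KV cache and speeds up inference; with SWA, each token attends only to a fixed window of recent positions, so attention cost stays bounded as the context grows. Below is a minimal NumPy sketch of the two mechanisms together; the function name, toy sizes and mask convention are illustrative assumptions, not Mistral's actual code. The published Mistral 7B configuration uses 32 query heads, 8 key/value heads, head dimension 128 and a 4096-token window.

<syntaxhighlight lang="python">
# Toy sketch of grouped-query attention (GQA) combined with a causal
# sliding-window mask (SWA). Shapes and sizes here are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gqa_sliding_window(q, k, v, window):
    """q: (n_heads, seq, d); k, v: (n_kv_heads, seq, d) with n_kv_heads < n_heads."""
    n_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_heads // n_kv_heads          # query heads sharing one KV head
    # Repeat each KV head so every query head in a group reads the same K/V.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)     # (n_heads, seq, seq)
    # Causal sliding window: position i only sees positions (i - window, i].
    i = np.arange(seq)[:, None]
    j = np.arange(seq)[None, :]
    mask = (j > i) | (j <= i - window)
    scores = np.where(mask, -np.inf, scores)
    return softmax(scores) @ v                          # (n_heads, seq, d)

# Toy sizes; Mistral 7B itself uses 32 query heads, 8 KV heads, d=128, window=4096.
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 16, 32))
k = rng.standard_normal((2, 16, 32))
v = rng.standard_normal((2, 16, 32))
print(gqa_sliding_window(q, k, v, window=4).shape)      # (8, 16, 32)
</syntaxhighlight>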

== Related ==
* <code>[[ollama run mistral]]</code> (see the API sketch after this list)
* <code>[[ollama run dolphin-mistral:latest]]</code>
* [[token]]
* [[Amazon Bedrock]]
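
The <code>ollama run</code> commands above start an interactive session; the same local model can also be queried programmatically. A minimal Python sketch against Ollama's REST endpoint, assuming Ollama is running on its default port 11434 with the <code>mistral</code> model already pulled (check the Ollama API docs for your version):

<syntaxhighlight lang="python">
# Query a locally running Ollama server; standard library only.
import json
import urllib.request

def generate(prompt, model="mistral",
             url="http://localhost:11434/api/generate"):
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Explain grouped-query attention in one sentence."))
</syntaxhighlight>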
+ | |||
+ | == See also == | ||
+ | * {{Mistral}} | ||
+ | * {{LLM}} | ||
+ | |||
+ | [[Category:AI]] |