MTIA
[[wikipedia:MTIA]] [[ASIC]]
https://ai.facebook.com/blog/meta-training-inference-accelerator-AI-MTIA/
+ | |||
+ | |||
+ | The accelerator is fabricated in TSMC 7nm process and runs at 800 MHz, providing 102.4 [[TOPS]] at [[INT8]] precision and 51.2 [[TFLOPS]] at [[FP16]] precision. It has a thermal design power (TDP) of 25 W. | ||
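These figures can be cross-checked against one another. Below is a minimal back-of-the-envelope sketch; the per-cycle and per-watt numbers are derived arithmetic from the quoted specifications, not published MTIA details.

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the quoted MTIA v1 peak figures.
# Derived values below are simple arithmetic on the stated numbers,
# not official specifications.

CLOCK_HZ = 800e6      # 800 MHz core clock
INT8_TOPS = 102.4     # peak INT8 throughput, tera-ops/s
FP16_TFLOPS = 51.2    # peak FP16 throughput, tera-FLOP/s
TDP_W = 25            # thermal design power, watts

# Operations completed per clock cycle implied by the peak figures.
int8_ops_per_cycle = INT8_TOPS * 1e12 / CLOCK_HZ    # 128,000 INT8 ops/cycle
fp16_ops_per_cycle = FP16_TFLOPS * 1e12 / CLOCK_HZ  # 64,000 FP16 ops/cycle

# Peak energy efficiency implied by the TDP.
int8_tops_per_watt = INT8_TOPS / TDP_W              # ~4.1 TOPS/W

print(f"INT8 ops/cycle: {int8_ops_per_cycle:,.0f}")
print(f"FP16 ops/cycle: {fp16_ops_per_cycle:,.0f}")
print(f"INT8 TOPS/W:    {int8_tops_per_watt:.2f}")
</syntaxhighlight>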
* [[DLRMs]]
* [[PyTorch]]

{{Flop}}
{{Meta}}
{{AI}}