Neural Processing Unit (NPU)

wikipedia:Neural Processing Unit (AI accelerator)
* [[FSD Chip]]
* [[M1]] (Nov 2020): 16-core [[Neural Engine]], 11 TOPS
* [[M2]]: 16-core Neural Engine, 15.8 TOPS
* [[M3]]: 16-core Neural Engine, 18 TOPS
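The TOPS figures above are peak-throughput claims. As a rough sketch, and assuming the usual convention that one multiply-accumulate (MAC) counts as two operations, a headline number like the M1's 11 TOPS follows directly from MAC count × clock. The MAC counts and clocks below are illustrative assumptions, not published specifications.

<syntaxhighlight lang="python">
def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Peak tera-operations per second, counting each MAC as 2 ops (multiply + add)."""
    return mac_units * 2 * clock_ghz * 1e9 / 1e12

# Illustrative only: ~5,500 MAC units at ~1.0 GHz lands on the M1's 11 TOPS headline,
# and the same unit count at a higher assumed clock approaches the M3's 18 TOPS.
print(peak_tops(5_500, 1.0))   # 11.0
print(peak_tops(5_500, 1.64))  # ~18.0
</syntaxhighlight>
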
== Related ==
* [[TFLOPS]], [[GPU]]: [[A100]] GPU
* [[Neural Engine]]
* [[Snapdragon]]
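Note that NPU figures are quoted in TOPS, usually at a low precision such as INT8 or FP16, while GPU datasheets quote [[TFLOPS]] at a stated precision, so the raw numbers only compare meaningfully at matching precision. A minimal illustrative comparison, assuming NVIDIA's published dense FP16 tensor-core figure for the [[A100]] (~312 TFLOPS):

<syntaxhighlight lang="python">
m1_ane_tops = 11.0        # M1 Neural Engine headline figure from the list above
a100_fp16_tflops = 312.0  # assumed A100 dense FP16 tensor-core throughput (datasheet figure)

# Ballpark ratio only; precision, sparsity and memory bandwidth all change the picture.
print(f"A100 / M1 Neural Engine ≈ {a100_fp16_tflops / m1_ane_tops:.0f}x")  # ≈ 28x
</syntaxhighlight>
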
== See also ==
* NPU, NN, TPU, Apple Neural Engine (ANE)
* Artificial neural networks, Neuronal network (NN), CNN, Micrograd, NPU, ConvNet, AlexNet, GoogLeNet, Apache MXNet, Neural architecture search, DAG, Feedforward neural network, NeurIPS, Feature Pyramid Network, TPU, NPU, Apple Neural Engine (ANE), LLM, TFLOPS
* CPU, GPU, NPU, TPU, DPU, Groq, Proliant, thread (Pthreads), processor, CPU socket, core, ARM, CPU Virtualization, Intel, AMD, nm, lscpu, AVX-512, Passthrough, CPU intensive, Graviton processor, Branch predictor, vCPU, SSE, Power