NVIDIA L4 Tensor Core
[[wikipedia:NVIDIA L4 Tensor Core]] ([https://nvidianews.nvidia.com/news/nvidia-and-google-cloud-deliver-powerful-new-generative-ai-platform-built-on-the-new-l4-gpu-and-vertex-ai Mar 2023])
* 30.3 [[TFLOPS]] (FP32)
* [[NVIDIA Ada Lovelace architecture]]
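
Below is a minimal [[CUDA]] runtime sketch (not part of the original page) for checking which GPU a host exposes; an L4 should report itself as "NVIDIA L4" with the Ada Lovelace compute capability 8.9. The file name and build command are illustrative.

<syntaxhighlight lang="cpp">
// device_query.cu -- print basic properties of each visible GPU.
// Build (illustrative): nvcc device_query.cu -o device_query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // An L4 is expected to report compute capability 8.9 (Ada Lovelace).
        std::printf("Device %d: %s, compute capability %d.%d, %zu MiB, %d SMs\n",
                    dev, prop.name, prop.major, prop.minor,
                    (size_t)(prop.totalGlobalMem >> 20), prop.multiProcessorCount);
    }
    return 0;
}
</syntaxhighlight>
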
== Related ==
* [[Vertex AI]]
* [[T4]]
* [[Amazon EC2 G6 instances]] ([[AWS timeline|Apr 2024]])
== See also ==
* {{Ada}}
* {{NVIDIA}}
+ | |||
+ | [[Category:Nvidia]] |