[[wikipedia:Single-precision floating-point format]]

[[float]] in [[C]]

[[FP16]], [[FP32]]
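FP32 is the IEEE 754 binary32 layout: 1 sign bit, 8 exponent bits (bias 127), and 23 stored mantissa bits with an implicit leading 1. A minimal C sketch of that layout, assuming <code>float</code> is IEEE 754 binary32 (true on practically all current platforms):

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>

int main(void) {
    float x = 0.15625f;  /* 1.25 * 2^-3, exactly representable in binary32 */

    /* Copy the raw 32 bits; assumes float is IEEE 754 binary32. */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);

    uint32_t sign     = bits >> 31;           /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFFu; /* 8 bits, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;     /* 23 bits, implicit leading 1 */

    printf("value    = %g\n", x);
    printf("sign     = %u\n", sign);
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127);
    printf("mantissa = 0x%06X\n", mantissa);
    printf("FLT_EPSILON = %g, FLT_DIG = %d\n", FLT_EPSILON, FLT_DIG);
    return 0;
}
</syntaxhighlight>

For 0.15625 = 1.25 × 2<sup>−3</sup> this prints sign 0, biased exponent 124 (unbiased −3), and mantissa 0x200000.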
* [[Exaflop]]: [[Dojo]] ([[Tesla]])
* [[Apple M1]]: GPU floating-point (FP32) performance of 2.6 [[TFLOPs]] (see the sanity check below)
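A rough sanity check of such peak figures: peak FP32 throughput = ALU lanes × 2 (an FMA counts as two FLOPs) × clock rate. The lane count and clock used below are commonly cited Apple M1 GPU numbers, assumed here for illustration, not taken from this page:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* Assumed M1 GPU figures: 8 cores x 128 FP32 ALUs, ~1.278 GHz. */
    double lanes     = 8 * 128;           /* 1024 FP32 lanes */
    double clock_hz  = 1.278e9;
    double peak_flops = lanes * 2.0 * clock_hz;  /* FMA = 2 FLOPs */
    printf("peak FP32 ~ %.2f TFLOPS\n", peak_flops / 1e12);  /* ~2.62 */
    return 0;
}
</syntaxhighlight>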
== Related ==
* [[Teraflop]]
* [[TOPS]]
* [[FP64]], [[double]] in [[C]] (contrast with FP32 in the sketch below)
* [[Double-precision floating-point format]]
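A quick contrast between FP32 and FP64 precision in C. A sketch, assuming IEEE 754 types: 16777217 = 2<sup>24</sup> + 1 is the first positive integer binary32 cannot represent, while binary64 holds it exactly:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    /* binary32 has 24 significant bits (23 stored + 1 implicit), so
       16777217 rounds away; binary64's 53 significant bits keep it. */
    float  f = 16777217.0f;  /* rounds to 16777216 */
    double d = 16777217.0;   /* exact */
    printf("float : %.1f\n", (double)f);  /* 16777216.0 */
    printf("double: %.1f\n", d);          /* 16777217.0 */
    return 0;
}
</syntaxhighlight>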
== See also ==
* {{FP}}
* {{FLOP}}
* {{GPU}}
* {{ASIC}}

[[Category:Computing]]