Single-precision floating-point format
wikipedia:Single-precision floating-point format

Single precision (FP32) corresponds to float in C.
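As a minimal sketch (not part of the original page), the C snippet below decodes the sign, exponent, and fraction fields of a float, assuming float is an IEEE 754 binary32 value on the target platform (1 sign bit, 8 exponent bits with bias 127, 23 fraction bits), which holds on common platforms:

<pre>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the 32-bit pattern */

    uint32_t sign     = bits >> 31;            /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 bits, biased by 127 */
    uint32_t fraction = bits & 0x7FFFFFu;      /* 23-bit fraction field */

    printf("value    = %f\n", f);
    printf("sign     = %u\n", sign);
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127);
    printf("fraction = 0x%06X\n", fraction);
    return 0;
}
</pre>

For -6.25f this prints sign 1, biased exponent 129 (unbiased 2) and fraction 0x480000, i.e. -1.5625 × 2^2.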
FP16, FP32

Exaflop: Dojo (Tesla)

Apple M1: floating-point (FP32) performance of 2.6 TFLOPS
Related: Teraflop, TOPS

FP64 is double in C: see Double-precision floating-point format.
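As an illustrative sketch (not taken from the page), the following C program contrasts float (FP32) and double (FP64): float typically occupies 4 bytes and carries about 7 decimal digits, double 8 bytes and about 15-16 digits, so the same constant 0.1 is stored with different rounding error:

<pre>
#include <stdio.h>

int main(void) {
    float  f = 0.1f;   /* nearest representable FP32 value */
    double d = 0.1;    /* nearest representable FP64 value */

    printf("sizeof(float)  = %zu bytes\n", sizeof(float));   /* typically 4 */
    printf("sizeof(double) = %zu bytes\n", sizeof(double));  /* typically 8 */
    printf("float  0.1 -> %.17f\n", (double)f);
    printf("double 0.1 -> %.17f\n", d);
    return 0;
}
</pre>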
See also

FP4, FP6, FP8, FP16, FP32, FP64, bc, Teraflop, INT8
FLOP, Exaflop, Teraflop, TFLOP, GFLOP
GPU, eGPU, OpenCL, ROCm, Nvidia, CUDA, Nvidia Tesla, AMD, RTX, gpustat, Teraflop, Ethash, Pytorch, Pascal GPU, TBDR, A100, H100, TRIP, AMD Instinct MI100, AMD Radeon, LPU, TFLOP
Category: Computing