API Metrics

* API Uptime: Uptime is the continuous availability of an API; in other words, a measure of whether the API is fully functional without outages.
* Requests Per Minute (RPM): Requests per minute is a performance metric that measures how many requests the API handles per minute.
* Latency: Latency, or network latency, is the time it takes for data or a request to travel from one system to another. It can be measured between client and server, or between servers in the case of distributed services.
* [[Time To First Hello World]]: TTFHW is the time a user needs to complete their first API transaction from the web page.
* Errors Per Minute: Errors per minute (or error rate) is the number of API calls with failure responses per minute (a minimal collection sketch follows this list).
* Memory Usage: Memory usage helps you understand resource utilization; high memory usage can be an indicator of overloaded servers.
* CPU Usage: Tracking CPU usage is an important aspect of performance monitoring, because high CPU usage can mean the server is overloaded, which can cause a severe bottleneck.
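
The request-level metrics above (requests per minute, latency, errors per minute) can be collected directly in the service with a small in-process tracker. The following is a minimal sketch in Python using only the standard library; the names <code>MetricsTracker</code> and <code>timed_call</code> are illustrative rather than part of any particular framework, and a production API would normally export these figures to a dedicated monitoring system (such as Prometheus or StatsD) instead.

<syntaxhighlight lang="python">
import time
from collections import deque


class MetricsTracker:
    """Track requests per minute, average latency, and errors per minute
    over a sliding 60-second window."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.samples = deque()  # entries of (timestamp, latency_seconds, is_error)

    def _prune(self, now):
        # Discard samples that have fallen out of the sliding window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def record(self, latency_seconds, is_error=False):
        now = time.monotonic()
        self.samples.append((now, latency_seconds, is_error))
        self._prune(now)

    def requests_per_minute(self):
        self._prune(time.monotonic())
        return len(self.samples)

    def errors_per_minute(self):
        self._prune(time.monotonic())
        return sum(1 for _, _, err in self.samples if err)

    def average_latency_ms(self):
        self._prune(time.monotonic())
        if not self.samples:
            return 0.0
        return 1000.0 * sum(lat for _, lat, _ in self.samples) / len(self.samples)


tracker = MetricsTracker()


def timed_call(handler, *args, **kwargs):
    """Call an API handler, recording its latency and whether it failed."""
    start = time.monotonic()
    try:
        result = handler(*args, **kwargs)
    except Exception:
        tracker.record(time.monotonic() - start, is_error=True)
        raise
    tracker.record(time.monotonic() - start, is_error=False)
    return result
</syntaxhighlight>

CPU and memory usage, by contrast, are usually read from the host itself, for example with a system monitoring agent or a library such as <code>psutil</code>, rather than computed inside the request path.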
== See also ==
