The Two Metrics That Matter When Choosing an Aptos Node: Latency and Maximum QPS

Looking for the best node service provider on Aptos? Here is a comprehensive performance testing report on the NodeReal Aptos node service. Enjoy!

For developers who are just starting to build applications on Aptos, a node service is a good starting point. Several metrics matter when you choose a node service provider, and they will impact your business, especially as it grows fast. Today, I will use two metrics, latency and maximum QPS, to explain how we evaluate an Aptos node service.

  1. Latency represents the responsiveness of a node service. The higher the latency, the longer your application takes to respond to users. Latency usually increases as the workload on the node service grows; the smaller that increase, the better and more stable the service is, which means it can better absorb a usage spike in your application.
  2. Maximum QPS (queries per second) is the throughput of a node service. This is critical when your business requires high-volume data queries, because if the throughput of the node service is too low, it becomes a bottleneck for your application.
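The latency percentiles reported throughout this post (the "Latency distribution" sections in the hey output) are simple to compute yourself. Here is a minimal sketch using the nearest-rank method; the sample values below are illustrative, not taken from a real run:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (in seconds)."""
    ordered = sorted(samples)
    # nearest rank = ceil(p/100 * n), converted to a 0-based index
    rank = max(0, -(-p * len(ordered) // 100) - 1)
    return ordered[rank]

# Illustrative latency samples, in seconds
latencies = [0.108, 0.110, 0.111, 0.112, 0.115, 0.117, 0.120, 0.180, 0.245, 0.794]
for p in (50, 90, 99):
    print(f"{p}% in {percentile(latencies, p):.4f} secs")
```

A single slow outlier (0.794s here) only shows up in the tail percentiles (p99), which is why the latency distribution is more informative than the average alone.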

In this blog, we will run simple, straightforward performance tests to compare the Aptos public node service and the NodeReal freemium node service on these two aspects - latency and maximum QPS. The detailed test results are below.

Latency Test

We test the performance with hey, an open-source API benchmarking tool.

Aptos Public Node

We test three concurrency levels: 1, 10, and 100. Please refer to the test results below.

1, Concurrency 1: Let's set the concurrency to 1, simulating one API consumer querying the node service.

hey -n 100 -c 1 https://fullnode.mainnet.aptoslabs.com/v1/transactions

Summary:
  Total:	11.9172 secs
  Slowest:	0.7940 secs
  Fastest:	0.1081 secs
  Average:	0.1192 secs
  Requests/sec:	8.3913
  
  Total data:	11228840 bytes
  Size/request:	112288 bytes

Response time histogram:
  0.108 [1]	|
  0.177 [97]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.245 [1]	|
  0.314 [0]	|
  0.382 [0]	|
  0.451 [0]	|
  0.520 [0]	|
  0.588 [0]	|
  0.657 [0]	|
  0.725 [0]	|
  0.794 [1]	|

Latency distribution:
  10% in 0.1094 secs
  25% in 0.1103 secs
  50% in 0.1111 secs
  75% in 0.1124 secs
  90% in 0.1149 secs
  95% in 0.1170 secs
  99% in 0.7940 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0040 secs, 0.1081 secs, 0.7940 secs
  DNS-lookup:	0.0012 secs, 0.0000 secs, 0.1153 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0001 secs
  resp wait:	0.1068 secs, 0.1026 secs, 0.2007 secs
  resp read:	0.0082 secs, 0.0035 secs, 0.1908 secs

Status code distribution:
  [200]	100 responses

From the response time histogram above, 97 of the 100 responses fall within 177ms, and the latency distribution shows that 90% of the responses are below 114ms.

2, Concurrency 10: Now we increase the concurrency to 10, simulating 10 API consumers accessing the node service at the same time.

hey -n 100 -c 10 https://fullnode.mainnet.aptoslabs.com/v1/transactions

Summary:
  Total:	1.8688 secs
  Slowest:	0.7592 secs
  Fastest:	0.1039 secs
  Average:	0.1791 secs
  Requests/sec:	53.5101
  
  Total data:	11697121 bytes
  Size/request:	116971 bytes

Response time histogram:
  0.104 [1]	|
  0.169 [83]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.235 [6]	|■■■
  0.301 [0]	|
  0.366 [0]	|
  0.432 [0]	|
  0.497 [0]	|
  0.563 [0]	|
  0.628 [0]	|
  0.694 [0]	|
  0.759 [10]	|■■■■■

Latency distribution:
  10% in 0.1066 secs
  25% in 0.1099 secs
  50% in 0.1116 secs
  75% in 0.1184 secs
  90% in 0.7001 secs
  95% in 0.7425 secs
  99% in 0.7592 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0349 secs, 0.1039 secs, 0.7592 secs
  DNS-lookup:	0.0069 secs, 0.0000 secs, 0.0699 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0001 secs
  resp wait:	0.1132 secs, 0.0976 secs, 0.2041 secs
  resp read:	0.0309 secs, 0.0039 secs, 0.2832 secs

Status code distribution:
  [200]	100 responses

From the response time histogram above, 83 of the 100 responses fall within 169ms, and the latency distribution shows that 90% of the responses are below 700ms, a sharp increase from the 114ms we saw with a single API consumer.

3, Concurrency 100: Now, let's increase the workload even further and change the concurrency to 100.

hey -n 100 -c 100 https://fullnode.mainnet.aptoslabs.com/v1/transactions

Summary:
  Total:	1.9781 secs
  Slowest:	1.9779 secs
  Fastest:	0.6089 secs
  Average:	1.0416 secs
  Requests/sec:	50.5538
  
  Total data:	10203573 bytes
  Size/request:	102035 bytes

Response time histogram:
  0.609 [1]	|■
  0.746 [3]	|■■■■
  0.883 [26]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  1.020 [28]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  1.157 [15]	|■■■■■■■■■■■■■■■■■■■■■
  1.293 [14]	|■■■■■■■■■■■■■■■■■■■■
  1.430 [5]	|■■■■■■■
  1.567 [4]	|■■■■■■
  1.704 [1]	|■
  1.841 [1]	|■
  1.978 [2]	|■■■

Latency distribution:
  10% in 0.8259 secs
  25% in 0.8677 secs
  50% in 0.9690 secs
  75% in 1.1712 secs
  90% in 1.4265 secs
  95% in 1.5193 secs
  99% in 1.9779 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.2987 secs, 0.6089 secs, 1.9779 secs
  DNS-lookup:	0.0036 secs, 0.0022 secs, 0.0046 secs
  req write:	0.0001 secs, 0.0000 secs, 0.0008 secs
  resp wait:	0.3034 secs, 0.1012 secs, 1.0017 secs
  resp read:	0.4394 secs, 0.1344 secs, 0.8962 secs

Status code distribution:
  [200]	100 responses

From the response time histogram above, only 26 of the 100 responses fall within 883ms, and more and more responses take longer. The latency distribution shows that 90% of the responses are below 1.43s, an even bigger increase from the 700ms we saw with 10 API consumers.

NodeReal Aptos Node

Now we test against the NodeReal Aptos node service.

1, Concurrency 1: As with the public node, we start testing with concurrency 1, simulating one API consumer.

hey -n 100 -c 1 https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions

Summary:
  Total:	6.1882 secs
  Slowest:	0.2606 secs
  Fastest:	0.0406 secs
  Average:	0.0619 secs
  Requests/sec:	16.1597

Response time histogram:
  0.041 [1]	|■
  0.063 [78]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.085 [8]	|■■■■
  0.107 [0]	|
  0.129 [5]	|■■■
  0.151 [1]	|■
  0.173 [1]	|■
  0.195 [1]	|■
  0.217 [2]	|■
  0.239 [2]	|■
  0.261 [1]	|■

Latency distribution:
  10% in 0.0421 secs
  25% in 0.0429 secs
  50% in 0.0436 secs
  75% in 0.0538 secs
  90% in 0.1212 secs
  95% in 0.1978 secs
  99% in 0.2606 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0008 secs, 0.0406 secs, 0.2606 secs
  DNS-lookup:	0.0001 secs, 0.0000 secs, 0.0054 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0003 secs
  resp wait:	0.0586 secs, 0.0387 secs, 0.2224 secs
  resp read:	0.0025 secs, 0.0012 secs, 0.0057 secs

Status code distribution:
  [200]	100 responses

From the response time histogram above, 78 of the 100 responses fall within 63ms, well below the public node's roughly 177ms. The latency distribution shows that 90% of the responses are below 121ms.

2, Concurrency 10: Now we increase the concurrency to 10.

hey -n 100 -c 10 https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions

Summary:
  Total:	0.7128 secs
  Slowest:	0.2954 secs
  Fastest:	0.0383 secs
  Average:	0.0695 secs
  Requests/sec:	140.2833

Response time histogram:
  0.038 [1]	|
  0.064 [88]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.090 [1]	|
  0.115 [0]	|
  0.141 [0]	|
  0.167 [0]	|
  0.193 [0]	|
  0.218 [0]	|
  0.244 [0]	|
  0.270 [0]	|
  0.295 [10]	|■■■■■

Latency distribution:
  10% in 0.0406 secs
  25% in 0.0419 secs
  50% in 0.0441 secs
  75% in 0.0491 secs
  90% in 0.2829 secs
  95% in 0.2919 secs
  99% in 0.2954 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0103 secs, 0.0383 secs, 0.2954 secs
  DNS-lookup:	0.0024 secs, 0.0000 secs, 0.0243 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0004 secs
  resp wait:	0.0573 secs, 0.0369 secs, 0.1895 secs
  resp read:	0.0017 secs, 0.0006 secs, 0.0044 secs

Status code distribution:
  [200]	100 responses

From the response time histogram above, 88 of the 100 responses fall within 64ms. The latency distribution shows that 90% of the responses are below 283ms.

3, Concurrency 100: Let's put more pressure on the service by increasing the concurrency to 100.

hey -n 100 -c 100 https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions

Summary:
  Total:	0.4171 secs
  Slowest:	0.4158 secs
  Fastest:	0.2748 secs
  Average:	0.3472 secs
  Requests/sec:	239.7462

Response time histogram:
  0.275 [1]	|■
  0.289 [13]	|■■■■■■■■■■■■■■■■■■
  0.303 [10]	|■■■■■■■■■■■■■■
  0.317 [11]	|■■■■■■■■■■■■■■■
  0.331 [11]	|■■■■■■■■■■■■■■■
  0.345 [5]	|■■■■■■■
  0.359 [1]	|■
  0.374 [5]	|■■■■■■■
  0.388 [8]	|■■■■■■■■■■■
  0.402 [29]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.416 [6]	|■■■■■■■■

Latency distribution:
  10% in 0.2854 secs
  25% in 0.3045 secs
  50% in 0.3425 secs
  75% in 0.3932 secs
  90% in 0.3988 secs
  95% in 0.4087 secs
  99% in 0.4158 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.1134 secs, 0.2748 secs, 0.4158 secs
  DNS-lookup:	0.0054 secs, 0.0034 secs, 0.0071 secs
  req write:	0.0001 secs, 0.0000 secs, 0.0018 secs
  resp wait:	0.2293 secs, 0.1304 secs, 0.3165 secs
  resp read:	0.0045 secs, 0.0004 secs, 0.0440 secs

Status code distribution:
  [200]	100 responses


From the latency distribution, we can see 90% of the responses are lower than 398ms.

Let's compare the latency results.

Latency Test Result

The y-axis is the latency, and the x-axis is the concurrency level.

As the diagram shows, latency for both the NodeReal and Aptos public node services increases with concurrency, but NodeReal's curve is flatter: as the workload grows, the NodeReal node service stays faster and more stable.
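As a rough sketch, the p90 comparison behind that diagram can be tabulated directly; the values below are taken from the latency distributions measured above:

```python
# p90 latency in seconds at each concurrency level, from the hey runs above.
p90_latency = {
    "Aptos public": {1: 0.1149, 10: 0.7001, 100: 1.4265},
    "NodeReal":     {1: 0.1212, 10: 0.2829, 100: 0.3988},
}

print(f"{'concurrency':>12} {'Aptos public':>14} {'NodeReal':>10}")
for c in (1, 10, 100):
    print(f"{c:>12} {p90_latency['Aptos public'][c]:>14.3f} {p90_latency['NodeReal'][c]:>10.3f}")
```

Going from concurrency 1 to 100, the public node's p90 grows by roughly 12x while NodeReal's grows by only about 3x, which is exactly the flatter curve described above.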

Maximum QPS

As we explained above, maximum QPS is a metric that represents the throughput of the node service. We will test the maximum QPS of both NodeReal and the Aptos public node by increasing the workload: we limit each worker to 50 QPS (hey's -q flag) and increase the concurrency.
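For reference, the Requests/sec figure that hey prints in its summary is simply the number of completed requests divided by the total wall-clock time:

```python
def qps(completed_requests: int, total_secs: float) -> float:
    """Throughput as hey reports it: completed requests / total wall-clock time."""
    return completed_requests / total_secs

# e.g. 1000 requests completing in 12.2181 secs gives roughly 81.85 QPS,
# matching the Requests/sec line in the concurrency-10 public-node run below
print(round(qps(1000, 12.2181), 2))
```

Because each worker is capped at 50 QPS, raising the concurrency raises the offered load until the node itself (or its rate limiter) becomes the bottleneck.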

Aptos Public Node

1, Concurrency 10: First, let's set the concurrency to 10.

hey -n 1000 -c 10 -q 50 https://fullnode.mainnet.aptoslabs.com/v1/transactions

Summary:
  Total:	12.2181 secs
  Slowest:	0.6849 secs
  Fastest:	0.1016 secs
  Average:	0.1187 secs
  Requests/sec:	81.8456
  
  Total data:	113541443 bytes
  Size/request:	113541 bytes

Response time histogram:
  0.102 [1]	|
  0.160 [978]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.218 [11]	|
  0.277 [0]	|
  0.335 [0]	|
  0.393 [0]	|
  0.452 [0]	|
  0.510 [0]	|
  0.568 [0]	|
  0.627 [0]	|
  0.685 [10]	|


Latency distribution:
  10% in 0.1061 secs
  25% in 0.1092 secs
  50% in 0.1118 secs
  75% in 0.1153 secs
  90% in 0.1191 secs
  95% in 0.1229 secs
  99% in 0.6422 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0028 secs, 0.1016 secs, 0.6849 secs
  DNS-lookup:	0.0001 secs, 0.0000 secs, 0.0060 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0010 secs
  resp wait:	0.1058 secs, 0.0960 secs, 0.2008 secs
  resp read:	0.0100 secs, 0.0021 secs, 0.2858 secs

Status code distribution:
  [200]	1000 responses


From the summary in the test result, the QPS is around 82.

2, Concurrency 100: Now we increase the concurrency to 100.

hey -n 1000 -c 100 -q 50 https://fullnode.mainnet.aptoslabs.com/v1/transactions

Summary:
  Total:	1.3384 secs
  Slowest:	0.4592 secs
  Fastest:	0.0870 secs
  Average:	0.1267 secs
  Requests/sec:	747.1592
  

Response time histogram:
  0.087 [1]	|
  0.124 [899]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.161 [0]	|
  0.199 [0]	|
  0.236 [0]	|
  0.273 [0]	|
  0.310 [0]	|
  0.348 [0]	|
  0.385 [0]	|
  0.422 [34]	|■■
  0.459 [66]	|■■■


Latency distribution:
  10% in 0.0891 secs
  25% in 0.0903 secs
  50% in 0.0937 secs
  75% in 0.0955 secs
  90% in 0.3972 secs
  95% in 0.4370 secs
  99% in 0.4540 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0338 secs, 0.0870 secs, 0.4592 secs
  DNS-lookup:	0.0047 secs, 0.0000 secs, 0.0484 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0019 secs
  resp wait:	0.0927 secs, 0.0866 secs, 0.1160 secs
  resp read:	0.0000 secs, 0.0000 secs, 0.0016 secs

Status code distribution:
  [429]	1000 responses


From the status code distribution, all 1,000 responses returned error code 429 (Too Many Requests), which means we have hit the rate limit: the public node cannot serve a higher QPS. We therefore assume its maximum QPS is around 100 at most.
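When a client does hit a 429 like this, the usual remedy is to retry with exponential backoff. A minimal sketch using only the standard library; the retry limits and base delay are illustrative choices, not values prescribed by either service:

```python
import time
import urllib.error
import urllib.request

def backoff_delays(retries: int, base: float = 0.5) -> list:
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(retries)]

def get_with_backoff(url: str, retries: int = 5) -> bytes:
    """GET url, sleeping and retrying whenever the node returns HTTP 429."""
    for delay in backoff_delays(retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:  # only retry on rate limiting
                raise
            time.sleep(delay)
    raise RuntimeError("still rate-limited after all retries")
```

Backoff smooths over short rate-limit windows, but it cannot raise a node's maximum QPS; if your sustained traffic exceeds the limit, you need a service with a higher throughput ceiling.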

NodeReal Aptos Node

1, Concurrency 10: We also start from a concurrency of 10.

hey -n 1000 -c 10 -q 50  https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions
Summary:
  Total:	4.5763 secs
  Slowest:	0.4355 secs
  Fastest:	0.0356 secs
  Average:	0.0444 secs
  Requests/sec:	218.5194
  

Response time histogram:
  0.036 [1]	|
  0.076 [989]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.116 [0]	|
  0.156 [0]	|
  0.196 [0]	|
  0.236 [0]	|
  0.276 [0]	|
  0.316 [0]	|
  0.356 [0]	|
  0.395 [0]	|
  0.435 [10]	|


Latency distribution:
  10% in 0.0380 secs
  25% in 0.0390 secs
  50% in 0.0401 secs
  75% in 0.0416 secs
  90% in 0.0435 secs
  95% in 0.0451 secs
  99% in 0.4304 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0013 secs, 0.0356 secs, 0.4355 secs
  DNS-lookup:	0.0005 secs, 0.0000 secs, 0.0611 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0031 secs
  resp wait:	0.0415 secs, 0.0348 secs, 0.3015 secs
  resp read:	0.0016 secs, 0.0006 secs, 0.0100 secs

Status code distribution:
  [200]	1000 responses

From the summary in the test result, the QPS is around 218, which is much higher than the public node's at the same concurrency.

2, Concurrency 100: Let's increase the concurrency to 100.

hey -n 1000 -c 100 -q 50  https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions
Summary:
  Total:	2.1097 secs
  Slowest:	1.4274 secs
  Fastest:	0.0392 secs
  Average:	0.1593 secs
  Requests/sec:	473.9994
  

Response time histogram:
  0.039 [1]	|
  0.178 [738]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.317 [147]	|■■■■■■■■
  0.456 [54]	|■■■
  0.594 [44]	|■■
  0.733 [6]	|
  0.872 [5]	|
  1.011 [3]	|
  1.150 [1]	|
  1.289 [0]	|
  1.427 [1]	|


Latency distribution:
  10% in 0.0696 secs
  25% in 0.0769 secs
  50% in 0.1127 secs
  75% in 0.1841 secs
  90% in 0.3336 secs
  95% in 0.5067 secs
  99% in 0.7442 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0142 secs, 0.0392 secs, 1.4274 secs
  DNS-lookup:	0.0048 secs, 0.0000 secs, 0.0506 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0003 secs
  resp wait:	0.1236 secs, 0.0363 secs, 1.2111 secs
  resp read:	0.0214 secs, 0.0006 secs, 0.5064 secs

Status code distribution:
  [200]	1000 responses

From the summary of the test result, the QPS is around 473.

3, Concurrency 200: Let's increase the load even further and set the concurrency to 200.

hey -n 1000 -c 200 -q 50  https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions


Summary:
  Total:	1.8396 secs
  Slowest:	1.4689 secs
  Fastest:	0.0399 secs
  Average:	0.2446 secs
  Requests/sec:	543.5983
  

Response time histogram:
  0.040 [1]	|
  0.183 [541]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.326 [210]	|■■■■■■■■■■■■■■■■
  0.469 [99]	|■■■■■■■
  0.611 [75]	|■■■■■■
  0.754 [26]	|■■
  0.897 [25]	|■■
  1.040 [9]	|■
  1.183 [9]	|■
  1.326 [4]	|
  1.469 [1]	|


Latency distribution:
  10% in 0.0707 secs
  25% in 0.0810 secs
  50% in 0.1696 secs
  75% in 0.3236 secs
  90% in 0.5472 secs
  95% in 0.7394 secs
  99% in 1.0758 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0270 secs, 0.0399 secs, 1.4689 secs
  DNS-lookup:	0.0015 secs, 0.0000 secs, 0.0087 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0018 secs
  resp wait:	0.1852 secs, 0.0386 secs, 1.2584 secs
  resp read:	0.0260 secs, 0.0004 secs, 0.7592 secs

Status code distribution:
  [200]	1000 responses

From the summary of the test result, the QPS is around 543, and the growth in QPS is starting to flatten.

4, Concurrency 300: Let's see if we can get a higher QPS by increasing the concurrency to 300.

hey -n 1000 -c 300 -q 50  https://aptos-mainnet.nodereal.io/v1/a8d6d6c702a640a3b152661aa0c0327a/v1/transactions
Summary:
  Total:	1.6419 secs
  Slowest:	1.3775 secs
  Fastest:	0.0392 secs
  Average:	0.3893 secs
  Requests/sec:	548.1454
  

Response time histogram:
  0.039 [1]	|
  0.173 [368]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.307 [130]	|■■■■■■■■■■■■■■
  0.441 [93]	|■■■■■■■■■■
  0.575 [51]	|■■■■■■
  0.708 [83]	|■■■■■■■■■
  0.842 [46]	|■■■■■
  0.976 [40]	|■■■■
  1.110 [32]	|■■■
  1.244 [38]	|■■■■
  1.377 [18]	|■■


Latency distribution:
  10% in 0.0750 secs
  25% in 0.0946 secs
  50% in 0.2349 secs
  75% in 0.6219 secs
  90% in 0.9723 secs
  95% in 1.1791 secs
  99% in 1.2869 secs

Details (average, fastest, slowest):
  DNS+dialup:	0.0699 secs, 0.0392 secs, 1.3775 secs
  DNS-lookup:	0.0021 secs, 0.0000 secs, 0.0103 secs
  req write:	0.0000 secs, 0.0000 secs, 0.0008 secs
  resp wait:	0.2931 secs, 0.0380 secs, 1.2160 secs
  resp read:	0.0260 secs, 0.0007 secs, 0.8135 secs

Status code distribution:
  [200]	900 responses

From the summary of the test result, the QPS is around 548.

Again, time to make a summary. According to our load testing, QPS increases as we raise the concurrency, but the rate of increase flattens until we hit the rate limit. Below is the comparison between the Aptos public node and the NodeReal node service.

Query Per Second

The maximum QPS of NodeReal reaches around 548 when we increase the concurrency to 300, while the public node hits its rate limit when we increase the concurrency to 100, so by our calculation its maximum QPS is higher than 80 but lower than 100.

Conclusion

When we select an Aptos node service, latency and maximum QPS give us a very clear reference. Latency impacts the responsiveness of your application: a lower latency increase rate (a flatter curve in the diagram) as the workload grows provides a better customer experience and helps you handle traffic spikes. Maximum QPS indicates the throughput of the node service: choosing a service with a higher maximum QPS helps you handle more traffic as your business grows.

Build on Aptos with NodeReal

NodeReal, as a proud node infrastructure provider for Aptos, is committed to its mission of empowering the dev community to join and build on Aptos faster and more easily.

Check out our Aptos Know-How Tutorial Series to kick off your Aptos journey, with NodeReal as your trusted companion.

About NodeReal

NodeReal is a one-stop blockchain infrastructure and service provider that embraces the high-speed blockchain era and empowers developers with "Make Your Web3 Real". We provide scalable, reliable, and efficient blockchain solutions for everyone, aiming to support the adoption, growth, and long-term success of the Web3 ecosystem.

Join Our Community

Join our community to learn more about NodeReal and stay up to date!

Discord | Twitter | YouTube | LinkedIn