This documentation was last updated on 23-11-21.

Introduction

As the Hub2 API is our main product, a solid quality of service is required.

In order to keep latency low and the service reliable, rate limits must be enforced on the most cost-heavy endpoints.

This limit is applied per IP address, so it is relative to each API user: one user's usage won't impact another's.

That is why we enforce those limits.

API limits

Here is a table of the rate limits. The global limit applies to all other endpoints.

Endpoint                                     Actual limit      Rationalized limit   Notes
GET  /transfers                              1 req / 30 sec    2 req / min
POST /transfers                              30 req / 5 sec    360 req / min
GET  /transfers/:id                          5 req / 5 sec     60 req / min         per :id
GET  /transfers/:id/balance                  1 req / 10 sec    6 req / min
GET  /payments                               1 req / 30 sec    2 req / min
GET  /payments_intents                       1 req / 30 sec    2 req / min
POST /payments_intents                       30 req / 5 sec    360 req / min
GET  /payments_intents/:id                   5 req / 5 sec     60 req / min         per :id
POST /payments_intents/:id/authentication    30 req / 5 sec    360 req / min
POST /payments_intents/:id/payments          30 req / 5 sec    360 req / min
POST /payments_intents/:id/payments/sync     30 req / 5 sec    360 req / min
GET  /payments_intents/:id/payment-fees      75 req / 10 sec   450 req / min
POST /terminal/payments                      5 req / 10 sec    30 req / min
GET  /terminal/payments/:id                  5 req / 10 sec    30 req / min
GET  /balance                                75 req / 10 sec   450 req / min
All other endpoints                          50 req / 5 sec    600 req / min        global limit

The actual limit is how the code enforces it, in terms of requests per window of seconds.
The rationalized limit eases understanding and helps compare the different values on the same scale (requests per minute).
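The rationalized value is simply the actual count scaled to a 60-second window. A minimal sketch of that arithmetic (the function name is illustrative, not part of the API):

```python
def rationalized_per_minute(count: int, window_seconds: int) -> int:
    """Scale an actual limit (count per window) to requests per minute."""
    return count * 60 // window_seconds

# 30 req / 5 sec -> 360 req / min, matching the POST /transfers row
print(rationalized_per_minute(30, 5))   # 360
print(rationalized_per_minute(1, 30))   # 2
```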

These limits apply to both sandbox mode and live mode.
Remember to stop sandbox traffic if the transaction stream is getting heavy.

How to handle limits

Whenever the API receives a request beyond an endpoint's limit, a Too Many Requests error with HTTP status 429 will be returned.

Please check out the MDN documentation about this status code.

The request failed and was not processed by the API because of the rate limit.

Headers are provided in the HTTP response for proper handling:

Header name             Description
Retry-After             If the limit is reached, how long to wait before a new request will be allowed
X-RateLimit-Limit       The current limit on the endpoint
X-RateLimit-Remaining   Number of remaining requests before reaching the limit
X-RateLimit-Reset       Time before a slot frees up in the queue for a new request

Reactive solution (easier)

One way to handle rate limits from a client-side perspective is to retry requests that fail with a 429 status:
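A minimal sketch of that retry loop, assuming a `send` callable that performs the HTTP request and returns a response object with `status_code` and `headers` attributes (these names are illustrative, not part of the Hub2 API):

```python
import time

def request_with_retry(send, max_retries=3):
    """Call send() and retry on HTTP 429, honoring Retry-After when present."""
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code != 429:
            return response
        # Wait the amount advertised by the API, or back off exponentially.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after is not None else 2 ** attempt
        time.sleep(delay)
    # Out of retries: surface the last 429 response to the caller.
    return response
```

Honoring the Retry-After header instead of a fixed delay means the client waits exactly as long as the API asks, no more.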

Proactive solution (harder)

The proactive solution is a bit trickier: it consists of keeping a pool of request tokens whose size matches the destination endpoint's rate limit. Whenever the pool is empty, the next request waits in line for a token to free up.
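A minimal token-pool sketch along those lines, sized from the table above (the class and its structure are illustrative, not a provided SDK):

```python
import threading

class TokenPool:
    """Client-side pool sized to an endpoint's limit,
    e.g. TokenPool(30, 5) for POST /transfers (30 req / 5 sec)."""

    def __init__(self, capacity: int, window_seconds: float):
        self.capacity = capacity
        self.window = window_seconds
        self.tokens = capacity
        self.cond = threading.Condition()

    def acquire(self):
        """Block until a token is available, then consume it."""
        with self.cond:
            while self.tokens == 0:
                self.cond.wait()
            self.tokens -= 1
        # Return the token once the rate-limit window has elapsed.
        timer = threading.Timer(self.window, self._release)
        timer.daemon = True
        timer.start()

    def _release(self):
        with self.cond:
            self.tokens += 1
            self.cond.notify()
```

Each request calls `acquire()` before being sent, so at most `capacity` requests are in flight within any window and the client never triggers a 429 in the first place.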

Check out this interesting article on how to implement rate limiting from the client-side perspective, especially approaches 4 and 4.1.

Conclusion

In a perfect world, no limit would be set on the API endpoints. In the real world, however, limits help prevent abuse and keep the service reliable for everyone.

The team works daily to improve the stability and performance of the API, and this page will be updated as soon as upgrades allow us to loosen the limits.