Rate limits
These are the rate limits applied to our API.
This documentation was last updated on 23-11-21.
Introduction
As the Hub2 API is our main project, a solid quality of service is required.
To keep latency appropriate and the service reliable, rate limits must be enforced on the most cost-heavy endpoints; that is why the limits below exist.
Each limit is applied per IP address, so it is relative to each API user: the usage of one won't impact the usage of another.
API limits
Here is a table of the rate limits per endpoint. The global limit applies to all other endpoints.
Method | Endpoint | Actual limit | Rationalized limit | Notes |
---|---|---|---|---|
GET | /transfers | 1 req / 30 sec | 2 req / min | |
POST | /transfers | 30 req / 5 sec | 360 req / min | |
GET | /transfers/:id | 5 req / 5 sec | 60 req / min | *per :id |
GET | /transfers/:id/balance | 1 req / 10 sec | 6 req / min | |
GET | /payments | 1 req / 30 sec | 2 req / min | |
GET | /payments_intents | 1 req / 30 sec | 2 req / min | |
POST | /payments_intents | 30 req / 5 sec | 360 req / min | |
GET | /payments_intents/:id | 5 req / 5 sec | 60 req / min | *per :id |
POST | /payments_intents/:id/authentication | 30 req / 5 sec | 360 req / min | |
POST | /payments_intents/:id/payments | 30 req / 5 sec | 360 req / min | |
POST | /payments_intents/:id/payments/sync | 30 req / 5 sec | 360 req / min | |
GET | /payments_intents/:id/payment-fees | 75 req / 10 sec | 450 req / min | |
POST | /terminal/payments | 5 req / 10 sec | 30 req / min | |
GET | /terminal/payments/:id | 5 req / 10 sec | 30 req / min | |
GET | /balance | 75 req / 10 sec | 450 req / min | |
* | * | 50 req / 5 sec | 600 req / min | Global limit |
The actual limit is how the code enforces it, expressed in requests per window of seconds.
The rationalized limit eases understanding and helps compare the different values on the same scale: for example, 30 req / 5 sec corresponds to 30 × 12 = 360 req / min.
These limits apply to both sandbox mode and live mode.
Remember to stop sandbox traffic if the transaction stream is getting heavy.
How to handle limits
Whenever the API receives a request beyond the limit of the endpoint, an error Too Many Requests with HTTP status 429 will be returned.
Please check out the MDN documentation about this status code.
Such a request fails and is not processed by the API because the rate limit was exceeded.
Headers are provided in the HTTP response for proper handling:
Header name | Description |
---|---|
Retry-After | In case the limit is reached, this header tells how long to wait before a new request will be allowed |
X-RateLimit-Limit | The current limit on the endpoint |
X-RateLimit-Remaining | Number of remaining requests before reaching the limit |
X-RateLimit-Reset | Time before a spot is free in the queue for a new request |
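As an illustration, a client can read these headers on each response to decide whether to slow down before hitting a 429. Below is a minimal sketch in TypeScript using the Fetch API; the function name, the threshold, and the logging are illustrative, not part of the Hub2 API:

```typescript
// Minimal sketch: inspect the rate-limit headers returned by the API.
// The threshold below is illustrative; adjust it to your own traffic pattern.
async function requestWithHeaderCheck(url: string): Promise<Response> {
  const response = await fetch(url);

  const limit = response.headers.get("X-RateLimit-Limit");
  const remaining = response.headers.get("X-RateLimit-Remaining");
  const reset = response.headers.get("X-RateLimit-Reset");

  console.log(`limit=${limit} remaining=${remaining} reset=${reset}`);

  // Slow down before the limit is actually reached.
  if (remaining !== null && Number(remaining) <= 1) {
    console.warn("Almost rate limited, consider pausing outgoing requests.");
  }

  return response;
}
```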
Reactive solution (easier)
One way to handle rate limits from a client-side perspective is to retry requests when they fail with a 429 status.
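Here is a minimal sketch of that approach in TypeScript, using the Fetch API and the Retry-After header described above (the retry count and the one-second fallback are arbitrary choices, not values mandated by the API):

```typescript
// Minimal sketch: retry a request when the API answers 429 Too Many Requests,
// waiting for the number of seconds advertised in the Retry-After header.
async function fetchWithRetry(
  url: string,
  options: RequestInit = {},
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    // Anything other than 429 is returned to the caller, success or not.
    if (response.status !== 429) {
      return response;
    }

    // Fall back to 1 second if the header is missing or not a number.
    const retryAfter = Number(response.headers.get("Retry-After") ?? "1");
    const delayMs = (Number.isFinite(retryAfter) ? retryAfter : 1) * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }

  throw new Error(`Still rate limited after ${maxRetries} retries: ${url}`);
}
```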
Proactive solution (harder)
The proactive solution is a bit trickier: it consists of keeping a pool of request slots whose size matches the destination endpoint's rate limit. Whenever the pool is empty, the next request waits in line for a token to free up, as in the sketch below.
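A minimal sketch of such a pool in TypeScript, under stated assumptions: the class name is made up, the 30 requests / 5 seconds values only mirror one row of the table above, and the 50 ms polling interval is an arbitrary choice.

```typescript
// Minimal sketch of a client-side request pool sized to an endpoint's limit:
// at most `limit` requests may start within any `windowMs` window.
class RequestPool {
  private available: number;

  constructor(limit: number, private windowMs: number) {
    this.available = limit;
  }

  // Waits in line until a token is free, takes it, and releases it
  // one window later.
  private async acquire(): Promise<void> {
    while (this.available === 0) {
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
    this.available -= 1;
    setTimeout(() => {
      this.available += 1;
    }, this.windowMs);
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    await this.acquire();
    return task();
  }
}

// Example usage: cap POST /transfers calls at 30 requests per 5 seconds.
const transfersPool = new RequestPool(30, 5_000);
// transfersPool.run(() => fetch("https://example.com/transfers", { method: "POST" }));
```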
Check out this interesting article on how to implement rate limiting from the client-side perspective, especially approaches 4 and 4.1.
Conclusion
In a perfect world, no limits would be set on the API endpoints. In the real world, however, they help prevent abuse and keep the service reliable for everyone.
The team works daily to improve the stability and performance of the API, and this page will be updated as soon as upgrades allow the limits to be loosened.