
[binance] Hitting rate limiting when subscribing to large numbers of pairs #266

cto opened this issue Dec 15, 2018 · 12 comments

@cto

cto commented Dec 15, 2018

Dear all,
My code reading the order book from all 400+ pairs on Binance keeps hanging after around an hour, and upon reconnecting I always get this error, which forces me to wait about an hour or more before I can connect again. I understand it's a temporary ban, but can someone explain why it happens and how to fix the problem?
Thanks,

------------ ERROR --------

Exception in thread "main" org.knowm.xchange.exceptions.ExchangeException: Failed to initialize: HTTP status code was not OK: 418
	at org.knowm.xchange.binance.BinanceExchange.remoteInit(BinanceExchange.java:141)
	at org.knowm.xchange.BaseExchange.applySpecification(BaseExchange.java:115)
	at info.bitrich.xchangestream.core.StreamingExchangeFactory.createExchange(StreamingExchangeFactory.java:92)

-------- THE CODE -----------

BinanceStreamingExchange streamingExchange =
                (BinanceStreamingExchange) StreamingExchangeFactory.INSTANCE.createExchange(BinanceStreamingExchange.class.getName());

ProductSubscription.ProductSubscriptionBuilder subBuilder = ProductSubscription.create();
for (CurrencyPair pair : streamingExchange.getExchangeSymbols()) {
    subBuilder.addOrderbook(pair);
}
    
streamingExchange.connect(subBuilder.build()).blockingAwait();
    
for (CurrencyPair pair : streamingExchange.getExchangeSymbols()) {             
    streamingExchange.getStreamingMarketDataService().getOrderBook(pair).subscribe(orderBook -> {
        System.out.println(orderBook.getAsks().get(0).getLimitPrice());
    }, throwable -> LOG.error("ERROR in getting placeOrder book: ", throwable));
}
@badgerwithagun
Collaborator

When you request a streaming order book, we have to make REST requests for the initial snapshots. Just that on its own will cause a rate ban if you submit hundreds in quick succession.

It can sometimes take several requests to get the order book in sync, which makes things worse.

This also occurs if the connection drops or we get out-of-order updates on the socket, so even if you survive the initial connections, you might get a rate ban later after a brief connection drop.

I think we probably need a global blocking frequency limit on these requests, similar to the one used in XChange's Bitmex implementation, or connecting to all the order books will always cause this problem.

In the meantime, I don't think there's a good workaround :/

@cto
Author

cto commented Dec 15, 2018

Thanks for your prompt reply

When you request a streaming order book, we have to make REST requests for the initial snapshots. Just that on its own will cause a rate ban if you submit hundreds in quick succession.

With WebSocket I still have to call "subscribe" even after the REST call, and using WebSocket alone never fails me immediately at the start, so whether REST is used initially or not is less of a problem for me now.

It can sometimes take several requests to get the order book in sync, which makes things worse.

What do you mean?

This also occurs if the connection drops or we get out-of-order updates on the socket, so even if you survive the initial connections, you might get a rate ban later after a brief connection drop.

Then can I use a frequent check like the one below?

if (!streamingExchange.isAlive()){ streamingExchange.connect(subBuilder.build()).blockingAwait(); }
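
For example, wired up to a scheduler (just a sketch on my side; the executor and the 30-second interval are arbitrary, only isAlive() and connect() come from the API):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: periodically check the streaming connection and reconnect if it dropped.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleWithFixedDelay(() -> {
    if (!streamingExchange.isAlive()) {
        // Re-establish the connection with the same subscriptions as before.
        streamingExchange.connect(subBuilder.build()).blockingAwait();
    }
}, 30, 30, TimeUnit.SECONDS);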

I think we probably need a global blocking frequency limit on these requests, similar to the one used in XChange's Bitmex implementation, or connecting to all the order books will always cause this problem.

Can you point me to the file you meant in the XChange source code for Bitmex?

In the meantime, I don't think there's a good workaround :/

Can I try to be slower in accepting updates from Binance, like the following?

for (CurrencyPair pair : streamingExchange.getExchangeSymbols()) {
    streamingExchange.getStreamingMarketDataService().getOrderBook(pair).subscribe(orderBook -> {
        System.out.println(orderBook.getAsks().get(0).getLimitPrice());
        Thread.sleep(1000); // <---- SLEEP A BIT HERE TO AVOID RATE LIMIT
    }, throwable -> LOG.error("ERROR in getting placeOrder book: ", throwable));
}

@badgerwithagun
Collaborator

Hey @cto, no, I don't think there's anything you can do from your own code. xchange-stream does all the REST API calls itself, quietly in the background, when you open a stream. It also does them automatically in the background if it detects a drift in the socket and realises it has to resync.

An xchange-stream change is needed to limit the global rate for all these background REST API calls. I will look at this later today and see if I can give you a temporary workaround while I think of the best solution.

@davidjirovec
Contributor

davidjirovec commented Dec 16, 2018

I had the same problem; I solved it using AspectJ:

import com.google.common.util.concurrent.RateLimiter;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Aspect
public class ApiRateLimitAspect {

    private static final Logger logger = LoggerFactory.getLogger(ApiRateLimitAspect.class);

    private static final RateLimiter RATE_LIMITER = RateLimiter.create(2);

    @Around("execution(* org.knowm.xchange.binance.service.BinanceMarketDataServiceRaw.*(..))")
    public Object aroundBinanceMarketDataServiceRaw(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        logger.debug("Rate limiting aspect called");
        RATE_LIMITER.acquire();
        return proceedingJoinPoint.proceed();
    }

}
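
For reference, with RateLimiter.create(2) every acquire() blocks until a permit is free, so the intercepted calls are smoothed to roughly two per second (this assumes the aspect is actually woven, e.g. via the aspectjweaver agent with load-time weaving configured):

RateLimiter limiter = RateLimiter.create(2);  // two permits per second
limiter.acquire();                            // first call returns almost immediately
double waited = limiter.acquire();            // later calls block for roughly 0.5 s each
logger.debug("Waited {} s for a permit", waited);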

But a cleaner solution would be to be able to pass XChange the HTTP client to use.

@badgerwithagun
Collaborator

badgerwithagun commented Dec 16, 2018

Cunning workaround, @davidjirovec! I may borrow it for my own stuff ;).

Just thinking about the "best" solution here for the project:

RateLimiter (presumably the Guava one?) is a bit of a blunt instrument when applied across the entire market data service; in practice most exchanges allow bursts of requests as long as they don't exceed a certain number per second or minute. Correction: RateLimiter does handle this, my mistake.

I expect someone on the XChange project has already had the conversation about applying limits and decided it's the user's responsibility (using an approach like yours).

The special case here is that xchange-stream is firing these requests automatically, and thus is the user! So... this is probably xchange-stream's problem to solve. I already added a wait in the order book code, which is specific to the market data service. I think all I need to do is make that wait scoped to the API key instead, and use RateLimiter since it's much nicer.
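
Roughly what I mean by scoping it (just a sketch with made-up names; the real change will live inside xchange-stream's order book code, not in user code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.google.common.util.concurrent.RateLimiter;

// Sketch: one limiter per API key, shared by every background snapshot request
// that xchange-stream fires on behalf of that key.
public final class RestCallThrottle {

    private static final Map<String, RateLimiter> LIMITERS = new ConcurrentHashMap<>();

    private RestCallThrottle() {}

    // Block until the next REST call for this API key is allowed.
    public static void acquire(String apiKey) {
        LIMITERS.computeIfAbsent(apiKey, k -> RateLimiter.create(2)) // rate is illustrative
                .acquire();
    }
}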

PR on the way.

@cto
Author

cto commented Dec 16, 2018

My current solution is to temporarily switch to the REST API (XChange) for Binance, although the approach can be applied to any exchange, and then use a waiting time between requests.

This waiting time is dynamically adjusted depending on how much the contents of two consecutive responses overlap: too much overlap (approaching 99.99%) means the two requests are too close together; too little overlap (towards 0.1%) means they are too far apart.

A client-side solution, not so clean I must say, but it works.
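
Roughly, the adjustment looks like this (a simplified sketch; the thresholds, step factors and the overlap measure are just illustrative choices on my side):

import java.math.BigDecimal;
import java.util.HashSet;
import java.util.Set;

// Sketch: adapt the delay between two REST polls from how much their snapshots overlap.
static long adjustDelay(long delayMillis, double overlapRatio) {
    if (overlapRatio > 0.95) {
        return Math.min(delayMillis * 2, 60_000); // nearly identical responses: polling too fast, back off
    }
    if (overlapRatio < 0.10) {
        return Math.max(delayMillis / 2, 250);    // almost no overlap: polling too slowly, speed up
    }
    return delayMillis;
}

// Overlap measured as the fraction of price levels shared by two consecutive snapshots.
static double overlap(Set<BigDecimal> previousPrices, Set<BigDecimal> currentPrices) {
    if (previousPrices.isEmpty() || currentPrices.isEmpty()) {
        return 0.0;
    }
    Set<BigDecimal> shared = new HashSet<>(previousPrices);
    shared.retainAll(currentPrices);
    return (double) shared.size() / Math.min(previousPrices.size(), currentPrices.size());
}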

@badgerwithagun
Collaborator

Hmm, @cto, in my testing I'm having a lot of trouble getting this to work well. It's fundamentally a very hard problem, for reasons which have nothing to do with xchange-stream.

Binance forces these REST API fetches on us, and we can only attempt at most about 3-4 of these a second.

All that means that, running your example above with my fix in place, it takes around 10-15 minutes before most of the order books are synchronized, and there are always a few which are so illiquid that they haven't sent any updates through yet!

I'm going to keep playing with it when I have time, but it won't be today.

@badgerwithagun
Collaborator

This seems to work: https://github.com/badgerwithagun/xchange-stream/commits/fix-266.

It's a bit ugly at the moment. I went a bit crazy with concurrency tricks to try and resolve some bugs, so I need to go back, tidy up and do more testing.

@cto
Author

cto commented Dec 17, 2018

Hi @badgerwithagun, thanks for your effort, I'll take a look.
By the way, this is a bit unrelated, but since it's about the Binance order book anyway: does anyone know why the class BinanceOrderbook organizes the bid and ask order books as maps?

It surely merges different orders with the same price into one, so there is no way to undo this information loss?

public final class BinanceOrderbook {
    public final long lastUpdateId;
    public final SortedMap<BigDecimal, BigDecimal> bids;
    public final SortedMap<BigDecimal, BigDecimal> asks;
    // ...
}

@badgerwithagun
Collaborator

@cto: That's the point, I believe. It creates an aggregated order book.
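
Each entry is one price level mapped to the total quantity resting at that price, so reading it looks like this (a sketch; book is a BinanceOrderbook, and I'm assuming asks are sorted ascending by price, which is worth double-checking in the class itself):

import java.math.BigDecimal;
import java.util.Map;

BigDecimal bestAskPrice = book.asks.firstKey();            // lowest ask price level
BigDecimal bestAskQuantity = book.asks.get(bestAskPrice);  // total size aggregated at that price

for (Map.Entry<BigDecimal, BigDecimal> level : book.asks.entrySet()) {
    System.out.println("price " + level.getKey() + " -> total quantity " + level.getValue());
}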

@badgerwithagun
Collaborator

I think we can solve this using the same approach agreed on #199. Add a BEFORE_REST_API_CALL generic exchange parameter which allows the user to push API calls made in the background through a shared rate limiter in their application.
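
From the application side, usage could then look something like this (purely hypothetical: the parameter only exists as a proposal at this point, and the key name and expected value type are my guesses):

import com.google.common.util.concurrent.RateLimiter;
import info.bitrich.xchangestream.binance.BinanceStreamingExchange;
import info.bitrich.xchangestream.core.StreamingExchange;
import info.bitrich.xchangestream.core.StreamingExchangeFactory;
import org.knowm.xchange.ExchangeSpecification;

// Hypothetical: push every background REST call through one shared limiter.
RateLimiter sharedLimiter = RateLimiter.create(2);

ExchangeSpecification spec = new BinanceStreamingExchange().getDefaultExchangeSpecification();
spec.setExchangeSpecificParametersItem("BEFORE_REST_API_CALL", (Runnable) sharedLimiter::acquire);

StreamingExchange exchange = StreamingExchangeFactory.INSTANCE.createExchange(spec);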

@badgerwithagun
Collaborator

I suspect I am going to need to add more widespread use of API calls from within xchange-stream to deliver Coinbase Pro's authenticated streams (see #274), so I will probably have to tackle this at the same time.

@badgerwithagun badgerwithagun self-assigned this Jan 29, 2019
@badgerwithagun badgerwithagun changed the title Binance Read orderbook from websocket: HTTP status code was not OK: 418 [binance] Read orderbook from websocket: HTTP status code was not OK: 418 Dec 6, 2019
@badgerwithagun badgerwithagun changed the title [binance] Read orderbook from websocket: HTTP status code was not OK: 418 [binance] Hitting rate limiting when subscribing to large numbers of pairs Dec 6, 2019