Response middleware #48
I've started work on a PR for this and it's been a bit tough. Challenges include:
I could keep working on this, but have we considered migrating to Tesla? This would wipe the problem out categorically and provide a lot of benefits over the current approach. Specifically:
I don't know if many others are writing a lot of middleware today. If they aren't, maybe you could just deprecate the home-grown one to avoid confusion.
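For anyone unfamiliar with Tesla's middleware model: a response middleware is just a module implementing the `Tesla.Middleware` behaviour that post-processes the result of `Tesla.run/2`. A minimal sketch (the module name and the status check are hypothetical, not from this project):

```elixir
defmodule K8s.Middleware.DecodeStatus do
  # Hypothetical example of a Tesla response middleware.
  @behaviour Tesla.Middleware

  @impl Tesla.Middleware
  def call(env, next, _opts) do
    # Run the rest of the middleware stack, then post-process the response.
    with {:ok, env} <- Tesla.run(env, next) do
      if env.status in 200..299 do
        {:ok, env}
      else
        {:error, {:http_error, env.status}}
      end
    end
  end
end
```

In a Tesla client module this would be wired in with `plug K8s.Middleware.DecodeStatus`, so every request in the pipeline gets the same response handling for free.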
I just wanted to +1 the option of migrating to Tesla. With the current implementation I ran side-by-side requests via Tesla and K8s, both with the Mint adapter, and executed 50 concurrent requests. Very, very often, K8s with the Mint adapter ended up returning invalid results. This only happened when I ran the test from within a pod on a cluster. When I did the same test locally, both Tesla and K8s were able to return valid results. I assume this is due to my slow network and congestion pipelining the requests. I've tried a very simple Tesla (with Mint!) http_provider for K8s:
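The snippet itself didn't survive the copy, but a minimal Tesla-with-Mint provider could look roughly like this (a sketch under assumptions: the `request/5` callback name and its arguments are guesses; the real behaviour is defined in `lib/k8s/client/provider.ex`):

```elixir
defmodule K8s.Client.TeslaHTTPProvider do
  # Hypothetical sketch of an HTTP provider backed by Tesla + Mint.
  # The actual K8s.Client.Provider behaviour defines the real callbacks.

  def request(method, url, body, headers, opts) do
    # No middleware, just the Mint adapter doing the raw HTTP work.
    client = Tesla.client([], {Tesla.Adapter.Mint, []})

    Tesla.request(client,
      method: method,
      url: url,
      body: body,
      headers: headers,
      opts: opts
    )
  end
end
```

The point of the sketch is that Tesla owns the connection handling, so the provider shrinks to translating arguments rather than managing sockets itself.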
And with those changes, requests completed successfully both locally and on the cluster too.
Oooh! Thanks for testing and the feedback. I got the feeling the current implementation is a bit overengineered. That being said, I'd like to be able to provide request, stream and stream_to. That's what drove me towards a basic mint impl. Your TeslaHTTPProvider doesn't implement the full behaviour (yet). But I'm gonna look into this for sure!
Could you provide your test code as well?
I don't have it at hand but it was basically:
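A hypothetical reconstruction of that concurrency test (all names here are assumptions, and the `K8s.Client.run` argument order varies between k8s versions):

```elixir
# Hypothetical reconstruction of the elided load test:
# fire 50 concurrent list requests at the API server.
{:ok, conn} = K8s.Conn.from_file("~/.kube/config")
op = K8s.Client.list("v1", "Pod", namespace: "default")

1..50
|> Task.async_stream(fn _ -> K8s.Client.run(conn, op) end,
  max_concurrency: 50,
  timeout: 30_000
)
|> Enum.each(fn {:ok, result} -> IO.inspect(result) end)
```

Swapping `Task.async_stream` for a plain `Enum.map` over the same operation gives the sequential version.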
This is sequential, where I still observed problems and flaky results. A very simple snippet. The same code with Tesla Mint returned all results. No visible errors from the k8s API server side.
OK, let's continue with the problem in the current implementation over in #215. As for Tesla: it does not support response streaming at the moment, and I'd need a separate solution for websocket connections (which would be OK, though).
This is interesting. It looks like the Gun adapter might support streaming the body back. See here. From the docs:
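If the Gun adapter's streaming support works as described, a Tesla client might opt into it via the adapter's `body_as` option; a sketch (the option name is taken from the Tesla.Adapter.Gun docs, everything else here is hypothetical):

```elixir
defmodule StreamingClient do
  use Tesla

  # `body_as: :stream` asks the Gun adapter to hand the response
  # body back as a lazy stream instead of buffering it in memory.
  adapter Tesla.Adapter.Gun, body_as: :stream
end

# Hypothetical usage against a watch endpoint:
# {:ok, env} = StreamingClient.get("https://cluster/api/v1/pods?watch=true")
# env.body |> Stream.each(&IO.inspect/1) |> Stream.run()
```

That lazy body is exactly the shape `stream`/`stream_to` would need, which is why the Gun adapter looks like a better fit than Mint-via-Tesla here.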
Ooh nice... anybody care to try to write an adapter that implements https://github.com/coryodaniel/k8s/blob/develop/lib/k8s/client/provider.ex? |
A lot of the middleware functionality has been developed and stubbed already. Should be straightforward to integrate.
See Feature #46
See Issue #42