Response middleware #48

Open
coryodaniel opened this issue Nov 28, 2019 · 9 comments
Labels
enhancement New feature or request good first issue Good for newcomers help wanted Extra attention is needed

Comments

@coryodaniel (Owner)

A lot of the middleware functionality has been developed and stubbed already. It should be straightforward to integrate.

See Feature #46
See Issue #42

@coryodaniel coryodaniel added enhancement New feature or request help wanted Extra attention is needed good first issue Good for newcomers labels Nov 28, 2019
@coryodaniel coryodaniel added this to the 1.1 milestone Apr 20, 2020
@mruoss mruoss modified the milestones: 1.1, 1.2 Mar 13, 2022
@jvantuyl commented Jun 2, 2022

I've started work on a PR for this and it's been a bit tough. Challenges include:

  • Response handling basically assumes HTTPoison structurally, making it hard to build a solution that works generically (like the one for requests).
  • Ordering isn't exactly clear between the request and response chains (i.e. should we walk response middleware backwards, or must they be listed in reverse order to get "nesting" behavior?).
  • The API in the examples seems to be missing (I can't find K8s.Middleware.set/*).
  • The type for a client request is mixed together with the behaviour for request middleware.
  • There's no clear separation between adapter opts, middleware opts, and request opts. This gets further muddled in the "initialize" middleware.
  • Functionality is unevenly split between core code and middleware. For example:
    • JSON encoding is a middleware, but JSON decoding is baked in.
    • The Initialize middleware makes other middleware position-sensitive with respect to HTTP-provider modification of the request.

I could keep working on this, but have we considered migrating the K8s.Client.HTTPProvider to Tesla?

This would wipe this class of problem out entirely and provide a lot of benefits over the current approach. Specifically:

  • Middleware is baked in as a core feature.
  • Uses plug-style middleware pipelines, a pattern that's already widely understood.
  • Already supports quite a few HTTP clients.
  • Already supports multiple JSON engines.
  • Finch as an option means we also get a high-performance connection pool out of the box.
  • Plug-style middleware would allow handlers to catch exceptions, which the current model can't.
  • For middlewares that need both the request and the response, ordering is easily kept consistent.
  • All selection and configuration of middleware and adapters is config-driven.
  • Configuration can still be overridden per request, leaving an opening for the DynamicHTTPProvider shim for testing.
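To make the config-driven point concrete, here's a rough sketch of how middleware and adapter selection could be wired up. The module name, config keys, and defaults below are all hypothetical, not existing K8s APIs:

```elixir
defmodule K8s.Client.TeslaProvider do
  # Hypothetical sketch: the middleware stack and adapter are read from
  # application config, and either can still be overridden per request.
  def client(overrides \\ []) do
    middleware =
      Keyword.get(overrides, :middleware) ||
        Application.get_env(:k8s, :tesla_middleware, [Tesla.Middleware.JSON])

    adapter =
      Keyword.get(overrides, :adapter) ||
        Application.get_env(:k8s, :tesla_adapter, Tesla.Adapter.Finch)

    Tesla.client(middleware, adapter)
  end
end
```

A DynamicHTTPProvider-style test shim would then amount to passing a stub adapter in `overrides` rather than maintaining a separate provider implementation.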

I don't know if many others are writing a lot of middleware today. If they aren't, maybe you could just deprecate the home-grown one to avoid confusion.

@mruoss mruoss removed this from the 1.2 milestone Dec 7, 2022
@arathunku
I just wanted to +1 the option of Tesla support, and not without reason!

With the current Mint implementation, on 2.0.0-rc.5, I've observed flaky concurrent requests.

I ran requests side by side via Tesla and via K8s, both with the Mint adapter, executing 50 concurrent requests. Very, very often K8s with the Mint adapter ended up returning {:ok, nil} and logging the error "The response body is supposed to be JSON but could not be decoded." I'm sure the SSL, cert, and other options match because I reused the values from K8s.Conn.

This only happened when I ran the test from within a pod on a cluster. When I ran the same test locally, both Tesla and K8s returned valid results. I assume this is due to my slow network and congestion while pipelining the requests.

I've tried a very simple Tesla (with Mint!) http_provider for K8s:

```elixir
defmodule K8s.TeslaHTTPProvider do
  def request(method, url, body, headers, opts) do
    client(opts)
    |> Tesla.request(
      method: method,
      body: body,
      url: url,
      # Tesla expects string header names
      headers: Enum.map(headers, fn {key, value} -> {to_string(key), value} end)
    )
    |> case do
      {:ok, %Tesla.Env{body: body, status: status}} when status >= 200 and status < 300 ->
        {:ok, body}

      {:ok, %Tesla.Env{status: status}} ->
        {:error, K8s.Client.HTTPError.new(message: "HTTP Error #{status}")}

      # Transport-level failures (connection refused, TLS errors, etc.)
      {:error, reason} ->
        {:error, K8s.Client.HTTPError.new(message: "HTTP Error #{inspect(reason)}")}
    end
  end

  defp client(opts) do
    ssl = Keyword.fetch!(opts, :ssl)

    Tesla.client(
      [Tesla.Middleware.JSON],
      {Tesla.Adapter.Mint, transport_opts: ssl}
    )
  end
end
```

And with those changes, requests completed successfully both locally and on the cluster.

@mruoss (Collaborator) commented Jan 18, 2023

Oooh! Thanks for testing and the feedback. I get the feeling the current implementation is a bit overengineered. That being said, I'd like to be able to provide request, stream and stream_to; that's what drove me towards a basic Mint implementation. Your TeslaHTTPProvider doesn't implement the full behaviour (yet). But I'm gonna look into this for sure!

@arathunku
My TeslaHTTPProvider is definitely not a replacement 😆 Currently I've got a very simple use case where I only use list and get on a few resources, so it's enough. I just wanted to confirm my theory that it's indeed something on the K8s Mint implementation side, not Tesla / Mint / the k8s API server.

@mruoss (Collaborator) commented Jan 18, 2023

Could you provide your test code as well?

@arathunku
I don't have it at hand, but it was basically:

```elixir
{:ok, conn} = K8s.Conn.from_service_account()
conn = struct!(conn, url: "https://kubernetes.default.svc/")

0..100
|> Enum.map(fn _ ->
  K8s.Client.list("v1", :nodes)
  |> K8s.Operation.put_query_param(:limit, 100)
  |> K8s.Client.put_conn(conn)
  |> K8s.Client.run()
end)
```

This is sequential, and I still observed problems and flaky results:

10:10:27.319 [error] The response body is supposed to be JSON but could not be decoded.
10:10:27.863 [error] The response body is supposed to be JSON but could not be decoded.
10:10:28.520 [error] The response body is supposed to be JSON but could not be decoded.
10:10:30.565 [error] The response body is supposed to be JSON but could not be decoded.
10:10:31.989 [error] The response body is supposed to be JSON but could not be decoded.
10:10:32.813 [error] The response body is supposed to be JSON but could not be decoded.
....

[{:ok, nil}, {:ok, nil}, {:ok, ...list of nodes}, ....]

Very simple snippet. The same code with Tesla + Mint returned all results. No visible errors from the k8s API server side.

@mruoss mruoss mentioned this issue Jan 18, 2023
@mruoss (Collaborator) commented Jan 18, 2023

OK, let's continue in #215 for the problem with the current implementation.

As for Tesla: it does not support response streaming at the moment, and I'd need a separate solution for websocket connections (which would be OK, though).

elixir-tesla/tesla#271

@jvantuyl commented Jan 19, 2023

This is interesting. It looks like the Gun adapter might support streaming the body back. See here.

From the docs:
:body_as - What will be returned in %Tesla.Env{} body key. Possible values:
* :plain - as binary (default).
* :stream - as stream. If you don't want to close connection (because you want to reuse it later) pass close_conn: false in adapter opts.
* :chunks - as chunks. You can get response body in chunks using Tesla.Adapter.Gun.read_chunk/3 function.
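If that works as documented, the wiring would look roughly like this. This is an untested sketch based only on the docs quoted above; the `url` is a placeholder:

```elixir
# Untested sketch based on the Tesla.Adapter.Gun docs quoted above:
# return the response body as a stream, and keep the connection open
# (close_conn: false) so it can be reused for later requests.
client =
  Tesla.client(
    [Tesla.Middleware.JSON],
    {Tesla.Adapter.Gun, body_as: :stream, close_conn: false}
  )

url = "https://kubernetes.default.svc/api/v1/nodes"  # placeholder

{:ok, %Tesla.Env{body: body_stream}} = Tesla.get(client, url)

# The body can then be consumed lazily, chunk by chunk:
body_stream |> Stream.each(&IO.inspect/1) |> Stream.run()
```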

@mruoss (Collaborator) commented Jan 19, 2023

Ooh nice... anybody care to try to write an adapter that implements https://github.com/coryodaniel/k8s/blob/develop/lib/k8s/client/provider.ex?
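For anyone who wants to pick this up, the rough shape would be something like the skeleton below. The callback names come from the request/stream/stream_to set mentioned earlier in this thread; the arities are illustrative guesses, the bodies are stubs, and the real signatures must be checked against provider.ex:

```elixir
defmodule K8s.Client.GunProvider do
  # Skeleton only -- check K8s.Client.Provider (lib/k8s/client/provider.ex)
  # for the actual callbacks and signatures; everything below is a stub.
  @behaviour K8s.Client.Provider

  @impl true
  def request(_method, _url, _body, _headers, _opts) do
    # One-shot request: Gun adapter with body_as: :plain (the default).
    {:error, :not_implemented}
  end

  @impl true
  def stream(_method, _url, _body, _headers, _opts) do
    # Lazy response body: Gun adapter with body_as: :stream and
    # close_conn: false so the connection can be reused.
    {:error, :not_implemented}
  end

  @impl true
  def stream_to(_method, _url, _body, _headers, _opts, _dest) do
    # Push chunks to the destination process, reading them with
    # Tesla.Adapter.Gun.read_chunk/3.
    {:error, :not_implemented}
  end
end
```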
