
Stream of events from Auth server. #112

Open
taaraora opened this issue Jun 27, 2019 · 9 comments

@taaraora

I need to have some authentication audit info on a user's profile.
Specifically:

  1. last logged-in timestamp
  2. last logged-out timestamp
  3. last logged-in IP
  4. last failed log-in IP
  5. total failed log-in count
  6. last failed log-in timestamp

Is it possible to get a stream of log-in/log-out events for further processing?
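
For context, here's a rough Go sketch (not part of AuthN; the event and field names are made up) of the per-account record a downstream consumer could fold such a stream into:

```go
// A sketch only: the event names and struct fields below are hypothetical.
package audit

import (
	"net"
	"time"
)

// LoginAudit accumulates the audit fields listed above for one account.
type LoginAudit struct {
	AccountID         int
	LastLoginAt       time.Time
	LastLogoutAt      time.Time
	LastLoginIP       net.IP
	LastFailedLoginIP net.IP
	FailedLoginCount  int
	LastFailedLoginAt time.Time
}

// Apply folds one auth event into the audit record.
func (a *LoginAudit) Apply(eventName string, at time.Time, ip net.IP) {
	switch eventName {
	case "login.succeeded":
		a.LastLoginAt, a.LastLoginIP = at, ip
	case "login.failed":
		a.LastFailedLoginAt, a.LastFailedLoginIP = at, ip
		a.FailedLoginCount++
	case "logout":
		a.LastLogoutAt = at
	}
}
```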

cainlevy (Member) commented Jul 1, 2019

Sounds like a good candidate for structured logging. Would that work for you? I don't yet have plans to add other event sinks.

taaraora (Author) commented Jul 3, 2019

Structured logging doesn't work for me. In my case, other microservices need to react to these events. For instance, a fraud microservice should subscribe to them and make changes to a user's profile.

@hydroid7

I'm facing the same problem: integrating authn into a message-driven system.

The solution I was thinking of is to write a reverse proxy (frameworks make life easier, e.g. this) that sends messages to the queue for specific routes; a rough sketch follows below.
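
For reference, a rough sketch of that reverse-proxy idea using Go's standard library; the upstream address, watched routes, and publish() helper are placeholders, not anything AuthN-specific:

```go
// Proxy everything to AuthN and, for selected routes, also publish a
// message to a queue. Routes and publish() are placeholders.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	authnURL, err := url.Parse("http://authn:3000") // assumed upstream address
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(authnURL)

	// Routes we want to emit events for (hypothetical paths).
	watched := map[string]bool{"/session": true, "/accounts": true}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		proxy.ServeHTTP(w, r)
		if watched[r.URL.Path] {
			// Note: to include the response status you'd need to wrap the
			// ResponseWriter; omitted here for brevity.
			publish(r.Method + " " + r.URL.Path)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}

func publish(event string) {
	// Placeholder: enqueue the event to whatever broker you use.
	log.Println("event:", event)
}
```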

cainlevy (Member) commented Aug 6, 2019

Let's talk about an event system! I like the idea, but I'm a little cautious about setting up to support multiple protocols.

So my proposal is:

  1. Implement an event bus over normal HTTPS with basic auth(?); a rough receiver sketch follows after this list.
  2. Encourage microservice adapters that convert AuthN events to whatever native messaging protocol people enjoy. These microservice adapters could live within the Keratin org, or at the very least have a listing in the AuthN documentation.
  3. Existing webhooks would convert to this event bus in a 2.0 release.
  4. Exact events and data are TBD.
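
To make (1) concrete, here's a rough sketch of what an adapter on the receiving end could look like; the event shape, endpoint, and credentials are assumptions, not a committed format:

```go
// An HTTP(S) endpoint protected by basic auth that accepts events and
// hands them to whatever native messaging protocol you prefer.
// The event shape and credentials below are made up for this sketch.
package main

import (
	"crypto/subtle"
	"encoding/json"
	"log"
	"net/http"
)

type event struct {
	Name      string `json:"name"` // e.g. "login.succeeded" (hypothetical)
	AccountID int    `json:"account_id"`
	Timestamp string `json:"timestamp"`
}

func main() {
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		if !ok ||
			subtle.ConstantTimeCompare([]byte(user), []byte("authn")) != 1 ||
			subtle.ConstantTimeCompare([]byte(pass), []byte("secret")) != 1 {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}

		var e event
		if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}

		// Convert to your native protocol here (Kafka, NATS, Redis, ...).
		log.Printf("forwarding %s for account %d", e.Name, e.AccountID)
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":9000", nil))
}
```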

rafamel commented Aug 6, 2019

Hey @cainlevy, glad you're considering #120. Let me just jump in regarding what I put forward there.

I'm personally not very opinionated on the implementation details. I think that as long as they allow:

  • Control over AuthN response success/failure.
  • Preventing changes (updating data, registering a user...)
  • Control over token data.

We'd already be covering most of it.

You mentioned an implementation with HTTPS and Basic Auth. I'm not exactly sure what the need for auth would be here, since events would only come from AuthN and not vice versa?

I was thinking about the specific timing for the events (after actions, before, or both). For me, the question arises for actions that modify data (signup, password changes, etc.): do we want these hooks to be called before, after, or both? If we call them before, we'll be able to block the action, but we won't have confirmation the action succeeded on AuthN's side; if we call them after, we might not be able to undo the changes (password change, account update), though we'll be sure the operation succeeded data-wise.

We could:

  • Go just with before hooks, for the sake of simplicity. This is, I think, my preferred solution atm. We'd have a before-hook call per action (signup, login, password change, etc.) that POSTs the input data for that action (so for signup: username, password, email...), and whose 200 response can prompt AuthN to either continue or fail the request: if we prompt AuthN to fail a request that implied data changes (a signup, a password change), those changes won't be performed. For actions that return an access token (signup, login), hook responses could also include a token_data or token_merge field with data to be merged into the token AuthN returns to the client. (A rough receiver sketch follows after this list.)
  • Go with after hooks, with a catch. We'd follow the same flow, but if the hook response prompts AuthN to fail a request that implied data changes, it would roll back those changes. This isn't ideal, as it would add more complexity and increase the number of database calls.
  • Go with both, which might add unnecessary complexity? I'm also thinking people might overuse this.
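
To illustrate the before-hook option, here's a rough sketch of a receiver; the payload fields and the token_merge/allow response keys follow the wording above but are otherwise hypothetical, not an agreed AuthN API:

```go
// A sketch of a "before signup" hook receiver. Nothing here is a
// committed AuthN contract; field names are illustrative only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type beforeSignup struct {
	Username string `json:"username"`
	Password string `json:"password"`
	Email    string `json:"email"`
}

type hookResponse struct {
	Allow      bool                   `json:"allow"`
	TokenMerge map[string]interface{} `json:"token_merge,omitempty"`
}

func main() {
	http.HandleFunc("/hooks/before-signup", func(w http.ResponseWriter, r *http.Request) {
		var in beforeSignup
		if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}

		// Example policy: reject empty emails, otherwise allow and merge
		// a role claim into the token AuthN would issue.
		resp := hookResponse{Allow: true, TokenMerge: map[string]interface{}{"role": "member"}}
		if in.Email == "" {
			resp = hookResponse{Allow: false}
		}

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp) // allow=false would prompt AuthN to fail the request
	})

	log.Fatal(http.ListenAndServe(":9001", nil))
}
```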

In any case, we wouldn't need to think about much more, since further integration with third parties could be done ad hoc. As an example, we could always add further role-related data to the token (via the signup and login endpoints) regardless of the system we use to provide it. Integration with third parties would be our responsibility, not AuthN's.

Let me know your thoughts!

cainlevy (Member) commented Aug 6, 2019

@rafamel I must apologize, I was imprecise and mixed up events with hooks. I'm going to reopen #120 as a complement to this one.

@silasdavis (Contributor)

I have used this Redis library, https://github.com/vmihailenco/taskq, along with some dispatch code to replace the webhook senders, chiefly for email sending.

Let me get around to putting it in a gist to share; I think it would be a nice alternative to webhook handlers, with reliable delivery and crash recovery.

silasdavis (Contributor) commented Aug 8, 2019

Here is a stripped down repo containing the bits pertinent to workers: https://github.com/silasdavis/pericyte-workers

Core implementation here: https://github.com/silasdavis/pericyte-workers/blob/master/workers/dispatcher.go

The dispatcher returns a function you can fire password reset request messages into.

We can then fire a message from a handler: https://github.com/silasdavis/pericyte-workers/blob/master/handlers/post_password_reset.go#L36

Where that function is implemented by this: https://github.com/silasdavis/pericyte-workers/blob/master/services/password_reset.go#L61

The closure above implements the worker logic that gets registered on startup to handle such messages.

taskq, the library I use, makes use of the Redis consumer-group functionality, so each pericyte can scale and share messages. It is crash-fault tolerant in that if a message is not acked by a consumer it will be reprocessed, and it also supports exponential backoff when retrying messages. Errors go through the error reporter.
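
For anyone skimming, a minimal taskq sketch along these lines, based on the taskq v3 README; the queue and task names are made up, and pericyte-workers wires things up differently:

```go
// Minimal taskq v3 sketch: a Redis-backed queue, one registered task,
// and a consumer. Names and addresses are placeholders.
package main

import (
	"context"
	"log"

	"github.com/go-redis/redis/v8"
	"github.com/vmihailenco/taskq/v3"
	"github.com/vmihailenco/taskq/v3/redisq"
)

var (
	queueFactory = redisq.NewFactory()

	mainQueue = queueFactory.RegisterQueue(&taskq.QueueOptions{
		Name:  "auth-events",
		Redis: redis.NewClient(&redis.Options{Addr: "localhost:6379"}),
	})

	// The handler is retried (with backoff) if it returns an error or the
	// consumer crashes before acking the message.
	passwordResetTask = taskq.RegisterTask(&taskq.TaskOptions{
		Name: "password-reset-email",
		Handler: func(accountID int) error {
			log.Printf("sending password reset email for account %d", accountID)
			return nil
		},
	})
)

func main() {
	ctx := context.Background()

	// Start consuming on this instance; Redis consumer groups share
	// messages across instances.
	if err := mainQueue.Consumer().Start(ctx); err != nil {
		log.Fatal(err)
	}

	// Enqueue a message (e.g. from a webhook handler).
	if err := mainQueue.Add(passwordResetTask.WithArgs(ctx, 42)); err != nil {
		log.Fatal(err)
	}

	select {} // block forever in this sketch
}
```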

AlexCuse (Contributor) commented Jul 9, 2023

https://github.com/ThreeDotsLabs/watermill offers a nice pattern for the "webhook to message" adapter: you can basically use their HTTP middleware as the receiver and output to any supported backend.

Here is an example sending messages to Kafka: https://github.com/ThreeDotsLabs/watermill/blob/master/_examples/real-world-examples/receiving-webhooks/main.go

And here is a list of currently supported backends: https://watermill.io/pubsubs/
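
Condensed from that example, a rough sketch of the HTTP-to-Kafka adapter; constructor signatures may differ between watermill, watermill-http, and watermill-kafka versions, and the topic names are made up:

```go
// An HTTP subscriber receives webhook POSTs and a router republishes
// them to Kafka. Sketch only; check the linked example for the
// version-accurate wiring.
package main

import (
	"context"

	"github.com/ThreeDotsLabs/watermill"
	watermillhttp "github.com/ThreeDotsLabs/watermill-http/pkg/http"
	"github.com/ThreeDotsLabs/watermill-kafka/v2/pkg/kafka"
	"github.com/ThreeDotsLabs/watermill/message"
)

func main() {
	logger := watermill.NewStdLogger(false, false)

	kafkaPublisher, err := kafka.NewPublisher(kafka.PublisherConfig{
		Brokers:   []string{"kafka:9092"},
		Marshaler: kafka.DefaultMarshaler{},
	}, logger)
	if err != nil {
		panic(err)
	}

	httpSubscriber, err := watermillhttp.NewSubscriber(":8080", watermillhttp.SubscriberConfig{
		UnmarshalMessageFunc: watermillhttp.DefaultUnmarshalMessageFunc,
	}, logger)
	if err != nil {
		panic(err)
	}

	router, err := message.NewRouter(message.RouterConfig{}, logger)
	if err != nil {
		panic(err)
	}

	// Every POST to /webhooks becomes a message on the "authn-events" Kafka topic.
	router.AddHandler(
		"http_to_kafka",
		"/webhooks", // for the HTTP subscriber, the "topic" is the URL path
		httpSubscriber,
		"authn-events", // Kafka topic (made up for this sketch)
		kafkaPublisher,
		func(msg *message.Message) ([]*message.Message, error) {
			return []*message.Message{msg}, nil
		},
	)

	go func() {
		// The HTTP server must start after the router is running.
		<-router.Running()
		_ = httpSubscriber.StartHTTPServer()
	}()

	if err := router.Run(context.Background()); err != nil {
		panic(err)
	}
}
```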
