
Research: Add some kind of mechanism to allow ensuring data consistency with definable rules #211

der-gabe opened this issue Jul 19, 2024 · 1 comment

@der-gabe (Member)

User Story

We want a way to define certain data consistency rules, for instance (see the illustrative sketch after this list):

  • This attribute must be set if that other attribute is set
  • This attribute must have a certain value depending on the value of another attribute
  • This attribute's value must be greater than this other attribute's value
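
For illustration only, the three examples above could be written as declarative rule definitions along these lines (every name here — `ConsistencyRule`, the rule kinds, the attribute names — is hypothetical, purely to make the idea concrete):

```python
# Hypothetical rule definitions -- none of these names exist in the codebase yet.
from dataclasses import dataclass
from typing import Any

@dataclass
class ConsistencyRule:
    """A declarative constraint relating one attribute to another."""
    kind: str          # "requires", "implies_value" or "greater_than"
    attribute: str     # the attribute the rule constrains
    other: str         # the attribute it is checked against
    value: Any = None  # only used by "implies_value"

rules = [
    # "end_date" must be set if "start_date" is set
    ConsistencyRule("requires", attribute="end_date", other="start_date"),
    # "state" must be "active" whenever "monitored" is set to True
    ConsistencyRule("implies_value", attribute="state", other="monitored", value="active"),
    # "max_cpus" must be greater than "min_cpus"
    ConsistencyRule("greater_than", attribute="max_cpus", other="min_cpus"),
]
```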

Possible approaches

This could be done in several ways:

  • A way to add such rules directly to Entity, Attribute or Value instances (see the first sketch after this list)
    • It's an open question whether a violation should only raise a warning, or block the user from submitting/saving the change altogether.
      • Perhaps this should be configurable in the rule itself, as well.
  • A generic task framework (similar to the GitLab CI/CD pipeline) that allows scheduling and running tasks on predefined conditions (see the second sketch after this list)
    • Tasks would probably need a well-defined exit/return status, such as a simple PASS/SUCCESS & FAIL, or those two plus WARN & ERROR, or whatever else.
      • Should the set of allowable statuses be configurable?
      • How would the tasks themselves, their status and the pipeline as a whole be displayed to users?
    • A data consistency rule would then just be a type of task that e.g. checks and compares attribute values after a change has been made.
    • Again, it's an open question what exactly should happen when the test fails
      • Should such tasks simply exit, returning the appropriate status?
        • Would that alone be enough to alert users to a problem or is something more needed?
      • Should a Change be auto-rejected when it makes a test fail?
      • Should a direct data change¹ be rolled back automatically when it makes a test fail?
      • In case there are more than two possible exit/return values, should the reaction depend on the exact status of the task? If so, how is that dependency defined?
      • Perhaps this should be configurable in the task, somehow.
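
A minimal sketch of the first approach, assuming a rule carries its own severity so that the warn-vs-block question is answered per rule (all names here are made up for illustration):

```python
# Hypothetical enforcement sketch for the first approach: rules attached to
# attribute definitions, each with a configurable severity.
from enum import Enum

class Severity(Enum):
    WARN = "warn"    # report the violation, but let the save go through
    BLOCK = "block"  # refuse to submit/save the change

def check_greater_than(values, attribute, other, severity):
    """One concrete rule: `attribute` must be greater than `other`."""
    if attribute in values and other in values and not values[attribute] > values[other]:
        return (severity, f"{attribute} must be greater than {other}")
    return None

def validate(values, checks):
    """Run all checks; return True if the save may proceed."""
    may_save = True
    for run_check in checks:
        violation = run_check(values)
        if violation:
            severity, message = violation
            print(f"[{severity.value}] {message}")  # stand-in for real UI feedback
            if severity is Severity.BLOCK:
                may_save = False
    return may_save

# max_cpus must be greater than min_cpus, and a violation blocks the save:
checks = [lambda v: check_greater_than(v, "max_cpus", "min_cpus", Severity.BLOCK)]
print(validate({"min_cpus": 4, "max_cpus": 2}, checks))  # prints the violation, then False
```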

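And a minimal sketch of the second approach, assuming a task is simply a callable with a well-defined return status (using the four-value PASS/WARN/FAIL/ERROR variant mentioned above; again, every name is hypothetical):

```python
# Hypothetical task-framework sketch for the second approach.
from enum import Enum

class TaskStatus(Enum):
    PASS = 0   # check succeeded
    WARN = 1   # suspicious, but not fatal
    FAIL = 2   # the consistency check failed
    ERROR = 3  # the task itself could not run

def consistency_task(values):
    """A data consistency rule as a task: inspect attribute values after a change."""
    try:
        if "start_date" in values and "end_date" not in values:
            return TaskStatus.FAIL
        return TaskStatus.PASS
    except Exception:
        return TaskStatus.ERROR

def run_pipeline(values, tasks):
    """Run all tasks; the pipeline's overall status is the worst individual one."""
    worst = TaskStatus.PASS
    for task in tasks:
        status = task(values)
        if status.value > worst.value:
            worst = status
    return worst

# A FAIL here could then trigger whichever reaction is configured:
# auto-reject the Change, roll back a direct change, or just report the status.
print(run_pipeline({"start_date": "2024-07-19"}, [consistency_task]))  # TaskStatus.FAIL
```
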
Procedural stuff

The fact that there are (at least) two possible approaches² and open questions for each one is what makes this a research issue.

Footnotes

  1. i.e. one that bypasses Change.

  2. and perhaps even more that just haven't occurred to us

@crazyscientist (Collaborator)

Just for reference: #7
