Seeded Random Number Generator #11
Conversation
Added an optional command-line parameter -s, --seed to seed the RNG, useful for verifying test cases. When the --seed parameter is not provided, the program reverts to Python's default implementation for generating random numbers.
For this problem, there is no "expected" or "correct" center allocation. Tests should focus on whether the constraints mentioned in #4 are satisfied. Even if we constrain the random sequence, the output is inherently dependent on implementation details; you may find yourself changing the test assertions every time you tweak something in the implementation.
The output is, and always will be, dependent on implementation details. My rationale for introducing testing is to catch unrelated feature-addition pull requests that could change the program output and still get merged. Right now, we have no way of verifying that the program behaves as intended other than reviewing the code manually.
I agree there should be tests, but my request is not to write overly specific ones. Let's take a scenario: School A has 416 students and, with a fixed seed, it allocates students to
One of two things will happen -
Done with the rant; what is my proposal?
I believe a seeded random generator is a great idea. Just for a different reason than writing a static test case upstream.
An optional argument still ensures that the results vary for every execution. Of course, in the early stages, code optimizations would not be expected, as algorithms and approaches may vary until the codebase approaches a level of stability. Furthermore, a seed may also be used for conveniently sharing a result across devices, although this may be a bit far-fetched.
I believe this argument is against a static test case upstream (which I'm against as well), but shouldn't the project coerce contributors into ensuring that their results stay valid? Contributors need to be coerced into justifying their breaking changes to any codebase.
Again, this is against a static test case upstream, but still, this isn't how open-source development works. Even in the event of a static test case upstream, any change would be visible to everyone, so you cannot just change the test file. Not to mention that such tests should be handled by an automated system.
@justfoolingaround Agreed on reproducibility for development; my only reservation was against static tests.
Please confirm
Every PRNG needs a seed value to initialize, and most implementations use the system time for seeding because it's always changing. In the case of Python, when the seed argument is set to None, it reverts to using the system time, which is also its default behaviour. See the docs here (https://docs.python.org/3/library/random.html#random.Random) and here (https://docs.python.org/3/library/random.html#random.seed).
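The seeding behaviour described above can be demonstrated in a few lines with the standard library (this is an illustrative sketch, not code from the PR):

```python
import random

# Two generators seeded with the same value produce identical sequences.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Seeding with None (the default) falls back to system entropy/time,
# so two independently created generators will almost certainly diverge.
c = random.Random(None)
d = random.Random(None)
print(c.random() != d.random())  # almost always True
```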
Added an optional command-line parameter -s, --seed to generate random numbers from a seed value, useful for verifying test cases.

The program always returns the same output when run with the same SEEDVALUE and the same input, i.e. the result becomes deterministic. This is useful for verifying the correctness of the program by testing the output against the same input and SEEDVALUE when changes are introduced in the algorithm.

When the --seed parameter is not provided, the program reverts to Python's default implementation for generating random numbers.
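A minimal sketch of how such an optional -s/--seed flag might be wired up with argparse. The flag names mirror the PR description; the parser description and the placement of the allocation logic are assumptions, not taken from the actual code:

```python
import argparse
import random

def build_parser():
    # Hypothetical parser mirroring the -s, --seed flag from the PR description.
    parser = argparse.ArgumentParser(description="Allocate students to centers")
    # Optional seed: when the flag is omitted, args.seed is None, and
    # random.seed(None) falls back to Python's default seeding behaviour.
    parser.add_argument("-s", "--seed", type=int, default=None,
                        help="seed the RNG for reproducible output")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    random.seed(args.seed)  # seed(None) is equivalent to the default behaviour
    # ... the actual allocation logic using the `random` module would go here ...
    return args.seed

if __name__ == "__main__":
    main()
```

With this wiring, `python allocate.py -s 42` is fully reproducible, while `python allocate.py` behaves exactly as before the change.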