
Feature/test cases #89

Merged: 7 commits into moest-np:main, May 4, 2024

Conversation

samurato (Contributor) commented May 1, 2024

Summary

Added test cases to validate the acceptance criteria mentioned in #4 .

  • Based on each school's examinee count, examinees should be distributed so that, as far as possible, no more than 100 or 200 examinees from one school fall in a single center
  • A school must not be assigned as the center for its own examinees
  • Two schools' examinees must not be assigned to each other's centers; that is, if school A's examinees are placed at school B, then school B's examinees must not be placed at school A
  • Examinees must not be placed in a center identified as being under the same ownership/management as their school
  • If problems were seen in the past when a school was assigned to a particular center, that center must not be repeated
  • The center-assignment program must be randomized so that each run produces different results
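As a rough illustration (not the code in this PR), a couple of these criteria could be asserted against the output rows like this; the column names `scode` and `cscode` are assumptions based on the helper names discussed later in this thread:

```python
# Hedged sketch: assumes each output row is a dict with the school code
# ("scode") and the assigned center code ("cscode").

def check_no_self_center(rows):
    # A school must not be assigned its own building as an exam center.
    return all(row["scode"] != row["cscode"] for row in rows)

def check_no_mutual_centers(rows):
    # If school A's examinees sit at school B, school B's examinees
    # must not sit at school A.
    pairs = {(row["scode"], row["cscode"]) for row in rows}
    return all((c, s) not in pairs for (s, c) in pairs)
```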

Pull request type

Please try to limit your pull request to one type, and submit multiple pull requests if needed.

Please tick the type of change your PR introduces:

  • Bugfix
  • Feature
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no API changes)
  • Build related changes
  • Documentation content changes
  • Other (please describe):

What is the current behaviour?

Currently there are no tests to validate that a feature submission passes the acceptance criteria.

Please describe the behaviour that needs to be modified, or link to a relevant issue.
This pull request adds a utility in utils to parse and process CSV files, and declares test cases to tackle the problem.
Either pytest or a simple python test_cases.py run is needed to ensure that new submissions pass the acceptance criteria.

Issue Number: #4

Please describe the changes or updates being made by this pull request.

  • Added a custom TSV parser class in the utils folder to retrieve the information from the input and results folders
  • Added unit tests to validate that the results are as expected
  • Tests need to be run multiple times, as the results are non-deterministic, to get the best desired outcome of this program
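The non-determinism mentioned above comes from the randomization requirement in #4. A minimal stand-in (not the real allocation program) showing why repeated runs differ, and how a fixed seed makes a single run reproducible when debugging a failing test:

```python
import random

def run_randomized_allocation(seed=None):
    # Stand-in for the real center-assignment program; only demonstrates
    # that unseeded runs differ while a fixed seed is reproducible.
    rng = random.Random(seed)
    schools = ["27001", "27002", "27003", "27004"]
    rng.shuffle(schools)
    return schools

# Unseeded runs vary between invocations (the randomization requirement);
# passing a seed pins the shuffle so a failing case can be replayed.
```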

Checklist

Please review and complete the following checklist before submitting your pull request:

  • I have tested the changes locally and they work as intended.
  • I have provided a detailed description of the changes made.
  • I have reviewed the formatting and ensured it follows the project's style guidelines.
  • I have assigned the appropriate labels to the pull request.
  • I have added necessary documentation or updated existing documentation for the changes made.
  • I have addressed code review feedback and resolved any conflicts with the main branch.

Other information

Include screenshots of the component before and after the change.
As per the screenshot, the test fails if any of the test cases outlined in the issue fails.

[screenshot of the failing test run]

samurato (Contributor, Author) commented May 2, 2024

@sapradhan Could you please review my new updates?

sapradhan (Collaborator) commented:
Looks good to me; much-needed testing. Is there a way to repeat the test, say 10-20 times, and regenerate the output as well?

samurato (Contributor, Author) commented May 3, 2024

Yes, absolutely. In my fork I have integrated GitHub Actions to run this test 10 times and produce output. Link here:
https://github.com/samurato/center-randomize/actions

I am not sure if Actions will run in the main repo; if it does, I will modify the actions and push them to the main repo.

samurato (Contributor, Author) commented May 3, 2024

> Yes, absolutely. In my fork I have integrated GitHub Actions to run this test 10 times and produce output. Link here: https://github.com/samurato/center-randomize/actions
>
> I am not sure if Actions will run in the main repo; if it does, I will modify the actions and push them to the main repo.

Probably will create a new issue to implement this later if this PR is merged.

sumanashrestha (Collaborator) commented:
A bit skeptical about test_undesired_cscode_scode_pair; it fails with a version of prefs.tsv.

I don't think the way get_scode_cscode_id disregards scode/cscode is correct; at the least, problematic history is not bidirectional.

Skipping that test but merging, as a lot of PRs are blocked because of the lack of tests.

sumanashrestha merged commit 86ba2c3 into moest-np:main, May 4, 2024
samurato (Contributor, Author) commented May 4, 2024

> A bit skeptical about test_undesired_cscode_scode_pair; it fails with a version of prefs.tsv.
>
> I don't think the way get_scode_cscode_id disregards scode/cscode is correct; at the least, problematic history is not bidirectional.
>
> Skipping that test but merging, as a lot of PRs are blocked because of the lack of tests.

Understood. I will work on more optimised code to tackle those test cases.

samurato (Contributor, Author) commented May 4, 2024

> A bit skeptical about test_undesired_cscode_scode_pair; it fails with a version of prefs.tsv.
> I don't think the way get_scode_cscode_id disregards scode/cscode is correct; at the least, problematic history is not bidirectional.
> Skipping that test but merging, as a lot of PRs are blocked because of the lack of tests.

Understood. I will work on more optimised code to tackle those test cases.
@sapradhan @sumanashrestha

In my previous commits I had added individual test cases to verify each less-preferred allocation, for which the reason was assumed to be a unique type.

But I was made aware that the reason column is an open text field. Hence I generalised the logic so that any combination of less-preferred allocations is flagged as a failure. Can we assume the reason column to be a classifier, or have a separate column for the reason type?

[screenshot of the reason column in the output]

That way we can customise tests for each reason.

sumanashrestha (Collaborator) commented:
The test should simply check, for each row in the output file, that if there is a preference score, it is > PREF_CUTOFF.
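That row-wise check could be sketched roughly like this; the column name `pref_score` and the empty-string handling are assumptions, since the actual output schema is not shown in this thread:

```python
PREF_CUTOFF = -4  # same cutoff constant used in the test module

def check_pref_scores(rows):
    # For each row of the output file: if a preference score is present,
    # it must be strictly greater than PREF_CUTOFF. The column name
    # "pref_score" is an assumption; the real schema may differ.
    for row in rows:
        score = row.get("pref_score", "")
        if score != "":
            if not int(score) > PREF_CUTOFF:
                return False
    return True
```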

horrormyth (Contributor) left a comment:

Apologies for the late review; I noticed a couple of things. Great stuff, by the way 🎖️

```python
        return reader

    def get_columns(self):
        with open(self.file_path, "r", newline="", encoding="utf-8") as file:
```
horrormyth (Contributor) commented May 5, 2024:

It would be nice not to open and close files everywhere in the code; it makes testing pretty cumbersome, not to mention the repetition. It also creates multiple opportunities for executing files all over the code, and more open doors for executing a malicious file. I would like to know the rationale behind it. IO should be done only once, as a data input, not wherever we can. Also, since the data size is small, we can load it into memory and pass it along, so we don't have to worry about the reader being exhausted.

I think this TSV parser could be reformatted if we really need one; otherwise, just a couple of functions would have done the job.

```python
import csv

class TsvFileParser:
    def __init__(self, file_path):
        self.file_path = file_path
        self.parsed_data = None

    def parse(self) -> "TsvFileParser":
        try:
            with open(self.file_path, "r", newline="", encoding="utf-8") as file:
                reader = csv.DictReader(file, delimiter="\t")
                # Collect the rows while the file is still open, so the
                # reader is never used after the file has been closed.
                self.parsed_data = list(reader)
        except OSError:
            # Log, exit depending on the need, and inform the user, or raise
            # a custom exception handled elsewhere. Logging, exiting, and
            # telling the user should be enough here.
            raise
        return self
```

```python
tsv_parser = TsvFileParser(lovely_file_name)
parsed_data = tsv_parser.parse().parsed_data
# or
tsv_parser.parse()
parsed_data = tsv_parser.parsed_data
```

samurato (Contributor, Author) replied:

Agreed, will optimise this code in my next PR.

```python
            data.append(row)
        return data

    def get_column_data(self, column_name):
```
horrormyth (Contributor) commented:

These seem like unnecessary functions; it would be good not to add functions we don't need. They become unmanageable down the line, and then we have functions without tests, which is going to be pretty heavy for anyone to manage.

samurato (Contributor, Author) replied:

I beg to differ, though. These functions are decoupled and maintained inside utils; this way, many other objects can inherit the same functionality with less repeated code.

horrormyth (Contributor) replied:

Hmm, are we sure that this function is used elsewhere? Otherwise, leaving unused and untested functions around is a relatively bad idea.

```python
        os.remove(self.school_center_distance_file)

    def test_results_exists(self):
        """_Test if the application in running which output the results in the
```
horrormyth (Contributor) commented:

The test name should be descriptive enough that one doesn't need to write any comments; if one needs to write a comment for a function/test, then usually there is a smell. It might not be the case here, but I think a descriptive test name would be better, even when just looking at the console.

samurato (Contributor, Author) replied:

I am using Python auto-doc strings, so writing the comment keeps things consistent and makes automated documentation easier later on. I can remove it, but basically it checks whether the output file exists or not :).

```python
PREF_CUTOFF = -4


def get_scode_cscode_id(data):
```
horrormyth (Contributor) commented:

Is this function just for this test, or could it have been used in the actual code?

samurato (Contributor, Author) replied:

This is one of the helpers for the test. I'm not very proud of what I have done, but it's a makeshift hashmap or dictionary to identify collisions between centers and schools.

horrormyth (Contributor) replied:

Could we make it private or mark it as a pytest fixture?
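For reference, marking such a helper as a pytest fixture might look like this. This is a sketch, not the PR's actual code: the placeholder rows and the shape of the `scode -> set-of-cscodes` lookup are assumptions.

```python
import pytest

def build_school_centers(rows):
    # Build the scode -> set-of-cscodes lookup that the tests consume.
    mapping = {}
    for row in rows:
        mapping.setdefault(row["scode"], set()).add(row["cscode"])
    return mapping

@pytest.fixture
def school_centers():
    # Placeholder rows; in the real tests these would come from the
    # parsed output TSV.
    return build_school_centers([{"scode": "27001", "cscode": "27002"}])

def test_no_self_center(school_centers):
    # The fixture is injected by pytest via the argument name.
    for scode, centers in school_centers.items():
        assert scode not in centers
```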

```python
    return school_centers


class TestSchoolCenter(unittest.TestCase):
```
horrormyth (Contributor) commented:

The project intends to use pytest, but for some reason these tests are written in unittest 🤔

samurato (Contributor, Author) replied:

I can convert it to pytest later on, but pytest should be able to run unittest tests as well with no issues.

horrormyth (Contributor) commented May 7, 2024:

Yes, it would be great to use pytest, as it brings a lot of power to fixture management and testing. And it's not about pytest being able to run unit tests; it's more about the above, and consistency. Also, we intend to use pytest, so it would be nice not to have the disparity. But all good, one step at a time ✅
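As a sketch of what that conversion looks like, here is an illustrative check written both ways; the names are made up for illustration and are not the actual tests in this PR:

```python
import pytest

def is_own_center(scode, cscode):
    # True when a school has been assigned its own building as its center.
    return scode == cscode

# unittest style would wrap this in a TestCase class and call
# self.assertFalse(...); plain pytest style is a bare function with a
# plain assert, optionally parametrized instead of looping over cases:
@pytest.mark.parametrize("scode,cscode", [("27001", "27002"), ("27003", "27004")])
def test_school_not_own_center(scode, cscode):
    assert not is_own_center(scode, cscode)
```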
