
Added MAE and accuracy to the GitHub workflow #34

Closed
wants to merge 1 commit

Conversation


@Ra00f1 Ra00f1 commented Oct 14, 2024

Issue Number

Resolves #27 and #26

Purpose

Technical Details

Testing Instructions

The GitHub workflow part can be tested by opening a pull request: the accuracy and MAE should be calculated, written to a .txt file, and posted to the pull request as a comment. The function itself can be tested with the command "python utils/metrics_test.py".
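
For example (a sketch of the local check only; 0.75 and 0.3333 are simply the values committed in this PR's Metrics_output.txt files, and the working-directory note is an assumption since the script reads CSVs via relative paths):

python utils/metrics_test.py   # the script's relative CSV paths may require adjusting the working directory
cat Metrics_output.txt
# accuracy_score: 0.75
# MAE: 0.3333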

Screenshots

Summary by CodeRabbit

  • New Features

    • Integrated automated testing and reporting of model performance metrics into the pull request process.
    • Added metrics for model accuracy (0.75) and Mean Absolute Error (MAE) (approximately 0.3333) to provide insights into predictive performance.
  • Bug Fixes

    • Improved workflow reliability with new testing steps.
  • Documentation

    • Enhanced clarity on model performance through updated metrics output files.

@Ra00f1 Ra00f1 requested a review from a team as a code owner October 14, 2024 13:54

coderabbitai bot commented Oct 14, 2024

Walkthrough

The pull request introduces several updates to the workflow configuration for a Python application related to PMML. Key changes include the addition of a "Run tests" step that executes a script for calculating accuracy and MAE metrics, and a "Post results to PR" step that comments these metrics on the pull request. Additionally, two new performance metrics, accuracy score and MAE, have been added to two output files, enhancing the reporting of model performance.
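
For orientation, here is a minimal sketch of what pmml/utils/metrics_test.py appears to do, pieced together from the code fragments quoted in the review comments below. The import lines and exact call signatures are assumptions, and the MAE print label is shown here already fixed as suggested further down:

from os.path import basename

import pandas
from sklearn.metrics import accuracy_score, mean_absolute_error

# Load the preprocessed test data and the predicted values (paths as quoted in the review)
storybooks_dataframe = pandas.read_csv('../step1_prepare/step1_3_storybooks_test.csv')
val_y = storybooks_dataframe['reading_level']
val_predictions = pandas.read_csv('../step3_predict/step3_2_predictions.csv')

# Compute and log the two metrics reported in this PR
accuracy = accuracy_score(val_y, val_predictions)
print(basename(__file__), f'accuracy_score: {accuracy}')
mae = mean_absolute_error(val_y, val_predictions)
print(basename(__file__), f'MAE: {mae}')

# Save the results to a file for the GitHub workflow to read
with open('Metrics_output.txt', 'w') as f:
    f.write(f'accuracy_score: {accuracy}\n')
    f.write(f'MAE: {mae}\n')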

Changes

  • .github/workflows/pmml.yml: Added steps "Run tests" (executes metrics_test.py) and "Post results to PR" (comments the metrics on the pull request).
  • pmml/Metrics_output.txt: Added metrics accuracy_score: 0.75 and MAE: 0.3333.
  • pmml/utils/Metrics_output.txt: Added metrics accuracy_score: 0.75 and MAE: 0.3333.
  • pmml/utils/metrics_test.py: New script for evaluating model performance, calculating accuracy and MAE, and writing the results to a file.

Assessment against linked issues

  • Objective: Add accuracy score threshold check to GitHub workflow

Possibly related PRs

  • Calculate accuracy score #25: The changes in this PR involve the calculation and reporting of the accuracy score, which is directly related to the new metrics being reported in the main PR's workflow updates.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (4)
pmml/utils/metrics_test.py (1)

13-14: LGTM: Accuracy calculation is correct. Minor suggestion for consistency.

The accuracy calculation and logging are implemented correctly. Good use of basename(__file__) for context in logging.

For consistency with the MAE calculation on line 17, consider using 'accuracy' instead of 'accuracy_score' in the f-string:

print(basename(__file__), f'accuracy: {accuracy}')
.github/workflows/pmml.yml (3)

45-47: LGTM! Consider adding error handling.

The new "Run tests" step correctly executes the metrics test script. However, to improve robustness and debugging capabilities, consider adding error handling and output capturing.

Here's a suggested improvement:

 - name: Run tests
   run: |
-    python utils/metrics_test.py
+    python utils/metrics_test.py || echo "::error::Metrics test failed"
+    if [ ! -f Metrics_output.txt ]; then
+      echo "::error::Metrics_output.txt was not generated"
+      exit 1
+    fi

This change will:

  1. Report an error if the Python script fails.
  2. Check if the expected output file is generated and fail the step if it's not.

58-76: LGTM! Consider adding error handling for undefined metrics.

The "Post results to PR" step is well-structured and correctly posts the metrics as a comment on the pull request. However, we can improve its robustness.

Here's a suggested improvement:

 - name: Post results to PR
   if: github.event_name == 'pull_request'
   uses: actions/github-script@v6
   with:
     script: |
-      const accuracy = "${{ steps.compute_metrics.outputs.accuracy }}";
-      const mae = "${{ steps.compute_metrics.outputs.mae }}";
+      const accuracy = "${{ steps.compute_metrics.outputs.accuracy || 'N/A' }}";
+      const mae = "${{ steps.compute_metrics.outputs.mae || 'N/A' }}";
+      if (accuracy === 'N/A' || mae === 'N/A') {
+        core.setFailed('Failed to retrieve metrics');
+        return;
+      }
       const body = `
       ### Metrics from latest run:
       - **Accuracy**: ${accuracy}
       - **MAE**: ${mae}
       `;
-
       github.rest.issues.createComment({
         issue_number: context.issue.number,
         owner: context.repo.owner,
         repo: context.repo.repo,
         body: body
       });

This change:

  1. Adds default 'N/A' values if metrics are undefined.
  2. Adds error handling to fail the workflow if metrics are missing.
  3. Removes an unnecessary empty line in the script.

77-77: Remove trailing spaces.

There are trailing spaces on this line, which is considered bad practice in YAML files and can potentially cause issues with some YAML parsers.

Please remove the trailing spaces from this line:

-          
+

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 107bb19 and dd941d7.

📒 Files selected for processing (4)
  • .github/workflows/pmml.yml (1 hunks)
  • pmml/Metrics_output.txt (1 hunks)
  • pmml/utils/Metrics_output.txt (1 hunks)
  • pmml/utils/metrics_test.py (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • pmml/Metrics_output.txt
  • pmml/utils/Metrics_output.txt
🧰 Additional context used
🪛 yamllint
.github/workflows/pmml.yml

[error] 49-49: syntax error: expected , but found ''

(syntax)


[error] 77-77: trailing spaces

(trailing-spaces)

🔇 Additional comments (2)
pmml/utils/metrics_test.py (1)

1-4: LGTM: Imports are appropriate and well-structured.

The import statements are concise and import only the necessary functions and modules. This is a good practice for maintaining clean and readable code.

.github/workflows/pmml.yml (1)

Line range hint 1-77: Great job implementing MAE and accuracy metrics in the GitHub workflow!

The changes in this file successfully address the PR objectives by adding steps to calculate and report Mean Absolute Error (MAE) and accuracy metrics. These enhancements will provide valuable feedback during the pull request process, helping to maintain and improve model performance.

Key improvements:

  1. Added a step to run the metrics test script.
  2. Implemented extraction and output of accuracy and MAE values.
  3. Created a step to post the metrics as a comment on pull requests.

These changes align well with the goals outlined in issues #27 and #26, particularly in adding an accuracy score check to the workflow. The automated posting of metrics as a PR comment will greatly assist in code reviews and decision-making during merges.

To further improve the implementation, consider:

  1. Adding error handling to the test running step.
  2. Updating the deprecated set-output syntax and improving error handling in the metrics extraction step.
  3. Enhancing the robustness of the PR comment posting step.
  4. Removing trailing spaces to adhere to YAML best practices.

Overall, this is a solid implementation that significantly enhances the GitHub workflow for the PMML project.


Comment on lines +5 to +11
# Load the preprocessed test data CSV into a DataFrame
storybooks_csv_path = '../step1_prepare/step1_3_storybooks_test.csv'
storybooks_dataframe = pandas.read_csv(storybooks_csv_path)
val_y = storybooks_dataframe['reading_level']

# Load Predicted values from step3_2_predictions.csv
val_predictions = pandas.read_csv('../step3_predict/step3_2_predictions.csv')

🛠️ Refactor suggestion

Enhance robustness of data loading process.

While the data loading process is straightforward, consider the following improvements:

  1. Use absolute paths or environment variables instead of relative paths to ensure the script works correctly regardless of where it's run from.
  2. Add error handling for file not found scenarios.
  3. Validate the structure of the loaded CSV files to ensure they contain the expected columns.

Here's a suggested improvement:

import os
from pathlib import Path

# Use environment variables or construct paths relative to the script location
BASE_DIR = Path(__file__).resolve().parent.parent
storybooks_csv_path = os.path.join(BASE_DIR, 'step1_prepare', 'step1_3_storybooks_test.csv')
predictions_csv_path = os.path.join(BASE_DIR, 'step3_predict', 'step3_2_predictions.csv')

try:
    storybooks_dataframe = pandas.read_csv(storybooks_csv_path)
    val_y = storybooks_dataframe['reading_level']
    val_predictions = pandas.read_csv(predictions_csv_path)

    # Validate dataframe structure
    assert 'reading_level' in storybooks_dataframe.columns, "Missing 'reading_level' column in test data"
    assert val_predictions.shape[1] == 1, "Predictions should be a single column"

except FileNotFoundError as e:
    print(f"Error: {e}. Please ensure the CSV files exist in the correct location.")
    exit(1)
except AssertionError as e:
    print(f"Error: {e}. The structure of the CSV files is not as expected.")
    exit(1)

This modification improves the script's robustness and provides clearer error messages if issues arise.

Comment on lines +19 to +22
# Save the results to a file for the GitHub workflow to read
with open('Metrics_output.txt', 'w') as f:
    f.write(f'accuracy_score: {accuracy}\n')
    f.write(f'MAE: {mae}\n')

🛠️ Refactor suggestion

Enhance error handling and file path management for results saving.

While the use of a context manager for file operations is good practice, consider the following improvements:

  1. Add error handling for potential IOErrors during file writing.
  2. Use an absolute path or environment variable for the output file location to ensure consistency regardless of where the script is run from.
  3. Consider appending to the file instead of overwriting, or include a timestamp in the filename to preserve historical results.

Here's a suggested improvement:

import os
from datetime import datetime

# Use an environment variable or a config file to set the output directory
output_dir = os.environ.get('METRICS_OUTPUT_DIR', '.')
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_file = os.path.join(output_dir, f'Metrics_output_{timestamp}.txt')

try:
    with open(output_file, 'w') as f:
        f.write(f'accuracy_score: {accuracy}\n')
        f.write(f'MAE: {mae}\n')
    print(f"Metrics successfully written to {output_file}")
except IOError as e:
    print(f"Error writing to file: {e}")

This modification improves error handling, uses a more robust file path, and includes a timestamp in the filename to preserve historical results.

Comment on lines +16 to +17
mae = mean_absolute_error(val_y, val_predictions)
print(basename(__file__), f'accuracy_score: {mae}')

⚠️ Potential issue

Fix typo in MAE print statement.

The MAE calculation is correct, but there's a typo in the print statement.

Please apply the following fix:

- print(basename(__file__), f'accuracy_score: {mae}')
+ print(basename(__file__), f'MAE: {mae}')

This change ensures that the output correctly identifies the metric as MAE instead of accuracy_score.


Comment on lines +49 to +56
shell: bash
run: |
  accuracy=$(grep -Po 'accuracy_score: \K[0-9.]+' Metrics_output.txt)
  mae=$(grep -Po 'MAE: \K[0-9.]+' Metrics_output.txt)
  echo "Accuracy: $accuracy"
  echo "MAE: $mae"
  echo "::set-output name=accuracy::$accuracy"
  echo "::set-output name=mae::$mae"

⚠️ Potential issue

Update deprecated set-output command and add error handling.

The step correctly extracts the metrics, but there are a few improvements we can make:

  1. The ::set-output syntax is deprecated. Use the $GITHUB_OUTPUT environment file instead.
  2. Add error handling in case the grep commands don't find matches.
  3. Consider using awk instead of grep for more robust parsing.

Here's a suggested improvement:

 shell: bash
 run: |
-  accuracy=$(grep -Po 'accuracy_score: \K[0-9.]+' Metrics_output.txt)
-  mae=$(grep -Po 'MAE: \K[0-9.]+' Metrics_output.txt)
+  accuracy=$(awk '/accuracy_score:/ {print $2}' Metrics_output.txt)
+  mae=$(awk '/MAE:/ {print $2}' Metrics_output.txt)
+  if [ -z "$accuracy" ] || [ -z "$mae" ]; then
+    echo "::error::Failed to extract metrics from Metrics_output.txt"
+    exit 1
+  fi
   echo "Accuracy: $accuracy"
   echo "MAE: $mae"
-  echo "::set-output name=accuracy::$accuracy"
-  echo "::set-output name=mae::$mae"
+  echo "accuracy=$accuracy" >> $GITHUB_OUTPUT
+  echo "mae=$mae" >> $GITHUB_OUTPUT

This change:

  1. Uses awk for more robust parsing.
  2. Adds error checking if metrics are not found.
  3. Updates to the new $GITHUB_OUTPUT syntax for setting outputs.

@jo-elimu jo-elimu (Member) left a comment


@Ra00f1 To clarify: we already have two files with the accuracy score and mean absolute error:

  1. https://github.com/elimu-ai/ml-storybook-reading-level/blob/main/pmml/step3_predict/step3_2_accuracy_score.txt
  2. https://github.com/elimu-ai/ml-storybook-reading-level/blob/main/pmml/step3_predict/step3_2_mean_absolute_error.txt

So there is no need to generate these values in more files.

The task to solve in #26 and #27 is to configure the GitHub workflow to fail once the metrics cross certain thresholds, i.e. when the accuracy score drops below, or the MAE rises above, an agreed value.
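
For concreteness, a minimal sketch of how such a threshold check might look as a step in .github/workflows/pmml.yml, reading the two existing step3_predict output files. The step name, the 0.70/0.40 thresholds, and the assumption that each file contains only a numeric value are illustrative, not decided anywhere in this PR:

- name: Check metric thresholds
  shell: bash
  run: |
    # Paths are relative to the repository root; adjust if the job sets a working-directory
    accuracy=$(cat pmml/step3_predict/step3_2_accuracy_score.txt)
    mae=$(cat pmml/step3_predict/step3_2_mean_absolute_error.txt)
    echo "accuracy=$accuracy, mae=$mae"
    # Fail the workflow if accuracy drops below, or MAE rises above, the chosen thresholds
    if awk -v a="$accuracy" 'BEGIN { exit !(a < 0.70) }'; then
      echo "::error::Accuracy $accuracy is below the 0.70 threshold"
      exit 1
    fi
    if awk -v m="$mae" 'BEGIN { exit !(m > 0.40) }'; then
      echo "::error::MAE $mae is above the 0.40 threshold"
      exit 1
    fi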

@jo-elimu jo-elimu closed this Oct 21, 2024