Add circular block bootstrapping #114

Open

reza-armuei opened this issue Nov 22, 2023 · 2 comments · May be fixed by #418

@reza-armuei (Collaborator)

A new score or metric should be developed on a separate feature branch, rebased against the main branch. Each merge request should include:

The implementation of the new metric or score in xarray, ideally with support for pandas and dask
100% unit test coverage
A tutorial notebook showcasing the use of that metric or score, ideally based on the standard sample data
API documentation (docstrings) using Napoleon (Google) style, making sure to clearly explain the use of the metric (see the docstring sketch after this list)
A reference to the paper which describes the metric, added to the API documentation
For metrics which do not have a paper reference, an online source or reference should be provided
Metrics which are still under development, or which have not yet had an academic publication, will be placed in a holding area within the API (i.e. scores.emerging) until the method has been properly published and peer reviewed. The 'emerging' area of the API is subject to rapid change but covers methods of sufficient community interest to include; it is similar to a 'preprint' of a score or metric.
Add your score to summary_table_of_scores.md in the documentation
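
As a minimal sketch of the Napoleon (Google) docstring style mentioned above — the function name and its contents are illustrative only, not part of the scores API:

```python
import xarray as xr


def mean_error(fcst: xr.DataArray, obs: xr.DataArray, reduce_dims=None) -> xr.DataArray:
    """Calculate the mean error (bias) of a forecast.

    Args:
        fcst: Forecast data.
        obs: Observation data aligned with ``fcst``.
        reduce_dims: Optional list of dimension names to reduce over.
            Defaults to reducing over all dimensions.

    Returns:
        The mean of ``fcst - obs`` over ``reduce_dims``.

    References:
        Add the paper (or online source) describing the metric here.
    """
    # xarray's mean with dim=None reduces over every dimension.
    return (fcst - obs).mean(dim=reduce_dims)
```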

All merge requests should comply with the coding standards outlined in this document. Merge requests will undergo both a code review and a science review. The code review will focus on coding style, performance and test coverage. The science review will focus on the mathematical correctness of the implementation and the suitability of the method for inclusion within 'scores'.

A GitHub ticket should be created explaining the metric which is being implemented and why it is useful.

reza-armuei self-assigned this Nov 22, 2023
@reza-armuei (Collaborator, Author)

Adding a library for the block bootstrapping method to calculate confidence intervals. This is particularly useful when comparing two forecast systems and checking whether the differences are statistically significant.
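
As a rough illustration of the idea, here is a minimal NumPy sketch of a circular block bootstrap confidence interval for the mean of paired score differences. The function name and the block_length / n_boot parameters are made up for this example; this is not an existing scores API:

```python
import numpy as np


def circular_block_bootstrap_ci(diffs, block_length=10, n_boot=1000, alpha=0.05, seed=None):
    """Percentile CI for the mean of a serially correlated series ``diffs``.

    ``diffs`` holds paired score differences between two forecast systems
    (e.g. the error of system A minus the error of system B at each time
    step). Resampling whole blocks preserves short-range temporal
    correlation; wrapping indices around the end of the series makes the
    scheme "circular", so every observation is equally likely to be drawn.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs)
    n = diffs.size
    n_blocks = int(np.ceil(n / block_length))

    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        # Draw random block start points; wrap indices modulo n.
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_length)) % n
        resample = diffs[idx.ravel()][:n]  # trim to the original length
        boot_means[b] = resample.mean()

    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
```

If the resulting interval excludes zero, the difference between the two systems can be treated as statistically significant at roughly the chosen level.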

@nikeethr (Collaborator)

nikeethr commented Jun 12, 2024

@tennlee @reza-armuei @nicholasloveday

After some thought, I think this is a more sensible approach to get this feature over the line:

Thoughts?
