[Eval] Integrate MLE Bench Into Eval Harness #4328

Open
xingyaoww opened this issue Oct 10, 2024 · 0 comments
Labels: enhancement (New feature or request), evaluation (Related to running evaluations with OpenHands)

Comments

@xingyaoww (Contributor)

What problem or use case are you trying to solve?

OpenAI released MLE Bench: https://arxiv.org/pdf/2410.07095, which evaluated an earlier version of OpenHands. We should try to integrate the benchmark into our eval harness: https://github.com/All-Hands-AI/OpenHands/tree/main/evaluation

The code that was used to evaluate OpenHands on MLE Bench is available here: https://github.com/openai/mle-bench/tree/main/agents/opendevin
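
As a rough sketch of what the integration could look like, following the per-instance `output.jsonl` convention used by the other scripts under `evaluation/`, something like the snippet below might be a starting point. The file layout, the split-file format, and the `run_agent_on_instance` helper are assumptions for illustration, not existing OpenHands or mle-bench APIs; grading of the produced submissions would be delegated to mle-bench's own grading tooling.

```python
# Sketch of a hypothetical evaluation/mle_bench/run_infer.py entry point.
# Helper names and paths are placeholders, not actual OpenHands/mle-bench APIs.
import argparse
import json
from pathlib import Path


def load_competitions(split_file: Path) -> list[dict]:
    """Read MLE Bench competition IDs from a split file (assumed: one ID per line)."""
    return [
        {"competition_id": line.strip()}
        for line in split_file.read_text().splitlines()
        if line.strip()
    ]


def process_instance(instance: dict, output_dir: Path) -> dict:
    """Run the agent on one competition and record where its submission should land.

    run_agent_on_instance would be a thin wrapper around the OpenHands controller,
    analogous to the per-instance runners in the existing evaluation scripts.
    """
    workspace = output_dir / instance["competition_id"]
    workspace.mkdir(parents=True, exist_ok=True)
    submission_path = workspace / "submission.csv"
    # result = run_agent_on_instance(instance, workspace=workspace)  # hypothetical
    return {"competition_id": instance["competition_id"], "submission": str(submission_path)}


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--split-file", type=Path, required=True)
    parser.add_argument("--output-dir", type=Path, default=Path("evaluation_outputs/mle_bench"))
    args = parser.parse_args()

    args.output_dir.mkdir(parents=True, exist_ok=True)
    results = [process_instance(inst, args.output_dir) for inst in load_competitions(args.split_file)]

    # Same output.jsonl shape as other harness scripts; submissions are then
    # scored separately with mle-bench's grading tooling.
    with open(args.output_dir / "output.jsonl", "w") as f:
        for r in results:
            f.write(json.dumps(r) + "\n")


if __name__ == "__main__":
    main()
```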

xingyaoww added the enhancement label on Oct 10, 2024
mamoodi added the evaluation label on Oct 11, 2024
Projects: No status
Development: No branches or pull requests
Participants: 2