Faster scraping #20

Open
lassik opened this issue Nov 15, 2020 · 0 comments


lassik commented Nov 15, 2020

After running ./generate.sh, which scrapes all Scheme implementations from scratch, I now understand the point someone made about the scrapers being too slow.

The easiest solution would be to run only the scrapers that have changed since the last run. This could be done by:

  • Storing a copy of the last working version of each scraper's .sh file and skipping the scraper when the current .sh is identical to the copy.
  • Storing a checksum of the last working version and comparing checksums (see the sketch after this list).
  • Using make, treating each scraper's output as a target that depends on its .sh file.
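
A minimal sketch of the checksum approach, assuming the scrapers live in a scrapers/ directory and checksums of the last working versions are cached in a .checksums/ directory (both names are hypothetical). It uses sha256sum from GNU coreutils; on macOS/BSD, shasum -a 256 would be the equivalent:

```sh
#!/bin/sh
# Run each scraper only if its script has changed since the last
# successful run. Checksums are cached in .checksums/ (hypothetical).
set -eu
mkdir -p .checksums
for scraper in scrapers/*.sh; do
    name=$(basename "$scraper" .sh)
    cache=".checksums/$name"
    new=$(sha256sum "$scraper" | cut -d' ' -f1)
    old=$(cat "$cache" 2>/dev/null || true)
    if [ "$new" = "$old" ]; then
        echo "skipping $name (unchanged)"
        continue
    fi
    sh "$scraper"           # with set -e, a failing scraper aborts the run
    echo "$new" >"$cache"   # checksum recorded only after success,
done                        # so a failed scraper re-runs next time
```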

Scrapers could also be run in parallel.
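
A sketch of that, assuming the scrapers are independent and don't write to the same output files (the scrapers/ directory name is again an assumption); xargs -P is supported by both GNU and BSD xargs:

```sh
# Run up to four scrapers concurrently, one script per invocation.
printf '%s\n' scrapers/*.sh | xargs -n 1 -P 4 sh
```

The make route would give both at once: a Makefile with one target per scraper, run with make -j, handles change tracking and parallelism together.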

Any of the above would work, but we're going to hit the same problem when scraping other data (not just SRFI listings), so a more generic solution would be in order. I'll dust off the prototype Scheme framework I have for generic scraping.
