package.json (forked from IonicaBizau/scrape-it)
{
    "name": "scrape-it",
    "description": "A Node.js scraper for humans.",
    "keywords": [
        "scrape",
        "it",
        "a",
        "scraping",
        "module",
        "for",
        "humans"
    ],
    "license": "MIT",
    "version": "5.2.0",
    "main": "lib/index.js",
    "types": "lib/index.d.ts",
    "scripts": {
        "test": "node test"
    },
    "author": "Ionică Bizău <[email protected]> (https://ionicabizau.net)",
    "contributors": [
        "ComFreek <[email protected]> (https://github.com/ComFreek)",
        "Jim Buck <[email protected]> (https://github.com/JimmyBoh)"
    ],
    "repository": {
        "type": "git",
        "url": "git+ssh://[email protected]/IonicaBizau/scrape-it.git"
    },
    "bugs": {
        "url": "https://github.com/IonicaBizau/scrape-it/issues"
    },
    "homepage": "https://github.com/IonicaBizau/scrape-it#readme",
    "blah": {
        "h_img": "https://i.imgur.com/j3Z0rbN.png",
        "cli": "scrape-it-cli",
        "installation": [
            {
                "h2": "FAQ"
            },
            {
                "p": "Here are some frequent questions and their answers."
            },
            {
                "h3": "1. How to parse ajax pages?"
            },
            {
                "p": "`scrape-it` only uses a simple request module to make requests. That means you cannot directly parse ajax pages with it, but in general you will run into one of these scenarios (see the sketches below):"
            },
            {
                "ol": [
                    "**The ajax response is in JSON format.** In this case, you can make the request directly, without needing a scraping library.",
                    "**The ajax response gives you HTML back.** Instead of calling the main website (e.g. example.com), pass the ajax url (e.g. `example.com/api/that-endpoint`) to `scrape-it` and you will be able to parse the response.",
                    "**The ajax request is so complicated that you don't want to reverse-engineer it.** In this case, use a headless browser (e.g. Google Chrome, Electron, PhantomJS) to load the content and then use the `.scrapeHTML` method from `scrape-it` once you have the HTML loaded on the page."
                ]
            },
            {
                "h3": "2. Crawling"
            },
            {
                "p": "There is no fancy way to crawl pages with `scrape-it`. For simple scenarios, you can parse the list of URLs from the initial page and then, using Promises, scrape each page (see the sketch below). Alternatively, you can use a dedicated crawler to download the website and then use the `.scrapeHTML` method to scrape the local files."
            },
            {
                "h3": "3. Local files"
            },
            {
                "p": "Use the `.scrapeHTML` method to parse HTML read from local files using `fs.readFile` (see the sketch below)."
            }
        ]
    },
    "dependencies": {
        "@types/cheerio": "^0.22.13",
        "assured": "^1.0.13",
        "cheerio": "^0.22.0",
        "cheerio-req": "^1.2.3",
        "err": "^2.1.11",
        "is-empty-obj": "^1.0.11",
        "iterate-object": "^1.3.3",
        "obj-def": "^1.0.7",
        "typpy": "^2.3.11"
    },
    "devDependencies": {
        "lien": "^3.3.0",
        "tester": "^1.4.4"
    },
    "files": [
        "bin/",
        "app/",
        "lib/",
        "dist/",
        "src/",
        "scripts/",
        "resources/",
        "menu/",
        "cli.js",
        "index.js",
        "bloggify.js",
        "bloggify.json",
        "bloggify/"
    ]
}
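
The FAQ entries embedded above describe the API in prose; a few illustrative sketches follow. First, FAQ 1, scenario 2: instead of scraping the main page, point `scrape-it` at the ajax endpoint that returns HTML. The endpoint URL and the `.item`/`.title` selectors below are hypothetical placeholders; only the `scrapeIt(url, opts)` call shape and the `{ data, response }` result come from the library's documented API.

```js
// A minimal sketch, assuming a hypothetical ajax endpoint that returns
// HTML fragments. The URL and selectors are made up for illustration.
const scrapeIt = require("scrape-it");

scrapeIt("https://example.com/api/that-endpoint", {
    items: {
        listItem: ".item",      // one result entry per matched element
        data: {
            title: ".title"     // text content of the .title child
        }
    }
}).then(({ data, response }) => {
    console.log(response.statusCode); // HTTP status of the ajax call
    console.log(data.items);          // [{ title: "..." }, ...]
});
```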
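
For FAQ 2 (crawling), here is a sketch of the two-step pattern the paragraph describes: scrape the list of URLs from an index page, then scrape every linked page with `Promise.all`. The index URL, the `li.article` markup, and the `h1` selector are assumptions, not part of the package.

```js
// A minimal crawling sketch, assuming a hypothetical index page that
// lists absolute article URLs as <li class="article"><a href="..."> items.
const scrapeIt = require("scrape-it");

scrapeIt("https://example.com/articles", {
    articles: {
        listItem: "li.article",
        data: {
            url: {
                selector: "a",
                attr: "href"    // collect the link target of each item
            }
        }
    }
}).then(({ data }) =>
    // Scrape every collected page in parallel.
    Promise.all(data.articles.map(({ url }) =>
        scrapeIt(url, { title: "h1" }).then(({ data }) => data)
    ))
).then(pages => {
    console.log(pages);         // [{ title: "..." }, ...]
});
```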
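
Finally, FAQ 3 (local files): read the saved HTML with `fs.readFile` and hand it to `.scrapeHTML`. The file name and the selector are placeholders, and this assumes `.scrapeHTML` accepts the HTML content as a string, as the FAQ paragraph implies.

```js
// A minimal local-files sketch. "page.html" is a placeholder for a file
// downloaded beforehand (e.g. by a separate crawler).
const fs = require("fs");
const scrapeIt = require("scrape-it");

fs.readFile("page.html", "utf8", (err, html) => {
    if (err) { return console.error(err); }
    // scrapeHTML parses the markup in place; no HTTP request is made.
    const data = scrapeIt.scrapeHTML(html, {
        title: "h1"
    });
    console.log(data);
});
```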