skrape{it}



πŸ›ŽοΈ 🚨 Help wanted 🚨 πŸ›ŽοΈ
Looking for Co-Maintainer(s), please contact [email protected] if you are interested in helping to maintain and evolve skrape{it} ❀️

skrape{it} is a Kotlin-based HTML/XML testing and web scraping library that can be used seamlessly in Spring-Boot, Ktor, Android or other Kotlin-JVM projects. The ability to analyze and extract HTML, including client-side rendered DOM trees and all other XML-related markup such as SVG, UML and RSS, makes it unique. It places particular emphasis on ease of use and a high level of readability by providing an intuitive DSL. First and foremost skrape{it} aims to be a testing tool (not tied to a particular test runner), but it can also be used to scrape websites in a convenient fashion.

Features

Parsing

  • Deserialization of HTML/XML from websites, local HTML files, or HTML given as a string into data classes / POJOs (local-file parsing is sketched below).
  • Designed to deserialize HTML, but can handle any XML-related markup such as SVG, UML, RSS or XML itself.
  • DSL to select HTML elements, as well as support for CSS query-selector syntax via string invocation.
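
The examples below always feed HTML in as a string or fetch it from a website; for local files, here is a minimal sketch (assuming the File-based overload of htmlDocument and a hypothetical file path):

import it.skrape.core.htmlDocument
import it.skrape.selects.html5.p
import java.io.File

// parse a local HTML file rather than a remote website or a plain string
fun paragraphsFromLocalFile(): List<String> =
    htmlDocument(File("src/test/resources/example.html")) { // hypothetical path
        p { findAll { eachText } }
    }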

Http-Client

  • HTTP client without verbosity and ceremony: make requests and set request options such as headers and cookies via a fluent-style interface.
  • Pre-configure the client regarding auth and other request settings (see the sketch below).
  • Can handle client-side rendered web pages. JavaScript execution results can optionally be considered in the response body.
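
The full request DSL is shown under "Configure HTTP-Client" below. As a sketch of pre-configuring auth without assuming any dedicated auth API, basic-auth credentials can be passed through the documented headers option (the Base64 handling is plain JDK, not skrape{it}; imports assume the it.skrape.fetcher package layout):

import it.skrape.fetcher.HttpFetcher
import it.skrape.fetcher.skrape
import java.util.Base64

// a pre-configured client that sends basic-auth credentials with every request
val authenticatedClient = skrape(HttpFetcher) {
    request {
        url = "http://localhost:8080/protected" // hypothetical endpoint
        val credentials = Base64.getEncoder().encodeToString("user:secret".toByteArray())
        headers = mapOf("Authorization" to "Basic $credentials")
    }
}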

Idiomatic

  • Easy-to-use, idiomatic and type-safe DSL to ensure a high level of readability.
  • Built-in matchers/assertions based on infix functions to achieve a very high level of readability.
  • The DSL behaves like a fluent API to make data extraction/scraping as comfortable as possible.

Compatibility

  • Not bound to a specific test runner or framework.
  • Open to use any other assertion library of your choice (see the sketch below).
  • Open to implementing your own fetcher.
  • Supports non-blocking fetching / coroutines.
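
Nothing stops you from combining the extraction DSL with another assertion style instead of the built-in matchers; a minimal sketch using kotlin.test (assumed to be on the test classpath):

import it.skrape.core.htmlDocument
import it.skrape.selects.html5.h1
import kotlin.test.Test
import kotlin.test.assertEquals

class AnyAssertionLibTest {

    @Test
    fun `extracted values can be verified with any assertion library`() {
        // extract with skrape{it}, assert with kotlin.test
        val heading = htmlDocument("<html><body><h1>welcome</h1></body></html>") {
            h1 { findFirst { text } }
        }
        assertEquals("welcome", heading)
    }
}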

Extensions

In addition, extensions for well-known testing libraries are provided to extend them with the skrape{it} functionality described above.


Quick Start

Read the Docs

You'll always find the latest documentation, release notes and examples for official releases at https://docs.skrape.it. The README you are reading right now provides examples related to the latest master; use it if you can't wait for the latest changes to be released. If you don't want to read that much, or just want a rough overview of how to use skrape{it}, have a look at the Documentation by Example section, which refers to the current master.

Installation

All our official/stable releases are published to Maven Central.

Add dependency

Gradle
dependencies {
    implementation("it.skrape:skrapeit:1.2.2")
}
Maven
<dependency>
    <groupId>it.skrape</groupId>
    <artifactId>skrapeit</artifactId>
    <version>1.2.2</version>
</dependency>

Using bleeding-edge features before official release

We offer snapshot releases by publishing every successful build of a commit pushed to the master branch, so you can use the latest implementation of skrape{it} right away. Be careful: these are non-official releases, may be unstable, and breaking changes can occur at any time.

Add experimental stuff
Gradle
repositories {
    maven { url = uri("https://oss.sonatype.org/content/repositories/snapshots/") }
}
dependencies {
    implementation("it.skrape:skrapeit:0-SNAPSHOT") { isChanging = true } // version number will stay - implementation may change ...
}

// optional
configurations.all {
    resolutionStrategy {
        cacheChangingModulesFor(0, "seconds")
    }
}
Maven
<repositories>
    <repository>
        <id>snapshot</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
    </repository>
</repositories>

...

<dependency>
    <groupId>it.skrape</groupId>
    <artifactId>skrapeit</artifactId>
    <version>0-SNAPSHOT</version>
</dependency>

Documentation by Example

(referring to current master)

You can find further examples in the project's integration tests.

Android

We have a working Android sample using Jetpack Compose in our example projects as living documentation.

Parse and verify HTML from String

@Test
fun `can read and return html from String`() {
    htmlDocument("""
        <html>
            <body>
                <h1>welcome</h1>
                <div>
                    <p>first p-element</p>
                    <p class="foo">some p-element</p>
                    <p class="foo">last p-element</p>
                </div>
            </body>
        </html>""") {

        h1 {
            findFirst {
                text toBe "welcome"
            }
        }
        p {
            withClass = "foo"
            findFirst {
                text toBe "some p-element"
                className toBe "foo"
            }
        }
        p {
            findAll {
                text toContain "p-element"
            }
            findLast {
                text toBe "last p-element"
            }
        }
    }
}

Parse HTML and extract

data class MySimpleDataClass(
    val httpStatusCode: Int,
    val httpStatusMessage: String,
    val paragraph: String,
    val allParagraphs: List<String>,
    val allLinks: List<String>
)

class HtmlExtractionService {

    fun extract() {
        val extracted = skrape(HttpFetcher) {
            request {
                url = "http://localhost:8080"
            }

            response {
                MySimpleDataClass(
                    httpStatusCode = status { code },
                    httpStatusMessage = status { message },
                    allParagraphs = document.p { findAll { eachText } },
                    paragraph = document.p { findFirst { text } },
                    allLinks = document.a { findAll { eachHref } }
                )
            }
        }
        print(extracted)
        // will print:
        // MySimpleDataClass(httpStatusCode=200, httpStatusMessage=OK, paragraph=i'm a paragraph, allParagraphs=[i'm a paragraph, i'm a second paragraph], allLinks=[http://some.url, http://some-other.url])
    }
}

Parse HTML and extract by using extractIt

data class MyDataClass(
        var httpStatusCode: Int = 0,
        var httpStatusMessage: String = "",
        var paragraph: String = "",
        var allParagraphs: List<String> = emptyList(),
        var allLinks: List<String> = emptyList()
)

class HtmlExtractionService {

    fun extract() {
        val extracted = skrape(HttpFetcher) {
            request {
                url = "http://localhost:8080"
            }           

            extractIt<MyDataClass> {
                it.httpStatusCode = statusCode
                it.httpStatusMessage = statusMessage.toString()
                htmlDocument {
                    it.allParagraphs = p { findAll { eachText }}
                    it.paragraph = p { findFirst { text }}
                    it.allLinks = a { findAll { eachHref }}
                }
            }
        }
        print(extracted)
        // will print:
        // MyDataClass(httpStatusCode=200, httpStatusMessage=OK, paragraph=i'm a paragraph, allParagraphs=[i'm a paragraph, i'm a second paragraph], allLinks=[http://some.url, http://some-other.url])
    }
}

Testing HTML responses:

@Test
fun `dsl can skrape by url`() {
    skrape(HttpFetcher) {
        request {
            url = "http://localhost:8080/example"
        }       
        response {
            htmlDocument {
                // all official html and html5 elements are supported by the DSL
                div {
                    withClass = "foo" and "bar" and "fizz" and "buzz"

                    findFirst {
                        text toBe "div with class foo"

                        // it's possible to search for elements from former search results
                        // e.g. search all matching span elements within the above div with class foo etc...
                        span {
                            findAll {
                                // do something
                            }                       
                        }                   
                    }

                    findAll {
                        toBePresentExactlyTwice
                    }
                }
                // can handle custom tags as well
                "a-custom-tag" {
                    findFirst {
                        toBePresentExactlyOnce
                        text toBe "i'm a custom html5 tag"
                        text
                    }
                }
                // can handle custom tags written in CSS selector query syntax
                "div.foo.bar.fizz.buzz" {
                    findFirst {
                        text toBe "div with class foo"
                    }
                }

                // can handle custom tags and add selector specifics via the DSL
                "div.foo" {

                    withClass = "bar" and "fizz" and "buzz"

                    findFirst {
                        text toBe "div with class foo"
                    }
                }
            }
        }
    }
}

Scrape a client-side rendered page:

fun getDocumentByUrl(urlToScrape: String) = skrape(BrowserFetcher) { // <--- pass BrowserFetcher to include rendered JS
    request { url = urlToScrape }
    response { htmlDocument { this } }
}


fun main() {
    // do stuff with the document
    println(getDocumentByUrl("https://docs.skrape.it").eachLink)
}

Scrape async

skrape{it}'s AsyncFetcher provides coroutine support:

suspend fun getAllLinks(): Map<String, String> = skrape(AsyncFetcher) {
    request {
        url = "https://my-fancy.website"
    }
    response {
        htmlDocument { eachLink }
    }
}
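
A usage sketch for calling the suspend function above from a blocking entry point (assuming kotlinx-coroutines on the classpath):

import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // prints a map of link texts to their hrefs
    println(getAllLinks())
}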

Configure HTTP-Client:

class ExampleTest {
    val myPreConfiguredClient = skrape(HttpFetcher) {
        // url can be a plain url as string or built via the url builder
        request {
            method = Method.POST // defaults to GET
            
            url = "" // you can  either pass url as String (defaults to 'http://localhost:8080')
            url { // or build url (will respect value from url as String param)
                // thereby you can pass a url and just override or add parts
                protocol = UrlBuilder.Protocol.HTTPS // defaults to given scheme from url param (HTTP if not set)
                host = "skrape.it" // defaults to given host from url param (localhost if not set)
                port = 12345  // defaults to given port from url param (8080 if not set explicitly - no port if the given url value does not have one) - set to -1 to remove the port
                path = "/foo" // defaults to given path from url param (no path if not set)
                queryParam { // can handle adding query parameters of several types (defaults to none)
                    "foo" to "bar" // add query parameter foo=bar
                    "aaa" to false // add query parameter aaa=false
                    "bbb" to 0.4711 // add query parameter bbb=0.4711
                    "ccc" to 42    // add query parameter ccc=42
                    "ddd" to listOf("a", 1, null) // add query parameter ddd=a,1,null
                    +"xxx"         // add query parameter xxx (just key, no value)
                }
            }
            timeout = 5000 // optional -> defaults to 5000ms
            followRedirects = true // optional -> defaults to true
            userAgent = "some custom user agent" // optional -> defaults to "Mozilla/5.0 skrape.it"
            cookies = mapOf("some-cookie-name" to "some-value") // optional
            headers = mapOf("some-custom-header" to "some-value") // optional
        }
    }
    
    @Test
    fun `can use preconfigured client`() {
    
        myPreConfiguredClient.response {
            status { code toBe 200 }
            // do more stuff
        }
    
        // slightly modify preconfigured client
        myPreConfiguredClient.apply {
            request {
                followRedirects = false
            }
        }.response {
            status { code toBe 301 }
            // do more stuff
        }
    }
}

Send request body

1) plain as string

The most low-level option; you need to set the content-type header by hand:
skrape(HttpFetcher) {
    request {
        url = "https://www.my-fancy.url"
        method = Method.POST
        headers = mapOf("Content-Type" to "application/json")
        body = """{"foo":"bar"}"""
    }
    response {
        htmlDocument {
            ...

2) plain text with auto-added content-type header that can optionally be overwritten

skrape(HttpFetcher) {
    request {
        url = "https://www.my-fancy.url"
        method = Method.POST
        body {
            data = "just a plain text" // content-type header will automatically set to "text/plain"
            contentType = "your-custom/content" // can optionally override content-type
        }
    }
    response {
        htmlDocument {
            ...

3) with helper functions for json or xml bodies

Supports JSON and XML autocompletion when using IntelliJ:
skrape(HttpFetcher) {
    request {
        url = "https://www.my-fancy.url"
        method = Method.POST
        body {
            json("""{"foo":"bar"}""") // will automatically set content-type header to "application/json" 
            // or
            xml("<foo>bar</foo>") // will automatically set content-type header to "text/xml" 
            // or
            form("foo=bar") // will automatically set content-type header to "application/x-www-form-urlencoded" 
        }
    }
    response {
        htmlDocument {
            ...

4) with on-the-fly created json via dsl

skrape(HttpFetcher) {
    request {
        url = "https://www.my-fancy.url"
        method = Method.POST
        body {
            // will automatically set content-type header to "application/json"
            // will create {"foo":"bar","xxx":{"a":"b","c":[1,"d"]}} as request body
            json {
                "foo" to "bar"
                "xxx" to json {
                    "a" to "b"
                    "c" to listOf(1, "d")
                }
            }
        }
    }
    response {
        htmlDocument {
            ...

5) with on-the-fly created form via dsl

skrape(HttpFetcher) {
    request {
        url = "https://www.my-fancy.url"
        method = Method.POST
        body {
            // will automatically set content-type header to "application/x-www-form-urlencoded"
            // will create foo=bar&xxx=1.5 as request body
            form {
                "foo" to "bar"
                "xxx" to 1.5
            }
        }
    }
    response {
        htmlDocument {
            ...

Get in touch

If you need help, have questions on how to use skrape{it}, or want to discuss features, please don't hesitate to use the project's Discussions section on GitHub, or raise an issue if you found a bug.

πŸ’– Support the project

Skrape{it} is and always will be free and open source. I try to reply to everyone needing help using these projects. Obviously, development and maintenance take time.

However, if you are using this project and are happy with it, want to encourage me to continue creating stuff, or simply want to fund the caffeine and pizzas that fuel its development, there are a few ways you can do so:

  • Starring and sharing the project πŸš€ to help make it more popular
  • Giving proper credit when you use skrape{it}, and telling your friends and others about it πŸ˜ƒ
  • Sponsoring skrape{it} with a one-time donation via PayPal (just click the Donate button) or via the GitHub Sponsors program to support it on a monthly basis πŸ’–
