Change the data source in README
womeimingzi11 committed Nov 20, 2020
1 parent ee77d38 commit cceb778
Showing 4 changed files with 9 additions and 310 deletions.
6 changes: 3 additions & 3 deletions README.Rmd
@@ -42,7 +42,7 @@ amapGeocode is inspired by [baidumap](https://github.com/badbye/baidumap) and [b
However, AutoNavi has significantly higher precision; in my case, the results from Baidu were unsatisfactory.

## BIG NEWS: Parallel is Here!
-Since `v0.5`, parallel operation finally comes to `amapGeocode` with the `parallel` package as the backend. There is a huge performance improvement for batch queries. Here is a demo from my PC with the specification below
+Since `v0.5`, parallel operation finally comes to `amapGeocode` with the `parallel` package as the backend. There is a huge performance improvement for batch queries. Here is a demo from my PC with the specification below.

>
1. CPU: AMD Ryzen 3600 @ 3.6GHz (6 cores with 12 threads)
@@ -55,7 +55,7 @@ Since `v0.5`, parallel operation finally come to `amapGeocode` with the `paralle
library(amapGeocode)
library(readr)
sample_site <-
-  read_csv('sample_site.csv')
+  read_csv('https://gist.githubusercontent.com/womeimingzi11/0fa3f4744f3ebc0f4484a52649f556e5/raw/47a69157f3e26c4d3bc993f3715b9ba88cda9d93/sample_site.csv')
str(sample_site)
@@ -70,7 +70,7 @@ new <- getCoord(sample_site$address)
proc.time() - start_time
```

-Around 10 TIMES FASTER!
+Around 8-10 TIMES FASTER with 300 records.

All you need to do is **upgrade** `amapGeocode` to the **latest version** without changing any code!
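If the package was originally installed from CRAN, upgrading can be as simple as reinstalling it; a minimal sketch, assuming the development version is hosted at `womeimingzi11/amapGeocode`:

``` r
# Upgrade to the latest CRAN release
install.packages("amapGeocode")

# Or, assuming the development version lives at womeimingzi11/amapGeocode,
# install it directly from GitHub
# install.packages("remotes")
remotes::install_github("womeimingzi11/amapGeocode")
```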

10 changes: 5 additions & 5 deletions README.md
@@ -66,7 +66,7 @@ from Baidu were unsatisfactory.
Since `v0.5`, parallel operation finally comes to `amapGeocode` with the
`parallel` package as the backend. There is a huge performance
improvement for batch queries. Here is a demo from my PC with the
-specification below
+specification below.

> 1. CPU: AMD Ryzen 3600 @ 3.6GHz (6 cores with 12 threads)
> 2. RAM: 32GB DDR4 2933MHz
@@ -77,7 +77,7 @@ specification
``` r
library(amapGeocode)
library(readr)
sample_site <- read_csv("sample_site.csv")
sample_site <- read_csv("https://gist.githubusercontent.com/womeimingzi11/0fa3f4744f3ebc0f4484a52649f556e5/raw/47a69157f3e26c4d3bc993f3715b9ba88cda9d93/sample_site.csv")

str(sample_site)
#> tibble [300 x 1] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
@@ -92,17 +92,17 @@ start_time <- proc.time()
old <- lapply(sample_site$address, amapGeocode:::getCoord.individual)
proc.time() - start_time
#> user system elapsed
-#> 3.17 0.34 80.91
+#> 2.80 0.33 76.83

# Here is the new implementation
start_time <- proc.time()
new <- getCoord(sample_site$address)
proc.time() - start_time
#> user system elapsed
-#> 0.02 0.14 9.20
+#> 0.03 0.12 8.09
```

-Around 10 TIMES FASTER\!
+Around 8-10 TIMES FASTER with 300 records.

All you need to do is **upgrade** `amapGeocode` to the **latest
version** without changing any code\!
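For readers curious how the batch call gets its speedup, here is a minimal sketch of the fan-out pattern with the base `parallel` package, assuming each address is resolved by a single-request helper such as `amapGeocode:::getCoord.individual()`; the package's actual internals may differ.

``` r
library(parallel)
library(amapGeocode)

# Illustrative sketch only: spread single-address geocoding calls over a
# local cluster, one worker per logical core.
geocode_parallel <- function(addresses) {
  cl <- makeCluster(detectCores())
  on.exit(stopCluster(cl), add = TRUE)      # always release the workers
  clusterEvalQ(cl, library(amapGeocode))    # load the package on every worker
  res <- parLapply(cl, addresses, amapGeocode:::getCoord.individual)
  do.call(rbind, res)                       # assumes each result is a one-row table
}

# Usage (assumes an AutoNavi API key has already been registered):
# coords <- geocode_parallel(sample_site$address)
```

On the 6-core machine above, this kind of fan-out across workers is what brings the roughly 80-second serial loop down to under 10 seconds.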
2 changes: 1 addition & 1 deletion cran-comments.md
@@ -1,5 +1,5 @@
## Test environments
-* local R installation, R 4.1.0 devel
+* local R installation, R 4.0.3
* ubuntu 16.04 (on travis-ci), R 4.0.2
* win-builder (devel)
