[R] Web-scraping newbie - dynamic table into R?
jdnewmil at dcn.davis.ca.us
Sun Apr 19 22:10:24 CEST 2020
Another bit of advice is to look for the underlying API... that is usually more performant than scraping anyway. Try using the developer tools in Chrome to find out how they are populating the page for clues, or just Google it.
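If the Network tab in Chrome's developer tools reveals a JSON endpoint feeding the draws table, jsonlite can parse it directly. A minimal sketch, with a made-up payload structure and field names (inspect the real response to learn the actual ones):

```r
# Sketch: parsing a Keno-style JSON payload with jsonlite.
# The endpoint URL and field names here are hypothetical --
# check the real request in Chrome's Network tab.
library(jsonlite)

# In practice you would fetch the live endpoint, e.g.:
# draws <- fromJSON("https://example.com/api/keno/draws?limit=10")$draws
payload <- '{"draws":[{"drawNumber":101,"numbers":[4,8,15,16]},
                      {"drawNumber":102,"numbers":[23,42,7,19]}]}'
draws <- fromJSON(payload)$draws

print(draws$drawNumber)
```

This skips HTML parsing entirely, which is usually faster and far less brittle than matching CSS selectors on a rendered page.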
Finally, you might try the RSelenium package. I don't have first-hand experience with it, but it is reputed to be designed to scrape dynamic web pages.
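For reference, the RSelenium route looks roughly like this. This is an untested sketch: it assumes a Selenium server (or chromedriver) is already running locally on port 4444, and the URL and CSS selector are placeholders, not the real Georgia lottery page structure.

```r
# Sketch only: drive a real browser so JavaScript renders the table,
# then hand the resulting HTML to rvest. Assumes a Selenium server
# is listening on localhost:4444; selector below is hypothetical.
library(RSelenium)
library(rvest)

remDr <- remoteDriver(remoteServerAddr = "localhost",
                      port = 4444L, browserName = "chrome")
remDr$open()
remDr$navigate("https://example.com/keno")  # the Keno page URL goes here
Sys.sleep(5)  # crude wait for the dynamic table to finish loading

# Once rendered, parse the page source with rvest as usual.
page  <- read_html(remDr$getPageSource()[[1]])
draws <- page %>% html_nodes("span.draw-number") %>% html_text()
remDr$close()
```

The key idea is that `getPageSource()` returns the post-JavaScript DOM, so the usual rvest selectors work on content that plain `read_html()` never sees.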
On April 18, 2020 1:50:02 PM PDT, Julio Farach <jfarach at gmail.com> wrote:
>How do I scrape the last 10 Keno draws from the Georgia lottery into R?
>I'm trying to pull the last 10 draws of a Keno lottery game into R.
>I've read several tutorials on how to scrape websites using the rvest
>package, Chrome's Inspect Element, and CSS or XPath, but I'm still stuck.
>I started with:
>> Kenopage <- "
>> Keno <- read_html(Kenopage)
>From there, I've been unable to progress, despite hours spent on
>combinations of CSS and XPath calls with "html_nodes."
>Failed example: DrawNumber <- Keno %>% rvest::html_nodes("body") %>%
>xml2::xml_find_all("//span[contains(@class,'Draw Number')]") %>%
>Someone mentioned using the V8 package in R, but it's new to me.
>How do I get started?
Sent from my phone. Please excuse my brevity.