r - Web scraping with R and rvest

Tags: r, web-scraping, rvest

I am working on a project where I need to scrape a series of articles from a news site. I am interested in each article's headline and body text. In most cases the site maintains a base URL, for example:

https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html
https://tmp.americanthinker.com/blog/2015/01/california_begins_giving_drivers_licenses_to_illegal_aliens.html

Since there are too many articles to download by hand (more than 1,000), I thought of writing a function that downloads all the data automatically. A vector supplies all the URLs (one per line):

article 
[1] "https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html"                   
[2] "https://tmp.americanthinker.com/blog/2015/01/california_begins_giving_drivers_licenses_to_illegal_aliens.html"
[3] "https://www.americanthinker.com/articles/2018/11/immigrants_will_not_fund_our_retirement.html"                
> str(article)
 chr [1:3] "https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html" ...
> summary(article)
   Length     Class      Mode 
        3 character character 

The script should therefore use the vector as its source of addresses and build a data frame containing the title and text of each article. However, some errors pop up. Here is the code I wrote based on a series of Stack Overflow posts:

Packages

library(rvest)
library(purrr)
library(xml2) 
library(dplyr)
library(readr)

Import the CSV and extract it as a vector

base <- read_csv(file.choose(), col_names = FALSE)
article <- pull(base,X1)

First attempt

articles_final <- map_df(article, function(i){
  pages<-read_html(article)
  title <-
    article %>%  map_chr(. %>% html_node("h1") %>% html_text())
  content <-
    article %>% map_chr(. %>% html_nodes('.article_body span') %>% html_text() %>% paste(., collapse = ""))
  article_table <- data.frame("Title" = title, "Content" = content)
  return(article_table)
})  

Second attempt

map_df(1:3, function(i){
  page <- read_html(sprintf(article,i))
  data.frame(Title = html_text(html_nodes(page,'.h1')),
             Content= html_text(html_nodes(page,'.article_body span')),
             Site = "American Thinker"
             )
}) -> articles_final

In both cases I get the following error when running these functions:

Error in doc_parse_file (con, encoding = encoding, as_html = as_html, options = options):
Expecting a single string value: 
[type = character; extent = 3].
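The error message itself points at the cause: `read_html()` expects a single string, but it is being handed the whole three-element `article` vector. In the first attempt the body calls `read_html(article)` instead of `read_html(i)`, and in the second attempt `sprintf(article, i)` again passes the entire vector. A minimal sketch of the corrected loop, keeping the asker's `h1` and `.article_body span` selectors (which may themselves still need adjusting for this site):

```r
library(rvest)
library(purrr)

articles_final <- map_df(article, function(i) {
  page <- read_html(i)  # i is one URL on each iteration, not the whole vector
  data.frame(
    Title   = page %>% html_node("h1") %>% html_text(),
    Content = page %>% html_nodes(".article_body span") %>%
                html_text() %>% paste(collapse = ""),
    stringsAsFactors = FALSE
  )
})
```

Reading the page into `page` once and reusing it also avoids fetching each URL twice.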

I need this in order to download and analyze the articles.

Thank you very much for your help.

EDIT

I tried the code suggested below:

I tried it and it did not work; some problem with my coding:
> map_dfc(.x = article,
+         .f = function(x){
+           foo <- tibble(Title = read_html(x) %>%
+                           html_nodes("h1") %>% 
+                           html_text() %>%
+                           .[nchar(.) > 0],
+                         Content = read_html(x) %>% 
+                           html_nodes("p") %>% 
+                           html_text(),
+                         Site = "AmericanThinker")%>%
+             filter(nchar(Content) > 0)
+           }
+         ) -> out
Error: Argument 3 must be length 28, not 46

But as you can see, a new error pops up.

Accepted answer

Here is what I tried for you. I used SelectorGadget and inspected the page source. After some inspection, I think you need to use `<title>` and `<div class="article_body">`. The `map()` part loops through the three articles in `article` and creates a data frame, with each row representing one article. I think you will still need some string manipulation to get clean text, but this will help you scrape what you need.

library(tidyverse)
library(rvest)

article <- c("https://www.americanthinker.com/articles/2019/11/why_rich_people_love_poor_immigrants.html",
         "https://tmp.americanthinker.com/blog/2015/01/california_begins_giving_drivers_licenses_to_illegal_aliens.html",
         "https://www.americanthinker.com/articles/2018/11/immigrants_will_not_fund_our_retirement.html")

 map_dfr(.x = article,
         .f = function(x){

                 tibble(Title = read_html(x) %>%
                                html_nodes("title") %>%
                                html_text(),
                        Content = read_html(x) %>%
                                  html_nodes(xpath = "//div[@class='article_body']") %>%
                                  html_text(),
                        Site = "AmericanThinker")}) -> result


#  Title                              Content                                                  Site      
#  <chr>                              <chr>                                                    <chr>     
#1 Why Rich People Love Poor Immigra… "Soon after the Immigration Act of 1965 was passed, rea… AmericanT…
#2 California begins giving driver's… "The largest state in  the union began handing out driv… AmericanT…
#3 Immigrants Will Not Fund Our Reti… "Ask Democrats why they support open borders, and they … AmericanT…
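Since the real job involves more than 1,000 URLs, it may be worth reading each page only once (the answer above calls `read_html(x)` twice per article) and guarding against the occasional failed request so one bad URL does not abort the whole run. A hedged sketch using `purrr::possibly()`; the one-second pause is an arbitrary courtesy delay, not part of the original answer:

```r
library(tidyverse)
library(rvest)

scrape_one <- function(x) {
  Sys.sleep(1)           # be polite to the server between requests
  page <- read_html(x)   # fetch the page once and reuse it below
  tibble(Title   = page %>% html_nodes("title") %>% html_text(),
         Content = page %>%
           html_nodes(xpath = "//div[@class='article_body']") %>%
           html_text(),
         Site    = "AmericanThinker")
}

# possibly() substitutes an empty tibble instead of stopping on error
result <- map_dfr(article, possibly(scrape_one, otherwise = tibble()))
```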

Regarding "r - Web scraping with R and rvest", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59580096/
