r - Trying to loop through HTML tables and create a data frame

Tags: r web-scraping rvest

I'm trying to create a dynamic loop that runs through multiple URLs and scrapes the table data from each one, concatenating everything into a single data frame. I've tried a few ideas, as shown below, but so far nothing has worked. This sort of thing isn't really in my wheelhouse, but I'm trying to learn how it works. I'd be grateful if someone could help me get this working.

Thanks.

Static URL: http://www.nfl.com/draft/2015/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:qb

library(rvest)

#create a master dataframe to store all of the results
complete<-data.frame()

yearsVector <- c("2010", "2011", "2012", "2013", "2014", "2015")
positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
for (i in 1:length(yearsVector)) 
{
  for (j in 1:length(positionVector)) 
  {
    # create a url template 
    URL.base<-"http://www.nfl.com/draft/"
    URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
    #build the URL with the dynamic values
    URL <- paste0(URL.base, yearsVector, URL.intermediate, positionVector)
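    #note: paste0() is vectorized, so without [i]/[j] indexing URL ends up
    #as a character vector rather than a single string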
    #print(URL)

    #read the page - store the page to make debugging easier
    page<- read_html(URL)

    #This needs work since the page is dynamically generated.
    DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill=TRUE)
    #About 530 names returned; may need to search for and extract the requested info.



    # find the players' last names
    lastnames<-str_locate_all(page, "lastName")[[1]]
    names<- str_sub(page, lastnames[,2]+4, lastnames[,2]+20)
    names<-str_extract(names, "[A-Z][a-zA-Z]*")

    length(names[-c(1:16)])
    #Still need to delete the first 16 names (don't know if this is consistent across all years)

    #find the players' positions
    positions<-str_locate_all(page, "pos")[[1]]
    ppositions<- str_sub(page, positions[,2]+4, positions[,2]+10)
    pos<-str_extract(ppositions, "[A-Z]*")

    pos<- pos[pos !=""]
    #Still need to delete the first 16 entries here too (don't know if this is consistent across all years)


    #store the temp values into the master dataframe
    complete<-rbind(complete, DF)
  }
}

I've edited my OP to incorporate your code, Dave. I think I'm almost there, but something isn't quite right. I'm getting this error:

Error in eval(substitute(expr), envir, enclos) : expecting a single value

I know the URL is correct!

http://postimg.org/image/ccmvmnijr/

I think the problem is this line:

page <- read_html(URL)

Or maybe this line:

DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill = TRUE)

Can you help me get across the finish line? Thanks!

Best Answer

Try this! I fixed the URL creation and set up a master data frame to store the requested information. (Building URL from the full vectors meant paste0() returned a character vector rather than the single string read_html() expects, which is what triggered the error above.) The page is dynamically generated, so the standard rvest table tools won't work here. However, all of the player information (about 16 fields), plus the college and draft-pick information, is embedded in the page source, so it's a matter of searching for it and extracting it.
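
As a quick check that the data really is embedded in the raw source rather than only in the rendered tables, you can count occurrences of the personId marker that the extraction below keys on (a minimal sketch, assuming the 2015 URL from the question is still reachable):

library(rvest)
library(stringr)

page <- read_html("http://www.nfl.com/draft/2015/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:qb")
#the draft tables are rendered client-side, so rvest sees few or none of them
length(html_nodes(page, "table"))
#the serialized player records, however, are present in the raw source
str_count(as.character(page), "personId")

With that confirmed, here is the full loop: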

library(rvest)
library(stringr)
library(dplyr)

#create a master dataframe to store all of the results
complete<-data.frame()

yearsVector <- c( "2011", "2012", "2013", "2014", "2015")
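#2010 is omitted: it appears to store the college info in a different format (see the note below)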
#all position information is stored on each page, so there is no need to create separate queries
#(the part of the URL after '#' is a fragment handled client-side and never sent to the server)
#positionVector <- c("qb", "rb", "wr", "te", "ol", "dl", "lb", "cb", "s")
positionVector <- c("qb")
for (i in 1:length(yearsVector)) 
{
  for (j in 1:length(positionVector)) 
  {
    # create a url template 
    URL.base<-"http://www.nfl.com/draft/"
    URL.intermediate <- "/tracker?icampaign=draft-sub_nav_bar-drafteventpage-tracker#dt-tabs:dt-by-position/dt-by-position-input:"
    #build the URL with the dynamic values
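    #indexing with [i] and [j] gives a single URL per iteration; without the
    #indices, paste0() returns a whole character vector, which is what caused
    #the "expecting a single value" error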
    URL <- paste0(URL.base, yearsVector[i], URL.intermediate, positionVector[j])
    print(yearsVector[i])
    print(URL)

    #read the page - store the page to make debugging easier
    page<- read_html(URL)

    #the page is dynamically generated, so the standard rvest table approach won't work:
    #DF <- html_nodes(page, xpath = ".//table") %>% html_table(fill=TRUE)
    #about 539 names are embedded in the source; search for and extract the requested info instead
    #find records for each player
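    #each player record is a small JSON object beginning with "personId";
    #the non-greedy .*? stops each match at the first closing brace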
    playersloc<-str_locate_all(page, "\\{\"personId.*?\\}")[[1]]
    players<-str_sub(page, playersloc[,1]+1, playersloc[,2]-1)
    #fix the cases where the players are named Jr.
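    #replacing ", " with "_" protects commas inside values (e.g. "Smith, Jr.")
    #from the strsplit on ',' used below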
    players<-gsub(", ", "_", players  )

    #split and reshape the data into a data frame
    play2<-strsplit(gsub("\"", "", players), ',')
    data<-sapply(strsplit(unlist(play2), ":"), FUN=function(x){x[2]})
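    #each record carries 16 key:value fields, hence ncol=16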
    df<-data.frame(matrix(data, ncol=16, byrow=TRUE))
    #set the column names from the keys of the first record
    names(df)<-sapply(strsplit(unlist(play2[1]), ":"), FUN=function(x){x[1]})

    #sort out the pick information
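    #pick records also begin with "id" but contain a "player" field,
    #which is what distinguishes them from the other serialized objects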
    picks<-str_locate_all(page, "\\{\"id.*?player.*?\\}")[[1]]
    picks<-str_sub(page, picks[,1]+1, picks[,2]-1)
    #fix the cases where there are commas in the notes section.
    picks<-gsub(", ", "_", picks  )
    picks<-strsplit(gsub("\"", "", picks), ',')
    data<-sapply(strsplit(unlist(picks), ":"), FUN=function(x){x[2]})
    picksdf<-data.frame(matrix(data, ncol=6, byrow=TRUE))
    names(picksdf)<-sapply(strsplit(unlist(picks[1]), ":"), FUN=function(x){x[1]})

    #sort out the college information
    schools<-str_locate_all(page, "\\{\"id.*?name.*?\\}")[[1]]
    schools<-str_sub(page, schools[,1]+1, schools[,2]-1)
    schools<-strsplit(gsub("\"", "", schools), ',')
    data<-sapply(strsplit(unlist(schools), ":"), FUN=function(x){x[2]})
    schoolsdf<-data.frame(matrix(data, ncol=3, byrow=TRUE))
    names(schoolsdf)<-sapply(strsplit(unlist(schools[1]), ":"), FUN=function(x){x[1]})

    #merge the 3 tables together
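    #the college and pick columns on each player record hold ids that
    #reference the schools and picks tables, so join on those keys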
    df<-left_join(df, schoolsdf, by=c("college" =  "id"))
    df<-left_join(df, picksdf, by=c("pick" =  "id"))

    #store the temp values into the master dataframe
    complete<-rbind(complete, df)
  }
}
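
When the loops finish, complete holds one row per drafted player per year. A quick sanity check (the column names come straight from the keys in the serialized records, so the exact names are page-dependent):

dim(complete)
str(complete, list.len = 5)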

Figuring out the right regular expressions to find and extract the required information was tricky. The 2010 data appears to store the college information in a different format, so I ignored that year. Also, make sure you are not violating the site's terms of service. Good luck!
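
On the terms-of-service point, one lightweight first check (no substitute for actually reading the terms) is the site's robots.txt; a sketch using the robotstxt package, assuming it is installed:

library(robotstxt)
#TRUE means the path is not disallowed for crawlers in the site's robots.txt
paths_allowed("http://www.nfl.com/draft/2015/tracker")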

Regarding "r - Trying to loop through HTML tables and create a data frame", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40264390/
