I'm trying to scrape a table from several web pages and store the results in a list. The list prints the results of the first page three times.
import pandas as pd
import requests
from bs4 import BeautifulSoup

dflist = []
for i in range(1, 4):
    s = requests.Session()
    res = requests.get(r'http://www.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx?p=' + str(i) + 'race=worldchampionship&rd=20181013&agegroup=Pro&sex=M&y=2018&ps=20#axzz5VRWzxmt3')
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find_all('table')
    dfs = pd.read_html(str(table))
    dflist.append(dfs)
    s.close()
print(dflist)
Accepted answer
You left out the & after '?p=' + str(i), so every one of your requests sets p to ${NUMBER}race=worldchampionship, which ironman.com presumably cannot understand and therefore ignores. Insert a & at the start of 'race=worldchampionship'.
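You can see the effect offline by parsing both query strings with the standard library; the host and path below are shortened for readability, only the query part matters:

```python
from urllib.parse import urlsplit, parse_qs

# The question's concatenation, shown for i = 1: no '&' before 'race',
# so 'race=worldchampionship' is swallowed into the value of 'p'.
bad = 'http://example.com/results.aspx?p=' + str(1) + 'race=worldchampionship&rd=20181013'
print(parse_qs(urlsplit(bad).query)['p'])   # ['1race=worldchampionship']

# With the missing '&' inserted, 'p' and 'race' are separate parameters.
good = 'http://example.com/results.aspx?p=' + str(1) + '&race=worldchampionship&rd=20181013'
print(parse_qs(urlsplit(good).query)['p'])  # ['1']
```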
To prevent this kind of mistake in the future, you can pass the URL's query parameters as a dict to the params keyword argument, like this:
params = {
"p": i,
"race": "worldchampionship",
"rd": "20181013",
"agegroup": "Pro",
"sex": "M",
"y": "2018",
"ps": "20",
}
res = requests.get(r'http://www.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx#axzz5VRWzxmt3', params=params)
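If you want to check what requests will actually send without hitting the network, a PreparedRequest exposes the final URL. The sketch below drops the #axzz… fragment for brevity and assumes the same parameter values as above:

```python
import requests

params = {
    "p": 2,
    "race": "worldchampionship",
    "rd": "20181013",
    "agegroup": "Pro",
    "sex": "M",
    "y": "2018",
    "ps": "20",
}

# requests serializes the dict into a properly '&'-separated,
# percent-encoded query string appended to the base URL.
url = requests.Request(
    'GET',
    'http://www.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx',
    params=params,
).prepare().url
print(url)
```

Each key/value pair ends up correctly delimited, so the missing-& class of bug cannot occur.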
Regarding "python loop requests.get() only returns the first loop", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/53092613/