I am trying to access the data on this site: http://surge.srcc.lsu.edu/s1.html . So far my code loops through the dropdown menus, and I want to loop over the pages [1] [2] .. etc. at the top of the table. I tried using Select, but I get an error because Select cannot be used with a span: "UnexpectedTagNameException: Select only works on <select> elements, not on <span>".
# importing libraries
from selenium import webdriver
import time
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import re
driver = webdriver.Firefox()
driver.get("http://surge.srcc.lsu.edu/s1.html")
# definition for switching frames
def frame_switch(css_selector):
    driver.switch_to.frame(driver.find_element_by_css_selector(css_selector))
# data is in an iframe
frame_switch("iframe")
html_source = driver.page_source
nameSelect = Select(driver.find_element_by_xpath('//select[@id="storm_name"]'))
stormCount = len(nameSelect.options)
data = []
for i in range(1, stormCount):
    print("starting loop on option storm " + nameSelect.options[i].text)
    nameSelect.select_by_index(i)
    time.sleep(3)
    yearSelect = Select(driver.find_element_by_xpath('//select[@id="year"]'))
    yearCount = len(yearSelect.options)
    for j in range(1, yearCount):
        print("starting loop on option year " + yearSelect.options[j].text)
        yearSelect.select_by_index(j)
        time.sleep(2)
This is where I run into trouble selecting the pages:
change_page = Select(driver.find_element_by_class_name("yui-pg-pages"))
page_count = len(change_page.options)
for k in range(1, page_count):
    change_page.select_by_index(k)
    # Select page & run following code
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    print(soup.find_all("tbody", {"class": re.compile(".*")})[1])
    # get the needed table body
    table = soup.find_all("tbody", {"class": re.compile(".*")})[1]
    rows = table.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        cols = [ele.text.strip() for ele in cols]
        data.append(cols)
Best Answer
Use an xpath selector instead.
driver.find_element_by_xpath('//a[@class="yui-pg-next"]')
Then just loop for as long as you can interact with the next button. I prefer this approach when the number of pages can change while you are looping through them. You don't need Select here; in fact, I don't think Select is useful for anything other than dropdown menus.
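A minimal sketch of that while-loop, assuming (as is typical of YUI paginators, though not confirmed for this site) that the next control is rendered as an <a class="yui-pg-next"> only while a next page exists and becomes a <span> on the last page:

from selenium.common.exceptions import NoSuchElementException

while True:
    # ... scrape the table on the current page, as in the question's soup code ...
    try:
        # On the last page a YUI paginator typically swaps the <a> for a
        # <span>, so this lookup raises and we stop paging.
        next_button = driver.find_element_by_xpath('//a[@class="yui-pg-next"]')
    except NoSuchElementException:
        break
    next_button.click()
    time.sleep(2)  # crude wait for the table body to re-render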
Alternatively, if you need to do this with the page links because the pages don't change often, you can try something like this:
# Use find_elements_by_xpath to select multiple elements.
pages = driver.find_elements_by_xpath('//a[@class="yui-pg-page"]')
# loop through results
for page_link in pages:
    page_link.click()
    # do stuff.
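One caveat with this variant: if the paginator re-renders after a click, the elements collected up front can go stale, and a later page_link.click() may raise StaleElementReferenceException. A defensive sketch that re-locates the links on every iteration (assuming the number of page links stays constant across clicks):

page_count = len(driver.find_elements_by_xpath('//a[@class="yui-pg-page"]'))
for k in range(page_count):
    # Re-find the links each pass in case the click re-rendered the paginator.
    pages = driver.find_elements_by_xpath('//a[@class="yui-pg-page"]')
    pages[k].click()
    time.sleep(2)
    # ... scrape the current page ...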
Original question: python - Looping through pages for web scraping, on Stack Overflow: https://stackoverflow.com/questions/36481856/