How do I get all products from all pages in a subcategory? I've attached my program below. Right now it only scrapes the first page. I want to get every product in the subcategory across all 400+ pages, i.e. extract all products on one page, then move to the next page, and so on. Any help would be appreciated.
# selenium imports
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import random

PROXY = "88.157.149.250:8080"

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)

# //a[starts-with(@href, 'https://www.amazon.com/')]/@href
LINKS_XPATH = '//*[contains(@id,"result")]/div/div[3]/div[1]/a'

browser = webdriver.Chrome(chrome_options=chrome_options)
browser.get(
    'https://www.amazon.com/s/ref=lp_11444071011_nr_p_8_1/132-3636705-4291947?rh=n%3A3375251%2Cn%3A%213375301%2Cn%3A10971181011%2Cn%3A11444071011%2Cp_8%3A2229059011')

links = browser.find_elements_by_xpath(LINKS_XPATH)
for link in links:
    href = link.get_attribute('href')
    print(href)
Accepted answer
When you need to fetch a large amount of data, it is better to do it with direct HTTP requests than to navigate to each page with Selenium...
Try iterating over all the pages and scraping the required data, like this:
import requests
from lxml import html

page_counter = 1
links = []
headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0"}

while True:
    # The same page number is substituted into both the "sr_pg_" path segment and the "page" query parameter
    url = "https://www.amazon.com/s/ref=sr_pg_{0}?rh=n%3A3375251%2Cn%3A!3375301%2Cn%3A10971181011%2Cn%3A11444071011%2Cp_8%3A2229059011&page={0}&ie=UTF8&qid=1517398836".format(page_counter)
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        source = html.fromstring(response.content)
        new_links = source.xpath('//*[contains(@id,"result")]/div/div[3]/div[1]/a/@href')
        if not new_links:
            # Pages past the last result page can still return 200, so stop
            # once a page yields no product links
            break
        links.extend(new_links)
        page_counter += 1
    else:
        break

print(links)
P.S. Check this ticket to be able to use a proxy with the requests library.
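For completeness, routing requests traffic through a proxy is just a dict keyed by URL scheme, passed per-request via `proxies=` or set once on a `Session`. A minimal sketch, reusing the proxy address from the question (it may no longer be live):

```python
import requests

# Proxy address taken from the question above; it may no longer be reachable.
PROXY = "88.157.149.250:8080"
proxies = {
    "http": "http://" + PROXY,
    "https": "http://" + PROXY,
}

# A Session applies the proxies to every request made through it
session = requests.Session()
session.proxies.update(proxies)
# session.get(url, headers=headers) would now route through the proxy.
```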
Regarding "python - How to get all products from all pages in a subcategory (python, amazon)", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/48541117/