I'm scraping data from some web pages with BeautifulSoup4. For example, for the URL https://wadsfred.aliexpress.com/store/425826/search/1.html there are 96 pages in total. My problem is that the script throws an error after a few pages, usually when it reaches page 15-20. The error message:
Traceback (most recent call last):
  File "main.py", line 34, in <module>
    if next_page.text != 'Next':
AttributeError: 'NoneType' object has no attribute 'text'
Thanks in advance for your help!
import requests
import os
import csv
from itertools import count
from bs4 import BeautifulSoup

os.chdir('C:\MyFolder')

page_nr = 1
price = []
min_order = []
prod_name = []

for page_number in count(start = 1):
    url = 'https://wadsfred.aliexpress.com/store/425826/search/{}'.format(page_nr) + '.html'
    print(url)
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    for div_b in soup.find_all('div', {'class':'cost'}):
        price.append(div_b.text)
    for min_or in soup.find_all('span', {'class':'min-order'}):
        min_order.append(min_or.text)
    for pr_name in soup.find_all('div', {'class':'detail'}):
        for pr_h in pr_name.find_all('h3'):
            for pr_title in pr_h.find_all('a'):
                prod_name_s = (pr_title.get('title').strip())
                prod_name.append(prod_name_s[:120])
    print(len(prod_name))
    page_nr = page_nr + 1
    next_page = soup.find('a', {'class':'ui-pagination-next'})
    if next_page.text != 'Next':
        break
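The traceback happens because soup.find() returns None when no matching tag exists (for instance, when the server serves a login page instead of the product listing), and None has no .text attribute. A minimal sketch of a guard for that case, wrapped in a hypothetical helper for illustration:

```python
from bs4 import BeautifulSoup

def should_stop(soup):
    """Return True when the page has no usable 'Next' link.

    soup.find() returns None if the anchor is absent, so check for
    None before touching .text to avoid the AttributeError.
    """
    next_page = soup.find('a', {'class': 'ui-pagination-next'})
    return next_page is None or next_page.text != 'Next'

# A page with no pagination (e.g. a login redirect) stops the loop:
login_soup = BeautifulSoup('<html><p>please log in</p></html>', 'html.parser')
print(should_stop(login_soup))  # True: no 'Next' link found
```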
Best Answer
It redirects to a login page; add a User-Agent header to your request:

heads = {"User-Agent" : 'Mozilla/5.0......'}

for page_number in count(start = 1):
    .....
    response = requests.get(url, headers=heads)

Better still, use requests.Session() to create a persistent session (cookies).
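A minimal sketch of the Session approach the answer suggests; the User-Agent value here is a placeholder, not a verified working string:

```python
import requests

# A persistent Session re-sends cookies collected from earlier responses
# and carries shared default headers, which helps avoid being redirected
# to the login page partway through the crawl.
session = requests.Session()
session.headers.update({
    # Placeholder UA string; substitute a real browser User-Agent.
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
})

# Every request made through the session now carries the header and any
# cookies set so far, e.g.:
#   response = session.get('https://wadsfred.aliexpress.com/store/425826/search/1.html')
```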
About Python - iterating over pages with BeautifulSoup, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53649559/