I'm trying to loop through multiple pages, but my code isn't extracting anything. I'm fairly new to scraping, so please bear with me. I made a container so I can target each listing, and I also created a variable to target the anchor tag you press to go to the next page. I'd really appreciate any help I can get. Thanks.
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

for page in range(0, 25):
    file = "breakfeast_chicago.csv"
    f = open(file, "w")
    Headers = "business_name, business_address, business_city, business_region, business_phone_number\n"
    f.write(Headers)
    my_url = 'https://www.yellowpages.com/search?search_terms=Stores&geo_location_terms=Chicago%2C%20IL&page={}'.format(page)
    uClient = uReq(my_url)
    page_html = uClient.read()
    uClient.close()

    # html parsing
    page_soup = soup(page_html, "html.parser")

    # grabs each listing
    containers = page_soup.findAll("div", {"class": "result"})
    new = page_soup.findAll("a", {"class": "next ajax-page"})
    for i in new:
        try:
            for container in containers:
                b_name = i.find("container.h2.span.text").get_text()
                b_addr = i.find("container.p.span.text").get_text()
                city_container = container.findAll("span", {"class": "locality"})
                b_city = i.find("city_container[0].text ").get_text()
                region_container = container.findAll("span", {"itemprop": "postalCode"})
                b_reg = i.find("region_container[0].text").get_text()
                phone_container = container.findAll("div", {"itemprop": "telephone"})
                b_phone = i.find("phone_container[0].text").get_text()
                print(b_name, b_addr, b_city, b_reg, b_phone)
                f.write(b_name + "," + b_addr + "," + b_city.replace(",", "|") + "," + b_reg + "," + b_phone + "\n")
        except AttributeError:
            pass
    f.close()
Best Answer
If you are using BS4, try find_all.
Try import pdb; pdb.set_trace() to drop into the debugger and inspect what is actually being selected inside the for loop.
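A debugger session would quickly reveal why nothing is extracted: find() interprets its first argument as a tag name, so dotted strings from the question such as "container.h2.span.text" never match any tag and return None. A minimal reproduction against a made-up listing (the real Yellow Pages markup is an assumption here):

```python
from bs4 import BeautifulSoup

# Hypothetical single listing; the real page structure may differ.
html = '<div class="result"><h2><span>Acme Diner</span></h2></div>'
container = BeautifulSoup(html, "html.parser").find("div", {"class": "result"})

# find() treats the string as a tag *name*: there is no
# <container.h2.span.text> element, so this returns None,
# and calling .get_text() on it raises AttributeError.
print(container.find("container.h2.span.text"))  # None

# Navigating tag attributes on the container object works instead.
print(container.h2.span.get_text())  # Acme Diner
```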
Also, some content may be hidden if it is loaded via JavaScript.
Each "clickable" anchor tag or href is just another network request; if you intend to follow that link, consider slowing down the rate of requests between pages so you don't get blocked.
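Putting those points together, the inner loop could call the tag-navigation methods on each container object directly instead of passing dotted strings to find(). A minimal sketch against a made-up listing snippet (the class names and markup mimic the question's selectors but are assumptions about the real page):

```python
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking one "result" listing on the search page.
sample_html = """
<div class="result">
  <h2><span>Acme Diner</span></h2>
  <p><span>123 W Madison St</span></p>
  <span class="locality">Chicago</span>
  <span itemprop="postalCode">60602</span>
  <div itemprop="telephone">(312) 555-0100</div>
</div>
"""

page_soup = BeautifulSoup(sample_html, "html.parser")

rows = []
for container in page_soup.find_all("div", {"class": "result"}):
    # Navigate tags on the container itself rather than calling
    # i.find("container.h2.span.text"), which always returns None.
    b_name = container.h2.span.get_text(strip=True)
    b_addr = container.p.span.get_text(strip=True)
    b_city = container.find("span", {"class": "locality"}).get_text(strip=True)
    b_reg = container.find("span", {"itemprop": "postalCode"}).get_text(strip=True)
    b_phone = container.find("div", {"itemprop": "telephone"}).get_text(strip=True)
    rows.append((b_name, b_addr, b_city, b_reg, b_phone))

print(rows[0])
```

In a real crawl you would fetch each page URL in the outer loop as in the question, add a time.sleep() between page requests to avoid being blocked, and open the CSV file once before the loop (opening it with "w" inside the loop overwrites it on every page).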
About python - scraping multiple pages with beautifulsoup4 using python 3.6.3: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/47521063/