I'm trying to scrape the heading, summary, date, and link from http://www.indiainfoline.com/top-news — one record for each div with class='row'.
import urllib2
from bs4 import BeautifulSoup

link = 'http://www.indiainfoline.com/top-news'
redditFile = urllib2.urlopen(link)
redditHtml = redditFile.read()
redditFile.close()
soup = BeautifulSoup(redditHtml, "lxml")
productDivs = soup.findAll('div', attrs={'class': 'row'})
for div in productDivs:
    result = {}
    try:
        import pdb
        #pdb.set_trace()
        heading = div.find('p', attrs={'class': 'heading fs20e robo_slab mb10'}).get_text()
        title = heading.get_text()
        article_link = "http://www.indiainfoline.com" + heading.find('a')['href']
        summary = div.find('p')
    except Exception:
        pass
But none of the components are being fetched. Any suggestions on how to fix this?
Best answer
Looking at the HTML source, there are many elements with class=row, so you need to filter down to the section block where the actual row data lives. In your case, all 16 expected rows sit under id="search-list". So first extract that section, then extract the rows from it. Since .select returns a list, we have to index with [0] to get the element itself. Once you have the row data, iterate over it and extract the heading, article URL, summary, etc.
import urllib2
from bs4 import BeautifulSoup

link = 'http://www.indiainfoline.com/top-news'
redditFile = urllib2.urlopen(link)
redditHtml = redditFile.read()
redditFile.close()
soup = BeautifulSoup(redditHtml, "lxml")
section = soup.select('#search-list')       # .select returns a list
rowdata = section[0].select('.row')
for row in rowdata[1:]:                     # skip the first (header) row
    heading = row.select('.heading.fs20e.robo_slab.mb10')[0].text
    title = 'http://www.indiainfoline.com' + row.select('a')[0]['href']
    summary = row.select('p')[0].text
    print(heading)
    print(title)
    print(summary)
Output:
PFC board to consider bonus issue; stock surges by 4%
http://www.indiainfoline.com/article/news-top-story/pfc-pfc-board-to-consider-bonus-issue-stock-surges-by-4-117080300814_1.html
PFC board to consider bonus issue; stock surges by 4%
...
...
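To see the selection pattern in isolation, here is a minimal, self-contained sketch that runs the same `#search-list` → `.row` logic against an inline HTML snippet. The snippet's markup is an assumption modeled on the class names from the question (the live page's structure may differ), so it only illustrates the technique, not the real site:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking the structure the answer describes:
# a container with id="search-list", a header row, then data rows.
html = """
<div id="search-list">
  <div class="row">header row</div>
  <div class="row">
    <p class="heading fs20e robo_slab mb10">Sample headline</p>
    <a href="/article/sample.html">read more</a>
    <p>Sample summary text.</p>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
section = soup.select('#search-list')        # .select returns a list
rows = section[0].select('.row')
for row in rows[1:]:                         # skip the header row
    heading = row.select('.heading.fs20e.robo_slab.mb10')[0].text
    url = 'http://www.indiainfoline.com' + row.select('a')[0]['href']
    summary = row.select('p')[-1].text       # last <p> here; [0] would be the heading
    print(heading)
    print(url)
    print(summary)
```

Note that scoping the search to `section[0]` is what avoids the question's original problem of matching every unrelated class=row on the page.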
On "python - scraping page content from a div with beautifulsoup", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45481121/