For learning purposes, I'm trying to download all the post images from a BuzzFeed article.
Here is my code:
import lxml.html
import string
import random
import requests

url = 'http://www.buzzfeed.com/mjs538/messages-from-creationists-to-people-who-believe-in-evolutio?bftw'
headers = {
    'User-Agent': 'Mozilla/5.0',
    'From': 'admin@jhvisser.com'
}

page = requests.get(url)
tree = lxml.html.fromstring(page.content)
#print(soup.prettify()).encode('ascii', 'ignore')
images = tree.cssselect("div.sub_buzz_content img")

def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for x in range(size))

for image in images:
    with open(id_generator() + '.jpg', 'wb') as handle:
        request = requests.get(image.attrib['src'], headers=headers, stream=True)
        for block in request.iter_content(1024):
            if not block:
                break
            handle.write(block)
Every image retrieved is 110 bytes in size, and opening them shows only a blank image. Is something wrong in my code here that is causing this? I don't have to use requests if there is a simpler way.
Best Answer
If you look closely at the source of the page you are scraping, you'll see that the image URLs you want are not in the src attribute of the img tags, but are specified in the rel:bf_image_src attribute.
Changing image.attrib['src'] to image.attrib['rel:bf_image_src'] should solve your problem.
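To see why the fix works, here is a minimal, self-contained sketch using the asker's own lxml stack. The HTML snippet is an assumed imitation of BuzzFeed's 2014 markup (not the live page), and XPath is used instead of cssselect to avoid the optional dependency mentioned below:

```python
import lxml.html

# Assumed markup mimicking the page: src holds a tiny placeholder,
# while the real JPEG URL sits in rel:bf_image_src.
html = '''
<div class="sub_buzz_content">
  <img src="http://example.com/blank.gif"
       rel:bf_image_src="http://example.com/photo.jpg">
</div>
'''

tree = lxml.html.fromstring(html)
# Equivalent of cssselect("div.sub_buzz_content img"), without cssselect.
imgs = tree.xpath('//div[@class="sub_buzz_content"]//img')
for img in imgs:
    print(img.attrib['src'])               # the blank placeholder
    print(img.attrib['rel:bf_image_src'])  # the actual image URL
```

Downloading attrib['src'] fetches the placeholder, which explains the identical 110-byte files.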
I couldn't reproduce your code (it claims cssselect is not installed), but this code using BeautifulSoup and urllib2 runs fine on my machine and downloads all 22 images.
from itertools import count
from bs4 import BeautifulSoup
import urllib2
from time import sleep

url = 'http://www.buzzfeed.com/mjs538/messages-from-creationists-to-people-who-believe-in-evolutio?bftw'
headers = {
    'User-Agent': 'Non-commercical crawler, Steinar Lima. Contact: https://stackoverflow.com/questions/21616904/images-downloaded-are-blank-images-instead-of-actual-images'
}

r = urllib2.Request(url, headers=headers)
soup = BeautifulSoup(urllib2.urlopen(r))
c = count()

for div in soup.find_all('div', id='buzz_sub_buzz'):
    for img in div.find_all('img'):
        print img['rel:bf_image_src']
        with open('images/{}.jpg'.format(next(c)), 'wb') as img_out:
            req = urllib2.Request(img['rel:bf_image_src'], headers=headers)
            img_out.write(urllib2.urlopen(req).read())
            sleep(5)
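Since urllib2 is Python 2 only, here is a hedged Python 3 sketch of the same approach using requests and BeautifulSoup. The extract_image_urls helper is illustrative (not from the original answer), it assumes the same rel:bf_image_src markup, and the live page may well have changed since 2014:

```python
import os
import requests
from bs4 import BeautifulSoup

def extract_image_urls(html_text):
    # Pull the real image URLs out of the rel:bf_image_src attributes,
    # mirroring the selectors used in the answer above.
    soup = BeautifulSoup(html_text, 'html.parser')
    return [img['rel:bf_image_src']
            for div in soup.find_all('div', id='buzz_sub_buzz')
            for img in div.find_all('img')]

def download_all(url, headers, out_dir='images'):
    # Fetch the article, then save each image under a counter-based name.
    os.makedirs(out_dir, exist_ok=True)
    page = requests.get(url, headers=headers)
    for i, src in enumerate(extract_image_urls(page.text)):
        img = requests.get(src, headers=headers)
        with open(os.path.join(out_dir, '{}.jpg'.format(i)), 'wb') as out:
            out.write(img.content)
```

Splitting parsing from downloading also makes the URL-extraction step easy to test without hitting the network.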
About "python - Downloaded images are blank images instead of actual images": a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/21616904/