I have a program that fetches content from URLs stored in a database. I am using BeautifulSoup and urllib2 to scrape the content. When I print the results, I see that the program crashes when it hits (what looks like) a 403 error. How can I keep the program from crashing on 403/404 and similar errors?
Relevant output:
Traceback (most recent call last):
  File "web_content.py", line 29, in <module>
    grab_text(row)
  File "web_content.py", line 21, in grab_text
    f = urllib2.urlopen(row)
  File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 400, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 438, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 521, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
Best answer
You can wrap the request in a try/except, for example (note it is urllib2.urlopen, as in your traceback, not openurl):

    try:
        urllib2.urlopen(url)
    except urllib2.HTTPError, e:
        print e
See http://www.voidspace.org.uk/python/articles/urllib2.shtml#handling-exceptions for some good examples and information.
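To make the whole loop over database rows robust, the try/except belongs around the fetch for each URL, so one bad row is skipped instead of killing the run. A minimal sketch of that idea follows; grab_text here is a hypothetical stand-in for the function in web_content.py, and since urllib2 exists only in Python 2 (it became urllib.request / urllib.error in Python 3), the sketch uses the Python 3 names:

```python
import urllib.error
import urllib.request


def grab_text(url):
    """Return the page body as bytes, or None if the URL cannot be fetched."""
    try:
        with urllib.request.urlopen(url, timeout=10) as f:
            return f.read()
    except urllib.error.HTTPError as e:
        # The server answered with an error status (403, 404, ...): skip this row.
        print("skipping %s: HTTP %s" % (url, e.code))
        return None
    except urllib.error.URLError as e:
        # No answer at all (DNS failure, connection refused, ...): skip as well.
        print("skipping %s: %s" % (url, e.reason))
        return None
```

With this shape, the caller can simply loop `for row in rows: text = grab_text(row)` and test `text` for None, and a 403 on one URL no longer stops the others.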
Regarding python, urllib2, crashing on 404 errors: we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10117885/