python - How to catch requests.get() exceptions

Tags: python exception-handling web-scraping python-requests yellow-pages

I'm developing a web scraper for yellowpages.com, and overall it seems to work well. However, while iterating through the pagination of a long query, requests.get(url) will randomly return <Response [503]> or <Response [404]>. Occasionally I also get worse exceptions, such as:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.yellowpages.com', port=80): Max retries exceeded with url: /search?search_terms=florists&geo_location_terms=FL&page=22 (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10053] An established connection was aborted by the software in your host machine',))

Using time.sleep() seems to eliminate the 503 errors, but the 404s and the exceptions remain a problem.

I'm trying to figure out how to "catch" the various responses so that I can make a change (wait, change proxy, change user agent), then retry and/or move on. Pseudo code something like this:

If error/exception with requests.get:
    wait and/or change proxy and user agent
    retry request.get
else:
    pass
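
For illustration, here is a minimal sketch of how that pseudo code could be turned into a retry helper. The get_with_retry name, the retry count, the wait time, and the rotating user-agent list are only assumptions for the example, not part of the original code:

import time
import requests

# Example user agents to rotate through on retries (assumed values).
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]

def get_with_retry(url, max_retries=3, wait_seconds=5):
    # Try the request up to max_retries times, waiting and switching
    # the User-Agent header between attempts.
    for attempt in range(max_retries):
        headers = {'User-Agent': USER_AGENTS[attempt % len(USER_AGENTS)]}
        try:
            r = requests.get(url, headers=headers, timeout=10)
        except requests.exceptions.RequestException as e:
            print('Request failed ({}), retrying...'.format(e))
        else:
            if r.status_code == 200:
                return r
            print('Got status {}, retrying...'.format(r.status_code))
        time.sleep(wait_seconds)
    return None  # caller decides whether to skip the page or stop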

At this point, I can't even catch the issue with the following:

try:
    r = requests.get(url)
except requests.exceptions.RequestException as e:
    print (e)
    import sys #only added here, because it's not part of my stable code below
    sys.exit()
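
(Note: a 404 or 503 is still a valid HTTP response, so requests does not raise an exception for it by default; only network-level failures such as the ConnectionError above do. Calling raise_for_status() on the response turns 4xx/5xx responses into requests.exceptions.HTTPError, which the except clause then catches. A small self-contained illustration, using the URL from the error message above:)

import requests

try:
    r = requests.get('http://www.yellowpages.com/search?search_terms=florists&geo_location_terms=FL&page=22')
    r.raise_for_status()  # raises requests.exceptions.HTTPError for any 4xx/5xx response
except requests.exceptions.RequestException as e:
    print(e)  # HTTPError is a subclass of RequestException, so it is caught here as well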

The full code I'm starting from is on GitHub and below:

import requests
from bs4 import BeautifulSoup
import itertools
import csv

# Search criteria
search_terms = ["florists", "pharmacies"]
search_locations = ['CA', 'FL']

# Structure for Data
answer_list = []
csv_columns = ['Name', 'Phone Number', 'Street Address', 'City', 'State', 'Zip Code']


# Turns list of lists into csv file
def write_to_csv(csv_file, csv_columns, answer_list):
    with open(csv_file, 'w') as csvfile:
        writer = csv.writer(csvfile, lineterminator='\n')
        writer.writerow(csv_columns)
        writer.writerows(answer_list)


# Creates url from search criteria and current page
def url(search_term, location, page_number):
    template = 'http://www.yellowpages.com/search?search_terms={search_term}&geo_location_terms={location}&page={page_number}'
    return template.format(search_term=search_term, location=location, page_number=page_number)


# Finds all the contact information for a record
def find_contact_info(record):
    holder_list = []
    name = record.find(attrs={'class': 'business-name'})
    holder_list.append(name.text if name is not None else "")
    phone_number = record.find(attrs={'class': 'phones phone primary'})
    holder_list.append(phone_number.text if phone_number is not None else "")
    street_address = record.find(attrs={'class': 'street-address'})
    holder_list.append(street_address.text if street_address is not None else "")
    city = record.find(attrs={'class': 'locality'})
    holder_list.append(city.text if city is not None else "")
    state = record.find(attrs={'itemprop': 'addressRegion'})
    holder_list.append(state.text if state is not None else "")
    zip_code = record.find(attrs={'itemprop': 'postalCode'})
    holder_list.append(zip_code.text if zip_code is not None else "")
    return holder_list


# Main program
def main():
    for search_term, search_location in itertools.product(search_terms, search_locations):
        i = 0
        while True:
            i += 1
            page_url = url(search_term, search_location, i)  # distinct name so the url() helper isn't shadowed
            r = requests.get(page_url)
            soup = BeautifulSoup(r.text, "html.parser")
            results = soup.find(attrs={'class': 'search-results organic'})
            page_nav = soup.find(attrs={'class': 'pagination'})
            records = results.find_all(attrs={'class': 'info'})
            for record in records:
                answer_list.append(find_contact_info(record))
            if not page_nav.find(attrs={'class': 'next ajax-page'}):
                csv_file = "YP_" + search_term + "_" + search_location + ".csv"
                write_to_csv(csv_file, csv_columns, answer_list)  # output data to csv file
                break

if __name__ == '__main__':
    main()

Thanks in advance for taking the time to read/respond to this long post :)

Best Answer

I've been doing something similar, and this works for me (mostly):

# For handling the requests to the webpages
import requests
from requests_negotiate_sspi import HttpNegotiateAuth


# Test results, 1 record per URL to test
w = open(r'C:\Temp\URL_Test_Results.txt', 'w')

# For errors only
err = open(r'C:\Temp\URL_Test_Error_Log.txt', 'w')

print('Starting process')

def test_url(url):
    # Test the URL and write the results out to the log files.

    # Had to disable the warnings, by turning off the verify option, a warning is generated as the
    # website certificates are not checked, so results could be "bad". The main site throws errors
    # into the log for each test if we don't turn it off though.
    requests.packages.urllib3.disable_warnings()
    headers={'User-Agent': 'Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'}
    print('Testing ' + url)
    # Try the website link, check for errors.
    try:
        response = requests.get(url, auth=HttpNegotiateAuth(), verify=False, headers=headers, timeout=5)
    except requests.exceptions.HTTPError as e:
        print('HTTP Error')
        print(e)
        w.write('HTTP Error, check error log' + '\n')
        err.write('HTTP Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.ConnectionError as e:
        # some external sites come through this, even though the links work through the browser
        # I suspect that there's some blocking in place to prevent scraping...
        # I could probably work around this somehow.
        print('Connection error')
        print(e)
        w.write('Connection error, check error log' + '\n')
        err.write(str('Connection Error') + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.RequestException as e:
        # Any other error types
        print('Other error')
        print(e)
        w.write('Unknown Error' + '\n')
        err.write('Unknown Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    else:
        # Note that a 404 is still 'successful' as we got a valid response back, so it comes through here,
        # not one of the exceptions above.
        print(response.status_code)
        w.write(str(response.status_code) + '\n')
        print('Success! Response code:', response.status_code)
    print('========================')

test_url('https://stackoverflow.com/')
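
As the comment in the else branch notes, a 404 or 503 still comes back as a normal response object and never reaches the except clauses, so the status code has to be checked explicitly if you want to react to it. A minimal standalone illustration (the retry/skip handling is only a placeholder):

import requests

response = requests.get('https://stackoverflow.com/', timeout=5)
if response.status_code in (404, 503):
    # placeholder: wait, rotate the proxy/user agent and retry, or skip this URL
    print('Bad status:', response.status_code)
else:
    print('Success! Response code:', response.status_code)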

I still have problems with certain sites timing out; you can follow my attempts to work around that here: 2 Valid URLs, requests.get() fails on 1 but not the other. Why?
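
As a side note, requests can also retry transient failures automatically if a urllib3 Retry policy is mounted on a Session via HTTPAdapter; the attempt count, backoff factor, and status list below are arbitrary example values:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry_policy = Retry(
    total=5,                                 # give up after 5 attempts
    backoff_factor=1,                        # exponential back-off between attempts
    status_forcelist=[500, 502, 503, 504],   # also retry on these status codes
)
session.mount('http://', HTTPAdapter(max_retries=retry_policy))
session.mount('https://', HTTPAdapter(max_retries=retry_policy))

# Subsequent session.get() calls retry connection errors and the listed
# status codes before finally raising or returning a response.
r = session.get('http://www.yellowpages.com/search?search_terms=florists&geo_location_terms=FL&page=22', timeout=10)
print(r.status_code)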

Regarding python - How to catch requests.get() exceptions, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38857883/
