python - phantomjs raises OSError: [Errno 9] Bad file descriptor

Tags: python selenium scrapy phantomjs

When I use phantomjs in a Scrapy middleware, it sometimes raises:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/home/ttc/ruyi-scrapy/saibolan/saibolan/hz_webdriver_middleware.py", line 47, in process_request
    driver.quit()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/webdriver.py", line 76, in quit
    self.service.stop()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 149, in stop
    self.send_remote_shutdown_command()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/service.py", line 67, in send_remote_shutdown_command
    os.close(self._cookie_temp_file_handle)
OSError: [Errno 9] Bad file descriptor

It does not happen on every request: across 80 pages crawled, the error appeared about 30 times, and it is always raised inside this phantomjs middleware:

import time

from scrapy.exceptions import IgnoreRequest
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import WebDriverWait
# from pyvirtualdisplay import Display


class HZPhantomjsMiddleware(object):

    def __init__(self, settings):
        self.phantomjs_driver_path = settings.get('PHANTOMJS_DRIVER_PATH')
        self.cloud_mode = settings.get('CLOUD_MODE')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_request(self, request, spider):
        # Production needs a virtual display; can be left commented out for local debugging
        # if self.cloud_mode:
        #     display = Display(visible=0, size=(800, 600))
        #     display.start()
        # Spoof a desktop Chrome user agent for PhantomJS
        dcap = dict(DesiredCapabilities.PHANTOMJS)
        dcap["phantomjs.page.settings.userAgent"] = (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36")
        driver = webdriver.PhantomJS(
            self.phantomjs_driver_path, desired_capabilities=dcap)
        # chrome_options = webdriver.ChromeOptions()
        # prefs = {"profile.managed_default_content_settings.images": 2}
        # chrome_options.add_experimental_option("prefs", prefs)
        # driver = webdriver.Chrome(self.chrome_driver_path, chrome_options=chrome_options)
        driver.get(request.url)
        try:
            # Wait until one of the expected content containers is present
            element = WebDriverWait(driver, 15).until(
                ec.presence_of_element_located(
                    (By.XPATH, '//div[@class="txt-box"]|//h4[@class="weui_media_title"]|//div[@class="rich_media_content "]'))
            )
            body = driver.page_source
            time.sleep(1)
            driver.quit()
            return HtmlResponse(request.url, body=body, encoding='utf-8', request=request)
        except:
            driver.quit()
            spider.logger.error('Ignore request, url: {}'.format(request.url))
            raise IgnoreRequest()

I don't know what is causing this error.
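Since the traceback shows the failure inside selenium's own temp-file cleanup during quit(), one possible mitigation is to treat that specific EBADF as non-fatal. This is only a minimal sketch under that assumption; the quit_driver helper below is hypothetical, not part of selenium's API:

import errno


def quit_driver(driver):
    # Hypothetical helper: quit the PhantomJS driver but ignore the
    # "Bad file descriptor" raised while selenium closes its cookie temp file.
    try:
        driver.quit()
    except OSError as exc:
        if exc.errno != errno.EBADF:
            raise

The middleware above would then call quit_driver(driver) in place of driver.quit().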

Best Answer

As of July 2016, driver.close() and driver.quit() were not sufficient for me. That killed the node process but not the phantomjs child process it had spawned.

Following the discussion on this GitHub issue, the only solution that worked for me was to run:

import signal

driver.service.process.send_signal(signal.SIGTERM) # kill the specific phantomjs child proc
driver.quit()                                      # quit the node proc
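Folded into the middleware from the question, this could look roughly like the sketch below; the shutdown_driver helper name is illustrative, not part of selenium:

import signal


def shutdown_driver(driver):
    # Illustrative helper: terminate the phantomjs child process first,
    # then quit the driver, so no orphaned phantomjs processes remain.
    try:
        driver.service.process.send_signal(signal.SIGTERM)
    finally:
        driver.quit()

process_request would then call shutdown_driver(driver) wherever it currently calls driver.quit().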

Regarding "python - phantomjs raises OSError: [Errno 9] Bad file descriptor", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/44995042/
