python - Scrapy Authenticated Spider getting Internal Server Error

Tags: python python-2.7 authentication scrapy internal-server-error

I am trying to build an authenticated spider. I have gone through almost every post here related to Scrapy authenticated spiders, but I couldn't find any answer to my problem. I used the following code:

import scrapy

from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import FormRequest, Request
import logging
from PWC.items import PwcItem


class PwcmoneySpider(scrapy.Spider):
    name = "PWCMoney"
    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid',
    )

    def parse(self, response):
        return [scrapy.FormRequest("https://www.pwcmoneytree.com/Account/Login",
                                   formdata={'UserName': 'user', 'Password': 'pswd'},
                                   callback=self.after_login)]

    def after_login(self, response):
        if "authentication failed" in response.body:
            self.log("Login failed", level=logging.ERROR)
            return
        # We've successfully authenticated, let's have some fun!
        print("Login Successful!!")
        return Request(url="https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid",
                       callback=self.parse_tastypage)


    def parse_tastypage(self, response):
        for sel in response.xpath('//div[@id="MainDivParallel"]'):
            item = PwcItem()
            item['name'] = sel.xpath('div[@id="CompDiv"]/h2/text()').extract()
            item['location'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@class="infoSlot"]/div/a/text()').extract()
            item['region'] = sel.xpath('div[@id="CompDiv"]/div[@id="infoPane"]/div[@id="contactInfoDiv"]/div[1]/a[2]/text()').extract()
            yield item

I got the following output:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-29 11:37:35 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-29 11:37:35 [scrapy] INFO: Optional features available: ssl, http11
2016-04-29 11:37:35 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-29 11:37:35 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-29 11:37:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-29 11:37:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-29 11:37:36 [scrapy] INFO: Enabled item pipelines:
2016-04-29 11:37:36 [scrapy] INFO: Spider opened
2016-04-29 11:37:36 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-29 11:37:36 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-29 11:37:37 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 1 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 2 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Gave up retrying <POST https://www.pwcmoneytree.com/Account/Login> (failed 3 times): 500 Internal Server Error
2016-04-29 11:37:38 [scrapy] DEBUG: Crawled (500) <POST https://www.pwcmoneytree.com/Account/Login> (referer: None)
2016-04-29 11:37:38 [scrapy] DEBUG: Ignoring response <500 https://www.pwcmoneytree.com/Account/Login>: HTTP status code is not handled or not allowed
2016-04-29 11:37:38 [scrapy] INFO: Closing spider (finished)
2016-04-29 11:37:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 954,
 'downloader/request_count': 3,
 'downloader/request_method_count/POST': 3,
 'downloader/response_bytes': 30177,
 'downloader/response_count': 3,
 'downloader/response_status_count/500': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 4, 29, 6, 7, 38, 674000),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2016, 4, 29, 6, 7, 36, 193000)}
2016-04-29 11:37:38 [scrapy] INFO: Spider closed (finished)

Since I am new to Python and Scrapy, I can't seem to make sense of the error. I hope someone here can help me.

So, I modified the code as per Rejected's suggestion, showing only the modified part:

    allowed_domains = ["pwcmoneytree.com"]
    start_urls = (
        'https://www.pwcmoneytree.com/Account/Login',
    )

    def start_requests(self):
        return [scrapy.FormRequest.from_response("https://www.pwcmoneytree.com/Account/Login",
                                   formdata={'UserName': 'user', 'Password': 'pswd'},
                                   callback=self.logged_in)]

And got the following error:

C:\Python27\PWC>scrapy crawl PWCMoney -o test.csv
2016-04-30 11:04:47 [scrapy] INFO: Scrapy 1.0.5 started (bot: PWC)
2016-04-30 11:04:47 [scrapy] INFO: Optional features available: ssl, http11
2016-04-30 11:04:47 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'PWC.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['PWC.spiders'], 'FEED_URI': 'test.csv', 'BOT_NAME': 'PWC'}
2016-04-30 11:04:50 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-30 11:04:54 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-30 11:04:54 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-30 11:04:54 [scrapy] INFO: Enabled item pipelines:
2016-04-30 11:04:54 [scrapy] INFO: Enabled item pipelines:
Unhandled error in Deferred:
2016-04-30 11:04:54 [twisted] CRITICAL: Unhandled error in Deferred:


Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_comm
and
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy\commands\crawl.py", line 57, in run

    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1274, in
unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1128, in
_inlineCallbacks
    result = g.send(result)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 72, in crawl
    start_requests = iter(self.spider.start_requests())
  File "C:\Python27\PWC\PWC\spiders\PWCMoney.py", line 16, in start_requests
    callback=self.logged_in)]
  File "c:\python27\lib\site-packages\scrapy\http\request\form.py", line 36, in
from_response
    kwargs.setdefault('encoding', response.encoding)
exceptions.AttributeError: 'str' object has no attribute 'encoding'
2016-04-30 11:04:54 [twisted] CRITICAL:

Best Answer

As your error log shows, it is the POST request to https://www.pwcmoneytree.com/Account/Login that is giving you the 500 error.

I tried making the same POST request manually using Postman. It returned a 500 error code and an HTML page containing this error message:

The required anti-forgery cookie "__RequestVerificationToken" is not present.
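You can reproduce the same bare POST in a few lines of Python; a sketch assuming the requests library, sending the credentials without visiting the login page first (so, like the spider's first request, it carries no cookies):

import requests

# POST the credentials directly, without a prior GET of the login page,
# so no anti-forgery cookie accompanies the request.
r = requests.post(
    "https://www.pwcmoneytree.com/Account/Login",
    data={"UserName": "user", "Password": "pswd"},
)
print(r.status_code)  # expect 500
# The returned HTML mentions the missing __RequestVerificationToken cookie.
print("__RequestVerificationToken" in r.text)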

This is a mechanism many APIs and websites use to protect against CSRF attacks. If you still want to scrape the site, you have to visit the login form first and obtain the correct cookie before logging in.
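A minimal sketch of that flow, using Scrapy's FormRequest.from_response() to submit the form's hidden fields (including the verification token) automatically. The UserName/Password field names are taken from the question; whether from_response() picks up the right form on this particular page is untested:

import scrapy


class PwcmoneySpider(scrapy.Spider):
    name = "PWCMoney"
    allowed_domains = ["pwcmoneytree.com"]
    # GET the login page first, so the anti-forgery cookie is set and the
    # hidden __RequestVerificationToken field is present in the form.
    start_urls = ['https://www.pwcmoneytree.com/Account/Login']

    def parse(self, response):
        # from_response() takes the Response object itself, not a URL string;
        # it submits every field of the matched form and only overrides the
        # ones given in formdata.
        yield scrapy.FormRequest.from_response(
            response,
            formdata={'UserName': 'user', 'Password': 'pswd'},
            callback=self.after_login,
        )

    def after_login(self, response):
        # response.body is a str under Python 2.7, as in the question.
        if "authentication failed" in response.body:
            self.log("Login failed")
            return
        yield scrapy.Request(
            url="https://www.pwcmoneytree.com/SingleEntry/singleComp?compName=Addicaid",
            callback=self.parse_tastypage,
        )

Passing the Response object to from_response() also accounts for the traceback in the edited code: form.py tried to read response.encoding on the URL string that was passed in, hence the 'str' object has no attribute 'encoding' error.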

Regarding "python - Scrapy Authenticated Spider getting Internal Server Error", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/36930865/
