python-2.7 - Scrapy not entering the parse function

Tags: python-2.7 web-scraping web-crawler scrapy

I am running the spider below, but it never enters the parse method and I can't figure out why. Could somebody please help?

My code is as follows:

    from scrapy.item import Item, Field
    from scrapy.selector import Selector
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector


    class MyItem(Item):
        reviewer_ranking = Field()
        print "asdadsa"


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]
        start_urls = ["http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp"]
        print"sadasds"
        def parse(self, response):
            print"fggfggftgtr"
            sel = Selector(response)
            hxs = HtmlXPathSelector(response)
            item = MyItem()
            item["reviewer_ranking"] = hxs.select('//span[@class="a-size-small a-color-secondary"]/text()').extract()
            return item

The output I get is:

    $ scrapy runspider crawler_reviewers_data.py
    asdadsa
    sadasds
    /home/raj/Documents/IIM A/Daily sales rank/Daily      reviews/Reviews_scripts/Scripts_review/Reviews/Reviewer/crawler_reviewers_data.py:18:     ScrapyDeprecationWarning: crawler_reviewers_data.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
    class MySpider(BaseSpider):
    2014-06-24 19:21:35+0530 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
    2014-06-24 19:21:35+0530 [scrapy] INFO: Optional features available: ssl, http11
    2014-06-24 19:21:35+0530 [scrapy] INFO: Overridden settings: {}
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled item pipelines: 
    2014-06-24 19:21:35+0530 [myspider] INFO: Spider opened
    2014-06-24 19:21:35+0530 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6027
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6084
    2014-06-24 19:21:36+0530 [myspider] DEBUG: Crawled (403) <GET     http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp> (referer: None) ['partial']
    2014-06-24 19:21:36+0530 [myspider] INFO: Closing spider (finished)
    2014-06-24 19:21:36+0530 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 259,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 28487,
     'downloader/response_count': 1,
     'downloader/response_status_count/403': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 6, 24, 13, 51, 36, 631236),
     'log_count/DEBUG': 3,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2014, 6, 24, 13, 51, 35, 472849)}
    2014-06-24 19:21:36+0530 [myspider] INFO: Spider closed (finished)

Please help - I am stuck at this point.

Best Answer

This is an anti-web-crawling technique used by Amazon - you are getting 403 - Forbidden because the site requires a User-Agent header to be sent along with the request.

One option is to add it manually to the Request yielded from start_requests():

from scrapy.http import Request


class MySpider(BaseSpider):
    name = 'myspider'
    allowed_domains = ["amazon.com"]

    def start_requests(self):
        yield Request("https://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp",
                      headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"})

    ...

Another option is to set the DEFAULT_REQUEST_HEADERS setting project-wide.
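That setting lives in your project's settings.py. A minimal sketch (the User-Agent string here is just an example browser UA, and the Accept header is an optional assumption, not something the answer requires):

```python
# settings.py - headers sent with every request by default
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                   "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"),
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
```

With this in place the spider itself needs no start_requests() override - every request the crawler makes carries the browser-like User-Agent.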

Also note that Amazon provides an API with a Python wrapper - consider using it instead.

Hope that helps.

Regarding "python-2.7 - Scrapy not entering the parse function", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/24388550/
