python - Scrapy: parsing a list of urls after login

Tags: python scrapy scrapy-spider

I'm not very familiar with Python, so please bear with me.
I have a Scrapy spider that works as it should, but now I need a new one, and this time it has to crawl inside a logged-in session.
My spider uses a list of urls taken from a sitemap as start_urls; it should first submit a request to the login form and then, once logged in, start parsing my list...

Here is my code so far:

# imports this snippet needs (Scrapy 1.0-era module layout;
# the MyPrices item path is assumed from the project settings)
import os
import logging
from time import strftime, gmtime

from scrapy import Spider, log
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector

from products.items import MyPrices


class StockPricesSpider(Spider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    d = strftime("%Y-%m-%d", gmtime())
    start_urls = ['https://www.example.com/customer/account/login/']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'myuser', 'password': 'mypass'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check login succeed before going on
        if "Invalid login or password." in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        else:
             logging.log(logging.INFO,'Logged in and start parsing')
             return Request("http://www.example.com/", callback=self.parse_products)

    def parse_products(self, response):
        f = open("data/sitemaps/urls04102015.txt")
        start_urls = [url.strip() for url in f.readlines()]
        f.close()
        d = strftime("%Y-%m-%d", gmtime())
        if os.path.exists("data/results/stock_"+d+".csv"):
            os.remove("data/results/stock_"+d+".csv")             

        sel = Selector(response)
        separator = ";"
        items = []

        item = MyPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        if len(sku) > 0:        
            item['sku'] = "med_" + sel.xpath('.//strong[@itemprop="productID"]/text()').extract()[0].strip()
            ...
        items.append(item)         
        return items

So this doesn't work, because I'm not invoking the parser correctly.
Basically, I get no errors, but the urls don't get parsed either.
The login itself works, I'm logged in successfully, but what do I do after that (after login) to make Scrapy parse the url list?

EDIT
I found a new approach to my problem, but it doesn't work properly either. Please help me debug this (or the first approach):
# imports this snippet needs (Scrapy 1.0-era module layout;
# the StockPrices item path is assumed from the project settings)
import os
import logging
from time import strftime, gmtime

from scrapy.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector

from products.items import StockPrices


class StockPricesSpiderX(InitSpider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    login_page = 'https://www.example.com/ro/customer/account/login/' 
    d = strftime("%Y-%m-%d", gmtime())
    f = open("data/sitemaps/urls04102015.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()
    if os.path.exists("data/results/stock_"+d+".csv"):
        os.remove("data/results/stock_"+d+".csv")

    def init_request(self):
        """ Called before crawler starts """
        logging.log(logging.INFO, 'before crawler starts...')
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """ Generate login request """
        logging.log(logging.INFO, 'do login...')
        return FormRequest.from_response(response,
                                         formdata={'name':'myuser','password':'mypass'},
                                         callback=self.check_login_response)

    def check_login_response(self, response):
        """ Check the response returned by login request to see if we are logged in """
        if "Invalid login or password." in response.body:
            logging.log(logging.INFO,'... BAD LOGIN ...')
        else:
            logging.log(logging.INFO, 'GOOD LOGIN... initialize')
            self.initialized()

    def parse_item(self, response):
        sel = Selector(response)
        separator = ";"
        items = []
        item = StockPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        ...
        items.append(item)         
        return items

The execution log shows:
2015-12-03 14:54:16 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2015-12-03 14:54:16 [scrapy] INFO: Optional features available: ssl, http11
2015-12-03 14:54:16 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'products.spiders', 'FEED_URI': 'calinxautomat.csv', 'LOG_LEVEL': 'INFO', 'DUPEFILTER_CLASS': 'scrapy.dupefilter.RFPDupeFilter', 'SPIDER_MODULES': ['products.spiders'], 'DEFAULT_ITEM_CLASS': 'products.items.Subcategories', 'FEED_FORMAT': 'csv'}
2015-12-03 14:54:21 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-03 14:54:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-03 14:54:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-03 14:54:23 [scrapy] INFO: Enabled item pipelines: myWriteToCsv
2015-12-03 14:54:23 [root] INFO: before crawler starts...
2015-12-03 14:54:23 [scrapy] INFO: Spider opened
2015-12-03 14:54:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-03 14:54:25 [root] INFO: do login...
2015-12-03 14:54:26 [scrapy] INFO: Closing spider (finished)
2015-12-03 14:54:26 [scrapy] INFO: Dumping Scrapy stats:

...

So this one doesn't seem to get past the login stage... it's as if the callback never fires after the FormRequest...
What am I doing wrong?

Best Answer

The assignment to start_urls inside parse_products() creates a variable local to that method, not the class attribute you set at the top of the spider. In any case, I don't think assigning to start_urls will do what you want: Scrapy won't notice the new list and go parse it. What you need to do is queue the new urls to be parsed:

for url in f.readlines():
    yield Request(url.strip(), callback=self.parse_products)
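
Put together, a minimal sketch of the corrected first spider could look like the following (Python 2 / Scrapy 1.0-era code to match the log above; method names and the sitemap path are taken from the question, the rest is an outline, not a drop-in replacement):

import logging

from scrapy import Spider
from scrapy.http import Request, FormRequest


class StockPricesSpider(Spider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    start_urls = ['https://www.example.com/customer/account/login/']

    def parse(self, response):
        # Submit the login form found on the start page.
        return FormRequest.from_response(
            response,
            formdata={'username': 'myuser', 'password': 'mypass'},
            callback=self.after_login)

    def after_login(self, response):
        if "Invalid login or password." in response.body:
            logging.error("Login failed")
            return
        # Queue every sitemap url for parsing; the cookies middleware
        # reuses the session cookie from the login on these requests.
        with open("data/sitemaps/urls04102015.txt") as f:
            for url in f:
                yield Request(url.strip(), callback=self.parse_products)

    def parse_products(self, response):
        pass  # extract and yield items here, as in the question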

Update: regarding your update: Scrapy has a url filter, so it won't revisit pages it has already seen. See this; tl;dr: set dont_filter=True on the form request.
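
Applied to the InitSpider version above, that means building the login request roughly like this (same method as in the question, with only the extra keyword added):

    def login(self, response):
        """ Generate login request """
        logging.log(logging.INFO, 'do login...')
        # dont_filter=True keeps the dupefilter from silently dropping
        # this request, so check_login_response actually gets called.
        return FormRequest.from_response(response,
                                         formdata={'name': 'myuser', 'password': 'mypass'},
                                         callback=self.check_login_response,
                                         dont_filter=True)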

This question about python - Scrapy parsing a url list after login is based on a similar question found on Stack Overflow: https://stackoverflow.com/questions/34064847/
