nested - Multiple nested requests with Scrapy

Tags: nested, scrapy

I am trying to scrape some aircraft schedule information from www.flightradar24.com for a research project.

The hierarchy of the JSON file I want to obtain looks like this:

Object ID
 - country
   - link
   - name
   - airports
     - airport0 
       - code_total
       - link
       - lat
       - lon
       - name
       - schedule
          - ...
          - ...
     - airport1 
       - code_total
       - link
       - lat
       - lon
       - name
       - schedule
          - ...
          - ...

Country and Airport are stored using Items; as you can see in the hierarchy above, each CountryItem (link, name attributes) ultimately stores multiple AirportItem instances (code_total, link, lat, lon, name, schedule):

import scrapy

class CountryItem(scrapy.Item):
    name = scrapy.Field()
    link = scrapy.Field()
    airports = scrapy.Field()
    other_url= scrapy.Field()
    last_updated = scrapy.Field(serializer=str)

class AirportItem(scrapy.Item):
    name = scrapy.Field()
    code_little = scrapy.Field()
    code_total = scrapy.Field()
    lat = scrapy.Field()
    lon = scrapy.Field()
    link = scrapy.Field()
    schedule = scrapy.Field()
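
For illustration, the intended nesting simply stores AirportItem instances in the CountryItem's airports field; a minimal sketch with hypothetical values:

country = CountryItem(name='France', link='https://www.flightradar24.com/data/airports/france')
airport = AirportItem(name='Charles de Gaulle', code_little='CDG')
country['airports'] = [airport]  # a CountryItem holds a list of AirportItem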

Here is my Scrapy spider, AirportsSpider, which does this:

import json

import jmespath
import scrapy
from bs4 import BeautifulSoup
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.spiders import Rule

from .items import CountryItem, AirportItem  # adjust to your project's items module

class AirportsSpider(scrapy.Spider):
    name = "airports"
    start_urls = ['https://www.flightradar24.com/data/airports']
    allowed_domains = ['flightradar24.com']

    def clean_html(self, html_text):
        soup = BeautifulSoup(html_text, 'html.parser')
        return soup.get_text()

    # NOTE: `rules` are only honoured by CrawlSpider subclasses; a plain
    # scrapy.Spider ignores them. As written, this would extract links matching
    # 'data/airports/' and parse them with the spider's parse method.
    rules = [
        Rule(LxmlLinkExtractor(allow=('data/airports/',)), callback='parse')
    ]


    def parse(self, response):
        count_country = 0
        countries = []
        for country in response.xpath('//a[@data-country]'):
            if count_country > 5:
                break
            item = CountryItem()
            url = country.xpath('./@href').extract()
            name = country.xpath('./@title').extract()
            item['link'] = url[0]
            item['name'] = name[0]
            count_country += 1
            countries.append(item)
            yield scrapy.Request(url[0], meta={'my_country_item': item}, callback=self.parse_airports)

    def parse_airports(self,response):
        item = response.meta['my_country_item']
        airports = []

        for airport in response.xpath('//a[@data-iata]'):
            url = airport.xpath('./@href').extract()
            iata = airport.xpath('./@data-iata').extract()
            iatabis = airport.xpath('./small/text()').extract()
            name = ''.join(airport.xpath('./text()').extract()).strip()
            lat = airport.xpath("./@data-lat").extract()
            lon = airport.xpath("./@data-lon").extract()

            iAirport = AirportItem()
            iAirport['name'] = self.clean_html(name)
            iAirport['link'] = url[0]
            iAirport['lat'] = lat[0]
            iAirport['lon'] = lon[0]
            iAirport['code_little'] = iata[0]
            iAirport['code_total'] = iatabis[0]

            airports.append(iAirport)

        for airport in airports:
            json_url = 'https://api.flightradar24.com/common/v1/airport.json?code={code}&plugin\[\]=&plugin-setting\[schedule\]\[mode\]=&plugin-setting\[schedule\]\[timestamp\]={timestamp}&page=1&limit=50&token='.format(code=airport['code_little'], timestamp="1484150483")
            yield scrapy.Request(json_url, meta={'airport_item': airport}, callback=self.parse_schedule)

        item['airports'] = airports

        yield {"country" : item}

    def parse_schedule(self,response):

        item = response.request.meta['airport_item']
        jsonload = json.loads(response.body_as_unicode())
        json_expression = jmespath.compile("result.response.airport.pluginData.schedule")
        item['schedule'] = json_expression.search(jsonload)

Explanation:

  • In my first parse(), I issue a request for every country link I find, and the CountryItem created for it travels with the request via meta={'my_country_item': item} (a minimal sketch of this meta mechanism follows this list). Each request calls back self.parse_airports.

  • At the second parsing level, parse_airports, I get the CountryItem back with item = response.meta['my_country_item'], and I create a new item, iAirport = AirportItem(), for each airport I find on this country page. Now I want to get the schedule information for each AirportItem created and stored in the airports list.

  • Still at the second level, in parse_airports, I run a for loop over airports to fetch the schedule information with a new request. Since I want to include this schedule in my AirportItem, I attach the item to that request with meta={'airport_item': airport}. The request's callback is parse_schedule.

  • At the third parsing level, parse_schedule, I inject the schedule information collected by Scrapy into the AirportItem created earlier, retrieved with response.request.meta['airport_item'].
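
For reference, here is a minimal, self-contained sketch of the meta-passing mechanism described above, reduced to two chained callbacks; the URLs and field values are placeholders, not the real site:

import scrapy

class MetaDemoSpider(scrapy.Spider):
    name = "meta_demo"
    start_urls = ['https://example.com/level1']  # placeholder URL

    def parse(self, response):
        item = {'name': 'some country'}  # placeholder value
        # The object travels with the request and comes back in response.meta.
        yield scrapy.Request('https://example.com/level2',
                             meta={'my_item': item},
                             callback=self.parse_level2)

    def parse_level2(self, response):
        item = response.meta['my_item']  # the very same object created in parse()
        item['airports'] = []
        yield item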

But there is a problem in my source code: Scrapy correctly scrapes all of the information (country, airport, schedule), yet my understanding of nested items seems to be wrong. As you can see, the JSON I produce contains country > list of (airport), but not country > list of (airport > schedule).

[Screenshot of the generated JSON output]

My code is on GitHub: https://github.com/IDEES-Rouen/Flight-Scrapping

Best Answer

The problem is that you are forking your item: by your logic you want exactly one item per country, so you cannot yield more than one item at any point after parsing a country. What you want to do instead is stack all of the airports (and later their schedules) into that one item.
To do that, you have to create a parsing loop:

# These two methods replace parse_airports/parse_schedule in the spider above
# (Request here is scrapy.Request).
def parse_airports(self, response):
    item = response.meta['my_country_item']
    item['airports'] = []

    for airport in response.xpath('//a[@data-iata]'):
        url = airport.xpath('./@href').extract()
        iata = airport.xpath('./@data-iata').extract()
        iatabis = airport.xpath('./small/text()').extract()
        name = ''.join(airport.xpath('./text()').extract()).strip()
        lat = airport.xpath("./@data-lat").extract()
        lon = airport.xpath("./@data-lon").extract()

        iAirport = dict()
        iAirport['name'] = 'foobar'  # placeholder; e.g. self.clean_html(name) in practice
        iAirport['link'] = url[0]
        iAirport['lat'] = lat[0]
        iAirport['lon'] = lon[0]
        iAirport['code_little'] = iata[0]
        iAirport['code_total'] = iatabis[0]
        item['airports'].append(iAirport)

    urls = []
    for airport in item['airports']:
        json_url = 'https://api.flightradar24.com/common/v1/airport.json?code={code}&plugin\[\]=&plugin-setting\[schedule\]\[mode\]=&plugin-setting\[schedule\]\[timestamp\]={timestamp}&page=1&limit=50&token='.format(
            code=airport['code_little'], timestamp="1484150483")
        urls.append(json_url)
    if not urls:
        return item

    # start with first url
    next_url = urls.pop()
    return Request(next_url, self.parse_schedule,
                   meta={'airport_item': item, 'airport_urls': urls, 'i': 0})

def parse_schedule(self, response):
    """we want to loop this continuously for every schedule item"""
    item = response.meta['airport_item']
    i = response.meta['i']
    urls = response.meta['airport_urls']

    jsonload = json.loads(response.body_as_unicode())
    item['airports'][i]['schedule'] = 'foobar'  # placeholder; extract the real schedule from jsonload
    # now do next schedule items
    if not urls:
        yield item
        return
    url = urls.pop()
    yield Request(url, self.parse_schedule,
                  meta={'airport_item': item, 'airport_urls': urls, 'i': i + 1})
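
A usage note on this design: the chain trades concurrency for correctness, since the schedule pages of one country are now fetched one after another, but it guarantees the country item is yielded exactly once, fully assembled. On Scrapy 1.7 or newer, the same chaining can be written with cb_kwargs instead of meta; below is a minimal sketch of the parse_schedule side, assuming parse_airports passes the same three values through cb_kwargs (Request and json as imported above):

# Sketch only: requires Scrapy >= 1.7, where Request accepts cb_kwargs and the
# callback receives them as keyword arguments.
def parse_schedule(self, response, item, urls, i):
    item['airports'][i]['schedule'] = json.loads(response.text)  # or extract just what you need
    if not urls:
        yield item
        return
    yield Request(urls.pop(), callback=self.parse_schedule,
                  cb_kwargs={'item': item, 'urls': urls, 'i': i + 1})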

Regarding "nested - multiple nested requests with Scrapy", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/41634126/
