python - I scraped the title, price, link, and an info table that I named planet_data; when I write it to a CSV file I get duplicated planet_data

Tags: python web-scraping

I want to remove the duplicate planet_data entries.

import requests
import csv
from bs4 import BeautifulSoup 
requests.packages.urllib3.disable_warnings()
import pandas as pd
url = 'https://www.paraibainternational.com/collections/gemstone?view=list'

while True:
    session = requests.Session()
    session.headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"}


    content = session.get(url, verify=False).content

    soup = BeautifulSoup(content, "html.parser")

    posts = soup.find_all('div',{'class':'product-details'})

    npo_jobs = {}
    data = []
    data_desciption = []

    for url in posts:
        title = url.find('h2').text
        price = url.find('span',{'money'}).text
        link = url.find('a').get('href')
        urls = ('https://www.paraibainternational.com/'+ link)
        url_response = requests.get(urls)
        url_data = url_response.text
        url_soup = BeautifulSoup(url_data, 'html.parser')
        print(title)
        print(price)
        print(link)
        desciption = url_soup.find('div',{'class':'rte main-product-description-product'})
        #print(desciption)
        info = desciption.find_all('li')
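        # this inner loop runs once per child node of `desciption`, so it
        # re-appends the same planet_data repeatedly -- the duplicates in the CSV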
        for index,i in enumerate(desciption):

            planet_data = dict()
            values = [ td.text for td in desciption.find_all('li')]

            planet_data['Weight'] = desciption.find_all('li')[1].text.strip()
            planet_data['Shape'] = desciption.find_all('li')[2].text.strip()
            planet_data['Dimensions'] = desciption.find_all('li')[3].text.strip()
            planet_data['Color'] = desciption.find_all('li')[4].text.strip()
            planet_data['Clarity'] = desciption.find_all('li')[5].text.strip()
            planet_data['Cutting'] = desciption.find_all('li')[6].text.strip()
            planet_data['Treatment'] = desciption.find_all('li')[7].text.strip()
            planet_data['Origin'] = desciption.find_all('li')[8].text.strip()
            planet_data['Hardness'] = desciption.find_all('li')[6].text.strip()
            planet_data['Price Per Carat'] = desciption.find_all('li')[10].text.strip()
            if index == 0:
                data.append((title,price,planet_data,link))
            else:
                data.append((None,None,planet_data,None))
            #print(desciption[1])
            #data.append((title,price,planet_data,link))

        #for tr in url_soup.find_all('tr'):
            #planet_data = dict()
            #values = [td.text for td in tr.find_all('td')]
            #planet_data['name'] = tr.find('td').text.strip()
            #planet_data['info'] = tr.find_all('td')[1].text.strip()
            #data_desciption.append((planet_data))
            #print(planet_data)

        #data.extend(data_desciption)
        #npo_jobs= [title,price,row,link]

    #data_new = data +","+ data_desciption
    #urls = soup.find('a',{'class': 'next i-next'}).get('href')
    #url = urls
    #print(url)

    with open('inde1ygfhtfht7xs.csv', 'a') as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(['title','price','Weight','Shape','Dimensions','Color','Clarity','Cutting','Treatment','Origin','Hardness','Price Per Carat','link'])
        for title, price, planet_data, link in data:
            writer.writerow([title, price, planet_data['Weight'], planet_data['Shape'], planet_data['Dimensions'], planet_data['Color'], planet_data['Clarity'], planet_data['Cutting'], planet_data['Treatment'], planet_data['Origin'], planet_data['Hardness'], planet_data['Price Per Carat'], link])

    #npo_jobs_df = pd.DataFrame.from_dict(npo_jobs, orient ='index', columns=['title', 'price','row','link'])
    #npo_jobs_df.to_csv('npo_jobs.csv')         

When I write to the CSV I get duplicated planet_data rows, but I only want one planet_data per product.

Best Answer

Remove the while loop and the inner for loop, and initialize the data list outside the for loop. The inner `for index,i in enumerate(desciption):` loop runs once per child node of the description element, rebuilding and appending the same planet_data on every pass, which is what produces the duplicate rows. The code below scrapes the first page of product details.

For example:

import requests
import csv
from bs4 import BeautifulSoup
import pandas as pd
requests.packages.urllib3.disable_warnings()

url = 'https://www.paraibainternational.com/collections/gemstone?view=list'
session = requests.Session()
session.headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"}

content = session.get(url, verify=False).content
soup = BeautifulSoup(content, "html.parser")
posts = soup.find_all('div',{'class':'product-details'})

data = []

for url in posts:
    planet_data = dict()

    title = url.find('h2').text.strip()
    price = url.find('span',{'money'}).text.strip()
    link = url.find('form').find('a',href=True).get('href')

    urls = ('https://www.paraibainternational.com/'+ link)
    url_response = requests.get(urls)

    url_data = url_response.text
    url_soup = BeautifulSoup(url_data, 'html.parser')

    desciption = url_soup.find('div',{'class':'rte main-product-description-product'})
    items = desciption.find_all('li')

    planet_data['Weight'] = items[1].text.strip()
    planet_data['Shape'] = items[2].text.strip()
    planet_data['Dimensions'] = items[3].text.strip()
    planet_data['Color'] = items[4].text.strip()
    planet_data['Clarity'] = items[5].text.strip()
    planet_data['Cutting'] = items[6].text.strip()
    planet_data['Treatment'] = items[7].text.strip()
    planet_data['Origin'] = items[8].text.strip()
    # index 6 repeats the Cutting row here (visible in the output below);
    # the Hardness row presumably sits at a different index on the page
    planet_data['Hardness'] = items[6].text.strip()
    planet_data['Price Per Carat'] = items[10].text.strip()
    planet_data['title'] = title
    planet_data['price'] = price
    planet_data['link'] = link
    data.append(planet_data)

print(data)

Output:

[{'Weight': 'Weight (Carats): 3.14', 'Shape': 'Shape: Cushion', 'Dimensions': 'Dimensions (L x W x D) (mm): 8.61 x 8.44 x 6.28', 'Color': 'Color: Neon Blue', 'Clarity': 'Clarity: SI', 'Cutting': 'Cutting: Excellent', 'Treatment': 'Treatment:\xa0Heat', 'Origin': 'Origin: Brazil', 'Hardness': 'Cutting: Excellent', 'Price Per Carat': 'Price Per Carat: $60,000', 'title': 'Paraiba Tourmaline Brazil 3.14 Carats', 'price': '$188,400.00', 'link': '/collections/gemstone/products/paraiba-tourmaline-3-14-carats'}, {'Weight': 'Weight (Carats): 2.78', 'Shape': 'Shape: Round', 'Dimensions': 'Dimensions (L x W x D) (mm): 8.0 x 8.0 x 5.3', 'Color': 'Color: Pink', 'Clarity': 'Clarity: IF', 'Cutting': 'Cutting: Excellent', 'Treatment': 'Treatment:\xa0Heat', 'Origin': 'Origin:\xa0Africa', 'Hardness': 'Cutting: Excellent', 'Price Per Carat': 'Price Per Carat: $80', 'title': 'Pink Tourmaline 2.78 Carats', 'price': '$222.40', 'link': '/collections/gemstone/products/pink-tourmaline-2-78-carats-round'}, {'Weight': 'Weight (Carats): 2.78', 'Shape': 'Shape: Oval', 'Dimensions': 'Dimensions (L x W x D) (mm): 9.8 x 8.9 x 5.7', 'Color': 'Color: Intense Pink', 'Clarity': 'Clarity: IF', 'Cutting': 'Cutting: Excellent', 'Treatment': 'Treatment:\xa0Heat', 'Origin': 'Origin:\xa0Africa', 'Hardness': 'Cutting: Excellent', 'Price Per Carat': 'Price Per Carat: $430', 'title': 'Pink Tourmaline 2.78 Carats', 'price': '$1,195.40', 'link': '/collections/gemstone/products/pink-tourmaline-2-78-carats-oval'}, {'Weight': 'Weight (Carats): 2.59', 'Shape': 'Shape: Pear', 'Dimensions': 'Dimensions (L x W x D) (mm): 12.0 x 7.5 x 5.4', 'Color': 'Color: Green', 'Clarity': 'Clarity: IF', 'Cutting': 'Cutting: Excellent', 'Treatment': 'Treatment:\xa0Heat', 'Origin': 'Origin:\xa0Africa', 'Hardness': 'Cutting: Excellent', 'Price Per Carat': 'Price Per Carat: $230', 'title': 'Green Tourmaline 2.59 Carats', 'price': '$595.70', 'link': '/collections/gemstone/products/green-tourmaline-2-59-carats-pear'}]
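Since the stated goal was a CSV file rather than a printed list, here is a minimal sketch of the writing step. It assumes the data list built above; the gemstones.csv filename and the clean() helper are hypothetical names, and clean() just strips the "Label:" prefixes and the non-breaking spaces (\xa0) visible in the output.

import csv

def clean(value):
    # drop the "Label:" prefix and normalize non-breaking spaces (\xa0)
    return value.replace('\xa0', ' ').split(':', 1)[-1].strip()

fieldnames = ['title', 'price', 'Weight', 'Shape', 'Dimensions', 'Color',
              'Clarity', 'Cutting', 'Treatment', 'Origin', 'Hardness',
              'Price Per Carat', 'link']

with open('gemstones.csv', 'w', newline='') as csv_file:
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()
    for planet_data in data:
        row = {key: planet_data[key] for key in ('title', 'price', 'link')}
        for key in fieldnames[2:-1]:  # the ten description fields
            row[key] = clean(planet_data[key])
        writer.writerow(row)

Opening the file with 'w' (plus newline='') instead of 'a' also avoids appending a fresh header row and old rows on every rerun, which was a second way the original script could accumulate apparent duplicates.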

Original question on Stack Overflow: https://stackoverflow.com/questions/57073595/
