I'm trying to collect the number of likes for the datasets available on this site.

I haven't been able to find a way to reliably identify and scrape the relationship between each dataset's title and its like count (an integer), because of how it is embedded in the HTML, like this:

I have previously used a scraper to get information about the resource URLs. In that case I was able to capture the last child a of the parent h3, with a parent that has the class .dataset-item.
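To illustrate the selector relationship described above, here is a minimal sketch against a hypothetical HTML fragment that mirrors that structure (the real page markup may differ):

```python
from bs4 import BeautifulSoup

# Hypothetical markup mirroring the structure described above:
# an <li class="dataset-item"> wrapping an <h3> whose last child <a> is the dataset link.
html = '''
<li class="dataset-item">
  <h3><a href="/data/dataset/example">Example dataset</a></h3>
</li>
'''

soup = BeautifulSoup(html, 'html.parser')
# Same selector as used in the scraper below
links = soup.select('.dataset-item h3 a:last-child')
print([a['href'] for a in links])  # ['/data/dataset/example']
```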
I'd like to adapt my existing code to collect the number of likes for each resource in the catalogue, rather than the URLs. Below is the code for the URL scraper I used:
from bs4 import BeautifulSoup as bs
import requests
import csv
from urllib.parse import urlparse

json_api_links = []
data_sets = []

def get_links(s, url, css_selector):
    r = s.get(url)
    soup = bs(r.content, 'lxml')
    base = '{uri.scheme}://{uri.netloc}'.format(uri=urlparse(url))
    links = [base + item['href'] if item['href'][0] == '/' else item['href'] for item in soup.select(css_selector)]
    return links

results = []
#debug = []

with requests.Session() as s:
    for page in range(1, 2):  #set number of pages
        links = get_links(s, 'https://data.nsw.gov.au/data/dataset?page={}'.format(page), '.dataset-item h3 a:last-child')
        for link in links:
            data = get_links(s, link, '[href*="/api/3/action/package_show?id="]')
            json_api_links.append(data)
            #debug.append((link, data))

    resources = list(set([item.replace('opendata','') for sublist in json_api_links for item in sublist]))  #can just leave as set

    for link in resources:
        try:
            r = s.get(link).json()  #entire package info
            data_sets.append(r)
            title = r['result']['title']  #certain items
            if 'resources' in r['result']:
                urls = ' , '.join([item['url'] for item in r['result']['resources']])
            else:
                urls = 'N/A'
        except:
            title = 'N/A'
            urls = 'N/A'
        results.append((title, urls))

with open('data.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['Title', 'Resource Url'])
    for row in results:
        w.writerow(row)
The output I'd like would look something like this:
Best answer
The approach is quite simple. The site you've given holds the required elements inside list tags. All you need to do is get the source of those <li> tags, then pull out the title, which has a specific class, and do the same for the like count.

The one thing to watch for is that the text contains some noise. To handle that, you can use a regular expression to extract the number from the like-count text ('\d+'). The following code gives the expected result:
from bs4 import BeautifulSoup as soup
import requests
import re
import pandas as pd

source = requests.get('https://data.nsw.gov.au/data/dataset')
sp = soup(source.text, 'lxml')

element = sp.find_all('li', {'class': "dataset-item"})
heading = []
likeList = []

for i in element:
    try:
        header = i.find('a', {'class': "searchpartnership-url-analytics"})
        heading.append(header.text)
    except AttributeError:
        header = i.find('a')
        heading.append(header.text)
    like = i.find('span', {'id': 'likes-count'})
    likeList.append(re.findall(r'\d+', like.text)[0])

data = {'Title': heading, 'Likes': likeList}
df = pd.DataFrame(data)
print(df)
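As a variant, the title and the like count can also be paired item by item in a single pass with CSS selectors, which keeps the two lists aligned even if an item is missing a field. A minimal sketch against hypothetical markup that mirrors the structure described above (the live page may differ):

```python
from bs4 import BeautifulSoup
import re

# Hypothetical markup mirroring the structure the answer describes.
html = '''
<li class="dataset-item">
  <h3><a href="/d/1">First dataset</a></h3>
  <span id="likes-count"> 12 likes</span>
</li>
<li class="dataset-item">
  <h3><a href="/d/2">Second dataset</a></h3>
  <span id="likes-count">3 likes</span>
</li>
'''

soup = BeautifulSoup(html, 'html.parser')
rows = []
for item in soup.select('li.dataset-item'):
    title = item.select_one('h3 a').get_text(strip=True)
    span = item.select_one('span[id="likes-count"]')
    # Strip the surrounding noise with the same \d+ pattern as above
    likes = int(re.search(r'\d+', span.get_text()).group()) if span else 0
    rows.append((title, likes))

print(rows)  # [('First dataset', 12), ('Second dataset', 3)]
```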
Hope this helps!
Regarding python - how to scrape embedded integers on a website, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56698194/