python - How can I use pandas and Beautiful Soup to scrape a table that spans multiple web page addresses?

Tags: python pandas web-scraping beautifulsoup

I want to extract data from a table on a website. The table spans 165 web pages, and I want to scrape all of them, but I can only get the first page.

I have tried pandas, beautifulsoup, and requests:

import requests
import pandas as pd
from bs4 import BeautifulSoup

offset = 0
teacher_list = []
while offset <= 4500:

    calls_df, = pd.read_html(
        "https://projects.newsday.com/databases/long-island/teacher-administrator-salaries-2017-2018/?offset=0"
        + str(offset),
        header=0, parse_dates=["Start date"])

    offset = offset + 1500
    print(calls_df)

    # calls_df = "https:" + calls_df
    collection_page = requests.get(calls_df)
    page_html = collection_page.text

    soup = BeautifulSoup(page_html, "html.parser")

    print(page_html)
    print(soup.prettify())


print(teacher_list)
offset = offset + 1500
print(teacher_list, calls_df.to_csv("calls.csv", index=False))

Best Answer

You can use the step parameter of range() to increment the offset in your URL. Each page holds 1,500 rows, so stepping from 0 up to 246,000 in increments of 1,500 covers all 165 pages:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

df = pd.DataFrame()

# Re-use one TCP connection for all 165 requests
with requests.Session() as s:
    # Step the offset by 1,500 rows per page, from 0 up to 246,000
    for i in range(0, 246001, 1500):
        url = 'https://projects.newsday.com/databases/long-island/teacher-administrator-salaries-2017-2018/?offset={}'.format(i)
        r = s.get(url)
        soup = bs(r.content, 'lxml')
        # Parse the first table found in the page's HTML
        dfCurrent = pd.read_html(str(soup.select_one('html')))[0]
        dfCurrent.dropna(how='all', inplace=True)  # drop fully empty rows
        df = pd.concat([df, dfCurrent])
df = df.reset_index(drop=True)
df.to_csv(r"C:\Users\User\Desktop\test.csv", encoding='utf-8-sig')
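
As a side note, pandas can usually fetch and parse the table on its own, without an explicit requests/BeautifulSoup pass. Below is a minimal sketch under that assumption (the table must be present in the static HTML rather than rendered by JavaScript; the output filename salaries.csv is illustrative):

import pandas as pd

frames = []
# 165 pages: offsets 0, 1500, ..., 246000
for offset in range(0, 246001, 1500):
    url = ('https://projects.newsday.com/databases/long-island/'
           'teacher-administrator-salaries-2017-2018/?offset={}'.format(offset))
    # read_html fetches the URL and returns a list of every table it finds
    frames.append(pd.read_html(url, header=0)[0])

# Concatenate once at the end instead of inside the loop
df = pd.concat(frames, ignore_index=True)
df.dropna(how='all', inplace=True)
df.to_csv('salaries.csv', index=False)

Collecting the per-page frames in a list and concatenating once is also slightly cheaper than calling pd.concat inside the loop, since each in-loop concat copies everything accumulated so far.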

This Q&A is based on a similar question on Stack Overflow: https://stackoverflow.com/questions/55315527/
