python - Pandas: write all re.search results from BeautifulSoup to csv

Tags: python pandas beautifulsoup urllib2

I have the beginnings of a Python pandas script that searches for values on Google and grabs any PDF links it can find on the first results page.

I have two issues, outlined below.

import pandas as pd
from bs4 import BeautifulSoup
import urllib2
import re

df = pd.DataFrame(["Shakespeare", "Beowulf"], columns=["Search"])    

print "Searching for PDFs ..."

hdr = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
    "Accept-Encoding": "none",
    "Accept-Language": "en-US,en;q=0.8",
    "Connection": "keep-alive"}

def crawl(search):
    google = "http://www.google.com/search?q="
    url = google + search + "+" + "PDF"
    req = urllib2.Request(url, headers=hdr)

    pdf_links = None
    placeholder = None #just a column placeholder

    try:
        page = urllib2.urlopen(req).read()
        soup = BeautifulSoup(page)
        cite = soup.find_all("cite", attrs={"class":"_Rm"})
        for link in cite:
            all_links = re.search(r".+", link.text).group().encode("utf-8")
            if all_links.endswith(".pdf"):
                pdf_links = re.search(r"(.+)pdf$", all_links).group()
            print pdf_links

    except urllib2.HTTPError, e:
        print e.fp.read()

    return pd.Series([pdf_links, placeholder])

df[["PDF links", "Placeholder"]] = df["Search"].apply(crawl)

df.to_csv(FileName, index=False, sep=",")

The results of print pdf_links will be:

davidlucking.com/documents/Shakespeare-Complete%20Works.pdf
sparks.eserver.org/books/shakespeare-tempest.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
www.w3.org/People/maxf/.../hamlet.pdf
calhoun.k12.il.us/teachers/wdeffenbaugh/.../Shakespeare%20Sonnets.pdf
www.yorku.ca/inpar/Beowulf_Child.pdf
www.yorku.ca/inpar/Beowulf_Child.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
https://is.muni.cz/el/1441/.../2._Beowulf.pdf
www.penguin.com/static/pdf/.../beowulf.pdf
www.neshaminy.org/cms/lib6/.../380/text.pdf
www.neshaminy.org/cms/lib6/.../380/text.pdf
sparks.eserver.org/books/beowulf.pdf

And the csv output will look like this:

Search         PDF Links
Shakespeare    calhoun.k12.il.us/teachers/wdeffenbaugh/.../Shakespeare%20Sonnets.pdf
Beowulf        sparks.eserver.org/books/beowulf.pdf

Questions:

  • Is there a way to write all of the results to the csv as rows, instead of just the bottom one, and, if possible, include the matching "Shakespeare" or "Beowulf" value in the Search column for each row?
  • How can I write out the full pdf links, without long ones being automatically abbreviated with "..."?

Best Answer

Your version keeps only the bottom link because pdf_links is overwritten on every pass through the loop and a single value is returned per search term, and the "..." abbreviations appear because the <cite> elements contain Google's truncated display text rather than the real URLs. Using soup.find_all("a", href=True) instead will give you all of the full pdf links, and the code below saves them in the DataFrame and the csv:

hdr = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
    "Accept-Encoding": "none",
    "Accept-Language": "en-US,en;q=0.8",
    "Connection": "keep-alive"}


def crawl(columns=None, *search):
    df = pd.DataFrame(columns= columns)
    for term in search:
        google = "http://www.google.com/search?q="
        url = google + term + "+" + "PDF"
        req = urllib2.Request(url, headers=hdr)
        try:
            page = urllib2.urlopen(req).read()
            soup = BeautifulSoup(page)
            pdfs = []
            links = soup.find_all("a",href=True)
            for link in links:
                lk = link["href"]
                if lk.endswith(".pdf"):
                     pdfs.append((term, lk))
            df2 = pd.DataFrame(pdfs, columns=columns)
            df = df.append(df2, ignore_index=True)
        except urllib2.HTTPError, e:
            print e.fp.read()
    return df


df = crawl(["Search", "PDF link"],"Shakespeare","Beowulf")
df.to_csv("out.csv",index=False)

Output (out.csv):

Search,PDF link
Shakespeare,http://davidlucking.com/documents/Shakespeare-Complete%20Works.pdf
Shakespeare,http://www.w3.org/People/maxf/XSLideMaker/hamlet.pdf
Shakespeare,http://sparks.eserver.org/books/shakespeare-tempest.pdf
Shakespeare,https://phillipkay.files.wordpress.com/2011/07/william-shakespeare-plays.pdf
Shakespeare,http://www.artsvivants.ca/pdf/eth/activities/shakespeare_overview.pdf
Shakespeare,http://triggs.djvu.org/djvu-editions.com/SHAKESPEARE/SONNETS/Download.pdf
Beowulf,http://www.yorku.ca/inpar/Beowulf_Child.pdf
Beowulf,https://is.muni.cz/el/1441/podzim2013/AJ2RC_STAL/2._Beowulf.pdf
Beowulf,http://teacherweb.com/IL/Steinmetz/MottramM/Beowulf---Seamus-Heaney.pdf
Beowulf,http://www.penguin.com/static/pdf/teachersguides/beowulf.pdf
Beowulf,http://www.neshaminy.org/cms/lib6/PA01000466/Centricity/Domain/380/text.pdf
Beowulf,http://www.sparknotes.com/free-pdfs/uscellular/download/beowulf.pdf
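
Both the question and the accepted answer are Python 2 code (urllib2, print statements). As a minimal sketch only, assuming Python 3 and assuming Google still serves anchors whose href ends in .pdf on the results page, the same approach might look like the following; the header dict is trimmed to just the User-Agent, and the output name out_py3.csv is an arbitrary illustrative choice:

import pandas as pd
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
from urllib.error import HTTPError

hdr = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 "
                     "(KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"}

def crawl(columns=None, *search):
    # Collect (term, link) pairs for every search term, then build the frame once
    rows = []
    for term in search:
        url = "http://www.google.com/search?q=" + term + "+PDF"
        req = Request(url, headers=hdr)
        try:
            page = urlopen(req).read()
            soup = BeautifulSoup(page, "html.parser")
            for link in soup.find_all("a", href=True):
                if link["href"].endswith(".pdf"):
                    rows.append((term, link["href"]))
        except HTTPError as e:
            print(e.read())
    return pd.DataFrame(rows, columns=columns)

df = crawl(["Search", "PDF link"], "Shakespeare", "Beowulf")
df.to_csv("out_py3.csv", index=False)

Building one list of tuples and constructing the DataFrame in a single call also sidesteps the repeated df.append(...) calls in the answer, which recent pandas releases no longer support.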

For the original question on python - Pandas: write all re.search results from BeautifulSoup to csv, see Stack Overflow: https://stackoverflow.com/questions/31221442/
