python - How do I retrieve information from a website using Beautiful Soup?

Tags: python beautifulsoup web-crawler

I have a task where I need to use a crawler to retrieve information from a website (URL: https://www.onepa.gov.sg/cat/adventure).

The site lists multiple products. Each product contains a link that leads to that product's own page, and I want to collect all of those links.

[screenshot of the webpage]

[screenshot of the HTML code]

For example, one of the products is named KNOTTY STUFF, and I want to get its href, /class/details/c026829364.

import requests
from bs4 import BeautifulSoup


def get_soup(url):
    # Fetch the page and parse the returned HTML
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, features="html.parser")
    return soup

url = "https://www.onepa.gov.sg/cat/adventure"
soup = get_soup(url)
# Print the href of every anchor that opens in a new tab
for i in soup.findAll("a", {"target": "_blank"}):
    print(i.get("href"))

The output is:

https://tech.gov.sg/report_vulnerability
https://www.pa.gov.sg/feedback

which does not include what I am looking for: /class/details/c026829364.
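The failure can be reproduced offline: BeautifulSoup only sees tags present in the HTML string it is given, so anchors injected later by JavaScript never appear. A minimal sketch with made-up markup standing in for the page:

```python
from bs4 import BeautifulSoup

# Made-up markup standing in for the static HTML that requests receives;
# the product anchor is absent because JavaScript adds it after page load.
static_html = '<a target="_blank" href="https://www.pa.gov.sg/feedback">Feedback</a>'

soup = BeautifulSoup(static_html, "html.parser")
hrefs = [a.get("href") for a in soup.find_all("a", {"target": "_blank"})]
print(hrefs)  # ['https://www.pa.gov.sg/feedback']
```

Only anchors literally present in the markup are found, which is why the product link never shows up in the question's output.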

Any help or assistance is appreciated, thank you!

Best Answer

The website is loaded dynamically, so requests alone cannot capture it. However, the links can be obtained by sending a POST request to:

https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard

Try searching for the links with the built-in re (regular expression) module:

import re
import requests


# Endpoint the page calls via AJAX to fetch the product cards
URL = "https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard"

headers = {
    "authority": "www.onepa.gov.sg",
    "accept": "application/json, text/javascript, */*; q=0.01",
    "x-requested-with": "XMLHttpRequest",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
    "content-type": "application/json; charset=UTF-8",
    "origin": "https://www.onepa.gov.sg",
    "sec-fetch-site": "same-origin",
    "sec-fetch-mode": "cors",
    "sec-fetch-dest": "empty",
    "referer": "https://www.onepa.gov.sg/cat/adventure",
    "cookie": "visid_incap_2318972=EttdbbMDQMeRolY+XzbkN8tR5l8AAAAAQUIPAAAAAAAjkedvsgJ6Zxxk2+19JR8Z; SC_ANALYTICS_GLOBAL_COOKIE=d6377e975a10472b868e47de9a8a0baf; _sp_ses.075f=*; ASP.NET_SessionId=vn435hvgty45y0fcfrold2hx; sc_pview_shuser=; __AntiXsrfToken=30b776672938487e90fc0d2600e3c6f8; BIGipServerpool_PAG21PAPRPX00_443=3138016266.47873.0000; incap_ses_7221_2318972=5BC1VKygmjGGtCXbUiU2ZNRS5l8AAAAARKX8luC4fGkLlxnme8Ydow==; font_multiplier=0; AMCVS_DF38E5285913269B0A495E5A%40AdobeOrg=1; _sp_ses.603a=*; SC_ANALYTICS_SESSION_COOKIE=A675B7DEE34A47F9803ED6D4EC4A8355|0|vn435hvgty45y0fcfrold2hx; _sp_id.603a=d539f6d1-732d-4fca-8568-e8494f8e584c.1608930022.1.1608930659.1608930022.bfeb4483-a418-42bb-ac29-42b6db232aec; _sp_id.075f=5e6c62fd-b91d-408e-a9e3-1ca31ee06501.1608929756.1.1608930947.1608929756.73caa28b-624c-4c21-9ad0-92fd2af81562; AMCV_DF38E5285913269B0A495E5A%40AdobeOrg=1075005958%7CMCIDTS%7C18622%7CMCMID%7C88630464609134511097093602739558212170%7CMCOPTOUT-1608938146s%7CNONE%7CvVersion%7C4.4.1",
}

# Request the "adventure" category; filter and cp are left as placeholders
data = '{"cat":"adventure", "subcat":"", "sort":"", "filter":"[filter]", "cp":"[cp]"}'

response = requests.post(URL, data=data, headers=headers)
# Non-greedy (.*?) so each <Link> element yields its own path
print(re.findall(r"<Link>(.*?)<", response.content.decode("unicode_escape")))

Output:

['/class/details/c026829364', '/interest/details/i000027991', '/interest/details/i000009714']
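The extraction step can be exercised offline against a stand-in for the response; the non-greedy pattern keeps each <Link> element separate even when several share one line. The sample payload below is an assumption modelled on the regex in the answer, not the service's actual schema:

```python
import json
import re

# Stand-in payload; assumption: the endpoint returns JSON whose "d"
# field embeds <Link> elements, as the answer's regex implies.
payload = {"d": "<Card><Link>/class/details/c026829364</Link></Card>"
                "<Card><Link>/interest/details/i000027991</Link></Card>"}
raw = json.dumps(payload)

# Non-greedy (.*?) stops at the first closing tag of each element
links = re.findall(r"<Link>(.*?)</Link>", json.loads(raw)["d"])
print(links)  # ['/class/details/c026829364', '/interest/details/i000027991']
```

A greedy `(.*)` would swallow everything up to the last `<` on the line and merge the two paths into one match, which is why the non-greedy form is the safer choice here.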

Regarding "python - How do I retrieve information from a website using Beautiful Soup?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65451101/
