I have never used Python before, so please excuse my lack of knowledge, but I'm trying to scrape a xenforo forum for all of its threads. So far so good, except that it picks up multiple URLs for each page of the same thread. I've posted some data below to show what I mean.
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-9
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-10
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-11
Really, what I would ideally like to scrape is just one of these:
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/
Here's my script:
from bs4 import BeautifulSoup
import requests

def get_source(url):
    # Download the raw HTML for a URL.
    return requests.get(url).content

def is_forum_link(self):
    return self.find('special string') != -1

def fetch_all_links_with_word(url, word):
    # Return all <a> tags whose href contains the given word.
    source = get_source(url)
    soup = BeautifulSoup(source, 'lxml')
    return soup.select("a[href*=" + word + "]")

main_url = "http://example.com/forum/"

# Collect all forum links (skipping the .rss feeds).
forumLinks = fetch_all_links_with_word(main_url, "forums")
forums = []
for link in forumLinks:
    if link.has_attr('href') and link.attrs['href'].find('.rss') == -1:
        forums.append(link.attrs['href'])
print('Fetched ' + str(len(forums)) + ' forums')

# Collect the thread links from each forum.
threads = {}
for link in forums:
    threadLinks = fetch_all_links_with_word(main_url + link, "threads")
    for threadLink in threadLinks:
        print(link + ': ' + threadLink.attrs['href'])
        threads[link] = threadLink
print('Fetched ' + str(len(threads)) + ' threads')
Best answer
This solution assumes that the part that should be stripped from the URL when checking uniqueness is always "/page-#...". If that is not the case, this solution will not work.
Instead of a list, you can store your URLs in a set, which only keeps unique values. Then, before adding a URL to the set, remove the last occurrence of "page" and everything after it, provided it has the form "/page-#" where # is any number.
forums = set()
for link in forumLinks:
    if link.has_attr('href') and link.attrs['href'].find('.rss') == -1:
        url = link.attrs['href']
        # Strip a trailing '/page-<number>' so every page of the same item
        # normalizes to one URL before it is added to the set.
        position = url.rfind('/page-')
        if position > 0 and url[position + 6:position + 7].isdigit():
            url = url[:position + 1]
        forums.add(url)
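Note that in the sample output the duplicate URLs come from the thread links rather than the forum links, so the same normalization can also be applied in the thread loop. Below is a minimal sketch of that idea (not part of the original answer); it reuses the names from the question's script (fetch_all_links_with_word, forums, main_url), and the helper name normalize_thread_url is purely illustrative.

def normalize_thread_url(url):
    # Drop a trailing '/page-<number>...' so every page of a thread maps to the same URL.
    position = url.rfind('/page-')
    if position > 0 and url[position + 6:position + 7].isdigit():
        url = url[:position + 1]
    return url

threads = {}
for link in forums:
    threadLinks = fetch_all_links_with_word(main_url + link, "threads")
    for threadLink in threadLinks:
        url = normalize_thread_url(threadLink.attrs['href'])
        # Only record and print each thread URL once.
        if url not in threads:
            print(link + ': ' + url)
            threads[url] = threadLink
print('Fetched ' + str(len(threads)) + ' threads')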
Regarding python - Excluding 'duplicated' scraped URLs in a Python app?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/56148760/