python - Getting the first link in a Wikipedia article not inside parentheses

Tags: python, parsing, dom

So I'm interested in this theory that if you go to a random Wikipedia article and repeatedly click the first link that is not inside parentheses, in 95% of the cases you will eventually end up at the article about Philosophy.

I'd like to write a Python script that fetches the links for me and, at the end, prints a nice list of the articles visited (linkA -> linkB -> linkC), etc.

I've managed to get the HTML DOM of the web pages, and to strip out some unnecessary links as well as the top description bar that leads to disambiguation pages. So far I've concluded that:

  • The DOM begins with the table you see on the right-hand side of some pages, for example on Human. We want to ignore those links.
  • The valid link elements all have a <p> element somewhere as an ancestor (most often the parent or grandparent, if the link sits inside a <b> tag or similar). The top bar that leads to disambiguation pages does not seem to contain any <p> elements.
  • Invalid links contain some special word followed by a colon, e.g. Wikipedia: (a simplified sketch of the last two checks follows this list).
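
In sketch form, the last two checks look roughly like this (a simplified, untested restatement of what my full script below does; the function names are made up):

BAD_PREFIXES = ("File:", "Wikipedia:", "Portal:", "Special:", "Help:",
                "Template_talk:", "Template:", "Talk:", "Category:")

def looks_like_article_link(href):
    """A /wiki/... link that is not one of the special namespace pages."""
    return href.startswith("/wiki/") and not any(p in href for p in BAD_PREFIXES)

def has_paragraph_ancestor(tag):
    """True if some ancestor of the <a> node is a <p> element."""
    node = tag.parentNode
    while node is not None:
        if node.nodeName == "p":
            return True
        node = node.parentNode
    return False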

So far, so good. But it's the parentheses that are giving me trouble. In the article about Human, for example, the first link not inside parentheses is "/wiki/Species", but the script finds "/wiki/Taxonomy", which sits inside them.

I don't know how to solve this programmatically, since I would have to look for text in some combination of parent/child nodes which might not always be the same. Any ideas?
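
To make the problem concrete, I imagine the check would end up looking something like the sketch below (untested, the names are made up, and walking the right combination of nodes is exactly the part I am unsure about). The idea would be to collect the text that appears before the link inside its enclosing <p> and see whether it contains an unmatched '(':

def link_is_inside_parentheses(tag, paragraph):
    """Rough idea only: gather the text that precedes `tag` inside the
       enclosing <p> element and check for an unmatched '('."""
    text_before = []

    def collect_until_link(node):
        # walk the subtree in document order, stopping once the link is reached
        for child in node.childNodes:
            if child is tag or collect_until_link(child):
                return True
            if child.nodeType == child.TEXT_NODE:
                text_before.append(child.data)
        return False

    collect_until_link(paragraph)
    text = "".join(text_before)
    return text.count("(") > text.count(")")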

My code can be seen below, but it's something I threw together quickly and am not particularly proud of. It is commented, though, so you can follow my thinking (I hope :) ).

"""Wikipedia fun"""
import urllib2
from xml.dom.minidom import parseString
import time

def validWikiArticleLinkString(href):
    """ Takes a string and returns True if it contains the substring
        '/wiki/' in the beginning and does not contain any of the
        "special" wiki pages. 
    """
    return (href.find("/wiki/") == 0
            and href.find("(disambiguation)") == -1 
            and href.find("File:") == -1 
            and href.find("Wikipedia:") == -1
            and href.find("Portal:") == -1
            and href.find("Special:") == -1
            and href.find("Help:") == -1
            and href.find("Template_talk:") == -1
            and href.find("Template:") == -1
            and href.find("Talk:") == -1
            and href.find("Category:") == -1
            and href.find("Bibcode") == -1
            and href.find("Main_Page") == -1)


if __name__ == "__main__":
    visited = []    # a list of visited links. used to avoid getting into loops

    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')] # need headers for the api

    currentPage = "Human"  # the page to start with

    while True:
        infile = opener.open('http://en.wikipedia.org/w/index.php?title=%s&printable=yes' % currentPage)
        html = infile.read()    # retrieve the contents of the wiki page we are at

        htmlDOM = parseString(html) # get the DOM of the parsed HTML
        aTags = htmlDOM.getElementsByTagName("a")   # find all <a> tags

        for tag in aTags:
            if "href" in tag.attributes.keys():         # see if we have the href attribute in the tag
                href = tag.attributes["href"].value     # get the value of the href attribute
                if validWikiArticleLinkString(href):                             # if we have one of the link types we are looking for

                    # Now come the tricky parts. We want to look for links in the main content area only,
                    # and we want the first link not in parentheses.

                    # assume the link is valid.
                    invalid = False            

                    # tables which appear to the right on the site appear first in the DOM, so we need to make sure
                    # we are not looking at a <a> tag somewhere inside a <table>.
                    pn = tag.parentNode                     
                    while pn is not None:
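                        # minidom's repr of an element looks like
                        # "<DOM Element: table at 0x...>", hence this string check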
                        if str(pn).find("table at") >= 0:
                            invalid = True
                            break
                        else:
                            pn = pn.parentNode 

                    if invalid:     # go to next link
                        continue               

                    # Next we look at the descriptive texts above the article, if any; e.g
                    # This article is about .... or For other uses, see ... (disambiguation).
                    # These kinds of links will lead into loops so we classify them as invalid.

                    # We notice that this text does not appear to be inside a <p> block, so
                    # we dismiss <a> tags which aren't inside any <p>.
                    pnode = tag.parentNode
                    while pnode is not None:
                        if str(pnode).find("p at") >= 0:
                            break
                        pnode = pnode.parentNode
                    # If we have reached the root node, which has parentNode None, we classify the
                    # link as invalid.
                    if pnode is None:
                        invalid = True

                    if invalid:
                        continue


                    ######  this is where I got stuck:
                    # now we need to look if the link is inside parentheses. below is some junk

#                    for elem in tag.parentNode.childNodes:
#                        while elem.firstChild is not None:
#                            elem = elem.firstChild
#                        print elem.nodeValue

                    print href      # this will be the next link
                    newLink = href[6:]  # except for the /wiki/ part
                    break

        # if we have been to this link before, break the loop
        if newLink in visited:
            print "Stuck in loop."
            break
        # or if we have reached Philosophy
        elif newLink == "Philosophy":
            print "Ended up in Philosophy."
            break
        else:
            visited.append(currentPage)     # mark this currentPage as visited
            currentPage = newLink           # make the link we found the new currentPage to fetch
            time.sleep(5)                   # sleep some to see results as debug

Best Answer

I found a python script on GitHub (http://github.com/JensTimmerman/scripts/blob/master/philosophy.py) that plays this game. It uses BeautifulSoup for the HTML parsing, and to get around the parentheses issue it simply removes the text between brackets before parsing the links.
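
In sketch form the idea looks something like this (my own rough illustration, not the linked script; it assumes the bs4 flavour of Beautiful Soup and reuses the validWikiArticleLinkString filter from the question):

import re
from bs4 import BeautifulSoup

def first_link_outside_parens(html):
    """Strip "(...)" spans from each paragraph's HTML, then take the first
       remaining article link."""
    soup = BeautifulSoup(html, "html.parser")
    for p in soup.find_all("p"):
        # dropping the parenthesised text also drops any links inside it;
        # article titles that themselves contain parentheses can still confuse this
        cleaned = re.sub(r"\([^()]*\)", "", str(p))
        for a in BeautifulSoup(cleaned, "html.parser").find_all("a", href=True):
            if validWikiArticleLinkString(a["href"]):
                return a["href"]
    return None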

On the topic of python - Getting the first link in a Wikipedia article not inside parentheses, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/10634278/
