Python: cannot pop from empty list? When the list is clearly not empty?

Tags: python multithreading list

I'm obviously missing something here. I've been plugging away at the same project for days now. Stepping through it piece by piece, it all seemed to work fine. Then I added the part of the main() function that actually creates the comparison list, and suddenly it started throwing a 'cannot pop from empty list' error at me, even though a print call I placed right before the pop() clearly shows that the list is not empty. Any ideas what I'm doing wrong? And will this monstrosity actually work the way I intend? First time using threads and all. Here is the full code:

import urllib
import urllib2
import sys
from lxml.html import parse, tostring, fromstring
from urlparse import urlparse
import threading



class Crawler(threading.Thread):

    def __init__(self):
        self.links = []
        self.queue = []
        self.mal_list = []
        self.count = 0
        self.mal_set = set(self.mal_list)
        self.crawled = []
        self.crawled_set = set(self.crawled)
        self.links_set = set(self.links)
        self.queue.append(sys.argv[1])
        self.queue_set = set(self.queue)



    def run(self, max_depth):
        print(self.queue)
        while self.count < max_depth:
            tgt = self.queue.pop(0)
            if tgt not in self.mal_set:
                self.crawl(tgt)
            else:
                print("Malicious Link Found: {0}".format(tgt))
                continue
        sys.exit("Finished!")


    def crawl(self, tgt):
        url = urlparse(tgt)
        self.crawled.append(tgt)
        try:
            print("Crawling {0}".format(tgt))
            request = urllib2.Request(tgt)
            request.add_header("User-Agent", "Mozilla/5,0")
            opener = urllib2.build_opener()
            data = opener.open(request)
            self.count += 1
        except:
            return

        doc = parse(data).getroot()
        for tag in doc.xpath("//a[@href]"):
            old = tag.get('href')
            fixed = urllib.unquote(old)
            self.links.append(fixed)
            self.queue_links(self.links_set, url)


    def queue_links(self, links, url):
        for link in links:
            if link.startswith('/'):
                link = "http://" + url.netloc + "/" + link

            elif link.startswith('#'):
                continue

            elif link.startswith('http'):
                link = 'http://' + url.netloc + '/' + link

            if link.decode('utf-8') not in self.crawled_set:
                self.queue.append(link)




    def make_mal_list(self):
        """
        Open various malware and phishing related blacklists and create a list
        of URLs against which to compare the crawled links.
        """
        hosts1 = "hosts.txt"
        hosts2 = "MH-sitelist.txt"
        hosts3 = "urls.txt"

        with open(hosts1) as first:
            for line1 in first.readlines():
                link = "http://" + line1.strip()
                self.mal_list.append(link)

        with open(hosts2) as second:
            for line2 in second.readlines():
                link = "http://" + line2.strip()
                self.mal_list.append(link)

        with open(hosts3) as third:
            for line3 in third.readlines():
                link = "http://" + line3.strip()
                self.mal_list.append(link)


def main():
    crawler = Crawler()
    crawler.make_mal_list()
    crawler.run(25)


if __name__ == "__main__":
    main()

Best Answer

First of all, I got really lost reading your code, so if I may, let me give you a few comments first:

  • You don't need to create a new instance variable for every list just to wrap another variable in set(), as in self.mal_set = set(self.mal_list); you repeat the same pattern several times (see the sketch after this list).

  • If you want to use threading, then actually use it: in your code you only ever create a single thread. You should create something like 10 threads, so that each thread handles its own batch of URLs to fetch, and don't forget to use a Queue.Queue to synchronize between them.

  • Edit: ah, I forgot: indent your code :)
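
For the first point, here is a minimal sketch (my illustration, reusing the names from the question, not code from the original answer) of what a trimmed-down __init__ could look like, building each set directly instead of mirroring a list:

import sys
import threading

class Crawler(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)  # initialize the Thread base class
        self.queue = [sys.argv[1]]       # seed URL from the command line
        self.mal_set = set()             # filled later by make_mal_list()
        self.crawled_set = set()         # URLs that have already been visited
        self.count = 0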

Now to your problem:

Where do you ever assign to self.queue? I don't see it. You only call the make_mal_list() method, which initializes self.mal_list, and then you run your thread, so I think it's clear that self.queue is empty and you can't pop() from it, right? (A small illustration follows below.)
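
As a minimal illustration (mine, not part of the original answer) of why the error message and the print output don't contradict each other: the print only proves the list was non-empty at that moment, while a later iteration can still drain it.

queue = ["http://example.com"]  # hypothetical seed URL
print(queue)    # looks non-empty here...
queue.pop(0)    # ...but this removes the only element,
# queue.pop(0)  # so a second pop raises IndexError: pop from empty list

# Guarding the loop on the list itself avoids the crash:
while queue:
    tgt = queue.pop(0)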

Edit 2:

I think your real example is more complicated (with the blacklists and all that stuff, ...), but you can start from something like this:

import threading
import Queue
import sys
import urllib
import urllib2
from urlparse import urlparse
from lxml.html import parse

THREAD_NUMBER = 10


class Crawler(threading.Thread):

    def __init__(self, queue, mal_urls):
        self.queue = queue
        self.mal_list = mal_urls
        self.crawled_set = set()  # needed by queue_links() below
        threading.Thread.__init__(self)  # I forgot, thanks seriyPS :)

    def run(self):

        while True:
            # Grabs url to fetch from queue.
            url = self.queue.get()
            if url not in self.mal_list:
                self.crawl(url)
            else:
                print "Malicious Link Found: {0}".format(url)
            # Signals to queue job is done
            self.queue.task_done()

    def crawl(self, tgt):
        try:
            url = urlparse(tgt)
            print("Crawling {0}".format(tgt))
            request = urllib2.Request(tgt)
            request.add_header("User-Agent", "Mozilla/5,0")
            opener = urllib2.build_opener()
            data = opener.open(request)
        except:  # TODO: catch explicit exceptions instead (URLError, ValueError, ...)
            return

        self.crawled_set.add(tgt)  # record the fetched URL so queue_links() can skip it

        doc = parse(data).getroot()
        for tag in doc.xpath("//a[@href]"):
            old = tag.get('href')
            fixed = urllib.unquote(old)

            # I don't think you need this, but maybe I'm mistaken.
            # self.links.append(fixed)

            # Add more URLs to the queue.
            self.queue_links(fixed, url)


    def queue_links(self, link, url):
        """I guess this method allows recursive download of URLs that will
        be fetched from the web pages ????
        """

        # for link in links:  # I changed the argument, so now it is just one URL.
        if link.startswith('/'):
            link = "http://" + url.netloc + "/" + link

        elif link.startswith('#'):
            return  # was `continue`; there is no longer a loop here

        elif link.startswith('http'):
            link = 'http://' + url.netloc + '/' + link

        # Add URLs extracted from the HTML text to the queue to fetch them.
        if link.decode('utf-8') not in self.crawled_set:
            self.queue.put(link)


def get_make_mal_list():
    """Open various malware and phishing related blacklists and create a list 
    of URLS from which to compare to the crawled links
    """

    hosts1 = "hosts.txt"
    hosts2 = "MH-sitelist.txt"
    hosts3 = "urls.txt"

    mal_list = []

    with open(hosts1) as first:
        for line1 in first:
            link = "http://" + line1.strip()
            mal_list.append(link)

    with open(hosts2) as second:
        for line2 in second:
            link = "http://" + line2.strip()
            mal_list.append(link)

    with open(hosts3) as third:
        for line3 in third:
            link = "http://" + line3.strip()
            mal_list.append(link)

    return mal_list

def main():

    queue = Queue.Queue()

    # Get malicious URLs.
    mal_urls = set(get_make_mal_list())

    # Create THREAD_NUMBER threads and start them.
    for i in xrange(THREAD_NUMBER):
        cr = Crawler(queue, mal_urls)
        cr.setDaemon(True)  # let the program exit once queue.join() returns
        cr.start()

    # Get all the URLs that you want to fetch and put them in the queue.
    for url in sys.argv[1:]:
        queue.put(url)

    # Wait on the queue until everything has been processed.
    queue.join()


if __name__ == '__main__':
    main()
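
Assuming the three blacklist files exist next to the script, this sketch would be run as, for example, python crawler.py http://example.com http://example.org (hypothetical URLs): main() puts the command-line URLs into the Queue.Queue, the THREAD_NUMBER daemon threads pull them off with get() and mark each one finished with task_done(), and queue.join() returns once every queued URL has been processed.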

The original question, "Python: cannot pop from empty list? When the list is clearly not empty?", is on Stack Overflow: https://stackoverflow.com/questions/4064424/
