java - How to crawl multiple URLs with jsoup

Tags: java web-crawler jsoup

I have the code below, which uses jsoup to crawl a website, but I want to crawl multiple URLs at the same time. I store the URLs in an array, but I can't get it to work. If I want to use this code, how do I implement it with multithreading? Is multithreading suitable for an application like this?

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Webcrawler {
    public static void main(String[] args) throws IOException {

        String [] url = {"http://www.dmoz.org/","https://docs.oracle.com/en/"}; 
        //String [] url = new String[3];
        //url[0] = "http://www.dmoz.org/";
        //url[1] = "http://www.dmoz.org/Computers/Computer_Science/";
        //url[2] = "https://docs.oracle.com/en/";

        System.out.println("Sites to be crawled:");
        for (String u : url) {
            System.out.println(" " + u);
        }
        //String url = "http://www.dmoz.org/";
        print("\nFetching %s...", url);

        Document doc = Jsoup.connect(url[0]).get();
        Elements links = doc.select("a");
        //doc.select("a[href*=https]"); // selects links whose href value contains "https"
        print("\nLinks: (%d)", links.size());
        for (Element link : links) {
            print(" (%s)", link.absUrl("href") /*link.attr("href")*/, trim(link.text(), 35));     
        }
    }

    private static void print(String msg, Object... args) {
        System.out.println(String.format(msg, args));
    }

    private static String trim(String s, int width) {
        if (s.length() > width)
            return s.substring(0, width-1) + ".";
        else
            return s;
    }
}

Best answer

You can use multiple threads and crawl several websites at the same time. The code below does what you need. I'm pretty sure it can be improved a lot (for example by using an Executor; see the sketch after the code), but I wrote it quickly.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class Main {

    public static void main(String[] args) {

        String[] urls = new String[]{"http://www.dmoz.org/", "http://www.dmoz.org/Computers/Computer_Science/", "https://docs.oracle.com/en/"};

        // Create and start workers
        List<Worker> workers = new ArrayList<>(urls.length);
        for (String url : urls) {
            Worker w = new Worker(url);
            workers.add(w);
            new Thread(w).start();
        }

        // Retrieve results
        for (Worker w : workers) {
            Elements results = w.waitForResults();
            if (results != null)
                System.out.println(w.getName()+": "+results.size());
            else
                System.err.println(w.getName()+" had some error!");
        }
    }
}

class Worker implements Runnable {

    private final String url;
    private Elements results;
    private boolean done;   // set when the worker finishes, with or without results
    private final String name;
    private static int number = 0; // only incremented on the main thread, in the constructor

    private final Object lock = new Object();

    public Worker(String url) {
        this.url = url;
        this.name = "Worker-" + (number++);
    }

    public String getName() {
        return name;
    }

    @Override
    public void run() {
        try {
            Document doc = Jsoup.connect(this.url).get();

            Elements links = doc.select("a");

            // Publish the results under the lock
            synchronized (lock) {
                this.results = links;
            }
        } catch (IOException e) {
            // You should implement better error handling here..
            System.err.println("Error while parsing: " + this.url);
            e.printStackTrace();
        } finally {
            // Always wake the waiting thread, even after a failure,
            // so waitForResults() cannot block forever
            synchronized (lock) {
                this.done = true;
                lock.notifyAll();
            }
        }
    }

    public Elements waitForResults() {
        synchronized (lock) {
            try {
                while (!this.done) {
                    lock.wait();
                }
                return this.results; // null if the worker failed
            } catch (InterruptedException e) {
                // Again, better error handling belongs here
                Thread.currentThread().interrupt();
                return null;
            }
        }
    }
}
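
As noted above, an Executor makes this much cleaner: submit each fetch as a Callable and let Future.get() do the blocking instead of hand-rolled wait()/notifyAll(). Below is a minimal sketch of that idea; the class name ExecutorCrawler and the pool sizing are illustrative choices, not part of the original answer.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.jsoup.Jsoup;
import org.jsoup.select.Elements;

public class ExecutorCrawler {

    public static void main(String[] args) throws InterruptedException {
        String[] urls = {"http://www.dmoz.org/", "https://docs.oracle.com/en/"};

        ExecutorService pool = Executors.newFixedThreadPool(urls.length);
        List<Future<Elements>> futures = new ArrayList<>();

        // Submit one task per URL; each task returns the links it found
        for (String url : urls) {
            futures.add(pool.submit((Callable<Elements>) () ->
                    Jsoup.connect(url).get().select("a")));
        }

        // Future.get() blocks until the task finishes; an IOException thrown
        // inside a task surfaces here wrapped in an ExecutionException
        for (int i = 0; i < urls.length; i++) {
            try {
                System.out.println(urls[i] + ": " + futures.get(i).get().size() + " links");
            } catch (ExecutionException e) {
                System.err.println(urls[i] + " had some error: " + e.getCause());
            }
        }

        pool.shutdown();
    }
}

Compared to the Worker class above, the thread pool bounds the number of concurrent connections, and failures surface as exceptions from Future.get() rather than through a shared flag.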

On java - how to crawl multiple URLs with jsoup, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/35939635/
