I need to build a web crawler that collects links and other information from a specific website. I also need to use Apache HTTP Client for it, and I've been going through the tutorials on the site for a few days now with no progress. Right now I'm trying to figure out how to fetch the HTML with Apache HttpClient so that I can parse it. Frankly, I may be misunderstanding what HttpClient is for. Any help would be appreciated.
Best Answer
Hmm... that's pretty much it, but... don't be surprised if what you get back differs from what you see in a browser. As I said, you get whatever the server actually returns for the request:
import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.protocol.Protocol;

// Commons HttpClient 3.x: configure the target host once, then issue a GET.
HttpClient client = new HttpClient();
HostConfiguration hostConfig = new HostConfiguration();
hostConfig.setHost("my.site.com", 80, Protocol.getProtocol("http"));
client.setHostConfiguration(hostConfig);

GetMethod getHtmlPageMethod = new GetMethod("/myPage.html");
getHtmlPageMethod.setFollowRedirects(true);
try {
    int responseCode = client.executeMethod(getHtmlPageMethod);
    System.out.println("Got response code: " + responseCode);
    if (200 == responseCode) {
        System.out.println("Response code 200 - SUCCESS ... go for response body... ");
        String responseBody = getHtmlPageMethod.getResponseBodyAsString();
        if (null != responseBody) {
            System.out.println("Got body string:" + System.lineSeparator());
            System.out.println(responseBody);
        } else {
            System.out.println("No response body returned!");
        }
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    // Always release the connection back to the pool.
    getHtmlPageMethod.releaseConnection();
}
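
Note that the snippet above uses the legacy Commons HttpClient 3.x API, which has long been end-of-life. As a minimal sketch of the same fetch with the newer Apache HttpClient 4.x API (my.site.com and /myPage.html are the placeholder host and path from the answer, and the class name FetchPage is hypothetical):

import java.io.IOException;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class FetchPage {
    public static void main(String[] args) throws IOException {
        // The default client already follows redirects for GET requests.
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet("http://my.site.com/myPage.html");
            try (CloseableHttpResponse response = client.execute(get)) {
                int status = response.getStatusLine().getStatusCode();
                System.out.println("Got response code: " + status);
                if (status == 200) {
                    // Consuming the entity fully lets the connection be reused.
                    String body = EntityUtils.toString(response.getEntity());
                    System.out.println(body);
                }
            }
        }
    }
}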
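
Since the goal in the question is to collect links from the fetched HTML, one common option (an assumption on my part, not part of the original answer) is to hand the body string to an HTML parser such as jsoup:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// responseBody is the HTML string obtained above; the base URL lets
// jsoup resolve relative hrefs into absolute ones.
Document doc = Jsoup.parse(responseBody, "http://my.site.com/");
for (Element link : doc.select("a[href]")) {
    // "abs:href" returns the href resolved against the base URL.
    System.out.println(link.attr("abs:href"));
}

jsoup can also fetch pages itself via Jsoup.connect(url).get(), but keeping the fetch in HttpClient matches the constraint stated in the question.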
Regarding "java - Making a crawler with HttpClient", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/58649955/