A web page displayed in a browser consists of an HTML document plus a number of objects (CSS, JS, images, and so on). I would like to use the wget command to save all of them to my hard disk so that I can later load the page from my local machine. Is that possible?
Note: I want a single page, not all pages of the site or anything like that.
Best answer
Use the following command:
wget -E -k -p http://example.com
Details of the switches:
-E
If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp .[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://example.com/article.cgi?25 will be saved as article.cgi?25.html.
-k
After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
-p
This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
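Putting the switches together, a minimal sketch might look like the following. The URL `http://example.com` and the output directory `saved-page` are placeholders; `-P` is an optional extra that keeps the result in its own directory rather than the current one.

```shell
# Placeholder URL for the page you want to save.
url="http://example.com"

# Build the wget invocation:
#   -E  append .html to files served as HTML without that suffix
#   -k  rewrite links in the saved document for local viewing
#   -p  also download page requisites (images, stylesheets, scripts)
#   -P  (optional) store everything under the saved-page directory
cmd="wget -E -k -p -P saved-page $url"

# Print the command; run it to fetch the page and its objects.
echo "$cmd"
```

After the download finishes, open the saved `.html` file under `saved-page/` in a browser; thanks to `-k`, its links to the downloaded images and stylesheets point at the local copies.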
Regarding "linux - How to save a web page and its objects with wget?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/39970688/