From ad5b7acc19290ff91e0f42a0de448a26760fcf99 Mon Sep 17 00:00:00 2001 From: Xavier Roche Date: Mon, 19 Mar 2012 12:36:11 +0000 Subject: Imported httrack 3.20.2 --- HelpHtml/step9_opt1.html | 156 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 156 insertions(+) create mode 100644 HelpHtml/step9_opt1.html (limited to 'HelpHtml/step9_opt1.html') diff --git a/HelpHtml/step9_opt1.html b/HelpHtml/step9_opt1.html new file mode 100644 index 0000000..d4ba2f6 --- /dev/null +++ b/HelpHtml/step9_opt1.html @@ -0,0 +1,156 @@ + + + + + + + HTTrack Website Copier - Offline Browser + + + + + + + + + +
HTTrack Website Copier
+ + + + +
Open Source offline browser
+ + + + +
+ + + + +
+ + + + +
+ + +

Option panel: Links

+ +
+ +
    +
    +

    + +
  • Attempt to detect all links
  • +
    Asks the engine to try to detect all links in a page, even in unknown tags or unknown JavaScript code. This can generate bad requests or errors in pages, but may help to catch all desired links +
    Useful, for example, in pages with many JavaScript tricks +
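For the command-line build of httrack, this checkbox appears to correspond to the extended-parsing switch; a hedged sketch (the URL and output directory are placeholders):

```shell
# %P (--extended-parsing) is believed to be the CLI equivalent of
# "Attempt to detect all links": parse links even in unknown tags
# or JavaScript code.
httrack "http://example.com/" -O ./mirror %P
```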


    + +
  • Get non-html files related to a link
  • +
    This option allows you to catch all file references in captured HTML files, even external ones +
    For example, if an image in an HTML page has its source on another web site, that image will be captured as well. +
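On the command line, this option is believed to map to the "near" switch; a hedged sketch with placeholder URL and output directory:

```shell
# -n (--near) should fetch non-HTML files referenced by captured
# pages (e.g. an image hosted on another site), without mirroring
# that other site itself.
httrack "http://example.com/" -O ./mirror -n
```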


    + +
  • Test validity of all links
  • +
    This option forces the engine to test all links in spidered pages, i.e. to check whether every link is valid by performing a request to the server. If an error occurs, it is reported in the error log-file. +
    Useful to test all external links in a website +
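The CLI equivalent appears to be the test switch; a hedged sketch (placeholder URL and output directory):

```shell
# -t (--test) is believed to request every link, even ones that
# would not be captured, and report broken links to the error log.
httrack "http://example.com/" -O ./mirror -t
```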


    + +
  • Get HTML files first!
  • +
    With this option enabled, the engine will attempt to download all HTML files first, and + then download the other files (images, archives, etc.). This can speed up the parsing process, by efficiently scanning + the HTML structure. +
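If memory serves, this checkbox corresponds to one of the priority-mode values on the command line; a hedged sketch (placeholder URL and output directory, and the exact mode number is an assumption):

```shell
# -p7 (priority mode 7, "get HTML files before, then treat other
# files") appears to match "Get HTML files first!".
httrack "http://example.com/" -O ./mirror -p7
```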

    +
+ +



+

Back to Home

+ + +
+
+
+ + + + + +
+ + + + + +