From 25adbdabb47499fe641c7bd9595024ff82667058 Mon Sep 17 00:00:00 2001
From: Xavier Roche
Date: Mon, 19 Mar 2012 12:51:31 +0000
Subject: httrack 3.30.1
---
 HelpHtml/step9_opt1.html | 156 -----------------------------------------------
 1 file changed, 156 deletions(-)
 delete mode 100644 HelpHtml/step9_opt1.html

(limited to 'HelpHtml/step9_opt1.html')

diff --git a/HelpHtml/step9_opt1.html b/HelpHtml/step9_opt1.html
deleted file mode 100644
index d4ba2f6..0000000
--- a/HelpHtml/step9_opt1.html
+++ /dev/null
@@ -1,156 +0,0 @@
HTTrack Website Copier - Offline Browser
HTTrack Website Copier
Open Source offline browser
Option panel: Links
  • Attempt to detect all links
    Asks the engine to try to detect all links in a page, even in unknown tags or unknown JavaScript code. This can generate bad requests or errors in pages, but may be helpful to catch all desired links.
    Useful, for example, in pages with many JavaScript tricks.
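The idea behind this option can be sketched as follows. This is a hypothetical illustration, not HTTrack's actual parser: instead of reading only known tags such as `<a href>` or `<img src>`, an "aggressive" detector scans the raw page text for anything that looks like a URL, including strings inside unknown attributes or JavaScript code, accepting some false positives in exchange for catching more links.

```python
import re

# Match any quoted string that looks like a URL or a file with a known
# web extension.  The backreference \1 makes the closing quote match the
# opening one.  This is a deliberately loose heuristic, like the option
# describes: it may also catch strings that are not real links.
URL_RE = re.compile(
    r"""(["'])(https?://[^"'\s]+|[\w./-]+\.(?:html?|css|js|png|jpe?g|gif))\1""",
    re.IGNORECASE,
)

def detect_all_links(page_text):
    """Return every URL-like quoted string found anywhere in the page."""
    return [m.group(2) for m in URL_RE.finditer(page_text)]

page = '''<a href="index.html">home</a>
<div data-target="pics/logo.png"></div>
<script>window.location = "http://example.com/next.html";</script>'''

# Finds the normal link, the link in the unknown attribute, and the one
# inside the JavaScript code.
print(detect_all_links(page))
```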
  • Get non-HTML files related to a link
    This option allows you to catch all file references in captured HTML files, even external ones.
    For example, if an image in an HTML page has its source on another web site, this image will be captured as well.
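The behavior this option describes can be sketched with Python's standard library. This is an illustrative sketch, not HTTrack's code: it collects every file referenced by a page (images, stylesheets, scripts) and resolves each reference against the page's URL, so references pointing at another web site are kept rather than skipped.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetCollector(HTMLParser):
    """Collect the URLs of all files referenced by a page, external ones
    included.  AssetCollector and SRC_ATTRS are names chosen for this
    example, not part of any real library."""

    # Which attribute holds the file reference for each tag of interest.
    SRC_ATTRS = {"img": "src", "script": "src", "link": "href"}

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.assets = []

    def handle_starttag(self, tag, attrs):
        wanted = self.SRC_ATTRS.get(tag)
        if wanted:
            for name, value in attrs:
                if name == wanted and value:
                    # urljoin keeps absolute (external) URLs untouched and
                    # resolves relative ones against the page's own URL.
                    self.assets.append(urljoin(self.base_url, value))

page = '<img src="logo.png"><img src="http://other.example.net/banner.gif">'
collector = AssetCollector("http://www.example.com/index.html")
collector.feed(page)
print(collector.assets)
# → ['http://www.example.com/logo.png', 'http://other.example.net/banner.gif']
```

The external image on `other.example.net` is listed alongside the local one, which is exactly what enabling the option adds.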
  • Test validity of all links
    This option forces the engine to test all links in spidered pages, i.e. to check whether every link is valid by performing a request to the server. If an error occurs, it is reported to the error log-file.
    Useful to test all external links in a website.
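A minimal link checker in the spirit of this option might look like the following sketch (hypothetical code, not HTTrack's implementation): issue a request for each link and, on failure, append a line destined for the error log.

```python
import urllib.request
import urllib.error

def check_link(url, error_log):
    """Request the URL and report failures to error_log (a list standing
    in for the error log-file).  Uses HEAD to avoid downloading bodies."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=2) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, OSError) as error:
        # "If an error occurs, it is reported to the error log-file."
        error_log.append(f"{url}: {error}")
        return False

errors = []
# Nothing listens on port 1, so this link is reported as broken.
ok = check_link("http://127.0.0.1:1/", errors)
print(ok, errors)
```

Real crawlers often fall back to a ranged GET when a server rejects HEAD; that refinement is omitted here for brevity.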
  • Get HTML files first!
    With this option enabled, the engine will attempt to download all HTML files first, and then download the other files (images, etc.). This can speed up the parsing process by efficiently scanning the HTML structure.
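The scheduling idea can be sketched with two queues (an illustration under assumed names, not HTTrack's scheduler): non-HTML downloads only start once every known HTML page has been taken, so the link structure is discovered as early as possible.

```python
from collections import deque

# Crude heuristic for "this URL is an HTML page"; assumed for the example.
HTML_SUFFIXES = (".html", ".htm", "/")

def enqueue(url, html_queue, other_queue):
    """Sort each discovered URL into the HTML queue or the other queue."""
    if url.lower().endswith(HTML_SUFFIXES):
        html_queue.append(url)
    else:
        other_queue.append(url)

def next_download(html_queue, other_queue):
    """Always prefer a pending HTML page over any other file."""
    if html_queue:
        return html_queue.popleft()
    if other_queue:
        return other_queue.popleft()
    return None

html_q, other_q = deque(), deque()
for url in ["a.png", "index.html", "b.gif", "about.htm"]:
    enqueue(url, html_q, other_q)

order = []
while (url := next_download(html_q, other_q)):
    order.append(url)
print(order)  # → ['index.html', 'about.htm', 'a.png', 'b.gif']
```

In a real crawler, parsing a fetched HTML page would feed newly discovered URLs back through `enqueue`, so late-found HTML pages still jump ahead of pending images.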
Back to Home
-- cgit v1.2.3