Source: httrack
Section: web
Priority: optional
Maintainer: Xavier Roche <roche@httrack.com>
Standards-Version: 4.6.2
Build-Depends: debhelper (>= 12.0.0), dh-autoreconf, autotools-dev, autoconf, autoconf-archive, automake, libtool, zlib1g-dev, libssl-dev
Homepage: http://www.httrack.com
Vcs-Git: https://github.com/xroche/httrack.git

Package: httrack
Architecture: any
Multi-Arch: same
Depends: ${misc:Depends}, ${shlibs:Depends}
Suggests: webhttrack, httrack-doc
Description: Copy websites to your computer (offline browser)
 HTTrack is an offline browser utility, allowing you to download a
 World Wide Web site from the Internet to a local directory, recursively
 building all directories and getting HTML, images, and other files from
 the server to your computer.
 .
 HTTrack preserves the original site's relative link structure. Simply
 open a page of the "mirrored" website in your browser, and you can
 browse the site from link to link, as if you were viewing it online.
 HTTrack can also update an existing mirrored site and resume
 interrupted downloads. HTTrack is fully configurable and has an
 integrated help system.
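 .
 A minimal usage sketch (the URL and output directory below are
 placeholders, not defaults):
 .
  httrack "http://www.example.com/" -O /tmp/mysite
 .
 Re-running httrack from the same output directory with the --update
 option refreshes an existing mirror.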

Package: webhttrack
Architecture: any
Multi-Arch: same
Depends: ${misc:Depends}, ${shlibs:Depends}, webhttrack-common, iceape-browser | iceweasel | icecat | mozilla | firefox | mozilla-firefox | www-browser | sensible-utils
Replaces: webhttrack-common (<< 3.43.9-2)
Breaks: webhttrack-common (<< 3.43.9-2)
Suggests: httrack, httrack-doc
Enhances: httrack
Description: Copy websites to your computer, httrack with a Web interface
 WebHTTrack is an offline browser utility, allowing you to download a
 World Wide Web site from the Internet to a local directory, recursively
 building all directories and getting HTML, images, and other files from
 the server to your computer, using a step-by-step web interface.
 .
 WebHTTrack preserves the original site's relative link structure. Simply
 open a page of the "mirrored" website in your browser, and you can
 browse the site from link to link, as if you were viewing it online.
 WebHTTrack can also update an existing mirrored site and resume
 interrupted downloads. WebHTTrack is fully configurable and has an
 integrated help system.
 .
  Snapshots: http://www.httrack.com/page/21/
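 .
 A minimal usage sketch: running the command below starts the interface
 and opens the step-by-step wizard in one of the browsers listed in the
 dependencies:
 .
  webhttrack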

Package: webhttrack-common
Architecture: all
Multi-Arch: allowed
Depends: ${misc:Depends}
Description: webhttrack common files
 This package contains the common files of webhttrack, a website copier
 and mirroring utility.

Package: libhttrack2
Architecture: any
Multi-Arch: same
Section: libs
Replaces: libhttrack1
Conflicts: libhttrack1
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: HTTrack website copier library
 This package contains the library part of HTTrack, a website copier and
 mirroring utility.

Package: libhttrack-dev
Architecture: any
Multi-Arch: same
Section: libdevel
Depends: ${misc:Depends}, ${shlibs:Depends}, zlib1g-dev
Description: HTTrack website copier includes and development files
 This package provides the header files and development support needed to
 build programs that use the HTTrack website copier library.
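 .
 A minimal sketch of using the library (the hts_main entry point and the
 httrack-library.h header are assumptions based on the upstream sources;
 verify against the installed headers before relying on them):
 .
   /* mirror.c: drive HTTrack programmatically by passing it the same */
   /* arguments the command-line tool accepts.                        */
   #include <httrack-library.h>
   int main(void) {
       char *argv[] = { "httrack", "http://www.example.com/",
                        "-O", "/tmp/mysite", NULL };
       return hts_main(4, argv);
   }
 .
 Compile with something like: gcc mirror.c -o mirror -lhttrack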

Package: httrack-doc
Architecture: all
Multi-Arch: allowed
Section: doc
Depends: ${misc:Depends}
Description: HTTrack website copier additional documentation
 This package adds supplemental documentation for httrack and webhttrack
 in the form of browsable HTML documentation.

Package: proxytrack
Architecture: any
Multi-Arch: same
Depends: ${misc:Depends}, ${shlibs:Depends}
Suggests: squid, httrack
Description: Build HTTP caches using archived websites copied by HTTrack
 ProxyTrack is a simple proxy server designed to deliver content archived
 by HTTrack sessions. It can aggregate multiple download caches, for
 direct use (through any browser) or as an upstream cache server.
 .
 This proxy can handle HTTP/1.1 proxy connections and is able to reply to
 ICPv2 requests for efficient integration with other cache servers, such
 as Squid. It can also handle transparent HTTP requests, allowing cached
 live connections inside an offline network.
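 .
 A hypothetical invocation (the option name and cache path below are
 assumptions, not documented defaults; consult the usage summary printed
 by the proxytrack binary for the exact syntax):
 .
  proxytrack -p 8080 /var/cache/httrack/mysite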