File: wget.info,  Node: Recursive Retrieval Options,  Next: Recursive Accept/Reject Options,  Prev: FTP Options,  Up: Invoking

Recursive Retrieval Options
===========================

`-r'
`--recursive'
     Turn on recursive retrieving.  See Recursive Retrieval, for more
     details.
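
     For example, to retrieve a site recursively down to the default
     depth (`SITE' standing in for a real host name, as in the
     examples below):

          wget -r http://SITE/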

`-l DEPTH'
`--level=DEPTH'
     Specify recursion maximum depth level DEPTH (see Recursive
     Retrieval).  The default maximum depth is 5.
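
     For example, to follow links at most two levels deep from the
     starting page:

          wget -r -l 2 http://SITE/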

`--delete-after'
     This option tells Wget to delete every single file it downloads,
     _after_ having done so.  It is useful for pre-fetching popular
     pages through a proxy, e.g.:

          wget -r -nd --delete-after http://whatever.com/~popular/page/

     The `-r' option is to retrieve recursively, and `-nd' to not
     create directories.

     Note that `--delete-after' deletes files on the local machine.  It
     does not issue the `DELE' command to remote FTP sites, for
     instance.  Also note that when `--delete-after' is specified,
     `--convert-links' is ignored, so `.orig' files are simply not
     created in the first place.
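
     Since Wget honors the `http_proxy' environment variable, such a
     pre-fetching run through a proxy might look like this (the proxy
     address is, of course, only illustrative):

          http_proxy=http://proxy.example.com:3128/ \
            wget -r -nd --delete-after http://whatever.com/~popular/page/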

`-k'
`--convert-links'
     After the download is complete, convert the links in the document
     to make them suitable for local viewing.  This affects not only
     the visible hyperlinks, but any part of the document that links to
     external content, such as embedded images, links to style sheets,
     hyperlinks to non-HTML content, etc.

     Each link will be changed in one of two ways:

        * The links to files that have been downloaded by Wget will be
          rewritten as relative links pointing to the local copies.

          Example: if the downloaded file `/foo/doc.html' links to
          `/bar/img.gif', also downloaded, then the link in `doc.html'
          will be modified to point to `../bar/img.gif'.  This kind of
          transformation works reliably for arbitrary combinations of
          directories.

        * The links to files that have not been downloaded by Wget will
          be changed to include the host name and absolute path of the
          location they point to.

          Example: if the downloaded file `/foo/doc.html' links to
          `/bar/img.gif' (or to `../bar/img.gif'), then the link in
          `doc.html' will be modified to point to
          `http://HOSTNAME/bar/img.gif'.

     Because of this, local browsing works reliably: if a linked file
     was downloaded, the link will refer to its local name; if it was
     not downloaded, the link will refer to its full Internet address
     rather than presenting a broken link.  The fact that the former
     links are converted to relative links ensures that you can move
     the downloaded hierarchy to another directory.

     Note that only at the end of the download can Wget know which
     links have been downloaded.  Because of that, the work done by
     `-k' will be performed at the end of all the downloads.
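
     A typical use is to combine `-k' with recursive retrieval, so
     that the whole downloaded tree is browsable offline, e.g.:

          wget -r -k http://SITE/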

`-K'
`--backup-converted'
     When converting a file, back up the original version with a `.orig'
     suffix.  Affects the behavior of `-N' (see HTTP Time-Stamping
     Internals).
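
     For example, to keep a pristine `.orig' copy of each converted
     file while using time-stamping:

          wget -r -N -k -K http://SITE/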

`-m'
`--mirror'
     Turn on options suitable for mirroring.  This option turns on
     recursion and time-stamping, sets infinite recursion depth and
     keeps FTP directory listings.  It is currently equivalent to `-r
     -N -l inf -nr'.
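
     That is, the following two commands behave identically:

          wget -m http://SITE/
          wget -r -N -l inf -nr http://SITE/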

`-p'
`--page-requisites'
     This option causes Wget to download all the files that are
     necessary to properly display a given HTML page.  This includes
     such things as inlined images, sounds, and referenced stylesheets.

     Ordinarily, when downloading a single HTML page, any requisite
     documents that may be needed to display it properly are not
     downloaded.  Using `-r' together with `-l' can help, but since
     Wget does not ordinarily distinguish between external and inlined
     documents, one is generally left with "leaf documents" that are
     missing their requisites.

     For instance, say document `1.html' contains an `<IMG>' tag
     referencing `1.gif' and an `<A>' tag pointing to external document
     `2.html'.  Say that `2.html' is similar but that its image is
     `2.gif' and it links to `3.html'.  Say this continues up to some
     arbitrarily high number.

     If one executes the command:

          wget -r -l 2 http://SITE/1.html

     then `1.html', `1.gif', `2.html', `2.gif', and `3.html' will be
     downloaded.  As you can see, `3.html' is without its requisite
     `3.gif' because Wget is simply counting the number of hops (up to
     2) away from `1.html' in order to determine where to stop the
     recursion.  However, with this command:

          wget -r -l 2 -p http://SITE/1.html

     all the above files _and_ `3.html''s requisite `3.gif' will be
     downloaded.  Similarly,

          wget -r -l 1 -p http://SITE/1.html

     will cause `1.html', `1.gif', `2.html', and `2.gif' to be
     downloaded.  One might think that:

          wget -r -l 0 -p http://SITE/1.html

     would download just `1.html' and `1.gif', but unfortunately this
     is not the case, because `-l 0' is equivalent to `-l inf'--that
     is, infinite recursion.  To download a single HTML page (or a
     handful of them, all specified on the commandline or in a `-i' URL
     input file) and its (or their) requisites, simply leave off `-r'
     and `-l':

          wget -p http://SITE/1.html

     Note that Wget will behave as if `-r' had been specified, but only
     that single page and its requisites will be downloaded.  Links
     from that page to external documents will not be followed.
     Actually, to download a single page and all its requisites (even
     if they exist on separate websites), and make sure the lot
     displays properly locally, this author likes to use a few options
     in addition to `-p':

          wget -E -H -k -K -p http://SITE/DOCUMENT

     To finish off this topic, it's worth knowing that Wget's idea of an
     external document link is any URL specified in an `<A>' tag, an
     `<AREA>' tag, or a `<LINK>' tag other than `<LINK
     REL="stylesheet">'.

