HTTrack Website Copier
Free software offline browser - FORUM
Subject: Dry Run First
Author: RJ Emery
Date: 03/04/2005 19:20
 
Is there an option within HTTrack that would enable one to
spider through a website to determine how many files, etc.,
would be downloaded *BEFORE* any are actually downloaded and
stored on one's hard drive?

Such an option would serve as a check against any circular
page references and would give some idea of the endpoint in
terms of pages scanned, links followed, and the total number
of files that would be downloaded.
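
From the documented option list, the closest candidate I can
see is the -p0 priority mode ("just scan, don't save
anything"), which I believe the --spider shortcut bundles
with a few related flags. A sketch of the sort of invocation
I have in mind (the URL and project directory are
placeholders only):

    # Scan the site without storing any files: -p0 is HTTrack's
    # "just scan, don't save anything" priority mode; -O sets
    # the project directory where the logs are written.
    httrack "http://www.example.com/" -p0 -O /tmp/dryrun

    # Afterwards, hts-log.txt in the project directory should
    # list the links that were followed.

Whether the log actually reports totals in the form described
above is something I would have to test.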

I realize that HTML pages may be downloaded and stored as a
consequence of any such "dry run", but that effort might
later serve as a skeleton for the actual site copy.
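
If so, the documented -p1 priority mode ("save only html
files", which I believe is also exposed as the --skeleton
shortcut) might support exactly that two-pass approach. A
sketch, again with placeholder paths, and with the caveat
that I have not verified that the second pass really reuses
the first:

    # Pass 1: fetch only the HTML skeleton of the site
    # (-p1 = save only html files).
    httrack "http://www.example.com/" -p1 -O /home/user/mirror

    # Pass 2: rerun in the same project directory with the
    # default priority (save all files), as an update of the
    # existing mirror (--update = update without confirmation).
    httrack "http://www.example.com/" --update -O /home/user/mirror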
 
All articles

Subject             Author     Date
Dry Run First       RJ Emery   03/04/2005 19:20
Re: Dry Run First              03/05/2005 22:42