Is there an option within HTTrack that would enable one to
spider through a website to determine how many files, etc.,
would be downloaded *BEFORE* any are actually downloaded and
stored on one's hard drive?
Such an option would serve as a check against any circular pages
and give some idea of what the endpoint would be in terms of
pages scanned, links followed, and the total number of files
that would be downloaded.
I realize that HTML pages may be downloaded and stored as a
consequence of any such "dry run", but that effort might
later serve as a skeleton for the actual site copy.
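For what it's worth, the closest thing I've found in the option list is priority level 0, which is described as "just scan, don't save anything (for checking links)". If I'm reading the documentation correctly, something like this might be the dry run I'm after (the URL is just a placeholder):

```shell
# Scan-only pass: -p0 should follow links and report what it finds
# without storing any files on disk (per the documented priority modes).
httrack "https://example.com/" -p0

# There also appears to be a --spider shortcut that combines -p0 with
# related flags for a pure link-checking run.
httrack --spider "https://example.com/"
```

I'm not sure whether either form reports the total file count up front, though, or only logs pages as it goes.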