> Hi, I am trying to back up a website. It appears
> that the pages are dynamically created with a
> temporary server-side link / session....
Hmm... this kind of site is generally a real pain to capture.
> My question is: is there a way for httrack to go
> into each subdirectory and parse the HTML, then download
> the file from the link in the HTML, then move up a
> subdirectory and do the next one? (in that order)
No -- the parser is layer-oriented, and changing this behaviour would not be a
trivial task (mainly because of the reallocation scheme and the chained links
in the internal code).