Hi, I am trying to back up a website. The pages appear to be dynamically
generated with temporary server-side session links.
My question is: is there a way to make HTTrack go into each subdirectory,
parse the HTML there, download the file from the link in that HTML, then move
up and do the next subdirectory, in that order? What appears to be happening
is that it downloads many HTML files first, parses them, and only then starts
downloading the linked files, so by the time it reaches link 25 the link no
longer works because the location has expired server-side.
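If HTTrack has no built-in option to force that traversal order, one possible workaround is to drive it from a shell loop, mirroring one subdirectory per run so each page's links are fetched immediately after that page is parsed. This is only a sketch: `BASE_URL`, the subdirectory names, and the depth are placeholder assumptions, not details from the site in question.

```shell
#!/bin/sh
# Sketch: one HTTrack run per subdirectory, so links are downloaded
# right after the page containing them is parsed, before the
# server-side session link expires.
# BASE_URL and SUBDIRS are hypothetical placeholders.
BASE_URL="http://example.com/files"
SUBDIRS="dir1 dir2 dir3"
DRY_RUN=${DRY_RUN:-1}   # default: only print the commands

for d in $SUBDIRS; do
  # -O: output path, -r2: limit link depth, -c1: single connection
  cmd="httrack $BASE_URL/$d/ -O mirror/$d -r2 -c1"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```

Set `DRY_RUN=0` to actually run the mirrors; whether this beats the session timeout depends on how quickly each per-directory run completes.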
thanks
Pablo