Hi,
I have a lot of zip archives on my hard drive. These files are named with the
number associated with their catalog entries on the vendor's site. The URLs
(PHP I think, not straight HTML) for the catalog entries are all the same,
except they all end with the aforementioned unique number. I need to download
each product's page (as if I'd saved the "complete" page in a browser) and
save it with the unique number as the name (or at least as the beginning of
the name). It would be nice if each page and its associated files were saved as
a single archive, but I'd be okay with the messy "HTML file and
folder-with-same-name-containing-files" method used by Firefox and Internet
Explorer. And I need to be able to feed it a text file full of URLs.
To sum up:
1) feed it a list of URLs
2) set it to name each saved page by its unique identifier
3) save the web pages separately, not with the structure of the originating
site
Can HTTrack do that? Do you know of any other software that can?
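In case it helps to make this concrete, here's roughly what I have in mind as
a Python sketch. The urls.txt name, the example URL, and the id pattern are
just placeholders, and it only saves the bare HTML, not a "complete" page with
its images and CSS, which is exactly the part I'm hoping an existing tool
handles:

import re
import urllib.request

# One catalog URL per line, e.g. https://example.com/catalog.php?id=12345
# (placeholder -- the real URLs just end with the unique number).
with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # Use the trailing number in the URL as the file name.
    match = re.search(r"(\d+)\D*$", url)
    if not match:
        continue
    product_id = match.group(1)
    # Fetch the page and save it as <number>.html. This grabs only the
    # HTML itself, not the linked images/CSS, so it isn't a complete save.
    with urllib.request.urlopen(url) as response:
        html = response.read()
    with open(product_id + ".html", "wb") as out:
        out.write(html)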