Yup. It's the spider option itself that makes proper file naming problematic.
An archiver has to process and rewrite every link and file on every page - in
theory, you could add any number of subroutines to try to recover the "right"
filenames, but that would be hard, slow, and never 100% reliable anyway. The
primary goal of HTTrack is to recreate a browsable website on the local
machine, not to retrieve all directory structures intact, so I suspect this is
very low on the to-do list. wget may get around this, depending on the
permissions on the image directory, but it probably won't. I suspect you're
currently out of luck if you want *both* automated crawling *and* unaltered
filenames.
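
If it helps, something along these lines with wget might be worth a try - just
a rough sketch, with the URL standing in for your actual image directory:

    wget -r -np -l inf http://example.com/images/

-r does the recursive crawl, -np keeps it from wandering above the starting
directory, and -l inf removes the depth limit. wget normally saves files under
their original names and remote paths, so nothing gets renamed unless the site
forces it (redirects, query strings, etc.). It still only finds files that the
pages actually link to, or that a directory listing exposes - which is the
permissions caveat above.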