I'm trying to copy a website using the command-line HTTrack tool. This tool pulls
all the web pages (as per the configuration) and saves them as .tmp files. If
there is no active connection (or at the end of the crawl), it converts the
.tmp pages into normal .html pages with some additional header information.
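For reference, this is roughly how I'm invoking it (the URL, output directory, and filter pattern below are placeholders, not my actual values):

```shell
# Mirror a site into a local directory; HTTrack downloads pages as .tmp
# files and converts them to .html when the crawl finishes.
httrack "http://example.com/" \
    -O "/tmp/mirror" \
    "+*.example.com/*" \
    -v
```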
My requirement is that the crawl should not convert the .tmp pages into .html.
Can anyone tell me how to do this? Also, what are the drawbacks of stopping
this conversion?
Thanks in advance.