| Hi,
an update:
Previously, the CSS files of my website were not downloaded because they
were located in a directory "protected" by the robots.txt file.
The robots.txt file is now ignored and the CSS files are downloaded - but
they are deleted after httrack finishes.
During the download:
(server) 0 # find . -type f |wc -l
150824
a lot of .tmp files are created, and the CSS files are among these
.tmp files.
Afterwards:
(server) 0 # find . -type f |wc -l
14005
I run it like this:
httrack http://website --robots=0 --keep-alive --extended-parsing
+website/*.css +website/*.html +website/*.gif +website/*.jpg +website/*.png
+website/*.swf -mime:*/* +mime:text/html +mime:image/* -*/print.html
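A guess as to the cause: the filter list excludes all MIME types with
-mime:*/* and only re-includes text/html and image/*. Stylesheets are
served as text/css, so httrack may fetch them into .tmp files while
parsing but then purge them at cleanup as excluded content. A sketch of
the same invocation with text/css re-included (http://website is the
placeholder from above, not a real URL; untested):

```shell
# Sketch: same invocation, but re-including the text/css MIME type so
# the downloaded .css files are not purged as "excluded" after the run.
# Filters are quoted to keep the shell from expanding the * wildcards.
httrack http://website --robots=0 --keep-alive --extended-parsing \
  '+website/*.css' '+website/*.html' '+website/*.gif' '+website/*.jpg' \
  '+website/*.png' '+website/*.swf' \
  '-mime:*/*' '+mime:text/html' '+mime:text/css' '+mime:image/*' \
  '-*/print.html'
```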