1) Always post the ACTUAL command line used (or line two of the log file) so we
know what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting, and the URL from which each is
referenced.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the "near" flag (get non-html files related to a
page) rather than filters.
5) I always run with: A) No External Pages, so I know where the mirror ends.
B) Browser ID = msie 6 from the pulldown, since some sites don't like an HTT
one. C) Attempt to detect all links (for JS/CSS). D) Timeout=60, Retry=9, to
keep temporary network interruptions from deleting files.
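For anyone using the command-line httrack instead of the GUI, the settings in
(5) map roughly onto options like the following. This is only a sketch: the
URL and output directory are placeholders, and the flag letters are from the
httrack documentation as I recall them, so double-check them against
`httrack --help` for your version.

```shell
# -x    : replace external html links by error pages (No External Pages)
# -%P   : extended parsing, try to detect all links (JS/CSS)
# -T60  : timeout of 60 seconds
# -R9   : retry up to 9 times on failure
# -F    : send a browser identity instead of the default HTTrack one
httrack "https://example.com/" -O ./example-mirror \
    -x -%P -T60 -R9 \
    -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
```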
> What should I write in the command line, so I
> download the whole page, with all its files? (zip,
> video, mp3, mp4, img, gif, etc...)
HTT is a web site copier. If you want just one page, it's easier to use your
browser's Save As.
Even with extended parsing, you can't get most videos, because they play via a
SWF. HTT gets the .swf, but when you try to view it, the SWF tries to fetch
the video from the server (which is now your PC), and the videos don't exist
on your PC.
Without links, it cannot be done.