1) Always post the ACTUAL command line used (or line two of the log file) so we
know what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting and the URL from which each is
referenced.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the near flag (get non-html files related to a
link) rather than filters.
5) I always run with: A) No External Pages, so I know where the mirror ends.
B) Browser ID = MSIE 6 (from the pulldown), as some sites don't like an
HTTrack one. C) Attempt to detect all links (for JS/CSS). D) Timeout=60,
retry=9, to keep temporary network interruptions from deleting files.
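For reference, settings A-D above can also be expressed directly on the httrack command line. This is a hedged sketch based on the documented option flags; the URL and output path are placeholders, not from the original post:

```shell
# Sketch: GUI settings A-D as command-line flags (URL/path are illustrative)
httrack "http://example.com/" -O "/tmp/mirror" \
  -%e0 \
  -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" \
  -%P \
  -T60 -R9
# -%e0     : A) external links depth 0 -- no external pages
# -F "..." : B) browser ID = MSIE 6 user-agent string
# -%P      : C) extended parsing -- attempt to detect all links
# -T60 -R9 : D) timeout 60 seconds, retry 9 times
```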
> (winhttrack
> -qwr5%e0C2%Pns1u1%s%uN0%I0p7DaK0m600000,0c3T30H0%kf2
> A25000%f#f -F "Mozilla/4.0 (compatible; MSIE 6.0;
> Windows NT 5.0)" -%F -%l "en, en, *"
> <http://donklipstein.com//light.html> -O1
> C:\chuck\httrack-web\LightDon +*.pdf +*.png +*.gif
> +*.jpg +*.css +*.js -ad.doubleclick.net/*
> -mime:application/foobar )
You are using the near flag, so the filters are unnecessary.
> External Depth set to download 0 external sites. But
> when I download the site, I get lots of files that
> are not on the original site. My project contains
You used the near flag, so you get non-html files wherever they are stored.
Look in hts-cache\new.txt: you won't find a single .htm file that isn't from
donklipstein.com.