1) Always post the ACTUAL command line used (or line two of the log file, which
records it) so we know what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting and the URL they are referenced
from.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the 'near' flag (get non-html files related),
not filters.
5) I always run with: A) No External Pages, so I know where the mirror ends;
B) Browser ID = the MSIE 6 pulldown, as some sites don't like an HTTrack one;
C) Attempt to detect all links (for JS/CSS); and D) Timeout=60, retry=9, so
temporary network interruptions don't delete files. (A rough command-line
equivalent of these settings is sketched right after this list.)
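For reference, here is a sketch of those settings as a command line. The URL,
output path and the exact MSIE 6 user-agent string are placeholders; check
httrack --help for the authoritative flag spellings:

  httrack http://www.example.com/ -O "C:\websites\example" -x -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -%P -T60 -R9

  -x    no external pages (external links replaced by error pages)
  -F    browser identity (user-agent) string
  -%P   extended parsing, attempt to detect all links
  -T60  timeout of 60 seconds
  -R9   up to 9 retries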
> Hi. I want to only download certain file types
> stored within websites such as *.doc, *.pdf, *.xls
> etc.
>
> Is there a way to scan an entire website for these
> files without downloading anything else?
If you have the URLs of those files, no problem. Otherwise it can't be done:
you MUST let it spider the site (download the html) to find them.
You can filter everything else out:
-* +*.html +*.doc +*.pdf +*.xls
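Put together as a command line, that might look like the following (the URL
and output folder are placeholders; depending on the site you may also need
+*.htm or uppercase variants, and the filters are quoted so a shell does not
expand the wildcards):

  httrack http://www.example.com/ -O "C:\websites\example" "-*" "+*.html" "+*.doc" "+*.pdf" "+*.xls"

The -* rule excludes everything first; the later + rules re-include the html
pages (needed for spidering) and the document types you want.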
> Also, if the above is possible can the files be sent
> to a single folder rather than the folder the files
> were in at the website?
Yes, change the local structure
<http://www.httrack.com/html/step9_opt5.html>: for example, html in web/ with
other files in web/other/, or html in web/ with other files in web/xxx (where
xxx is the file extension).
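If you are on the command line rather than the WinHTTrack pulldown, the same
setting is the build/structure option -N (a sketch only; the exact structure
codes are listed on the page above and in httrack --help). -N4, for instance,
should put html in web/ and every other file in web/xxx, where xxx is its
extension:

  httrack http://www.example.com/ -O "C:\websites\example" -N4 "-*" "+*.html" "+*.doc" "+*.pdf" "+*.xls"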