HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: Bug in WinHTTrack?
Author: WHRoeder
Date: 03/15/2013 14:24
1) Always post the ACTUAL command line used (or log file line two) so we know
what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting and the URL they are linked from.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the near flag (get non-html files related), not
filters.
5) I always run with A) No External Pages so I know where the mirror ends.
With B) browser ID=msie 6 pulldown, as some sites don't like an HTTrack one.
With C) Attempt to detect all links (for JS/CSS). With D) Timeout=60, retry=9,
so temporary network interruptions don't cause files to be deleted.
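For reference, settings B, C and D above map roughly to the following command
line (a sketch, not my exact invocation: example.com and the mirror directory
are placeholders, the -F string is one common MSIE 6 identity, and setting A
"No External Pages" is a WinHTTrack build option not shown here):

```shell
# Sketch: browser ID, extended link parsing, timeout and retry settings.
httrack "http://example.com/" -O ./mirror \
  -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" \
  -%P \
  -T60 -R9
```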

> ONLY store files of a specific extension - in this
> case, ZIP files. (emphasis added)

Can't be done. Unless you have all the URLs to them, you MUST let it spider
the html to get them.

> I was assuming that entering the filters on the scan
> mode tab would act identically, but it would appear
> not to be the case.

> often given to use a "-*" filter, but that always
> seems to block *anything* getting downloaded.
> (Unsurprisingly, in my opinion!)

Why are you surprised that a 'get nothing' filter gets nothing?

> I attempt to use the +*.zip, but whatever filter I

Since the default is to get everything on a site, that filter by itself does
nothing extra.

> to "exclude everything but this extension"...

Of course not; to do that you must combine them. You want nothing but html and
zips: -* +*.html +*.zip
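On the command line, the same rule set is passed as scan rules after the URL
(a sketch; example.com and the output directory are placeholders):

```shell
# Mirror only the html (so the spider can follow links) plus the zip files.
# "-*" excludes everything first; "+*.html" and "+*.zip" then re-include
# what we want. Quote the rules so the shell doesn't expand the wildcards.
httrack "http://example.com/" -O ./mirror "-*" "+*.html" "+*.zip"
```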