HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: trying to download
Author: WHRoeder
Date: 01/11/2013 20:35
 
1) Always post the ACTUAL command line used (or log file line two) so we know
what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting and from what URL.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the near flag (get non-html files related to a
link), not filters.
5) I always run with No External Pages so I know where the mirror ends. I
always run with browser ID=msie6, as some sites don't like an HTTrack one. I
always run with Attempt to detect all links.
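For reference, points 4 and 5 roughly map onto command-line options when running httrack outside the GUI (flags as documented in the httrack manual; the URL and output directory below are placeholders, not from this thread):

```shell
# Sketch of a command line combining the advice above.
# --near : also get non-html files related to a link (point 4, instead of filters)
# -F ... : send an MSIE6-style browser ID, as some sites reject HTTrack's (point 5)
# -x     : replace external links by error pages ("No External Pages")
# -%P    : extended parsing, attempt to detect all links
httrack "http://www.example.com/" -O /mirrors/example \
  --near -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x "-%P"
```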

> <http://www.discoverlife.org/users/l/Losey,_John/JEL>.

> <http://www.discoverlife.org/mp/20p?see=I_JEL2&res=80>
The default is to go down only, and /mp is not down from /users.

> I set up the project for the first link above, to
> download the main text. I added the latter two
> directories in "filter" and "Scan Rules", which is
> what I thought I should do.
That should allow it to get those URLs.
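Since the default is downward-only, scan rules of this shape (using HTTrack's `+`/`-` filter syntax with `*` wildcards; the paths are taken from the URLs quoted above) would explicitly allow the /mp pages alongside the /users start page:

```
+www.discoverlife.org/users/l/Losey,_John/JEL*
+www.discoverlife.org/mp/*
```

Note that `+` includes matching URLs and `-` excludes them; the filters are applied in order, last match wins.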
 