HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: For the next version.
Author: WHRoeder
Date: 12/16/2012 14:02
 
1) Always post the ACTUAL command line used (or line two of the log file) so we
know what the site is, what ALL your settings are, etc.
2) Always post the URLs you're not getting and the URL they are linked from.
3) Always post anything USEFUL from the log file.
4) If you want everything, use the near flag (get non-html files related), not
filters.
5) I always run with No External Pages so I know where the mirror ends. I
always run with browser ID=msie6 as some sites don't like an HTT one. I always
run with Attempt to detect all links. (A sample command line follows.)
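For reference, points 4 and 5 translate to a command line roughly like this
(example.com is a placeholder; -n is the near flag, -x is No External Pages,
-F sets the browser ID, and -%P attempts to detect all links):

  httrack "http://www.example.com/" -O /mirror/example -n -x -%P \
    -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"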

> For example some websites that don't lock directory
> listing sometimes contain mp3, avi or jpg files.

If your browser can see the directory listing, so can HTT.

> What happened to me is that i had only 24Gb left on
> my laptop. I thought it would be enough to copy a

Why didn't you use the pause-after-xx-amount option?
> website with unlocked directories. I had to stop
> this httrack operation by lack of space. I
> transfered only the directory structure that httrack
> had grabbed for me. I left other things like
> hts-cache directory, .lock file, .txt log file.
> Then I tried to 'resume' my downloads.

You did NOT try to resume; you tried to continue or update, but the files are
no longer there, so it redownloads them.
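From the command line the two operations look something like this (run from the
project directory; both read the hts-cache directory to know what was already
downloaded, which is exactly what you deleted):

  httrack --continue    (continue an interrupted mirror)
  httrack --update      (update a finished mirror, re-checking files)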

> With filezilla it is a common operation to realize.

blah blah blah. Irrelevant. Not an HTT question...

> It has been impossible for me to find an easy
> solution to restart my downloads. 

Pause after xxx bytes.
Or have it download to wherever you moved the files to.
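If I remember the flag right, that's -G (pause the transfer once N bytes are
downloaded and wait until the lock file is deleted). Something like this for
roughly 20GB of free space:

  httrack "http://www.example.com/" -O /mirror/example -G20000000000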

> Finally i had to manually create an exception rule
> for each directory i thought okay. VERY boring.

Always post what you did; see #1 (there are no mind readers here).
Why do you need filter(s) if you want everything? Do you really expect an
answer when you provide ZERO information on what you are trying to do?
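For the record, filters are just patterns appended to the command line; a
made-up pair that allows mp3 and jpg everywhere would be:

  httrack "http://www.example.com/" -O /mirror/example "+*.mp3" "+*.jpg"

But again, the near flag gets you all of that without writing a single filter.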
> I notice that httrack knows the file size at the
> moment the download starts. It would be nice to
> detect if a same named file exists and compare the
> sizes.

It does this already. Update is very fast (about 2-3 files per second,
depending on the round-trip time to the server) and independent of the size of
the files.

> I don't expect a resume option even if it would be
> great. But just a skip option for large files when
> the size is identical.

There are already pause and resume, and you can cancel and then continue.
blah blah blah

> I also discovered that some directories contain
> uncomplete files and just near the same file named
> '.tmp' also uncomplete. I have now to manually
> perform these type of download.

tmp files are temporary internal processing binary data. NOT your files. They
are gone once the file is downloaded. Stop canceling the download.

> With filezilla, when i have some doubt i just reload
blah blah blah. Not an HTT question.

> Another cool option would be to be able to setup the
> speed limit manually. I perfectly understand the
> non-abuse policy. But 25kb/sec is very slow,

Not an HTT question.
Not if there are many people using HTT on the same site at the same time. But I
tend to agree. I think Xavier should have limited the connections/second to
1-2, since it is the requests that are the big drain.
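If memory serves, both knobs are already settable from the command line
(-A is the maximum transfer rate in bytes/second, -%c the number of
connections per second), for example:

  httrack "http://www.example.com/" -O /mirror/example -A100000 -%c2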

> I still think it is a powerful tool because one of

Not an HTT question
 