> Does HTTrack in fact have a limit on the number of links
> to be scanned, or a relation memory <-> links?
The reasonable limit is a few million links. But the more
you allow it to crawl, the more memory HTTrack will eat. For
large-scale mirrors (such as backing up multiple sites for
preservation), you'll need more memory, and you may
experience slowdowns (the internal hashtable will get a bit
overloaded).
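
As a rough sketch of how you might raise the link budget while
keeping memory bounded (the URL, output path, and the 5,000,000
figure below are placeholder assumptions, not values from this
thread):

    # raise the maximum number of links HTTrack may test (-#L),
    # and confine the crawl to one domain with a filter so the
    # internal hashtable does not grow without bound
    httrack "http://www.example.com/" -O /data/mirror \
            "+*.example.com/*" -#L5000000

Each link tested is tracked in that internal hashtable in RAM for
the whole crawl, so tightening the filters matters as much as the
raw limit.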
> I've made several attempts, but after 2 or 3 days of machine
> work the program dies peacefully without errors and without
> apparent activity (that is, nothing is reading the HD).
Nothing reported in hts-log.txt? Maybe the memory was exhausted?
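
A quick way to check (the /data/mirror path is just an assumed
example): look at the end of the crawl log, and watch the
process's memory while it runs:

    # any errors or warnings near the end of the log?
    tail -n 50 /data/mirror/hts-log.txt

    # resident (RSS) and virtual (VSZ) memory of the running crawler
    ps -o pid,rss,vsz,cmd -C httrack

If RSS keeps climbing toward your physical RAM before the silent
death, memory exhaustion is the likely culprit.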