> Hi, I'm trying to download a large number of HTML
> pages (800K to be exact) and I have tried adding the
> list of 800K URLs to HTTrack and it begins to work
> but it seems to go through the URLs and download
Hmm, you may try adding -%N0 (disable type checks) to the scan rules to
speed up the process. But HTTrack will also have trouble beyond 100K URLs;
you may also add -#L10000000 to bypass this limit. In any case, 800K URLs is
a bit big for a small-scale program like HTTrack :)
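On the command line, the two options could be combined roughly like this (a
sketch only: urls.txt and the mirror/ output directory are placeholder names,
and the list file is expected to hold one URL per line):

    httrack --list urls.txt -O mirror/ -%N0 -#L10000000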