HTTrack Website Copier
Free software offline browser - FORUM
Subject: Depth-First
Author: Markus
Date: 03/30/2008 17:33
 
Hi there,

I have already searched the forum for how to make WinHTTrack use depth-first
retrieval. My problem is that I have gathered a list of approx. 400,000 URLs
(I had to do that with Perl because of ugly JavaScript usage) and wanted
WinHTTrack to do the rest. The URLs in turn refer to common shared resources,
and I also wanted to use WinHTTrack to avoid redundant downloads.
However, after 3 hours and roughly 17,000 files and 2 GB of data, WinHTTrack
gets very slow and almost freezes. I assume this is because of the
breadth-first search and the corresponding memory-consuming parallel scanning?
So if it is not possible to "get one URL job" done and then proceed with the
next one (or is it?), how could I work around the problem?
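
For reference, the kind of workaround I have been considering is to split the
list into small batches and run the httrack command line once per batch, all
into the same project folder, so each batch is finished before the next one
starts. A rough sketch in Python (the file name urls.txt, the batch size, and
the project folder are just placeholders I made up, and apart from the -O
output option I have not checked which further httrack options would be
needed):

#!/usr/bin/env python3
# Workaround sketch: feed the URL list to HTTrack in small sequential
# batches instead of one huge breadth-first job.
# Assumptions: the URL list is in urls.txt (one URL per line), the
# httrack command-line binary is on PATH, and all batches write into
# the same project directory.

import subprocess
from itertools import islice

URL_LIST = "urls.txt"    # assumed file name
PROJECT_DIR = "mirror"   # assumed output/project directory
BATCH_SIZE = 500         # assumed batch size; tune to keep memory low

def batches(path, size):
    """Yield lists of up to `size` URLs read from the list file."""
    with open(path) as f:
        urls = (line.strip() for line in f if line.strip())
        while True:
            chunk = list(islice(urls, size))
            if not chunk:
                return
            yield chunk

for i, chunk in enumerate(batches(URL_LIST, BATCH_SIZE), start=1):
    # One httrack run per batch: each run finishes ("one URL job done")
    # before the next starts, which roughly approximates depth-first
    # processing of the list.
    cmd = ["httrack", *chunk, "-O", PROJECT_DIR]
    print(f"batch {i}: {len(chunk)} URLs")
    subprocess.run(cmd, check=True)

I am not sure whether reusing the same project directory is enough for
httrack to skip files it has already downloaded in earlier batches, so that
part is an open question.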


Many thanks!

Markus
 