> e.g. <http://www.domain.com/> -#L1000000
> Am I correct?
Yes
> Also, looking at Options > Limits, I can see that the
> maximum number in the drop-down list is 50000000. Does it
> mean that 50000000 is the maximum limit, or would the
No, those are just predefined values, but you can enter
whatever value you want. The virtual limit is 2,000,000,000
links, BUT you won't be able to spider that many links,
because you won't have enough memory to pre-allocate the
engine's internal structures.
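For example, with the command-line version you can pass a
larger value directly through the same -#L option quoted
above (the URL, output path and value here are only
placeholders, not your actual project):

  httrack "http://www.domain.com/" -O "/mirror/domain" -#L10000000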
> option of -#L1000000 see to it that there is NO LIMIT at
> all? That is what I am aiming for, because the site I am
> downloading has far more URLs than 50000000. :)
Err, seriously?? The complete French web should be below
this value :))
> 2) What exactly is the 'Cache' stuff? I stopped the
> download yesterday and started it again. It started
> reading from the cache and 'parsing html'... Then
> suddenly it gave me a message saying that the 1000000
> limit had been crossed and hence it had to stop. Then,
> after changing the options to increase the limit, it
> started downloading the whole site from the start. Why is
> this happening? I thought it was supposed to 'parse' the
> files again and then continue the interrupted
> download. Please help me.
Select 'Continue', not 'Update'; this will be MUCH
faster. The engine will always recheck the local structure
anyway, to rebuild the internal link table.
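If you use the command-line version, the equivalent of
'Continue' is the --continue flag, run from the project
directory (the path here is only an example):

  cd /mirror/domain
  httrack --continue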
> I even checked the old.dat file, but the file is only a
> few kilobytes. Have I made an error somewhere?
The new.dat file should contain all HTML data, and
therefore should be approximately the same size as all the
HTML downloaded.
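If you want to check this yourself, the cache lives in the
hts-cache subfolder of the project directory (the path here
is only an example):

  ls -lh /mirror/domain/hts-cache/
  # new.dat holds the cached data and new.ndx is its index;
  # old.dat/old.ndx are the versions from the previous run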