Hello,
I'm not very familiar with the internals of httrack, so I beg
your pardon if it seems that I don't understand anything
at all (I think Xavier should stop coding and spend at least
the next two months documenting everything :P).
I'd like to know if it's possible to progress through the site
download using a depth-first search algorithm. To make it clearer,
I would like to download links/pages as soon as they are discovered
(using the --level option to put a bound on the recursion would be
mandatory); see the sketch below for what I have in mind.
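
To make that concrete, here is a minimal sketch in Python (this is NOT
httrack's actual code; fetch_html, extract_links and save_page are
hypothetical helpers supplied by the caller). The point it illustrates
is that the only difference between depth-first and breadth-first
crawling is whether pending links sit in a LIFO stack or a FIFO queue:

    def crawl_dfs(start_url, max_depth, fetch_html, extract_links, save_page):
        """Depth-first crawl: every page is fetched as soon as it is
        discovered, with max_depth playing the role of --level."""
        seen = {start_url}
        stack = [(start_url, 0)]        # LIFO stack -> depth-first order
        while stack:
            url, depth = stack.pop()    # .pop() = DFS; .pop(0) would give BFS
            html = fetch_html(url)
            save_page(url, html)
            if depth < max_depth:       # bound the recursion depth
                for link in extract_links(html, url):
                    if link not in seen:        # never queue a URL twice
                        seen.add(link)
                        stack.append((link, depth + 1))

    # Tiny demo on a fake in-memory "site":
    site = {"/": ["/a", "/b"], "/a": ["/a1"], "/b": [], "/a1": []}
    crawl_dfs("/", 2,
              fetch_html=lambda u: u,                 # fake download
              extract_links=lambda html, u: site[u],  # fake link parser
              save_page=lambda u, h: print("saved", u))
    # prints: saved /  saved /b  saved /a  saved /a1  (depth-first order)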
How is the link queue currently managed? Any short explanation of these
concepts would be greatly appreciated.
Thanks!