> 1) A web browser does not use robots.txt
Yes, browsers are not robots.
:-)
> and I need to capture sites in a way a user clicking
> through a site would.
And does the webmaster agree? Why do you need this site on your local computer? The site will be updated again; it is online and kept up to date for everyone. A local copy is out of date within a few days (or hours).
Is it for your business? If so, is there really no way to avoid this step? And will you actually use the _whole_ site copied locally?
> I do agree with you, though, Renardrogue, that HTTrack with
> its default settings is quite dangerous. In particular, the
> connections / speed limits are not defined, which basically
> lets the program go all-out on a site.
> Perhaps instead of flaming, we need to constructively work
> with Xavier to define some default settings, and perhaps
> some in-program warnings when changing 'Expert options'.
OK, let's go.
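For a start, here is what "defined limits" could mean in practice. This is a rough Python sketch, not HTTrack's own code, and the one-connection-at-a-time and two-second-delay values are only assumptions to get the discussion going:

import time
import urllib.request

# Assumed "safe default" for discussion, not an actual HTTrack setting:
DELAY_BETWEEN_REQUESTS = 2.0  # seconds to wait between two requests

def polite_fetch(urls):
    """Fetch pages one at a time, pausing between requests,
    so the target server is never hit all-out."""
    pages = {}
    for url in urls:
        with urllib.request.urlopen(url) as resp:  # one connection at a time
            pages[url] = resp.read()
        time.sleep(DELAY_BETWEEN_REQUESTS)
    return pages

The point is only that a mirroring tool should wait between requests by default, instead of opening as many connections as the line allows.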
:-)