When I do my mirroring, I do not have robots.txt enabled in
HTTrack, for two reasons:
1) A web browser does not use robots.txt, and I need to
capture sites the way a user clicking through them would.
2) I "browse" the target site first and analyse how to
capture it carefully.
I do agree with you, though, Renardrogue, that HTTrack with
its default settings is quite dangerous. In particular, no
connection or speed limits are defined, which basically
lets the program go all-out on a site.
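For anyone who wants gentler behaviour today, HTTrack's command line already exposes these limits. A sketch of a more polite invocation (the URL, output directory, and specific values are just examples, not recommendations):

```shell
# Mirror politely instead of letting HTTrack go all-out.
# -c2     at most 2 simultaneous connections
# -A25000 cap transfer rate at 25000 bytes/second
# -%c1    at most 1 connection attempt per second
# -s0     ignore robots.txt (capture as a browser would)
httrack "https://www.example.com/" -O ./mirror -c2 -A25000 "-%c1" -s0
```

The same options are reachable in WinHTTrack under "Set options" / "Limits" and "Flow control", which is where safer defaults would presumably live.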
Perhaps instead of flaming, we should work constructively
with Xavier to define some safer default settings, and maybe
add in-program warnings when changing "Expert options".