> Why am I getting an error message when I try to download
> a page of the German newspaper 'Frankfurter Rundschau'
> (www.fr-aktuell.de)? The similar product 'site snagger'
> doesn't have this problem.
> Does anyone have an idea?
HTTrack, by default, respects robots.txt rules - this might
be the cause here. You can bypass them (Options/Spider),
but only with great care (for example, by setting
reasonable bandwidth limits, such as 2 simultaneous
connections and no more than 10KB/s).
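
For reference, the same settings can be applied from the
command line; as a sketch, assuming the standard httrack
options (-s0 to ignore robots.txt, -cN for the number of
simultaneous connections, -AN for the transfer-rate cap in
bytes per second), something like:

  httrack "http://www.fr-aktuell.de/" -O ./fr-aktuell -s0 -c2 -A10000

would mirror the limits suggested above (2 connections,
10KB/s maximum).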