> What's wrong? I try to use the software to download
> some pages with images on Wikipedia.
> No matter what rules I set, nothing gets downloaded.
> Why?

Look at your log file: you will see a message about robots.txt rules being followed.
Then look at Wikipedia's robots.txt, and you'll see:
User-agent: HTTrack
Disallow: /
along with similar entries disallowing many other download agents.
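As a minimal sketch of what this means in practice, here is how a crawler checks those rules using Python's standard urllib.robotparser (the page URL is only illustrative):

    import urllib.robotparser

    # Fetch and parse Wikipedia's robots.txt
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://en.wikipedia.org/robots.txt")
    rp.read()

    page = "https://en.wikipedia.org/wiki/Main_Page"
    # HTTrack identifies itself as "HTTrack"; the "Disallow: /" rule
    # above blocks it from every path on the site
    print(rp.can_fetch("HTTrack", page))        # False
    # An agent with no matching entry falls back to the generic rules,
    # so the result here depends on the rest of robots.txt
    print(rp.can_fetch("SomeOtherAgent", page))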
If you choose to override the robots.txt rules, you had better put filters in place first, or you may download far more than you intended.
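A plausible command line might then look like the following; the URL and filter patterns are illustrative only. The -s0 option tells HTTrack never to follow robots.txt rules, -O sets the mirror directory, and the trailing +/- scan rules restrict the crawl:

    httrack "https://en.wikipedia.org/wiki/Main_Page" -O ./wiki-mirror -s0 "+en.wikipedia.org/wiki/*" "-*action=*"

Here the "+" rule keeps the crawl inside the /wiki/ article tree, while the "-" rule skips edit/history style URLs that would otherwise multiply the download.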