Ignore previous message -- I've got it working now. Thanks for your help.
Two other quick questions, though neither is all that crucial:
1. After getting all of the HTML pages, it goes back and fetches all of the
robots.txt files, which takes a long time. I don't want the robots.txt files --
just the HTML pages in my list of pages to download.
2. Is there any way to keep it from creating *.readme files for every page downloaded?
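In case it helps to clarify what I'm after with both questions, here is a rough
sketch of the behavior I want. This is purely illustrative Python, not the
tool's actual options; the file name "urls.txt" and the output naming scheme
are placeholders:

    # Illustrative sketch only: fetch just the URLs in my list, never request
    # robots.txt, and write nothing besides the pages themselves.
    # "urls.txt" and the output naming are placeholders, not the tool's options.
    import os
    import urllib.request
    from urllib.parse import urlparse

    with open("urls.txt") as f:                  # one URL per line
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        name = os.path.basename(urlparse(url).path) or "index.html"
        with urllib.request.urlopen(url) as resp, open(name, "wb") as out:
            out.write(resp.read())               # the page itself -- no *.readme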
Thanks again.