I have a website: <http://cgauxed.org>. I simply want to copy the entire site
starting at the document root, and EVERYTHING BELOW it. I don't need
anything from any other site (i.e., in a different domain). This includes any
linked documents (.pdf, images, .doc, etc.) stored in this domain.
Apparently, a robots.txt file is preventing ANYTHING from being captured
except the index.html page. I've read the forum, but the responses always just
say "change the option". But how???
I have spent hours trying to set up a filter to do this simple mirror, to no
avail. Help!
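
In case it helps anyone point me at the right switch, here is a sketch of the command line I believe should do what I want, assuming HTTrack's CLI (the directory name `cgauxed-mirror` is just my choice; `-s0` is HTTrack's "never follow robots.txt rules" spider option, and the `+` pattern is a scan filter limiting the crawl to this domain):

```shell
# Mirror http://cgauxed.org into ./cgauxed-mirror, ignoring robots.txt.
#   -O   : output (mirror) directory
#   -s0  : never follow robots.txt rules (sN spider option, N=0)
#   "+*cgauxed.org/*" : filter keeping the crawl inside this domain,
#                       including linked .pdf/.doc/image files stored there
httrack "http://cgauxed.org/" -O ./cgauxed-mirror "+*cgauxed.org/*" -s0
```

In the GUI (WinHTTrack), I gather the equivalent setting lives under the Spider options, where the robots.txt behavior can be changed, but confirmation would be appreciated.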