> This is probably a mistyped URL or something - the
> ending of the commandline is: ... -* -%A =
>
> Therefore you must have forgotten a "=" somewhere (?!),
> for example in the URLs or in the scan rules area?)
> Anyway the other link was downloaded, but the "-*" scan
> rule prevented anything else from being downloaded.
Ok, I *think* I understand. I checked that the URL I typed
in was correct. The -* is what I typed in the scan rules
because I didn't want the whole site copied (I was only
after the pdfs). Should I have typed www.site.com-* in the
scan rules instead?
I have no idea where the "-%A =" came from. Unless it was
something to do with checking the box "Get non-html files
related to a link" to catch the pdfs?
>>> Either use filters (scan rules), or use:
-*
... and use Set Options/Links/"Get non-html files related
to a link" to fetch the "books" if they are non-html files
(such as pdf's) <<<
Ok, on re-reading this I'm not sure where I was supposed
to put the -*?
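For what it's worth, here is a sketch of how the scan rules
might look for a pdf-only mirror. The -* goes in the "Scan
rules" box itself (one rule per line), not appended to the
URL; www.site.com is just the placeholder from your example,
and the +*.pdf rule is one standard way to re-include only
pdfs after -* has excluded everything:

  (in WinHTTrack's "Scan rules" box)
  -*
  +*.pdf

  (roughly the same thing from the command line; the
  ./mirror output folder is just an example path)
  httrack "http://www.site.com/" -O ./mirror "-*" "+*.pdf"

Rules are applied in order, so -* first excludes every link,
then +*.pdf re-includes anything ending in .pdf.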