I am wondering how to properly use wildcards when downloading a web site for
offline viewing. For example, Google Print has PDF files available for some
books in the public domain. They also have full copies of other public-domain
books that do not have the PDF download option. The site I want to
use offline is
<http://books.google.com/books?vid=OCLC00296148&id=QskNe3Zynl0C&pg=PA9&dq=hebrew+syntax&as_brr=1>
Now, in that URL the only thing that changes across the pages I want to copy
is "PA9". For each page after page nine the URL changes to PA10, PA11, PA12,
PA13, etc. The rest of the URL stays the same.
So my question is: is there a way to direct the program to copy this site
(which is in the public domain) so that it automatically gets all the pages in
this book?
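To make the idea concrete, here is a rough Python sketch of the kind of loop I
am hoping the program can do for me automatically. The last page number (200)
and the output file names are just assumptions on my part, since I do not know
how long the book actually is, and Google may not serve every page to a plain
scripted request:

    import urllib.request

    # Only the pg=PA<n> parameter changes from page to page.
    BASE_URL = ("http://books.google.com/books?vid=OCLC00296148"
                "&id=QskNe3Zynl0C&pg=PA{page}&dq=hebrew+syntax&as_brr=1")

    FIRST_PAGE = 9    # the first page I need, PA9
    LAST_PAGE = 200   # assumption: I do not know the real last page number

    for page in range(FIRST_PAGE, LAST_PAGE + 1):
        url = BASE_URL.format(page=page)
        # Fetch the page and save it as its own HTML file for offline viewing.
        with urllib.request.urlopen(url) as response:
            data = response.read()
        with open("page_PA{0}.html".format(page), "wb") as f:
            f.write(data)
        print("saved page PA{0}".format(page))

Something along those lines, but done by the download program itself (ideally
with a wildcard or pattern in the URL), is what I am after.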