Greetz. I have been using httrack for a few years now. I have always grabbed
small sites, or limited the mirror to a few link levels. Now I am trying to
save a tutorial to disk so that some elderly people without internet access
can learn some basics. Here is the address:
<http://www.ckls.org/~crippel/computerlab/tutorials/mouse/page1.html>
I made a scan rule to pick up all the pages (31 in total). It does grab them,
although when I open the project it does not start at page 1; it looks more
like an FTP folder listing.
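For reference, the scan rule I have been using is roughly this (one rule per
line in the WinHTTrack Scan Rules box; the pattern just mirrors the URL
above):

    +www.ckls.org/~crippel/computerlab/tutorials/mouse/*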
Anyway, clicking on page1.html starts out okay. Everything is fine until it
gets to page 6; then it tries to look online instead of referring to the local
copy. I can see that the copy is in there, and I can even open it manually. I
have tried a number of scan rules with no change. Link depth shouldn't matter
since I only want /mouse/*.
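In command-line terms, what I am asking httrack to do is roughly the
following (the output folder name is just an example; as far as I understand,
the GUI builds an equivalent command):

    httrack "http://www.ckls.org/~crippel/computerlab/tutorials/mouse/page1.html" \
        -O ./mouse-tutorial \
        "+www.ckls.org/~crippel/computerlab/tutorials/mouse/*"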
Anyone got any ideas?