The robots.txt rules are for search engines, as I understand it, not for surfing
(otherwise why put a site on the web at all?).
Surfable => downloadable and surfable offline, I believe.
The pictures problem: same here. I'm trying to back up my old web site's contents
before completely reworking it after years of inactivity and a forgotten
password, and I just can't get HTTrack to grab it all, no matter how I set the
limits (it's a very small web site, btw). I went back to an older version and it
got one more level, but still not everything.
Links to *.jpg and every other media type are activated (in fact, it correctly
gets some video clips in an old SmithMicro .smv format).
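For reference, this is roughly the command line I've been experimenting with (a sketch only; the URL and output path are placeholders, and -s0 / -r / -n are the standard HTTrack options for robots.txt handling, mirror depth, and fetching "near" non-HTML files):

```shell
# Hypothetical example, not my exact invocation:
# -O   output directory for the mirror
# -s0  never follow robots.txt rules
# -r10 set the mirror depth to 10 levels
# -n   also get non-HTML files "near" a downloaded page (images, media)
# "+*.jpg" "+*.smv"  filters explicitly allowing those media types
httrack "http://example.com/" -O ./site-backup -s0 -r10 -n "+*.jpg" "+*.smv"
```

Even with the depth raised and the media filters enabled like this, some pages still don't come down for me.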