My website is nearly 700 MB, mostly plain text
translated into HTML by nearly a score of Perl/CGI
scripts. Every time I do an update, HTTrack
re-extracts the entire site instead of just the truly
updated pages. Sometimes an update takes longer than
a fresh HTTrack session would.
Is there a way for HTTrack to skip those that have
already been downloaded and focus instead on the newly
updated pages?