So I have an interest in very old comics, and there is a website that is a
fantastic resource, but it's a labour of love from a bunch of hobbyists, not a
professional or corporate-funded site. The site contains thousands of PDFs,
organised by genre and then by publication title.
I'd like to copy the lot (I think), but I don't want to hit the site with a
massive download (no idea whether that would actually impact them, to be
honest, but I'd rather not risk it), so I'd like to do this gradually if I
can. Is there any way in this software to grab the entire HTML structure but
not the PDFs, and then to selectively grab the PDFs within sections of the
site as I see fit, please?
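
To make it concrete, this is roughly the behaviour I'm after, sketched in Python (the site URL, output folder, and delay are all placeholders/guesses of mine, not anything from the site; I'm hoping the software has equivalent options built in so I don't have to roll my own):

```python
import time
from pathlib import Path
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/comics/"  # placeholder for the site root
OUT_DIR = Path("mirror")
DELAY_SECONDS = 5  # my guess at a polite pause between requests

seen = set()
queue = [START_URL]

while queue:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)

    # Skip PDFs at this stage; only the HTML structure is wanted for now.
    if urlparse(url).path.lower().endswith(".pdf"):
        continue

    resp = requests.get(url)
    time.sleep(DELAY_SECONDS)
    if resp.status_code != 200:
        continue

    # Save the page, mirroring the site's path structure locally.
    rel = urlparse(url).path.lstrip("/") or "index.html"
    dest = OUT_DIR / rel
    if not rel.endswith(".html"):
        dest = dest / "index.html"
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(resp.content)

    # Queue same-site links found on the page (drop #fragments so they dedupe).
    if "text/html" in resp.headers.get("Content-Type", ""):
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == urlparse(START_URL).netloc:
                queue.append(link)
```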
I also already have some 80 GB of these files downloaded, so is there any
option to point the software at an offline folder and have it take files from
there if they exist, instead of downloading them again?
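
And for that second part, something along these lines is what I mean by using the offline folder as a cache (the archive path is made up, and matching purely on filename is a simplification on my part):

```python
import time
from pathlib import Path
from urllib.parse import unquote, urlparse

import requests

EXISTING_DIR = Path("/media/archive/comics")  # made-up path to the ~80 GB already on disk
OUT_DIR = Path("mirror")
DELAY_SECONDS = 5  # again, a guess at a polite pause

# Index the existing archive once by filename, so each URL is a cheap lookup.
existing = {p.name for p in EXISTING_DIR.rglob("*.pdf")}

def fetch_pdf(url: str) -> None:
    """Download a PDF only if no file of the same name is already on disk."""
    name = unquote(Path(urlparse(url).path).name)
    if name in existing:
        print(f"skipping {name}: already in {EXISTING_DIR}")
        return
    resp = requests.get(url)
    resp.raise_for_status()
    dest = OUT_DIR / name
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(resp.content)
    existing.add(name)
    time.sleep(DELAY_SECONDS)
```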
Appreciate any advice, thank you.