> I hoped the program would find the links on the
> first page and download the missing pages. (thus I
That's what it does.
> added +*art52833* since every other page's link is
Since the default is to get everything on the site, your +filters do not enable
anything that would not be downloaded anyway - they are irrelevant here.
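For illustration only (a sketch using your own URL; the exact patterns are an
assumption about what you want): +filters only change anything once you first
exclude everything with -* and then re-enable what you do want, e.g.

  -* +*art52833*.html +*.gif +*.jpg +*.png

Without the leading -*, your +rules add nothing, because everything on the site
is already being taken by default.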
> Also, I still cannot get any images.
See previous post #4.
> (winhttrack
> -qwr1%e0C2%Ps2u1%s%uN0%I0p3DaK0H0%kf2A25000%f#f -F
> "Mozilla/4.5 (compatible; HTTrack 3.0x; Windows 98)"
> -%F "<!-- Mirrored from %s%s by HTTrack Website
> Copier/3.x [XR&CO'2010], %s -->" -%l "en, en, *"
> <http://pclab.pl/art52833.html> -O1
> C:\Users\Michael\Desktop\PcLab\pad +*.html +*.gif
> +*.jpg +*.png +*.tif +*.bmp )
>
> Also, is it possible to put a variable into the scan
> rule? Such as +/art52833-X.html would include every
> page no matter what X is.
What do you THINK the asterisks in your filters do?
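To spell it out (a sketch built from your URL; the numbered file names are an
assumption): the asterisk already is that variable. A rule such as

  +*art52833-*.html

matches art52833-2.html, art52833-3.html and so on, whatever stands in place of X.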
> And, what does it mean to spider? I guess you mean
> scanning the page for links but how do I "MUST let
> it spider the site to get them"? Can I prevent the
> program from spidering?
What do you THINK -* would do?
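To be explicit (a rough sketch, assuming otherwise default settings): spidering
simply means HTTrack parses each downloaded page for links and follows them -
that is how it finds art52833-2.html and the images in the first place. A scan
rule of

  -*

excludes everything apart from the start address itself, so the spider has
nothing left to follow; that is how you would prevent it, and also why you would
then get nothing.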
> I thought this is what it's made for - such that I
> don't have to download every site separately.
Your original post said "no other files whatsoever". Make up your mind: either
you want everything, or you don't.