> All I want to do is copy an entire website domain
> (e.g. www.abc.com) AND the first-level links
> (.html, .jpg, .pdf, whatever...) that point to other websites.
> That's all I want.
> There are too many such links to list their
> websites individually in the scan rules.
> I have been trying for hours and I can't make it work
> correctly. I set 'external limit = 1'.
1. Set Options / Scan rules to:

   -* +www.abc.com/*

2. Set the external depth to 1.

This should do the trick.
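For reference, the same two settings can also be passed on the command line, assuming the httrack CLI is installed; the output directory name here is just an example:

```shell
# Exclude everything by default (-*), re-include everything under
# the domain (+www.abc.com/*), and follow links that leave the
# domain one level deep (-%e1, the "external depth" option).
httrack "http://www.abc.com/" -O ./abc-mirror "-*" "+www.abc.com/*" -%e1
```

The filter order matters: the trailing `+www.abc.com/*` re-includes the domain after the blanket `-*` exclusion, and `%e1` fetches each externally linked file without recursing into the external sites.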