So I tried setting no Scan Rules but allowing Travel Mode to go both up and down within the same domain, with a depth of 1. That sort of works when the hypothetical tree page is a single page. If the page is split across multiple pages (has links to pages 2, 3, etc.), I need to set the depth higher to account for clicking through those, which, because of the up-and-down travel, leads to dozens of subdomains and unwanted folders being parsed as well. I've always wanted to learn 17 new localizations, just not this time.
Is it possible to configure the filter as a whitelist of allowed travel paths, so that instead of designating every unwanted location separately, I only designate the few locations I DO want?
In other words, can I tell it that it should ONLY download items found in /trees, /photos, and, for example, tags.someweb.com when they are linked to, without having it download EVERYTHING that's in those places?
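To illustrate what I'm after (a rough sketch, assuming HTTrack-style +/- wildcard Scan Rules, with someweb.com standing in for the real site): exclude everything by default, then re-allow only the paths I care about, something like

    -*
    +www.someweb.com/trees/*
    +www.someweb.com/photos/*
    +tags.someweb.com/*

where -* drops everything and each + rule whitelists one location, so links pointing anywhere else would simply be skipped. I don't know whether rules like these interact with Travel Mode and depth the way I'd want, hence the question.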