Hi there, I'm looking to copy a single website URL, but I want to exclude
any files with certain keywords in the file names. I understand that I could
download everything and then search for and delete the unwanted files
afterwards, but that would actually consume more time and data, so it's not
viable.
For example, there are thousands of files with keywords like (Wf6hTr0x) in
their names. Some of the files are small, like .txt and .doc, but there are
also many larger .zip files that would consume unnecessary bandwidth and
storage space.
I've made multiple attempts to use Scan Rules to exclude any links whose file
names contain these keywords, but no matter how I alter the settings, it
still tries to download those files. Am I doing something wrong? Is this even
possible?
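For reference, the kind of rule I've been trying is roughly like this (I may
well have the syntax wrong, so treat it as a sketch rather than exactly what
I entered):

    -*Wf6hTr0x*

with the idea being that any link whose file name contains that keyword gets
skipped before it's downloaded, rather than fetched and deleted afterwards.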
Thanks