> I have tried to simply download
> "www.example.com/folder_one/n/name/", because that
Most sites do not allow directory listing. Put the URL in your browser.
If you get a directory list, then httrack can get all the files. Otherwise,
you have to spider through the html files to get all links.
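To make the "spider through the html files" step concrete, here is a minimal Python sketch (standard library only; the page content is a made-up example) of pulling out the links a spider would then follow:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, the way a spider finds links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content, standing in for a fetched HTML file.
page = """<html><body>
<a href="/folder_one/n/name/file1.html">file1</a>
<a href="/folder_one/other/file2.html">file2</a>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)
```

A real spider would fetch each discovered link in turn and repeat; httrack does this for you once it can reach a page that links to the files you want.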
> Then I tried to work with filters:
>
> exclude all files:
> -*/***/*
Same as -*/*/*: do not allow files two or more levels down.
> allow only the files with the correct name:
> +*/*name*/*
Same as +*/name/*.
But where are you starting from? If it's not
.../name/index.html, then your filters will not allow httrack to spider any
other files to get links to */name/*
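As a rough illustration of how such wildcard filters select URLs, here is a minimal Python sketch. It is an approximation for illustration only, not HTTrack's actual matching code; it assumes each `*` matches any run of characters except `/`, which is what makes `-*/*/*` block paths two or more levels deep:

```python
import re

def matches(pattern: str, path: str) -> bool:
    """Approximate an httrack-style filter: each '*' matches any
    run of characters except '/'. Illustrative sketch only."""
    regex = "^" + "".join(
        "[^/]*" if c == "*" else re.escape(c) for c in pattern
    ) + "$"
    return re.match(regex, path) is not None

# '-*/*/*' style: a path with three segments is caught by the exclusion
print(matches("*/*/*", "folder_one/n/name"))      # True
# '+*/name/*' keeps files inside a 'name' directory...
print(matches("*/name/*", "n/name/index.html"))   # True
# ...but nothing outside it, so the spider may find no way in
print(matches("*/name/*", "n/other/index.html"))  # False
```

The last case is the problem described above: if the starting page is not already under */name/*, every page that could lead there is filtered out before its links are read.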
> filters... hm... I always get many files from remote
> websites that don't belong to the base website!
httrack will not get files from other websites unless the external depth is
non-zero OR the filters allow it.