Hello
I guess I will never understand WinHTTrack :(
I want to download only the XML files (the downloads) from this page:
<http://tasker.wikidot.com/profile-index>
They are only found on the individual subpages.
I don't need the HTML or any pictures, just every XML file that sits
on the second level down.
So I set the options to:
Filter rules / Filter: +*.xml
Limits / Max depth: 3
Limits / External depth: 0
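If it matters, here is the filter written out in what I understand to be HTTrack's scan-rule notation (a guess on my part; as far as I can tell the crawler still has to fetch the HTML pages in order to discover the links on them, so excluding HTML entirely may be self-defeating):

```
+*.html +*.xml -*.jpg -*.png -*.gif
```

That is: let the HTML through so the subpages can be parsed, keep the XML, and drop the images.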
But the result is missing some of the XML files and contains a lot of HTML.
Is it not possible to crawl a whole website for just a single file type?
Or at least to collect the links, so that I can build a batch file for wget?
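If nothing else works, I imagine a small script could pull the .xml links out of each subpage and print them as a list for `wget -i`. A rough sketch of what I have in mind (untested against the real site; the sample HTML snippet and file path below are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class XmlLinkCollector(HTMLParser):
    """Collect absolute URLs of links ending in .xml from an HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            # Keep only hrefs that point at .xml files.
            if name == "href" and value and value.lower().endswith(".xml"):
                self.links.append(urljoin(self.base_url, value))


# Hypothetical subpage content; in practice each profile page would be
# fetched first (e.g. with urllib) and fed to the parser.
sample = '<a href="/local--files/profiles/foo.xml">foo</a> <a href="/about">about</a>'
parser = XmlLinkCollector("http://tasker.wikidot.com/")
parser.feed(sample)
for url in parser.links:
    print(url)  # redirect this output to links.txt, then: wget -i links.txt
```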
Thanks.
frank