NOTE: I am downloading slowly and responsibly over a dial-up connection, so the website won't suffer.
Dear all,
I am downloading some tech articles from a website. All
articles contain small (<250 bytes) .jpg files. The
layout is like this:
<http://www.website.com/articles/frame.html?http://www.website.com/articles/tech/article1.php>
etc. etc.
The images are stored as
<http://www.website.com/articles/tech/article_Screens_1.jpg>
etc. etc.
My scan rules are:
+www.website.com/articles/*
+www.website.com/articles/*/*.jpg
+www.website.com/articles/*/*.gif
+*.png +*.gif +*.jpg +*.css +*.js -ad.doubleclick.net/*
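(As far as I understand the wildcard matching, the second rule above should already cover the images; for example, I would expect

+www.website.com/articles/*/*.jpg

to match

http://www.website.com/articles/tech/article_Screens_1.jpg

but perhaps my assumption about how the * wildcards work is wrong.)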
I have also enabled:
Get non-HTML files (ZIP, images, etc.)
Get HTML files first
I have disabled:
Persistent connection (Keep-Alive)
All other settings, such as browser type, are fine as they are.
I am able to download the HTML pages, but no .jpg files.
WinHTTrack Website Copier 3.23 (+swf) just copies the HTML
files and does not wait to download and save the .jpg files.
What am I doing wrong? Please help.