> First off, great program you have. I just have a few
> problems though.
> HTTrack doesn't download ANY of the thread pages. It only
> downloads the TWOP main page.
Yes - because the server does not respond correctly to
HEAD requests. Do the following:
Set options / MIME types:
cgi <----> text/html
Set options / Scan rules:
-*
+www.televisionwithoutpity.com/ijsbb/forum.cgi?action=list&forum=484&idtopic=*
+www.televisionwithoutpity.com/ijsbb/forum.cgi?action=next&forum=484&idtopic=*
+*.png +*.gif +*.jpg +*.css +*.js
Set options / Flow control / Number of connections:
1 connection ONLY
And set a bandwidth limit if necessary (e.g. 10 KB/s)
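If you prefer the command-line version of HTTrack, roughly the same setup can be sketched like this (only a sketch: the start URL, topic id and output directory are placeholders, and the flag spellings are from memory, so double-check with httrack --help):

# --assume cgi=text/html : treat .cgi responses as HTML (works around the HEAD problem)
# -c1                    : one connection only
# -A10000                : limit the transfer rate to roughly 10 KB/s
httrack "http://www.televisionwithoutpity.com/ijsbb/forum.cgi?action=list&forum=484&idtopic=NNNN" \
  -O ./twop-mirror \
  --assume cgi=text/html -c1 -A10000 \
  "-*" \
  "+www.televisionwithoutpity.com/ijsbb/forum.cgi?action=list&forum=484&idtopic=*" \
  "+www.televisionwithoutpity.com/ijsbb/forum.cgi?action=next&forum=484&idtopic=*" \
  "+*.png" "+*.gif" "+*.jpg" "+*.css" "+*.js"

(Replace NNNN with the topic id you want to start from.)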
> 2. Is there any way to download a range or sequence of
> urls? For example /001.html, /002.html ... without using
> a text list or adding the urls individually? (Something
> like Webcopier - www.example.com/{:001-005}.html - if I
> recall correctly :) )
Options / Scan rules
+www.example.com/*[0-9].html
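From the command line the same rule is just a filter argument after the start URL (again only a sketch; www.example.com and the output directory are placeholders):

# the +filter re-includes every page whose name is a run of digits, e.g. /001.html, /002.html
httrack "http://www.example.com/" -O ./mirror "+www.example.com/*[0-9].html"

If you want nothing but those numbered pages, put "-*" in front of the +filter, as in the first example.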