HTTrack Website Copier
Free software offline browser - FORUM
Subject: SOS: Not going beyond the starting URL
Author: Sanjay Das
Date: 12/09/2002 11:49

A few weeks ago, I successfully downloaded 100,000 selected 
webpages, out of the far larger number of pages on a site, 
using the Scan Rules you suggested. But this time, with 
another similar website, I am facing a problem.

The earlier website was a Question & Answer forum, as is 
the present one that I am about to download.

The problem is that HTTrack is not downloading beyond the 
starting URLs.

I am starting a project with 8 sub-URLs within the site, 
and I have tested it both with the Scan Rules (many times) 
and without them, with the same result each time.

Please find below:
A. The General Structure Of The Site,
B. The Objective Of The Project To Be Downloaded,
C. The Scan Rules And Starting URLs I Used,
D. The Problem Faced,
E. The Request For A Solution.

A. The site has this general structure:

The main Question & Answer site is: 

<> (when this is typed 
into the address bar, it results in:


Each category and sub-category page thereafter has a URL like:

(where, for every category and sub-category, the last four 
digits vary, i.e. 1100, 1101, 1102, ... 1108, ... 1127, etc.)

On every category and sub-category page, the Questions' 
captions are listed as links, up to 25 per page, with the 
rest following on the next pages (as in any paginated 
search results). The link to go to the next page of 25 
Questions & Answers is like:

(please note that the 'catid' appears here)

The links to the Questions (each of which includes both 
the Question and its Answer) are like:


(Please note that there is no 'catid' in it, but a 
'threadview&id' for every Question & Answer; only the last 
six digits vary for each unique Question & Answer.)
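
To summarise the pattern (with '...' standing for the parts 
of the address that have not come through above, and the 
numbers purely illustrative):

  ...catid=1100            - a category or sub-category 
                             listing page, paged via 'qestartts'
  ...threadview&id=123456  - an individual Question & Answer page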

B. The Objectives Of The Project:

As before, I need to download all the Questions & Answers, 
but only those in one selected category and in some 
sub-categories under it (all having unique 'catid's). I 
need to download only 1+8 categories and sub-categories 
among all of them. The 'catid's are from 1100 to

C. The Scan Rules And Starting URLs Used:

The starting URLs were:


I used the following Scan Rules at first:












[From the first to the fifth, I neglected to use 
*qestartts*, but that was perhaps irrelevant, given the 
result I had for all the sample runs.]
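
In HTTrack's Scan Rules, a line beginning with '+' accepts 
URLs matching the pattern and a line beginning with '-' 
refuses them, with '*' matching any characters. As a rough 
sketch only, since the exact filter lines have not come 
through above, filters aimed at the three kinds of URL 
described in section A would look something like:

+*catid=1100*
+*catid=1101*
(one such line for each of the 1+8 wanted 'catid's)
+*qestartts*
+*threadview&id=*

The 'catid' lines accept the chosen category and 
sub-category listing pages, '+*qestartts*' accepts the 
"next 25" pagination links, and '+*threadview&id=*' accepts 
the individual Question & Answer pages. Note that, since 
the Question & Answer URLs carry no 'catid', a filter on 
'threadview&id' alone cannot tell the wanted categories apart.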

D. The Problem Faced:

For each run, from the first to the sixth, HTTrack finished 
in less than two minutes; only the starting URL pages were 
downloaded, and not a single Question & Answer page.

Please guide me as to where I went wrong and why. May I 
request you to construct a Scan Rule and other settings 
based on the information I have provided?
Is it also possible that some websites, and this one in 
particular, are crawler-proof? If so, how can this problem 
be overcome, getting past any and all such constraints?
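
One quick way to check the most common kind of 
'crawler-proofing' is the site's robots.txt file, which 
HTTrack follows by default (there is a spider option to 
change this). A rough sketch of such a check in Python, 
with placeholder addresses since the real site URL is not 
reproduced here:

from urllib import robotparser

# Hypothetical addresses; substitute the real site and a real
# category-page URL.
SITE = "http://www.example.com"
PAGE = SITE + "/forum/browse?catid=1100"

rp = robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetch and parse robots.txt

# Check whether a crawler identifying itself as "HTTrack"
# may fetch the page; True means robots.txt permits it.
print(rp.can_fetch("HTTrack", PAGE))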

I would be most genuinely thankful for your help in getting 
over this problem completely, and please accept my sincere 
thanks for the success I had in downloading the previous 
100,000 pages from the other site.

Warm regards,

