It looks like that website uses a lot of JavaScript.
No web crawler likes that! If you look at the source code of the front page, it includes JavaScript from two external files, spider.php and main.js. My guess is that the first checks whether you look like a spider and the second does most of the actual scripting.
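If you want to double-check that for yourself, a rough Python sketch like this (the URL is just a placeholder for the site in question) pulls the front page and lists the external scripts it includes:

```python
import re
import urllib.request

# Placeholder URL - replace with the actual site you are trying to mirror.
URL = "http://example.com/"

html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")

# Crude scan for <script src="..."> includes - enough to spot spider.php / main.js.
for src in re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE):
    print(src)
```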
The first step for you is to make sure you do not look like a web crawler, so set the user agent to some kind of Mozilla string (in HTTrack that is the Browser ID setting, or the -F option on the command line).
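If you want to test first whether the server even cares about the user agent, a quick sketch like this (again with a placeholder URL and a generic Mozilla string) compares what you get with and without a browser-like identity:

```python
import urllib.request

# Placeholder URL; point this at the front page (or spider.php) you want to test.
URL = "http://example.com/"

# A generic Mozilla-style identity, similar to what a real browser would send.
MOZILLA_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/115.0"

def fetch(user_agent=None):
    req = urllib.request.Request(URL)
    if user_agent:
        req.add_header("User-Agent", user_agent)
    return urllib.request.urlopen(req).read()

plain = fetch()              # default Python user agent
browser = fetch(MOZILLA_UA)  # pretending to be a browser

# If the two responses differ, the server is treating crawlers differently.
print(len(plain), len(browser), plain == browser)
```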
If the JavaScript in main.js is important for navigation on the page, and if HTTrack cannot parse it for links (which it quite possibly cannot), then this website cannot be copied.
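One rough way to judge that before giving up: pull main.js and look for quoted paths or URLs inside it. If the navigation targets only exist as strings assembled in the JavaScript, HTTrack will not discover them on its own. A sketch, with the script location again a placeholder:

```python
import re
import urllib.request

# Placeholder location of the script; adjust to wherever main.js actually lives.
JS_URL = "http://example.com/main.js"

js = urllib.request.urlopen(JS_URL).read().decode("utf-8", errors="replace")

# Look for string literals that resemble internal pages or links.
candidates = set(re.findall(r'["\'](/?[\w./-]+\.(?:html?|php))["\']', js))
for link in sorted(candidates):
    print(link)
```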