HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: But won't work anyway
Author: Xavier Roche
Date: 01/25/2002 09:53
> > Arfffffffffff .. you are talking about the link's are
> > javascript's so then httracker can't understand
> > them !? :0/ arfffff what a job it's going to be saving
> > all those file's manually ......

The only solution for 'complex' javascript routines 
(that is, routines which dynamically load images using 
string expressions) is to add the final URLs in the 
source, so that they can be detected by the engine. For 
example:

preload1 = new Image();
preload1.src = "/images/img1.gif"; /* literal URL (example path) the engine can detect */
preload2 = new Image();
preload2.src = "/images/img2.gif";

Of course the webmaster will have to modify his 
sources so that this works.
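For contrast, here is a hypothetical sketch of the kind of 'complex' routine the engine cannot follow: the full URL is assembled from string expressions at run time, so it never appears as a detectable link in the source (the paths and names are invented for illustration):

```javascript
// Hypothetical example: the image URL only exists once the
// function runs, so a crawler scanning the raw source sees
// only fragments ("/img/photo", ".gif"), never a full link.
function preload(n) {
  var base = "/img/photo";        // path fragment only
  var url = base + n + ".gif";    // URL built at run time
  if (typeof Image !== "undefined") {
    var img = new Image();        // browser-only preload
    img.src = url;
  }
  return url;
}
```

A spider would have to actually execute this code (for every possible value of `n`) to discover the images, which is exactly what offline browsers cannot do in general.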

But remember that javascript is also sometimes used to 
block offline browsers, and NO offline browser is able 
to parse complex javascript code. Even if you merged 
the Mozilla code into a spider engine (which might 
produce a 10MB program..) there would be other 
problems, such as interaction with the user (a mouse 
click, wrapped by javascript, can be used to decipher 
a URL)

Therefore, if webmasters create websites hostile to 
spiders, there is no solution - not even for indexing 
them (how many search engines crawl into javascript 
and/or java?...)
