HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: Feature wish / mirroring strategy - Part II
Author: Xavier Roche
Date: 06/05/2002 22:06
> So the way to go is a javabean. This is a data object 
> created for the current user (internally, cookies are 
> used).
> So from the image-creating servlet, the existing bean 
> is accessed and the data extracted.

Yes, but session-based pages are generally hard to get (the 
same problem occurs when unprotected forums expose 
their "delete message" links as regular a href tags - I 
let you imagine what the result can be if you try to mirror 
such a site); and the problem here is really a design 
problem: the only way would be to "pack" a group of URLs in 
every page, and catch them. Quite hard to do with the 
current httrack heap architecture.
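To make the "delete message" danger concrete: a mirroring tool follows every a href it finds, so a plain GET link that mutates server state gets triggered during the crawl. A minimal sketch of a crawler-side guard, with entirely hypothetical URL patterns, could look like this:

```python
import re

# Patterns (hypothetical examples) for GET links that likely have
# side effects on the server and should never be fetched by a mirror.
SIDE_EFFECT_PATTERNS = [
    re.compile(r"[?&]action=(delete|remove|logout)\b", re.I),
    re.compile(r"/delete[_-]?(message|post)\b", re.I),
]

def is_safe_to_fetch(url: str) -> bool:
    """Return False for URLs whose GET request would mutate server state."""
    return not any(p.search(url) for p in SIDE_EFFECT_PATTERNS)

print(is_safe_to_fetch("http://forum.example.com/view?id=42"))
print(is_safe_to_fetch("http://forum.example.com/post?action=delete&id=42"))
```

Such a blocklist is only a heuristic; the real fix is on the server side, where state-changing actions should require POST, not GET.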

>If there are a lot of values, you can't use the url

Right - but then you have to pass many values! (Note: you 
can also rely on the Referer URL passed by the user agent to 
detect the "parent" page; this may simplify specific cases.)
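The Referer idea above can be sketched as follows: each fetched URL is grouped under the page that linked to it, using the Referer header sent with the request. The URLs and the request list here are hypothetical:

```python
from collections import defaultdict

# Minimal sketch: group fetched URLs under the page that linked to
# them, using the Referer header the user agent sends with each
# request. (All URLs below are made up for illustration.)
requests = [
    ("http://site.example/page1.html", None),
    ("http://site.example/img?view=1", "http://site.example/page1.html"),
    ("http://site.example/img?view=2", "http://site.example/page1.html"),
]

children_of = defaultdict(list)
for url, referer in requests:
    if referer:
        children_of[referer].append(url)

print(children_of["http://site.example/page1.html"])
# the two image URLs grouped under their parent page
```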
You can also create multiple "views" in a single session 
(arrays) and identify them using either ids or lists of 
ids (array indexes?). The session, tagged with a 
timestamp, would only be temporary. But this may create 
sessions that are too big anyway...
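The multi-view session described above can be sketched like this: one session holds several named "views" (arrays of values), each addressable by an id and an index, and the session carries a timestamp so stale sessions can be expired. All names here are hypothetical:

```python
import time

# Sketch of a timestamped session holding multiple "views" (arrays),
# each identified by an id; entries are addressed by (view id, index).
class Session:
    def __init__(self, max_age=300):
        self.created = time.time()   # timestamp tagging the session
        self.max_age = max_age       # seconds before the session expires
        self.views = {}              # view id -> list of values

    def put_view(self, view_id, values):
        self.views[view_id] = list(values)

    def get(self, view_id, index):
        return self.views[view_id][index]

    def expired(self, now=None):
        return ((now or time.time()) - self.created) > self.max_age

s = Session()
s.put_view("thumbnails", ["a.png", "b.png"])
print(s.get("thumbnails", 1))   # b.png
```

The memory concern raised above is visible here: every view list lives in the session until the timestamp expires it, so many large views per user add up quickly.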

Hmm, I don't see any simple workaround yet... I'll try to 
think about it a little more, but I'm afraid this would 
require some 'hard' code.