> 1) Always post the ACTUAL command line used (or log
> file line two) so we know what the site is, what ALL
> your settings are, etc.
> 2) Always post the URLs you're not getting and from
> what URL they are referenced.
> 3) Always post anything USEFUL from the log file.
> 4) If you want everything, use the near flag (get
> related non-html files), not filters.
> 5) I always run with A) No External Pages, so I know
> where the mirror ends; B) Browser ID = the MSIE 6
> pulldown, as some sites don't like the HTTrack one;
> C) Attempt to detect all links (for JS/CSS); and D)
> Timeout=60, retry=9, so temporary network
> interruptions don't delete files. (A rough
> command-line equivalent is sketched below.)
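>
> Those settings map to roughly this command line
> (example.com, the output path, and the exact MSIE
> string are placeholders; to my reading of the
> httrack docs, -x = no external pages, -F = browser
> ID, -%P = attempt to detect all links, -n = near,
> and -T/-R = timeout/retries):
>
>   httrack "http://www.example.com/" -O "C:\mirrors\example" -x -n -%P -T60 -R9 -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"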
>
> > I'm trying to make WinHTTrack save Java server
> > input requests for offline use, but so far have
> > found no way to do it. For example www.plantlust.com where
>
> No Java applications on the page
>
> > you have to input your hardiness zone and the
> > request is sent to the server and processed, and
> > after
>
> That is a form, which needs a server to process it.
> A mirror is a collection of files; forms will not
> work.
>
So, if the server is gone, all that information (much of it very hard to find)
is lost from the web! Again, this shows how important a site-ripping
application can be. Even with forms, when the information exists on that server
and is not protected in any way (it is accessible via browsers), it should
logically be possible to rip it. I guess that needs a much better parser.
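
To make that concrete: submitting the zone just sends an HTTP request (the
endpoint and parameter name below are made up for illustration; the real site
may differ):

  # Hypothetical GET form: the result page has its own URL
  curl "http://www.plantlust.com/plants?zone=5"

  # Hypothetical POST form: no URL captures the result
  curl -d "zone=5" "http://www.plantlust.com/plants"

If the form uses GET, each result is just a URL, so a smarter parser could in
principle enumerate the zone values and save every result page as a file. If
it uses POST, there is no such URL, and the mirrored copy has no server-side
code to answer the request.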