> Dear HTTrack Support
>
> I have been to the forum section and FAQ but I cannot
> find a simple answer to the following question: how do
> we download ALL the HTML links related to a website? (As
> well as further levels on the other domains?)
Global travel mode <http://www.httrack.com/html/step9_opt10.html>
(with the maximum depth set to a reasonable number) might be up
your alley. If only a handful of other sites are being linked to,
though, it is probably better to just include them in your rules
file with filters like +www.othersite.com/*.
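For the command-line version, a rough sketch of both approaches
(the URLs and output path are made up, and I may be misremembering
the exact option letters, so check httrack --help):

    # Mirror one site but let HTTrack travel anywhere on the web,
    # capped at a mirror depth of 4:
    httrack "http://www.example.com/" -O ~/websites/example -e -r4

    # Or stay conservative and whitelist the few extra sites:
    httrack "http://www.example.com/" -O ~/websites/example \
        "+www.othersite.com/*"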
> P.S.: Also, is it possible to simply name a project
> `Internet` and use a single project while adding more and
> MORE weblinks to that same project - so that the links in
> other domains will WORK - is that feasible? Currently, I
> name a project after the name of the website, e.g.
> Queensland Anaesthesia, and download the site, but it DOES
> NOT give me the links AUTOMATICALLY - I would have to add
> them one by one!
This doesn't usually work all that well with frequently
changing sites, but try the following: stop your mirroring
session with "interrupt", check doit.log (or the .ini file
for the Windows GUI version), edit the options to include
the new links, and use "continue" to restart the mirror. Be
aware that the mirror may follow links backwards to sites
previously downloaded, which can be a real pain with dynamic
content.
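From the command line that workflow looks roughly like this
(the project path is hypothetical, and I'm going from memory
on the resume option, so verify against httrack --help):

    cd ~/websites/Internet    # hypothetical project directory
    # doit.log records the command line of the previous run;
    # add the new start URLs/filters there, then resume the
    # interrupted mirror from its cache:
    httrack --continue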
> (Besides, is there - do you know of, or have - a program
> that will ENUMERATE and LIST and get those links on the
> webpages downloaded, so that I can PASTE them into the
> project?)
Try sed (or grep).
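For example, something along these lines lists the absolute
links found in a downloaded mirror (the project path is made
up, and a regex is only a rough stand-in for real HTML
parsing):

    # Extract unique absolute hrefs from all mirrored pages:
    grep -rhoE 'href="https?://[^"]+"' ~/websites/Internet \
        | sed -E 's/^href="//; s/"$//' \
        | sort -u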