>
> <http://www.angelfire.com/pq/paradisegardens/cannabisholyanointingoil.html>
>
> is a webpage I just created from an earlier version of the document, which used the <u> underline HTML tag instead of <a href> linking code.
>
> I can't seem to get HTTrack to copy the pages that the links on that page point to...!
>
> The version of the document I have was prepared by copying the plain text out of the browser (the displayed text, not the source) and pasting it into the Text2HTML conversion webpage. It did a very good job of creating a webpage with active links, but for some reason HTTrack will not copy all the URLs... and I KNOW how to use this program well... I set it to follow robots.txt, then tried not following robots.txt; set the client to MSIE 6.0 so the Angelfire server won't balk; set it to go both up and down; get HTML files first; get all links to a file; get all the filetypes I'm looking for; default site structure; default tolerant requests for servers; parse JavaScript for URLs; etc.
>
> I also discovered that HTTrack doesn't know what to do with web pages it doesn't have to download, i.e. HTML pages saved on one's own hard drive!
>
> It would be great to be able to create a web page with links and use HTTrack to check that all the links are current and active, *without having to first upload the webpage to a server on the Web*.
>
> Can you help??
> TIA,
>
> Dapianoman
>
>
OK... all I had to do was select "Go everywhere on the Web" and now it's copying all of that URL's links. Whew.
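
For anyone on the command-line version: I *think* the equivalent of the settings in the quoted message, plus the "Go everywhere on the Web" fix, is roughly the line below. The output folder and user-agent string are only examples, and I'm going from memory on the option letters, so check "httrack --help" before trusting it.

    # -e  = go everywhere on the web (the "Go everywhere" option that fixed it)
    # -B  = can go both up and down in the directory structure
    # -s0 = never follow robots.txt
    # -F  = spoof the browser identity so the Angelfire server won't balk
    httrack "http://www.angelfire.com/pq/paradisegardens/cannabisholyanointingoil.html" \
        -O "./holyoil-mirror" \
        -e -B -s0 \
        -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
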
I still haven't figured out how to get it to spider a local web page, though.
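
One workaround I may try for the local-page case (and for the "check the links without uploading" idea above): serve the folder holding the saved page over HTTP locally and point HTTrack at localhost, since HTTrack seems to want an http:// address. This is an untested sketch; it assumes a machine with Python 3 on it (any small local web server would do), the folder name and port are made up, and I'm not certain -p0 ("just scan, don't save anything") is the right switch, so again check "httrack --help".

    # serve the folder that holds the saved .html file at http://localhost:8000/
    cd ./my-local-pages
    python -m http.server 8000 &

    # crawl the local page through localhost; -e follows its external links,
    # and -p0 (if I remember it right) only scans/tests them without saving
    httrack "http://localhost:8000/cannabisholyanointingoil.html" \
        -O "./linkcheck" -e -p0
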
TIA,
Dapianoman