> (A) I did as it was said - copied the cookies.txt
> every'bloody'where
> My command line is 'httrack --cookies=1 -g --priority=1
> <http://finance.yahoo.com/p?v&k=pf_4> -O yahoo'
Why -g? This option will only fetch the first file, without even
parsing it.
> For whatever reason I keep getting the 'sign in required'.
> Same cookies.txt used with wget (try it and it works fine)
Hmm, strange. The cookies.txt is in the yahoo folder, isn't
it? Can you activate the header trace log by adding -%H to
the command line (the % may need to be escaped by your
shell)? This will generate a hts-log.txt file containing all
requests/responses (don't post it here, as it can contain
your password or authentication scheme). Make sure the
first request contains a "Cookie" field.
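For reference, the first logged request should look roughly like the
lines below (the cookie names/values here are made up, and the exact
layout of hts-log.txt can differ between versions):

GET /p?v&k=pf_4 HTTP/1.1
Host: finance.yahoo.com
Cookie: name1=value1; name2=value2

If no "Cookie:" line shows up in that first request, httrack never
picked up your cookies.txt.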
> (B) Every time I download I get a readme (same name as the
> downloaded file except with '.readme'). How is this turned
> off????
This is probably due to -g (it could also be due to -r).
> My fear is that Yahoo! isn't supported (darn shame!). I
Well, with some help this should work.
For your #2 message:
> struggled and got the following command line
httrack might try to test the page type before fetching it
(bad); use:
--check-type=0
to avoid that.
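Putting it together with your original command line (just a sketch -
keep your own URL and output directory), it would become something
like:

httrack --cookies=1 --priority=1 --check-type=0 -%H <http://finance.yahoo.com/p?v&k=pf_4> -O yahoo

i.e. your original line with -g removed and --check-type=0 plus the
-%H header trace added.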
> This is not a solution I was thinking of - I expect the
> cookies to be read and processed and not have to deal with
> this way of getting to the site
Yes - the cookie system should definitely work, especially
since it works with wget (which is proof that this method
can actually work).
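For comparison, I assume the wget test you did was something along
the lines of:

wget --load-cookies cookies.txt 'http://finance.yahoo.com/p?v&k=pf_4'

If that works with the very same cookies.txt, then the file itself is
fine and the remaining problem is only in getting httrack to read it.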