Ok.. actually I'm not totally helpless. I did figure it out. In case anyone is
trying to rip an entire tumblr, here is the command line from the log (it's all
one line):
winhttrack -qwC2%Pns2u1%s%uN0%I0p3DaK0H0%kf2A250000%f#f
  -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
  -%F "<!-- Mirrored from %s%s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
  -%l "en, en, *"
  http://her-master.tumblr.com -O1 C:\SavedWebsites\Tumblr
  -* +http://her-master.tumblr.com/* +http://static.tumblr.com/*
  +http://assets.tumblr.com/* +http://media.tumblr.com/*
  +http://*.media.tumblr.com/* -*?*=* -*=*
  +http://her-master.tumblr.com/archive?*=* +http://www.tumblr.com/photo/*
  +http://s3.amazonaws.com/data.tumblr.com/*
I don't think all of those were needed for my tumblr, but that worked.
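For anyone adapting this, here is how I read the filters (my own interpretation,
so double-check it against the HTTrack docs):

  -*                                          exclude everything by default
  +http://her-master.tumblr.com/*             the blog itself
  +http://static.tumblr.com/*                 theme assets (CSS, JS)
  +http://assets.tumblr.com/*                 more shared tumblr assets
  +http://media.tumblr.com/*                  post images
  +http://*.media.tumblr.com/*                images on the numbered media subdomains
  -*?*=* -*=*                                 skip URLs with query strings (avoids junk duplicates)
  +http://her-master.tumblr.com/archive?*=*   ...but keep the archive's query-string URLs
  +http://www.tumblr.com/photo/*              full-size photo pages
  +http://s3.amazonaws.com/data.tumblr.com/*  hi-res images hosted on Amazon S3

HTTrack scan rules are evaluated in order with later rules taking priority, so
the + rules after -*?*=* win back the archive URLs that would otherwise be dropped.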
Now I have another question. Take a look at the archive at
http://her-master.tumblr.com/archive. The page saves correctly in HTTrack,
but only as far as the "loading" indicator at the bottom. I assume the rest is
loaded by JavaScript, but is there any way to make HTTrack download the entire
page as if I'd paged to the bottom?
I know I can page and capture the links, but I'd love to have a working
version of the archive in my tumblr backup if possible.
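Until someone has a proper answer, my plan B is to collect the paged archive
URLs myself and hand them to HTTrack via its %L option (which reads extra URLs
from a text file, one per line). A rough sketch, with the big caveat that the
?page=N pattern is a guess on my part -- check in a browser which URLs the
archive actually requests while scrolling:

  rem generate candidate archive page URLs (the ?page=N form is hypothetical)
  for /L %i in (1,1,50) do @echo http://her-master.tumblr.com/archive?page=%i>> archive-pages.txt

  rem then rerun the mirror with the list file added
  rem (same output path, options, and filters as the command above)
  winhttrack -%L archive-pages.txt http://her-master.tumblr.com -O1 C:\SavedWebsites\Tumblr

(Inside a .bat file the loop variable would be %%i instead of %i.)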
Thanks. I hope that command line helps someone else. It's the filters that
matter.