> Hi, i am new to these things. I have read some part
> of the FAQ and instructions. I still did not figured
> out how to do it.
>
> i want to download an endless scrolling page (until
> you reach blogs very own first post) which is an
> tumblr blog. And i also want to download every page,
> file(pics, rars, zips...) of it and every outgoing
> link from that blog. Except, others tumblr page.
> like that -*.tumblr.com/ But that is also excluding
> the my target blog. What i must do?
> one of those blog that i talked about -->
> <http://ad7am.tumblr.com/> (just for example, i have
> found that blog few minutes ago)
>
> You must go down to see previous posts/pages. But
> you can also change the page by adding page/x to
> mainpage url.
1) "What i must do?" — first, write coherent English so we can understand you.
And always post the command line you used (or line two of the log file) so we
know what you did, what the site is, etc.
2) There is no such thing as an "endless scrolling page"; as you already pointed
out, continuation pages have page/x appended to the URL.
3) If you want images etc., use the near flag ("get non-html files related") so
they are fetched no matter where they are stored.
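For reference, a minimal sketch of point 3 as a command line (the output
directory name is just a placeholder, and the blog URL is the example from the
thread; the command is echoed here rather than run, so copy it into a terminal
once httrack is installed):

```shell
# Sketch: HTTrack's near option (-n / --near) fetches non-HTML files
# (images, zips, rars...) linked from the mirrored pages even when
# they are hosted on other sites.
# "./ad7am-mirror" is an arbitrary example output directory.
cmd='httrack "http://ad7am.tumblr.com/" -O "./ad7am-mirror" --near'
echo "$cmd"
```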
4) "every outgoing
> link from that blog. Except, others tumblr page.
> like that -*.tumblr.com/ But that is also excluding
> the my target blog."
Outgoing links means external depth = 1, except that external depth has been
reported broken in the newest versions (it fails to stop).
As for "But that is also excluding the my target blog" — what excludes what? If
you don't want other tumblrs, only your own, use the filters:
-*.tumblr.com/* +ad7am.tumblr.com/*
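Putting it together, a sketch of the full invocation (again echoed rather than
run, with a placeholder output directory; quote the filter arguments so the
shell does not expand the wildcards):

```shell
# Sketch: mirror the target blog, exclude every other tumblr.
# Scan rules are applied in order, so the later +rule re-allows the
# target blog after the -rule has rejected *.tumblr.com.
cmd='httrack "http://ad7am.tumblr.com/" -O "./ad7am-mirror" --near "-*.tumblr.com/*" "+ad7am.tumblr.com/*"'
echo "$cmd"
```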