As someone else already said, the .tmp files are temporary files that should disappear after a while, or when the job finishes.
The site you linked (joeblog) looks basically empty to me (has the content been removed or something?). I tried HTTrack on it, and since it has almost no content it finished in a few seconds.
Not sure why you want to use "up and down" mode; I don't think I've ever used it myself, as I imagine it would make the crawl take forever on many sites.
Here is an example of how to download only the images from a specific page, like you say you want (I'm no expert, so I'm not saying it's the best method):
Say you want to download only the images on the first page of
<http://linesandcolors.com/>
With the following settings I get only the images from that first page:
+*.png +*.gif +*.jpg +*.jpeg +*.webm -*.css -*.js -ad.doubleclick.net/*
-mime:application/foobar -*.txt -*.zip
Limits:
  Maximum mirroring depth: 2
Experts Only:
  "Primary Scan Rule" set to "Store non html files"
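If you prefer the command line over the GUI, the same setup can be sketched roughly like this (just a sketch, assuming a standard httrack install; `-r2` should match "maximum mirroring depth 2" and `-p2` the "store non html files" priority mode, but double-check against `httrack --help` for your version):

```shell
# Rough CLI equivalent of the GUI settings above (untested sketch):
#   -O   output directory
#   -r2  maximum mirroring depth 2
#   -p2  priority mode "store non html files"
# The quoted +/- patterns are the same scan rules as in the filters box.
httrack "http://linesandcolors.com/" -O ./linesandcolors \
  -r2 -p2 \
  "+*.png" "+*.gif" "+*.jpg" "+*.jpeg" "+*.webm" \
  "-*.css" "-*.js" "-ad.doubleclick.net/*" \
  "-mime:application/foobar" "-*.txt" "-*.zip"
```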
If you try this yourself it should finish within a few minutes, and you will end up with some empty folders you can delete (where the HTML files would have been if they were saved) and some folders containing the images. If you watch the folders while it runs, you will see it creating .tmp files as it works and then removing them.
If you want _all_ the images on that site, increase the maximum mirroring depth. You can of course also further limit which images it downloads with a rule like
e.g. +*linesandcolors.com/images*.jpg