I am currently trying to download a website, and it seems
to be scanning a large folder and adding links, but those
added links are not being downloaded despite the fact that
the program has free download slots. Why can it not use
those free slots to download these files immediately?
The site in question is www.unicode.org, and it happens
when I turn off robots.txt and allow it to search in the
cgi-bin/ directory.