Isn't one of the problems with crawling .onions that you never know how many there are? You can only index known links and traverse whatever links to other .onions you can find. So, in theory, two completely unrelated "communities" could set up their own .onions and link only to each other (since each only knows of its own existence), in which case you'd have to be lucky enough to know .onions from both communities to have any hope of indexing "the complete deep web".
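In graph terms, the crawl can only reach the connected component containing its seed. A minimal sketch (all addresses and the link table are made up for illustration):

```python
from collections import deque

# Hypothetical link graph: two disjoint "communities" of .onion
# sites that only link among themselves.
links = {
    "aaa.onion": ["bbb.onion"],
    "bbb.onion": ["aaa.onion"],
    "ccc.onion": ["ddd.onion"],
    "ddd.onion": ["ccc.onion"],
}

def crawl(seed):
    """Breadth-first traversal: index every .onion reachable from the seed."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        site = queue.popleft()
        for target in links.get(site, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Seeding from one community never discovers the other:
print(crawl("aaa.onion"))  # {'aaa.onion', 'bbb.onion'} -- ccc/ddd stay invisible
```

Without a seed in each component, the crawler has no way of even knowing the other community exists.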