Hey, I know that modern web-search engines such as Google rank results mostly based on links from other sites, but that's not what I'm asking about. What I'd like to know is how companies like 'Jerry and David's Guide to the World Wide Web' (Yahoo) or Google managed to index all the thousands of websites around 1998. I mean, I get that you can write a program that scans web pages and their metadata, but how do you tell it where to scan if there isn't a service where you can look up all the various URLs? I don't quite understand; can anyone clarify it a little bit?

Thank you once more - this is the best Internet forum I've ever seen!
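
P.S. Just to show where I'm at: here's roughly the kind of page-scanning program I have in mind (a minimal sketch in standard-library Python; the URL and class name are made up for illustration). It can pull the title, meta tags, and outgoing links from a single page, but that's exactly my problem: it still has to be handed a URL to start from.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class PageScanner(HTMLParser):
    """Collects the <title>, <meta> tags, and outgoing links of one page."""

    def __init__(self):
        super().__init__()
        self.title = ""       # page title text
        self.metadata = {}    # name -> content from <meta> tags
        self.links = []       # href values from <a> tags
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.metadata[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# The part I understand: scan one page for its metadata and links.
# The part I don't: where does this starting URL come from in the first place?
url = "http://example.com/"
scanner = PageScanner()
scanner.feed(urlopen(url).read().decode("utf-8", errors="replace"))
print(scanner.title, scanner.metadata, scanner.links)
```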