Deep web

Those criminal activities include trade in personal passwords, false identity documents, drugs, firearms, and child pornography.[9]

Bergman cited a January 1996 article by Frank Garcia:[20][21] "It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines."

It has been noted that this can be partially overcome by providing links to query results, but this could unintentionally inflate the popularity of a deep web site.[6]

Researchers have been exploring how the deep web can be crawled automatically, including content that can be accessed only by special software such as Tor.[28]

In 2001, Sriram Raghavan and Hector Garcia-Molina of Stanford University's Computer Science Department[29][30] presented an architectural model for a hidden-Web crawler that used key terms, provided by users or collected from the query interfaces, to fill in and submit Web forms and crawl the resulting Deep Web content.
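The core pattern of such form-querying crawlers, independent of any particular architecture, can be sketched as follows: submit each candidate term to a search form and harvest the links on the result pages. This is a minimal illustration, not the authors' implementation; the endpoint URL and field name ("form_url", "q") are hypothetical placeholders that a real crawler would discover by parsing the form markup.

import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a result page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_form(form_url, field_name, terms):
    """Submit each key term to a search form (via GET) and harvest result links."""
    harvested = set()
    for term in terms:
        query = urllib.parse.urlencode({field_name: term})
        with urllib.request.urlopen(f"{form_url}?{query}") as resp:
            page = resp.read().decode("utf-8", errors="replace")
        parser = LinkExtractor()
        parser.feed(page)
        # Resolve relative links against the form URL.
        harvested.update(urllib.parse.urljoin(form_url, h) for h in parser.links)
    return harvested

# Hypothetical usage; the endpoint and field name are illustrative only.
links = crawl_form("https://example.com/search", "q", ["genomics", "proteins"])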

Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-Web crawler that automatically generated meaningful queries to issue against search forms.
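Their approach estimated which next query would surface the most new documents. The sketch below substitutes a simpler greedy heuristic (issue the most frequent not-yet-used word from the results downloaded so far) and assumes a caller-supplied fetch_results function that returns result texts for a query, so it illustrates the idea of automatic query generation rather than the paper's exact algorithm.

import collections
import re

def choose_queries(seed_term, fetch_results, max_queries=10):
    """Greedily generate queries against a search form.

    After each query, count the words in all documents retrieved so far
    and issue the most frequent word not yet used as the next query.
    fetch_results(query) is assumed to return a list of result texts.
    """
    issued = {seed_term}
    counts = collections.Counter()
    documents = []
    query = seed_term
    for _ in range(max_queries):
        docs = fetch_results(query)
        documents.extend(docs)
        for doc in docs:
            counts.update(re.findall(r"[a-z]+", doc.lower()))
        candidates = [(c, w) for w, c in counts.items() if w not in issued]
        if not candidates:
            break
        _, query = max(candidates)  # highest-frequency unused word
        issued.add(query)
    return issued, documents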

Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-web sources (web forms) in different domains using novel focused-crawler techniques.
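A minimal sketch of a focused form-gathering crawler in that spirit, not DeepPeep's actual code, might work as follows: links whose anchor text mentions domain keywords are followed first, and any fetched page containing a form is recorded as a candidate hidden-web source.

import heapq
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class PageScanner(HTMLParser):
    """Records anchors (with their text) and whether the page has a form."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.has_form = False
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.has_form = True
        elif tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links.append((self._href, data.strip().lower()))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

def focused_crawl(seed, topic_words, limit=50):
    """Best-first crawl: links whose anchor text mentions topic words are
    explored first; pages containing a <form> are collected as sources."""
    frontier = [(0, seed)]
    seen = {seed}
    sources = []
    while frontier and len(seen) <= limit:
        _, url = heapq.heappop(frontier)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages
        scanner = PageScanner()
        scanner.feed(page)
        if scanner.has_form:
            sources.append(url)
        for href, text in scanner.links:
            target = urljoin(url, href)
            if target not in seen:
                seen.add(target)
                # Lower score = higher priority in the min-heap, so links
                # matching more topic words are visited first.
                score = -sum(w in text for w in topic_words)
                heapq.heappush(frontier, (score, target))
    return sources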