Biositemap

The Biositemap enables web browsers, crawlers and robots to easily access and process this information for use in other systems, media and computational formats.

The Biositemaps Protocol[2] allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate information about their latest computational biology resources (data, software tools and web services).

The biositemap concept is based on ideas from Efficient, Automated Web Resource Harvesting[3] and Crawler-friendly Web Servers,[4] and it integrates the features of sitemaps and RSS feeds into a decentralized mechanism for computational biologists and bioinformaticians to openly broadcast and retrieve metadata about biomedical resources.

These site-, institution-, or investigator-specific biositemap descriptions are published online in RDF format. They are then searched, parsed, monitored and interpreted by web search engines, by web applications specific to biositemaps and ontologies, and by other applications interested in discovering updated or novel resources for bioinformatics and biomedical research investigations.
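Because biositemap files are published as RDF, a consuming application can retrieve and parse them with ordinary XML tooling. The sketch below is a minimal, illustrative parser for an RDF/XML document of this kind; the `bm:name` and `bm:resourceType` property names and the `example.org` namespace are placeholders chosen for the example, not the actual Biositemaps vocabulary, which is defined by the protocol's own information model.

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
# Placeholder namespace for illustration; a real biositemap uses the
# vocabulary defined by the Biositemaps Protocol, not this URI.
BM_NS = "http://example.org/biositemap#"

# A hypothetical biositemap entry describing one software resource.
sample = f"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="{RDF_NS}" xmlns:bm="{BM_NS}">
  <rdf:Description rdf:about="http://example.org/tools/aligner">
    <bm:name>ExampleAligner</bm:name>
    <bm:resourceType>software</bm:resourceType>
  </rdf:Description>
</rdf:RDF>"""

def list_resources(rdf_xml):
    """Return (about, name, type) tuples for each rdf:Description."""
    root = ET.fromstring(rdf_xml)
    resources = []
    for desc in root.findall(f"{{{RDF_NS}}}Description"):
        about = desc.get(f"{{{RDF_NS}}}about")
        name = desc.findtext(f"{{{BM_NS}}}name")
        rtype = desc.findtext(f"{{{BM_NS}}}resourceType")
        resources.append((about, name, rtype))
    return resources

print(list_resources(sample))
```

A crawler would apply the same extraction to each biositemap URL it discovers, merging the resource records into its search index.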

Using a biositemap does not guarantee that resources will be included in search indexes nor does it influence the way that tools are ranked or perceived by the community.

[Image: iTools representation of a biositemap]