Spider
A spider (also called a crawler or robot) is an automated program that search engines use to find and index webpages. It "crawls" the web by visiting a page, analyzing its content, and following the links it contains. Each new page the spider discovers is added to the search engine's index, which helps expand and keep the searchable web up to date.
When a spider visits a webpage, it typically encounters links to other pages. On a typical website, the majority of links point to pages on the same site (internal links), while a smaller number of links point to pages on other websites (external links). Following these links allows the spider to jump between sites and continue building the index. Because link structures are interconnected, spiders regularly return to pages they've already visited. Recrawling pages helps search engines detect new content, track page updates, and measure how many other pages link to each page.
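The crawl loop described above can be summarized in a short program. The sketch below, which uses only Python's standard library, fetches a page, extracts its links, and follows them breadth-first until a page budget is reached. The crawl() and LinkExtractor names and the example.com seed URL are illustrative only, and a real spider adds essentials this sketch omits, such as honoring robots.txt, rate limiting, and recrawl scheduling.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag found in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl starting from seed_url.

    Returns a dict mapping each visited URL to the links found on it,
    standing in for a search engine's index entries.
    """
    index = {}
    queue = deque([seed_url])

    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue  # already visited; a real spider recrawls later to catch updates

        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue  # page unreachable or URL malformed; skip it

        parser = LinkExtractor()
        parser.feed(html)

        # Resolve relative hrefs against the current page's address.
        links = [urljoin(url, href) for href in parser.links]
        index[url] = links

        for link in links:
            # Internal links share the seed's host; external links point to other
            # sites. This sketch follows both, which is how a spider hops between sites.
            if urlparse(link).scheme in ("http", "https"):
                queue.append(link)

    return index


if __name__ == "__main__":
    pages = crawl("https://example.com", max_pages=3)
    for page, links in pages.items():
        print(page, "->", len(links), "links found")
```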
NOTE: The word "spider" can also be used as a verb, as in "The search engine spidered my site yesterday."