Search has long been Google's main business, and now it wants to make a core part of it an internet standard.
The internet giant has outlined plans to turn the Robots Exclusion Protocol (REP) — better known as robots.txt — into an internet standard after 25 years. To that effect, it has also made the C++ robots.txt parser that underpins the Googlebot web crawler available on GitHub for anyone to access.
"We wanted to help website owners and developers create amazing experiences on the internet instead of worrying about how to control crawlers," Google said. "Together with the original author of the protocol, webmasters, and other search engines, we've documented how the REP is used on the modern web, and submitted it to the IETF."
The REP is one of the cornerstones of web search engines, and it helps website owners manage their server resources more easily. Web crawlers — like Googlebot — are how Google and other search engines routinely scan the internet to discover new web pages and add them to their list of known pages.
A follow-on post on Google's blog expands on the proposal. The draft specification is available here, and Google has put its open-source repository up on GitHub.
(Score: 0) by Anonymous Coward on Friday July 05 2019, @03:20AM
People put all sorts of paths in robots.txt because they don't want robots firing off scripts whose results they either won't care about or can't fill out properly. For example, SoylentNews has Disallow: /search.pl in its robots.txt because visiting that page triggers a script that, at a minimum, processes the request and potentially hits the database.
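As a sketch, a robots.txt that keeps compliant crawlers away from such a script-backed path (using the SoylentNews example mentioned above) would look like this:

```
User-agent: *
Disallow: /search.pl
```

Note this is purely advisory: the REP relies on crawlers choosing to honor it, which is exactly the weakness discussed next.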
However, as mentioned, that also tells blackhats that there is definitely some script or resource you don't want well-behaved crawlers to touch.
But if you really are worried about something like that, you can always put Disallow: /admin-control-panel.pl in your robots.txt too, where that URL is actually a trap: a script that adds the visitor's IP address to your firewall's blacklist (think fail2ban, denyhosts, OSSEC, or stockade).
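The trap idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not any of the named tools: the trap path, log format (Apache common log format is assumed), and function names are all made up for the example. A real deployment would feed the resulting IPs to fail2ban or a firewall rule.

```python
import re

# Hypothetical trap path: listed under Disallow in robots.txt but never
# linked from any page, so only a client that deliberately reads robots.txt
# and ignores it should ever request it.
TRAP_PATHS = {"/admin-control-panel.pl"}

# Matches the client IP and request path in an Apache common-log-format line.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def ips_to_ban(log_lines):
    """Return the set of client IPs that requested a trap path."""
    banned = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2) in TRAP_PATHS:
            banned.add(m.group(1))
    return banned
```

The design choice is that false positives are nearly impossible: a legitimate user has no way to reach the trap URL, so any hit is a crawler that parsed robots.txt specifically to find forbidden paths.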