Google's main business has been search, and now it wants to make a core part of it an internet standard.
The internet giant has outlined plans to turn the Robots Exclusion Protocol (REP) — better known as robots.txt — into an internet standard after 25 years. To that effect, it has also made the C++ robots.txt parser that underpins the Googlebot web crawler available on GitHub for anyone to access.
"We wanted to help website owners and developers create amazing experiences on the internet instead of worrying about how to control crawlers," Google said. "Together with the original author of the protocol, webmasters, and other search engines, we've documented how the REP is used on the modern web, and submitted it to the IETF."
The REP is one of the cornerstones of web search engines, and it helps website owners manage their server resources more easily. Web crawlers — like Googlebot — are how Google and other search engines routinely scan the internet to discover new web pages and add them to their list of known pages.
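For context, a robots.txt file is just a plain-text file served from the site root (e.g. https://example.com/robots.txt) that groups rules by user agent. A minimal example might look like the following (the paths and site are hypothetical):

```
# Hypothetical robots.txt for example.com
User-agent: *          # rules for all crawlers
Disallow: /cgi-bin/    # ask crawlers to skip this path prefix
Disallow: /private/

User-agent: Googlebot  # a group specific to Google's crawler
Disallow: /tmp/
```

A crawler matches the most specific user-agent group it can and honours the Disallow rules in it; everything not disallowed is fair game.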
A follow-on post to Google's blog expands on the proposal.
The draft specification is available here, and Google has put its open-source repository up on GitHub.
(Score: 2, Interesting) by Anonymous Coward on Thursday July 04 2019, @10:56AM (2 children)
Three, if you want to include flagging 'interesting' parts of your site for the attentions of 'miscreants'...
I've just enabled 4 virtual hosts on a server; within hours, a number of IP addresses attached to DSL lines dotted around the globe were attempting to grab (non-existent) robots.txt files from these virtual sites. The same IP addresses also tried various PHP exploits, MySQL exploits, etc.
The file has its uses, but seriously? Google, of all the momsers, attempting to mandate its use seems a mite bloody strange...
(Score: 1, Interesting) by Anonymous Coward on Thursday July 04 2019, @02:12PM (1 child)
If you're actually using robots.txt to do that, then you're doing it wrong and deserve everything you get. If you think the absence of a robots.txt file is going to protect you, then you are sadly mistaken. Security by obscurity is never a good policy. The robots.txt file is there as a guideline for good actors, but no one has to absolutely respect it, not even all good actors (archive.org for one ignores robots.txt). Maybe these evil robots are looking for a default robots.txt file that's put there by some vulnerable package.
The mandate, then, is largely for people who want to make web robots that try to be good neighbours, like Google's crawler does, and for people who, for whatever reason, don't want portions of their site crawled even by these well-behaved robots. Most RFCs, except for those on the Standards Track, aren't really mandates; a lot are just codifications of established practice. This one looks like the latter type.
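As a sketch of what being a good neighbour looks like in practice, Python's standard library ships a REP parser that a crawler can consult before fetching anything. The rules, bot name, and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real crawler would fetch
# https://example.com/robots.txt instead of hard-coding it.
rules = """\
User-agent: *
Disallow: /search.pl
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# A well-behaved bot checks the parser before every request.
print(rp.can_fetch("MyBot", "https://example.com/index.html"))  # True
print(rp.can_fetch("MyBot", "https://example.com/search.pl"))   # False
```

Nothing forces a crawler to run this check, of course; that is exactly the commenter's point about robots.txt being a guideline for good actors rather than an access control.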
(Score: 0) by Anonymous Coward on Friday July 05 2019, @03:20AM
People put all sorts of paths in robots.txt because they don't want robots firing off scripts whose results they don't care about, or submitting forms they can't fill out properly. For example, SoylentNews has Disallow: /search.pl in its robots.txt, because merely visiting that page causes a script to process the request and potentially hit the database.
However, as mentioned, that shows blackhats that there is definitely some script or something you don't want good crawlers to see.
But if you really are worried about something like that, you can always put Disallow: /admin-control-panel.pl in your robots.txt too — except that URL points to a script that adds the visitor's IP address to your firewall's blacklist (think fail2ban, denyhosts, OSSEC, or stockade).
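A minimal sketch of that kind of trap, assuming a WSGI server and a fail2ban-style jail watching a log file (the log path, URL, and jail are all hypothetical, not part of any real tool's setup):

```python
import datetime


def format_trap_line(ip, now=None):
    """Build one log line for a honeypot hit; a fail2ban filter
    would match on the IP address in this line."""
    ts = (now or datetime.datetime.utcnow()).isoformat()
    return "%s robots-trap hit from %s\n" % (ts, ip)


def trap_app(environ, start_response, log_path="/tmp/robots-trap.log"):
    """WSGI app mounted at the honeypot URL listed in robots.txt.

    Any client requesting this URL ignored the Disallow rule, so
    record its address for the ban tool to act on, then return a
    plain 404 so the trap looks like nothing special.
    """
    ip = environ.get("REMOTE_ADDR", "unknown")
    with open(log_path, "a") as log:
        log.write(format_trap_line(ip))
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not found\n"]
```

The actual banning is left to the external tool; the app only produces a greppable log line, which keeps the web-facing code free of firewall privileges.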