
posted by martyb on Thursday July 04 2019, @06:06AM   Printer-friendly
from the building-better-bot-blocks dept.

https://thenextweb.com/google/2019/07/02/google-wants-to-make-the-25-year-old-robots-txt-protocol-an-internet-standard/:

Google's main business has been search, and now it wants to make a core part of it an internet standard.

The internet giant has outlined plans to turn the Robots Exclusion Protocol (REP) — better known as robots.txt — into an internet standard after 25 years. To that effect, it has also made the C++ robots.txt parser that underpins the Googlebot web crawler available on GitHub for anyone to access.

"We wanted to help website owners and developers create amazing experiences on the internet instead of worrying about how to control crawlers," Google said. "Together with the original author of the protocol, webmasters, and other search engines, we've documented how the REP is used on the modern web, and submitted it to the IETF."

The REP is one of the cornerstones of web search engines, and it helps website owners manage their server resources more easily. Web crawlers — like Googlebot — are how Google and other search engines routinely scan the internet to discover new web pages and add them to their list of known pages.
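The protocol itself is simple: a plain-text file at the site root lists per-agent allow and disallow rules, and well-behaved crawlers check it before fetching pages. A minimal sketch using Python's standard-library `urllib.robotparser` (the rules below are made up for illustration, not any real site's file):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, inlined for the example.
robots_txt = """\
User-agent: *
Disallow: /search.pl
Disallow: /admin/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler may fetch ordinary pages...
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))  # True
# ...but must skip anything under a Disallow rule.
print(rp.can_fetch("Googlebot", "https://example.com/search.pl"))   # False
```

Note that compliance is entirely voluntary on the crawler's side, which is part of what the standardization effort aims to pin down.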

A follow-on post to Google's blog expands on the proposal.

The draft specification is available from the IETF. Google has put its open-source repository up on GitHub.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday July 05 2019, @03:20AM (#863341)

    People list all sorts of paths in robots.txt because they don't want robots firing off scripts whose results the crawler either won't care about or can't handle properly. For example, SoylentNews has Disallow: /search.pl in its robots.txt, because visiting that page at a minimum causes a script to process the request, and potentially hits the database.
      However, as mentioned, that also shows blackhats that there is definitely some script or resource you don't want good crawlers to see.

    But if you really are worried about something like that, you can always put Disallow: /admin-control-panel.pl in your robots.txt too, except that this URL points to a script that adds the visitor's IP address to your firewall's blacklist; think fail2ban, denyhosts, OSSEC, or stockade.
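The trap described above can be sketched in a few lines. This is a hypothetical illustration only (the handler name, the trap URL, and the blacklist path are all made up, not SoylentNews's actual setup): the disallowed URL maps to a handler that records the requester's IP, so that a tool like fail2ban watching the blacklist file can block it.

```python
import tempfile

def handle_trap_request(client_ip: str, blacklist_path: str) -> str:
    """Append the offending IP to a blacklist file and deny the request.

    A firewall helper (fail2ban, denyhosts, etc.) watching this file
    would then ban the address. Names and paths here are illustrative.
    """
    with open(blacklist_path, "a") as f:
        f.write(client_ip + "\n")
    return "403 Forbidden"

# Example: a crawler that ignored robots.txt requests the trap URL.
trap_log = tempfile.NamedTemporaryFile("r", delete=False, suffix=".deny")
print(handle_trap_request("203.0.113.7", trap_log.name))  # prints 403 Forbidden
```

Only a client that ignores the Disallow rule ever reaches the handler, which is what makes the trap safe for compliant crawlers.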