
posted by martyb on Thursday July 04 2019, @06:06AM   Printer-friendly
from the building-better-bot-blocks dept.

https://thenextweb.com/google/2019/07/02/google-wants-to-make-the-25-year-old-robots-txt-protocol-an-internet-standard/:

Google's main business has been search, and now it wants to make a core part of it an internet standard.

The internet giant has outlined plans to turn the Robots Exclusion Protocol (REP) — better known as robots.txt — into an internet standard after 25 years. To that effect, it has also made the C++ robots.txt parser that underpins the Googlebot web crawler available on GitHub for anyone to access.

"We wanted to help website owners and developers create amazing experiences on the internet instead of worrying about how to control crawlers," Google said. "Together with the original author of the protocol, webmasters, and other search engines, we've documented how the REP is used on the modern web, and submitted it to the IETF."

The REP is one of the cornerstones of web search engines, and it helps website owners manage their server resources more easily. Web crawlers — like Googlebot — are how Google and other search engines routinely scan the internet to discover new web pages and add them to their list of known pages.
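For readers unfamiliar with the format, a robots.txt file is just a plain-text list of per-crawler rules. Python's standard library ships a parser for it; the sketch below uses hypothetical rules and a made-up "ExampleBot" user agent to show how a crawler can check whether a URL is off-limits:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents, for illustration only.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)  # a real crawler would instead fetch https://example.com/robots.txt

print(rp.can_fetch("ExampleBot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))         # True
```

Note that `urllib.robotparser` implements the long-standing informal conventions; Google's draft pins down details (file-size limits, wildcard matching, precedence) that implementations have historically handled differently.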

A follow-on post to Google's blog expands on the proposal.

The Draft Specification is available here. Google has put its open-source repository up on GitHub.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Interesting) by Anonymous Coward on Thursday July 04 2019, @10:56AM (2 children)

    by Anonymous Coward on Thursday July 04 2019, @10:56AM (#863096)

    AFAIK, robots.txt serves two purposes....

    Three, if you count flagging the 'interesting' parts of your site for the attention of 'miscreants'..

    I've just enabled 4 virtual hosts on a server; within hours, a number of IP addresses attached to DSL lines dotted around the globe were attempting to grab the (non-existent) robots.txt files from these virtual sites. The same IP addresses also tried various PHP exploits, MySQL exploits, etc.

    The file has its uses, but seriously? Google, of all the momsers, attempting to mandate its use seems a mite bloody strange....

  • (Score: 1, Interesting) by Anonymous Coward on Thursday July 04 2019, @02:12PM (1 child)

    by Anonymous Coward on Thursday July 04 2019, @02:12PM (#863123)

    Three, if you want to include flagging 'interesting' parts of your site for the attentions of 'miscreants'..

    If you're actually using robots.txt to do that, then you're doing it wrong and deserve everything you get. If you think the absence of a robots.txt file is going to protect you, you are sadly mistaken. Security by obscurity is never a good policy. The robots.txt file is there as a guideline for good actors, but no one is obliged to respect it, not even all good actors (archive.org, for one, ignores robots.txt). Maybe these evil robots are looking for a default robots.txt file put there by some vulnerable package.

    Google, of all the momsers, attempting to mandate its use seems a mite bloody strange....

    The mandate, then, is largely for people who want to make web robots that try to be good neighbours, as Google's crawler does, and for people who, for whatever reason, don't want portions of their site crawled even by these well-behaved robots. Most RFCs, except for those on the Standards Track, aren't really mandates; a lot are just codifications of established practice. This one looks like the latter.
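In that spirit, a "good neighbour" crawler does more than honour Disallow lines; many sites also use the de-facto Crawl-delay extension (which is not part of Google's draft) to request pacing between fetches. A minimal sketch, again with hypothetical rules and a made-up agent name:

```python
import time
from urllib.robotparser import RobotFileParser

# Hypothetical rules; a real crawler would fetch the site's /robots.txt.
rules = [
    "User-agent: *",
    "Crawl-delay: 1",
    "Disallow: /cgi-bin/",
]

rp = RobotFileParser()
rp.parse(rules)

delay = rp.crawl_delay("ExampleBot") or 0
for url in ["https://example.com/", "https://example.com/cgi-bin/run"]:
    if rp.can_fetch("ExampleBot", url):
        print("fetching", url)  # a real crawler would issue the request here
        time.sleep(delay)       # honour the site's requested pacing
    else:
        print("skipping", url)
```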

    • (Score: 0) by Anonymous Coward on Friday July 05 2019, @03:20AM

      by Anonymous Coward on Friday July 05 2019, @03:20AM (#863341)

      People put all sorts of paths in robots.txt because they don't want robots firing off scripts whose results the robots either won't care about or can't fill out properly. For example, SoylentNews has Disallow: /search.pl in its robots.txt because visiting that page causes a script to process the request at a minimum, and potentially hits the database.

      However, as mentioned, that also shows blackhats that there is definitely some script or something there that you don't want good crawlers to see.
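The effect of a rule like that can be checked directly with Python's standard-library parser; this snippet mirrors the Disallow: /search.pl line described above (the "ExampleBot" agent name is made up):

```python
from urllib.robotparser import RobotFileParser

# Rules mirroring the SoylentNews example described above.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search.pl",
])

print(rp.can_fetch("ExampleBot", "https://soylentnews.org/search.pl"))  # False
print(rp.can_fetch("ExampleBot", "https://soylentnews.org/"))           # True
```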

      But if you really are worried about something like that, you can always put Disallow: /admin-control-panel.pl in your robots.txt too. Except that URL is a script that adds the visitor's IP address to your firewall's blacklist; think fail2ban, denyhosts, OSSEC, or stockade.