
posted by n1 on Thursday May 21 2015, @08:28PM
from the hackers.txt dept.

Robots.txt files are simple text files that website owners place on their sites to ask web crawlers like Google and Yahoo not to index the contents of particular directories. It's a game of trust: webmasters don't actually trust the spiders to stay out of every file in those directories, they just expect the documents not to show up in search engines. By and large, the bargain has been kept.
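
For anyone who hasn't peeked at one, a minimal robots.txt looks something like this (the paths here are invented for illustration):

    User-agent: *
    Disallow: /admin/
    Disallow: /backups/
    Disallow: /old-site/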

But hackers have made no such bargain, and the mere presence of a robots.txt file is like an X on a treasure map. Website owners get careless, and, yes, some operate under the delusion that the promise of the spiders actually protects these documents.

The Register has an article explaining that hackers and rogue web crawlers actually use robots.txt files to find directories worth crawling.

Melbourne penetration tester Thiebauld Weksteen is warning system administrators that robots.txt files can give attackers valuable information on potential targets by giving them clues about directories their owners are trying to protect.

Once a hacker gets into a system, it is standard reconnaissance practice to compile and update detailed lists of interesting subdirectories by harvesting robots.txt files. It requires less than 100 lines of code.
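
The article doesn't reproduce Weksteen's code, but a harvester along those lines is a quick sketch in Python; the example hosts below are placeholders, not anything taken from his tool:

    # Fetch robots.txt from each host and collect the Disallow'ed paths,
    # i.e. the directories an attacker would probe first.
    import urllib.request
    from urllib.parse import urljoin

    def disallowed_paths(base_url):
        try:
            with urllib.request.urlopen(urljoin(base_url, "/robots.txt"), timeout=5) as resp:
                text = resp.read().decode("utf-8", errors="replace")
        except Exception:
            return []
        paths = []
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()        # drop comments
            if line.lower().startswith("disallow:"):
                path = line.split(":", 1)[1].strip()
                if path:
                    paths.append(urljoin(base_url, path))
        return paths

    if __name__ == "__main__":
        for site in ("https://example.com", "https://example.org"):
            for url in disallowed_paths(site):
                print(url)

That comes in well under the 100 lines mentioned above, and it is all it takes to turn a list of hosts into a list of directories worth a closer look.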

If you watch your logs, you've probably seen web crawler tracks, and you've probably seen some crawlers walk right past your robots.txt. If you are smart, there really isn't anything of value "protected" by your robots.txt. But the article lists some examples of people who should know better leaving lots of sensitive information hiding behind one.

 
  • (Score: 1, Interesting) by Anonymous Coward on Thursday May 21 2015, @10:57PM (#186241)

    > Which would take a few hours to work around by the scammers.

    Yes, yes, yes. All so obvious that I decided not to mention it in the first place. Security is about cost trade-offs, not absolutism; the goal is to make your website less attractive than other websites - make it nondescript so it doesn't attract attention. Ultimately you cannot both serve information to the public and prevent information from being served to the public. The best you can do is apply some heuristics and hope they don't screw up for regular users.

    > Real websites already have a better protection mech, they require a referrer field from the site
    > ... but most are more hit and run volume of traffic verses smart/complex processes on their end.

    Really? Is faking the referer harder than a two-step process that involves separate IP addresses?
    A spider that doesn't have a single config setting to enable Referer headers? What is this, 1995?
    Wishful thinking.

    I'm not saying requiring a referer is a bad idea. It is easy to do with minimal downside (other than the fact that the Referer header is an optional part of the HTTP spec, so you'll also block legitimate users who have turned off their referer headers). I am saying that it is an even less effective choice than the one you shot down, and for the exact same reason.
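
    To make the point concrete, here is roughly what "faking the referer" costs a crawler; this is a generic sketch using the third-party requests library, and the URL and header values are placeholders:

        # Send whatever Referer (and User-Agent) the target site expects to see.
        import requests

        headers = {
            "User-Agent": "Mozilla/5.0 (compatible; definitely-not-a-robot)",
            "Referer": "https://example.com/",  # pretend we followed an internal link
        }
        resp = requests.get("https://example.com/private/", headers=headers, timeout=5)
        print(resp.status_code, len(resp.content))

    One entry in a header dict, versus a two-step process across separate IP addresses.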
