
posted by n1 on Thursday May 21 2015, @08:28PM   Printer-friendly
from the hackers.txt dept.

Robots.txt files are simple text files that website owners place at the root of their sites to keep web crawlers like Google and Yahoo from indexing the directories they list. It's a game of trust: webmasters don't actually trust the spiders to stay out of those directories, they just expect the documents not to appear in search engines. By and large, the bargain has been kept.
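
For the record, a robots.txt file is nothing more than a list of User-agent and Disallow directives served from the web root. A minimal example (the paths here are made up):

    User-agent: *
    Disallow: /admin/
    Disallow: /private/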

But hackers have made no such bargain, and the mere presence of a robots.txt file is like an X on a treasure map. Website owners get careless, and, yes, some operate under the delusion that the spiders' promise actually protects these documents.

The Register has an article explaining that hackers and rogue web crawlers actually use robots.txt files to find directories worth crawling.

Melbourne penetration tester Thiebauld Weksteen is warning system administrators that robots.txt files can give attackers valuable information on potential targets by giving them clues about directories their owners are trying to protect.

Once a hacker gets into a system, it is standard reconnaissance practice to compile and update detailed lists of interesting subdirectories by harvesting robots.txt files. It requires less than 100 lines of code.
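
For a sense of scale, here is a minimal sketch of such a harvester in Python, using only the standard library. The target list and output format are illustrative, not taken from Weksteen's tool:

    import urllib.request
    import urllib.error

    def harvest_disallowed(host, timeout=10):
        """Fetch a host's robots.txt and return the paths it tries to hide."""
        try:
            with urllib.request.urlopen("http://%s/robots.txt" % host,
                                        timeout=timeout) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, OSError):
            return []
        paths = []
        for line in body.splitlines():
            line = line.split("#", 1)[0].strip()       # strip comments
            if line.lower().startswith("disallow:"):
                path = line.split(":", 1)[1].strip()
                if path:                                # a bare "Disallow:" allows everything
                    paths.append(path)
        return paths

    if __name__ == "__main__":
        for host in ["example.com"]:                    # stand-in target list
            for path in harvest_disallowed(host):
                print(host + path)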

If you watch your logs, you've probably seen web crawler tracks, and you've probably seen some walk right past your robots.txt files. If you're smart, there really isn't anything of value "protected" by your robots.txt. But the article lists examples of people who should know better leaving plenty of sensitive information hiding behind one.

 
  • (Score: 2, Interesting) by Anonymous Coward on Thursday May 21 2015, @10:23PM (#186232)

    Not going to work. Most bots parallelize tasks and have a fairly short timeout period per request. The ultra tryhard attackers will even use a distributed botnet to stop you from IP blocking them with trap pages.
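
    For context, the trap pages mentioned here might look something like this sketch, assuming a Flask app; the /secret-trap/ route and the in-memory ban set are illustrative stand-ins for a real firewall hook:

        from flask import Flask, abort, request

        app = Flask(__name__)
        banned = set()   # in practice this would feed a firewall rule, not a set

        @app.before_request
        def reject_banned():
            if request.remote_addr in banned:
                abort(403)

        # Listed under Disallow: in robots.txt and linked from nowhere, so
        # the only visitors are crawlers that mined robots.txt for targets.
        @app.route("/secret-trap/")
        def trap():
            banned.add(request.remote_addr)
            abort(403)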

  • (Score: 2, Interesting) by Anonymous Coward on Friday May 22 2015, @12:17AM (#186263)

    Some of the more intelligent ones will even try the URLs and see if the bot gets banned; if so, the rest of the bots will ignore URLs like that.

  • (Score: 2) by maxwell demon (1608) on Friday May 22 2015, @06:53AM (#186344) Journal

    > Most bots parallelize tasks and have a fairly short timeout period per request.

    But against those bots, you could protect your "private" resources simply by delaying their delivery a little, triggering the bots' timeout before your page is delivered. Sure, your users will see a short lag, but hey, who hasn't waited a few seconds for a web page to appear?
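
    A minimal sketch of that idea, again assuming a Flask app; the /private/ prefix and the five-second delay are illustrative values:

        import time

        from flask import Flask, request

        app = Flask(__name__)

        @app.before_request
        def stall_hidden_paths():
            # /private/ stands in for whatever prefixes robots.txt disallows.
            if request.path.startswith("/private/"):
                time.sleep(5)   # hopefully longer than the bot's per-request timeout

        @app.route("/private/report")
        def report():
            return "the quarterly report"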

    --
    The Tao of math: The numbers you can count are not the real numbers.