
posted by n1 on Thursday May 21 2015, @08:28PM   Printer-friendly
from the hackers.txt dept.

Robots.txt files are simple text files that website owners place on their servers to ask web crawlers like Google and Yahoo not to index the contents of particular directories. It's a game of trust: webmasters don't actually trust the spiders not to access every file in those directories; they just expect the documents not to appear in search engines. By and large, the bargain has been kept.
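
For illustration, a hypothetical robots.txt sitting at the root of a site might look like this (the directory names are invented):

    User-agent: *
    Disallow: /admin/
    Disallow: /staging/
    Disallow: /internal-reports/

Each Disallow line is a polite request, not an access control; nothing stops a client from fetching those paths directly.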

But hackers have made no such bargain, and the mere presence of a robots.txt file is like an X on a treasure map. Website owners get careless, and, yes, some operate under the delusion that the spiders' promise actually protects these documents.

The Register has an article explaining that hackers and rogue web crawlers actually use robots.txt files to find directories worth crawling.

Melbourne penetration tester Thiebauld Weksteen is warning system administrators that robots.txt files can give attackers valuable information on potential targets by giving them clues about directories their owners are trying to protect.

Once a hacker gets into a system, it is standard reconnaissance practice to compile and update detailed lists of interesting subdirectories by harvesting robots.txt files. It requires less than 100 lines of code.
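
As a rough illustration of how little code that takes, here is a minimal sketch (not the tool from the article) that fetches each target's robots.txt and prints the disallowed paths; the target list is hypothetical:

    #!/usr/bin/env python3
    # Harvest Disallow entries from robots.txt files (illustrative sketch).
    import urllib.error
    import urllib.request

    # Hypothetical targets; an attacker would feed in a real host list.
    targets = ["https://example.com", "https://example.org"]

    for base in targets:
        url = base.rstrip("/") + "/robots.txt"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, OSError) as exc:
            print(f"{url}: fetch failed ({exc})")
            continue
        for line in body.splitlines():
            line = line.split("#", 1)[0].strip()   # strip robots.txt comments
            if line.lower().startswith("disallow:"):
                path = line.split(":", 1)[1].strip()
                if path:
                    print(base + path)

Pointed at a list of hosts, a loop like this turns every "keep out" hint into a worklist of URLs worth probing.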

If you watch your logs, you've probably seen web crawler tracks, and you've probably seen some crawlers walk right past your robots.txt. If you are smart, there really isn't anything of value "protected" by your robots.txt. But the article lists examples of people who should know better leaving plenty of sensitive information hidden behind one.

 
  • (Score: 2) by VortexCortex (4067) on Thursday May 21 2015, @09:22PM (#186205)

    Furthermore, most spiders identify themselves. Better than robots.txt: if you have multiple (sub)domains, simply link them in a loop with (pseudo)random URLs, such as:
    a.com/rand-bot/1234.html <--> b.com/rand-bot/1234.html, and include */4567.html, */5678.html, etc. as random entries in the lists of cross-links on each page (see the sketch below).

    Once implemented: Congratulations, you just passed SEO 101.
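
    A minimal sketch of such a trap, assuming a stdlib HTTP server and the made-up a.com/b.com pair above; every trap page links to fresh pseudorandom trap pages on the peer domain, so a crawler that ignores robots.txt just keeps walking in circles:

        #!/usr/bin/env python3
        # Crawler-trap sketch: each /rand-bot/ page links to random
        # /rand-bot/ pages on the other (hypothetical) domain.
        import random
        from http.server import BaseHTTPRequestHandler, HTTPServer

        PEER = "http://b.example.com"   # hypothetical peer domain

        class TrapHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if not self.path.startswith("/rand-bot/"):
                    self.send_error(404)
                    return
                links = "".join(
                    f'<a href="{PEER}/rand-bot/{random.randint(0, 10**6)}.html">more</a><br>\n'
                    for _ in range(5)
                )
                body = f"<html><body>\n{links}</body></html>".encode()
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("", 8080), TrapHandler).serve_forever()

    Presumably both domains' robots.txt would Disallow /rand-bot/, so well-behaved spiders never see the loop, and anything caught wandering it has identified itself as a bot that ignores the rules.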

  • (Score: 2) by maxwell demon (1608) on Thursday May 21 2015, @10:24PM (#186233) Journal

    Wouldn't you risk your site being considered a link farm by Google's algorithm?

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by kaszz (4211) on Thursday May 21 2015, @10:29PM (#186234) Journal

      Ban any crawler from accessing those pages if it comes from a Google IP?
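
      One way to do that is the reverse-then-forward DNS check Google recommends for verifying Googlebot; a rough sketch (the function name and how it gets wired into the trap server are assumptions):

          import socket

          def is_google_crawler(ip: str) -> bool:
              # Reverse-resolve the IP, check the hostname is a Google crawler
              # domain, then confirm it resolves back to the same IP.
              try:
                  host = socket.gethostbyaddr(ip)[0]
              except OSError:
                  return False
              if not host.endswith((".googlebot.com", ".google.com")):
                  return False
              try:
                  return socket.gethostbyname(host) == ip
              except OSError:
                  return False

          # e.g. skip serving trap pages when is_google_crawler(remote_ip) is True
          # print(is_google_crawler("203.0.113.7"))   # hypothetical visitor IP

      The forward lookup matters because a reverse DNS record alone can be set to anything by whoever controls the visitor's IP block.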