
posted by martyb on Tuesday July 05 2016, @10:51AM
from the what-would-you-do? dept.

Disclaimer: I work on a search engine (findx). I try not to put competitors in a bad light.

Question: Should a web crawler always reveal its true name?

Background: While crawling the web I've run into situations where using a fake user-agent might help. One example is a web site that checks the user-agent in the HTTP request and returns a "your browser is not supported" page, even for robots.txt. Another is a site with an explicit whitelist in its robots.txt; strangely, 'curl' was whitelisted but 'wget' was not. I hesitate to use a fake user-agent such as Googlebot, because it isn't clear what these (apparently clueless) webmasters intend. It seems some websites are so misconfigured, or so Google-optimized, that other or newer search engines may have to resort to faking the user-agent.
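
To make the first example concrete, here is a minimal sketch (Python standard library only; the site URL and user-agent strings are hypothetical) of fetching robots.txt under a chosen user-agent and checking whether a crawler is allowed through:

    import urllib.error
    import urllib.request
    import urllib.robotparser

    SITE = "https://example.com"    # hypothetical site
    USER_AGENT = "findxbot/1.0"     # honest UA; swap in e.g. "Googlebot/2.1" to probe a whitelist

    # Fetch robots.txt ourselves so we control the User-Agent header;
    # a misconfigured site may reject "unknown" agents even for this file.
    req = urllib.request.Request(SITE + "/robots.txt",
                                 headers={"User-Agent": USER_AGENT})
    try:
        body = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "replace")
    except urllib.error.URLError:
        body = ""  # e.g. a "your browser is not supported" page served as a 403

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(body.splitlines())
    print(parser.can_fetch(USER_AGENT, SITE + "/some/page.html"))

Running the same check while varying USER_AGENT is a quick way to see whether a site's robots.txt whitelist really does treat, say, curl and wget differently.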

I'm also puzzled by Qwant: they claim to have their own search index, yet my personal website (which clearly shows up when I search on Qwant) has never been crawled by any user-agent resembling anything that could lead back to Qwant. They apparently don't reveal what their user-agent is: https://blog.qwant.com/qwant-fr/. And there has been some discussion about it: https://www.webmasterworld.com/search_engine_spiders/4743502.htm
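
One way to investigate a claim like that is to scan your own access logs for crawler-like user-agents. A rough sketch, assuming the common Apache/nginx 'combined' log format where the user-agent is the last quoted field (the log filename and keyword list are just placeholders):

    import re
    from collections import Counter

    # In the 'combined' log format the user-agent is the final quoted field.
    UA_RE = re.compile(r'"([^"]*)"\s*$')
    KEYWORDS = ("bot", "crawl", "spider", "qwant")  # crude heuristic

    agents = Counter()
    with open("access.log") as logfile:             # placeholder path
        for line in logfile:
            match = UA_RE.search(line)
            if match and any(k in match.group(1).lower() for k in KEYWORDS):
                agents[match.group(1)] += 1

    for ua, hits in agents.most_common(20):
        print(f"{hits:6d}  {ua}")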

This is different from search engines that don't have an index of their own (e.g. DuckDuckGo uses results from Yahoo! and Yandex, Startpage uses Google, and so on).

So what do you Soylentils say: is faking the user-agent in web crawls necessary? Acceptable? A necessary evil?


Original Submission

 
  • (Score: 2) by GungnirSniper (1671) on Tuesday July 05 2016, @04:38PM (#370146) Journal

    Surely there is a better way than an eternal takedown, because eventually nearly every domain is going to change hands. Archived sites that had great info years ago shouldn't just disappear forever because Sedo or GoDaddy or someone icky gets the registration. It would be like reusing an ISBN cancelling the copyright on the previously published work.

  • (Score: 1, Informative) by Anonymous Coward on Tuesday July 05 2016, @05:53PM (#370172)

    Their takedown isn't permanent. Every bit of data they collect is still there, and some sites' archives are still downloadable by the public as WARC files. Their policy is to block access through the Wayback Machine until the robots.txt rule disappears or the copyright expires.
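
    For context, the exclusion in question is an ordinary robots.txt rule. Under that policy, a file like the following at a domain's root (ia_archiver is the token the Internet Archive's crawler has long honored) disabled Wayback Machine access to that domain's snapshots for as long as it remained in place:

        User-agent: ia_archiver
        Disallow: /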

    • (Score: 1) by Chrontius (5246) on Wednesday July 06 2016, @11:47PM (#371033)

      I've tried working with WARC files, but I'm still hazy on how to browse them. Do you have any guides you could point me to?
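
      Not a guide as such, but here is a minimal sketch of peeking inside one, assuming the Python warcio library (the filename is a placeholder; a tool like pywb is the usual way to actually replay an archive in a browser):

          from warcio.archiveiterator import ArchiveIterator

          with open("example.warc.gz", "rb") as stream:   # placeholder filename
              for record in ArchiveIterator(stream):
                  # 'response' records hold the archived HTTP responses.
                  if record.rec_type == "response":
                      url = record.rec_headers.get_header("WARC-Target-URI")
                      body = record.content_stream().read()
                      print(url, record.http_headers.get_header("Content-Type"), len(body))

      Each 'response' record pairs the original URL with the raw HTTP response, so iterating over them is enough for simple extraction even without a replay tool.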