
posted by martyb on Tuesday July 05 2016, @10:51AM
from the what-would-you-do? dept.

Disclaimer: I work on a search engine (findx). I try not to put competitors in a bad light.

Question: Should a web crawler always reveal its true name?

Background: While crawling the web I've found some situations where using a fake user-agent might help. The first example is a web site that checks the user-agent in the HTTP request and returns a "your browser is not supported" error - even for robots.txt. Another example is a site that had an explicit whitelist in robots.txt; strangely, 'curl' was whitelisted but 'wget' was not. I hesitate to use a fake user-agent, e.g. Googlebot, because it isn't clear what the clueless webmasters' intentions are. It appears that some websites are so misconfigured or so Google-optimized that other/new search engines may have to resort to faking their user-agent.
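For concreteness, here is a minimal sketch of checking such a whitelist with Python's standard urllib.robotparser; the site URL and agent names are illustrative, not taken from the sites described above:

    # Sketch: see what a robots.txt whitelist allows for different user-agents.
    # The URL is illustrative (example.com); uses only the standard library.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetches and parses robots.txt

    for agent in ("curl", "wget", "Googlebot", "MyNewCrawler"):
        # can_fetch() applies the most specific matching User-agent group.
        print(agent, "->", rp.can_fetch(agent, "https://example.com/"))

Note that read() itself fetches robots.txt with Python's default user-agent, so a site of the first kind may refuse even that request - which is exactly the chicken-and-egg problem described above.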

I'm also puzzled by Qwant: they claim to have their own search index, but my personal website (which is clearly indexed when I search on Qwant) has never been crawled by a user-agent resembling anything that could be traced back to Qwant. Apparently they don't reveal what their user-agent is: https://blog.qwant.com/qwant-fr/. And there has been some discussion about it: https://www.webmasterworld.com/search_engine_spiders/4743502.htm
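Verifying a claim like that comes down to scanning your access logs for the user-agent field. A minimal sketch, assuming a combined-format (Apache/nginx) log at the hypothetical path access.log and matching on the substring "qwant":

    # Sketch: print the distinct user-agents in a combined-format access log
    # that mention a given search engine. Path and substring are assumptions.
    import re

    UA = re.compile(r'"([^"]*)"$')  # the user-agent is the last quoted field

    seen = set()
    with open("access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            m = UA.search(line.rstrip())
            if m and "qwant" in m.group(1).lower():
                seen.add(m.group(1))

    for ua in sorted(seen):
        print(ua)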

This is different from search engines that don't have their own index (e.g. DuckDuckGo uses results from Yahoo! and Yandex; Startpage uses Google; etc.).

So what do you Soylentils say: is faking the user-agent in web crawls necessary? Acceptable? A necessary evil?



 
  • (Score: 2) by Scruffy Beard 2 on Tuesday July 05 2016, @03:46PM

    by Scruffy Beard 2 (6030) on Tuesday July 05 2016, @03:46PM (#370119)

    My understanding is that they treat it like a take-down request.

    They have a model of putting information out there and taking it down if a copyright holder complains.

  • (Score: 2) by GungnirSniper on Tuesday July 05 2016, @04:38PM

    by GungnirSniper (1671) on Tuesday July 05 2016, @04:38PM (#370146) Journal

    Surely there is a better way than an eternal takedown, because eventually nearly every domain is going to change hands. Archived sites that had great info years ago shouldn't just disappear forever because Sedo or GoDaddy or someone icky gets the registration. It would be as if reusing an ISBN cancelled the copyright on the previously published work.

    • (Score: 1, Informative) by Anonymous Coward on Tuesday July 05 2016, @05:53PM

      by Anonymous Coward on Tuesday July 05 2016, @05:53PM (#370172)

      Their takedown isn't permanent. Every bit of data they collect is still there, and some sites' archives are still downloadable by the public as WARC files. Their policy is to take down access through the Wayback Machine until the robots.txt exclusion disappears or the copyright expires.

      • (Score: 1) by Chrontius on Wednesday July 06 2016, @11:47PM

        by Chrontius (5246) on Wednesday July 06 2016, @11:47PM (#371033)

        I've tried working with WARC files, but I'm still hazy on how to browse them. Do you have any guides you could point me to?
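        A WARC file is just a sequence of length-prefixed records, so listing its contents is straightforward with the third-party Python library warcio (pip install warcio); the filename below is illustrative. A minimal sketch:

            # Sketch: list the captured URLs in a WARC file using warcio.
            from warcio.archiveiterator import ArchiveIterator

            with open("example.warc.gz", "rb") as stream:
                for record in ArchiveIterator(stream):
                    if record.rec_type == "response":
                        # Each response record names the URL it was fetched from.
                        print(record.rec_headers.get_header("WARC-Target-URI"))

        For actually browsing the pages rather than listing them, the related pywb project can replay a WARC file in an ordinary web browser.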

  • (Score: 0) by Anonymous Coward on Tuesday July 05 2016, @08:49PM

    by Anonymous Coward on Tuesday July 05 2016, @08:49PM (#370248)

    if a copyright holder complains

    NO. That's not what's being complained about.

    So, on the one hand, we have a guy who hosts websites on his servers and owns all those hosted domains.
    Let's call him Hostmaster.

    On the other hand, we have a guy who rents one of those domains and generates all the **content** that appears on that website.
    Let's call him Webmaster.
    Webmaster is just fine with his content being archived.

    Now, something changes (e.g. missed hosting payments) and Webmaster loses control of his domain|subdomain.

    In the robots.txt for the domain formerly used by Webmaster, Hostmaster specifies that all that content is inaccessible.
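    Concretely, that takes nothing more than a one-stanza robots.txt; a minimal sketch of the blanket form (the # comment syntax is part of the robots exclusion protocol):

        # Blanket exclusion: no robot may fetch anything on this host.
        # archive.org's policy at the time treated this retroactively,
        # hiding captures it had already made.
        User-agent: *
        Disallow: /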

    archive.org GOES BACK IN TIME to the point where they already had permission to archive that content and had *done* so.
    They now treat that content on archive.org's own servers as if it NEVER EXISTED.

    In a slight plot twist, if Webmaster actually *owned* his domain and let his registration lapse, and Hostmaster subsequently snatched up that URL, we'd be at the same point.

    This stuff is a problem of Capitalism|ownership|rent-seeking.
    The logical argument to be made (which you are missing) is that the Intellectual Property still belongs to the (former) Webmaster, who is still its creator.

    ...as well as the fact that Hostmaster is being a dick and that archive.org is siding with the dick.

    -- OriginalOwner_ [soylentnews.org]

    • (Score: 2) by Scruffy Beard 2 on Wednesday July 06 2016, @05:03PM

      by Scruffy Beard 2 (6030) on Wednesday July 06 2016, @05:03PM (#370770)

      You missed some nuance in my post. Maybe I was not clear.

      I said they treat it "like" a take-down request.

      I am aware that the domain holder may not be the copyright holder in many (most?) cases.