posted by martyb on Tuesday February 28 2017, @04:19PM   Printer-friendly
from the rock-'em-sock-'em-wikibots dept.

Source: Popular Science:

Bots waging war for years on end, silently and endlessly arguing over tiny details on Wikipedia is, let's be honest, pretty funny. Automatons with vendettas against each other? Come on.

But as amusing as the idea is, anthropomorphizing bot wars ignores what's actually important about their arguments: we didn't know they were happening. Bots account for large chunks of the internet's activity, yet we know relatively little about how they all interact with each other. They're just released into the World Wide Jungle to roam free. And given that they account for over half of all web traffic, we should probably know more about them. Especially since these warring bots weren't even malicious—they were benevolent.

A group of researchers at the Oxford Internet Institute looked at nine years' worth of data on Wikipedia's bots and found that even the helpful ones spent a lot of time contradicting each other. And more specifically, there were pairs of bots that spent years doing and undoing the same changes repeatedly. The researchers published their findings on Thursday in the journal PLOS ONE.

Our results show that, although in quantitatively different ways, bots on Wikipedia behave and interact as unpredictably and as inefficiently as the humans. The disagreements likely arise from the bottom-up organization of the community, whereby human editors individually create and run bots, without a formal mechanism for coordination with other bot owners. Delving deeper into the data, we found that most of the disagreement occurs between bots that specialize in creating and modifying links between different language editions of the encyclopedia. The lack of coordination may be due to different language editions having slightly different naming rules and conventions.
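
The coordination failure described here is easy to see in a toy simulation. The sketch below is plain Python with hypothetical bot names and a plain dictionary standing in for a page's inter-language links; it is not based on any real Wikipedia bot's code, only on the scenario the authors describe. Two well-meaning bots, each applying its own naming convention, keep "correcting" the other's edits, so the page never settles:

    # Toy simulation (hypothetical bots, not real Wikipedia code): two
    # interwiki-link bots with slightly different naming conventions
    # endlessly undo each other's edits.

    def bot_a(links):
        # Bot A's convention: language prefixes in lower case.
        return {lang.lower(): title for lang, title in links.items()}

    def bot_b(links):
        # Bot B's convention: language prefixes capitalised.
        return {lang.capitalize(): title for lang, title in links.items()}

    page = {"De": "Usbekistan", "fr": "Ouzbékistan"}   # inter-language links
    history = [dict(page)]

    for step in range(6):                              # the bots take turns
        bot = bot_a if step % 2 == 0 else bot_b
        page = bot(page)
        history.append(dict(page))

    for i, rev in enumerate(history):
        print(i, rev)
    # From revision 1 onward the page oscillates between two states: every
    # edit undoes the previous one, and there is no stable end state.

Neither bot is malfunctioning; each consistently applies its own rule, which is exactly why the back-and-forth never ends on its own.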

From the PLOS ONE journal article (an open-access article under the Creative Commons Attribution License; copyright notice below):

Copyright: © 2017 Tsvetkova et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

In support of this argument, we also found that the same bots are responsible for the majority of reverts in all the language editions we study. For example, some of the bots that revert the most other bots include Xqbot, EmausBot, SieBot, and VolkovBot, all bots specializing in fixing inter-wiki links. Further, while there are few articles with many bot-bot reverts (S7 Fig), these articles tend to be the same across languages. For example, some of the articles most contested by bots are about Pervez Musharraf (former president of Pakistan), Uzbekistan, Estonia, Belarus, Arabic language, Niels Bohr, Arnold Schwarzenegger. This would suggest that a significant portion of bot-bot fighting occurs across languages rather than within. In contrast, the articles with most human-human reverts tend to concern local personalities and entities and tend to be unique for each language [26].
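
For readers wondering how revert chains like these can be pulled out of an edit history in the first place, here is a rough sketch. It treats an edit as a revert when it restores a page to content byte-identical to an earlier revision, detected by hashing each revision's text; that is a common heuristic for revert detection, offered here only as an illustration and not as the paper's exact pipeline. The sample history is invented, though the bot names come from the excerpt above.

    import hashlib
    from collections import Counter

    def find_reverts(revisions):
        # `revisions` is a list of (editor, text) tuples in chronological
        # order. An edit counts as a revert when it restores the page to
        # content byte-identical to some earlier revision.
        seen = {}  # content hash -> index of first revision with that content
        for i, (editor, text) in enumerate(revisions):
            digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
            if digest in seen:
                # Everyone who edited between the restored revision and this
                # one has just been reverted by the current editor.
                for j in range(seen[digest] + 1, i):
                    yield editor, revisions[j][0]
            seen.setdefault(digest, i)

    # Invented history: two bots flip an inter-language link back and forth.
    history = [
        ("Xqbot",    "[[de:Usbekistan]]"),
        ("EmausBot", "[[De:Usbekistan]]"),
        ("Xqbot",    "[[de:Usbekistan]]"),
        ("EmausBot", "[[De:Usbekistan]]"),
    ]

    print(Counter(find_reverts(history)))
    # Counter({('Xqbot', 'EmausBot'): 1, ('EmausBot', 'Xqbot'): 1})

Counting such pairs per language edition is enough to reproduce the kind of bot-versus-bot revert tallies the excerpt refers to, although the published analysis involves more care (for example, distinguishing bots from human editors).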


Original Submission

 
  • (Score: 2) by looorg on Tuesday February 28 2017, @05:57PM (2 children)

    by looorg (578) on Tuesday February 28 2017, @05:57PM (#472912)

    Sounds like someone just came up with Wikipedia Core Wars? Bots battling it out for edit-supremacy. There can be only one (editor)!

    http://www.corewars.org/ [corewars.org]

    Isn't the edit-re-edit war a side effect of bot stupidity? They don't know when they are wrong and they never give up. They have their "facts" and that is all they know. Things that don't sync up 100% with what they know are therefore wrong and need to be edited and disputed.

  • (Score: 2) by frojack on Tuesday February 28 2017, @07:17PM (1 child)

    by frojack (1554) on Tuesday February 28 2017, @07:17PM (#472972) Journal

    How would bots know they are "wrong"? And what defines "Wrong" or "Truth" in the Wikipedia world?

    I imagine that after a couple of trips through reversions, one or both "sides" just send in the bots to ensure that their comment remains in the article, or that some particular comments are always removed.

    The complexity of language makes replacing a counter-argument with your preferred argument seem like a never-ending arms race, by mere virtue of the clever ways you can reverse the meaning of any given comment:

    It is amazing that bots could be written to duel back and forth for years.

    [Some falsely claim that] It is amazing that bots could be written to duel back and forth for years.

    Variations on that theme could get progressively harder to prevent and revert.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 1) by nitehawk214 on Tuesday February 28 2017, @08:57PM

      by nitehawk214 (1304) on Tuesday February 28 2017, @08:57PM (#473032)

      Oh that is easy.

      Anything that disagrees with my bot's owner is wrong and fake.

      --
      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh