
SoylentNews is people

posted by Fnord666 on Tuesday September 25 2018, @06:11PM   Printer-friendly
from the subtle-influences dept.

Days after the Trump administration instituted a controversial travel ban in January 2017, Google employees discussed ways they might be able to tweak the company's search-related functions to show users how to contribute to pro-immigration organizations and contact lawmakers and government agencies, according to internal company emails.

The email traffic, reviewed by The Wall Street Journal, shows that employees proposed ways to "leverage" search functions and take steps to counter what they considered to be "islamophobic, algorithmically biased results from search terms 'Islam', 'Muslim', 'Iran', etc." and "prejudiced, algorithmically biased search results from search terms 'Mexico', 'Hispanic', 'Latino', etc."

The email chain, while sprinkled with cautionary notes about engaging in political activity, suggests employees considered ways to harness the company's vast influence on the internet in response to the travel ban. Google said none of the ideas discussed were implemented.

"These emails were just a brainstorm of ideas, none of which were ever implemented," a company spokeswoman said in a statement. "Google has never manipulated its search results or modified any of its products to promote a particular political ideology—not in the current campaign season, not during the 2016 election, and not in the aftermath of President Trump's executive order on immigration. Our processes and policies would not have allowed for any manipulation of search results to promote political ideologies."

wsj.com/articles/google-workers-discussed-tweaking-search-function-to-counter-travel-ban-1537488472


Original Submission

  • (Score: 0) by Anonymous Coward on Tuesday September 25 2018, @07:13PM (4 children)

    by Anonymous Coward on Tuesday September 25 2018, @07:13PM (#739821)

    What does "algorithmically biased" even mean? Those words sound like they refer to an algorithm that is somehow biased to return islamophobic/prejudiced results even when they're less relevant to the search query. But if so, they surely would have just fixed it, right? It wouldn't even be controversial to correct a bias in the algorithm so it returns the most query-relevant results, regardless of whether they're islamophobic/prejudiced or not.

    The context, on the other hand, suggests it means the algorithm is already returning the most relevant results for someone's query, but those results are deemed distasteful for reasons unrelated to the algorithm or query. If so, "algorithmically biased" is a strange, and seemingly dishonest, choice of words to characterize these results.

  • (Score: 2) by meustrus on Tuesday September 25 2018, @07:41PM (1 child)

    by meustrus (4961) on Tuesday September 25 2018, @07:41PM (#739837)

    Probably harder than it sounds. A lot of the "algorithmic bias" probably comes from user targeting. Racist or borderline racist profiles are more likely to see racist information because it is more relevant to them.

    My preference is that we do away with targeting altogether and have Google search be the same for everybody. People can learn to sprinkle the bias they want to see into their queries. For example, if you like news about Trump, you might need to start telling Google whether you want "pro Trump news" or "anti Trump news".

    Too bad advertisers put a lot of unfounded faith in targeted advertising, making it worth too much money to Google to build and maintain this search profile for everybody.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
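
    The personalization effect described above can be sketched with a toy re-ranker. This is purely illustrative (not Google's actual ranking); the document pool, topics, relevance scores, and boost weights are all hypothetical:

    ```python
    # Toy sketch of profile-based re-ranking: the same query returns
    # different orderings depending on the user's inferred affinities.
    # All names, scores, and weights here are made up for illustration.

    def rank(query_scores, profile=None):
        """Rank doc ids by base relevance, optionally multiplied by a
        per-topic affinity taken from the user's profile."""
        def score(item):
            doc, (relevance, topic) = item
            boost = profile.get(topic, 1.0) if profile else 1.0
            return relevance * boost
        return [doc for doc, _ in
                sorted(query_scores.items(), key=score, reverse=True)]

    # Hypothetical result pool for one query: doc -> (relevance, topic)
    results = {
        "news_site_a":   (0.90, "neutral"),
        "advocacy_blog": (0.85, "partisan"),
        "encyclopedia":  (0.80, "neutral"),
    }

    # With no profile, everyone sees the same order.
    print(rank(results))
    # → ['news_site_a', 'advocacy_blog', 'encyclopedia']

    # A profile that engages heavily with partisan content boosts it
    # past the more relevant neutral result.
    print(rank(results, profile={"partisan": 1.2}))
    # → ['advocacy_blog', 'news_site_a', 'encyclopedia']
    ```

    Removing the `profile` argument is the "same search for everybody" proposal; the bias then has to come from the query itself rather than from accumulated per-user data.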
  • (Score: 1) by Aurean on Tuesday September 25 2018, @07:43PM

    by Aurean (4924) on Tuesday September 25 2018, @07:43PM (#739838)

    Because algorithms that produce fair results for some queries (e.g. 'farming' or 'pet food') may not return fair results for others, depending on traffic metrics (wide and shallow for non-controversial topics vs. spikes for high-traffic sites obsessed with certain topics).

  • (Score: 0) by Anonymous Coward on Wednesday September 26 2018, @01:57PM

    by Anonymous Coward on Wednesday September 26 2018, @01:57PM (#740177)

    "biased" is probably the wrong word here. It anthropomorphizes the algo. Probably the more correct word is "variance" or "selection". Bias may be introduced into the algo, but the algo itself is not biased.

    What you have here is a paranoid perception of persecution by a non-cognizant system. The question is whether or not technicians are using the program as a mechanism of social change, rather than whether the program itself does that. Yes they certainly are, whether they are aware of it or not. Conscious decisions are less prevalent than they appear.

    Of course this doesn't mean anything. A greater variance is probably introduced by factors that affect search time. IOW, the bias of the engineers is unlikely to ever be as severe as the bias that has been introduced by commercial SEO. And since SEO costs time and money, the output is going to be most biased towards large commercial interests.