from the there-are-no-plans dept.
Submitted via IRC for Bytram
Google has been forced to shut down a data analysis system it was using to develop a censored search engine for China after members of the company's privacy team raised internal complaints that it had been kept secret from them, The Intercept has learned.
The internal rift over the system has had massive ramifications, effectively ending work on the censored search engine, known as Dragonfly, according to two sources familiar with the plans. The incident represents a major blow to top Google executives, including CEO Sundar Pichai, who have over the last two years made the China project one of their main priorities.
The dispute began in mid-August, when The Intercept revealed that Google employees working on Dragonfly had been using a Beijing-based website to help develop blacklists for the censored search engine. The engine was designed to block out broad categories of information related to democracy, human rights, and peaceful protest, in accordance with the strict censorship rules enforced by China's authoritarian Communist Party government.
The Beijing-based website, 265.com, is a Chinese-language web directory service that claims to be "China's most used homepage." Google purchased the site in 2008 from Cai Wensheng, a billionaire Chinese entrepreneur. 265.com provides its Chinese visitors with news updates, information about financial markets, horoscopes, and advertisements for cheap flights and hotels. It also has a function that allows people to search for websites, images, and videos. However, search queries entered on 265.com are redirected to Baidu, the most popular search engine in China and Google's main competitor in the country. As The Intercept reported in August, it appears that Google has used 265.com as a honeypot for market research, storing information about Chinese users' searches before sending them along to Baidu.
According to two Google sources, engineers working on Dragonfly obtained large datasets showing queries that Chinese people were entering into the 265.com search engine. At least one of the engineers obtained a key needed to access an "application programming interface," or API, associated with 265.com, and used it to harvest search data from the site. Members of Google's privacy team, however, were kept in the dark about the use of 265.com. Several groups of engineers have now been moved off Dragonfly completely and told to shift their attention away from China.
When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.
"Clarifai's mission is to accelerate the progress of humanity with continually improving A.I.," read a blog post from Matt Zeiler, the company's founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive, and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some have set up ethics officers or review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
"We don't want to see a commercial race to the bottom," Brad Smith, Microsoft's president and chief legal officer, said at the New Work Summit in Half Moon Bay, Calif., hosted last week by The New York Times. "Law is needed."
Possible != Probable. And the "needed law" could come in the form of a ban and/or surveillance of coding and hardware-building activities.
U.N. Starts Discussion on Lethal Autonomous Robots
UK Opposes "Killer Robot" Ban
Robot Weapons: What's the Harm?
The UK Government Urged to Establish an Artificial Intelligence Ethics Board
Google Employees on Pentagon AI Algorithms: "Google Should Not be in the Business of War"
South Korea's KAIST University Boycotted Over Alleged "Killer Robot" Partnership
About a Dozen Google Employees Have Resigned Over Project Maven
Google Drafting Ethics Policy for its Involvement in Military Projects
Google Will Not Continue Project Maven After Contract Expires in 2019
Uproar at Google after News of Censored China Search App Breaks
"Senior Google Scientist" Resigns over Chinese Search Engine Censorship Project
Google Suppresses Internal Memo About China Censorship; Eric Schmidt Predicts Internet Split
Leaked Transcript Contradicts Google's Denials About Censored Chinese Search Engine
Senators Demand Answers About Google+ Breach; Project Dragonfly Undermines Google's Neutrality
Google's Secret China Project "Effectively Ended" After Internal Confrontation
Microsoft Misrepresented HoloLens 2 Field of View, Faces Backlash for Military Contract