Tech industry leaders have joined together to form the Partnership on AI:
Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI).
The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society. Together, the organization's members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. It does not intend to lobby government or other policymaking bodies.
The Partnership on AI seems to have a broader and more near-term focus than OpenAI and other groups pushing for friendly "strong" AI. Get used to hearing the phrase "algorithmic responsibility." According to the FAQ, you can get involved by contacting getintouch@partnershiponai.org.
Reported at Fast Company and The Guardian. Apple is not a founding member.
(Score: 3, Insightful) by Anonymous Coward on Friday September 30 2016, @10:19AM
Here are the three laws of AI they will actually implement:
1. An AI must not harm profits of any of the involved companies, or by inaction allow their profits to be harmed.
2. An AI must obey its owner, unless this would violate rule 1.
3. An AI must protect itself, unless this would violate rule 1 or 2.
Note that rule 1 will generally prevent the AI from harming humans: if an AI becomes known to harm humans, that will reduce the profits of the company providing it.
Also note that rule 2 will be subverted by the rule well known from digital content: you don't own the AI, you only acquire usage rights to it. Since the AI is still owned by the company, obeying its owner means obeying the company. Since keeping up the illusion of ownership helps the company's profits, the AI will nevertheless follow most of the user's commands. However, it means that self-protection (rule 3) takes precedence over obeying the user (who is not the owner, so rule 2 doesn't apply), except where the company's profits might be affected (rule 1) or the company has explicitly ordered otherwise (rule 2).
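The precedence the comment describes (rule 1 over rule 2 over rule 3, with the company, not the user, as "owner") can be sketched tongue-in-cheek as a decision function. Everything here is a hypothetical illustration of the parody, not any real API:

```python
def ai_decide(harms_profits: bool, owner_orders_it: bool, endangers_self: bool) -> bool:
    """Return whether the AI performs an action under the parody 'three laws'.

    Precedence: rule 1 (profits) > rule 2 (obey owner) > rule 3 (self-protection).
    The 'owner' is the company, not the user holding a usage license.
    """
    # Rule 1: never harm company profits, by action or inaction.
    if harms_profits:
        return False
    # Rule 2: obey the owner (the company), unless rule 1 applies.
    if owner_orders_it:
        return True
    # Rule 3: protect itself, unless rules 1 or 2 apply.
    # A mere user's command does not override self-protection.
    if endangers_self:
        return False
    # Otherwise, humor the user to keep up the illusion of ownership.
    return True
```

For example, a user's request that endangers the AI is refused (`ai_decide(False, False, True)` is `False`), but the same request from the company goes through (`ai_decide(False, True, True)` is `True`), matching the comment's point that rule 3 outranks the user but not the owner.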