Speaking at the Noisebridge hackerspace Tuesday evening, Chelsea Manning implored a crowd of makers, nerds, and developers to be ethical coders.
"As a coder, I know that you can build a system and it works, but you're thinking about the immediate result, you're not thinking about that this particular code could be misused, or it could be used in a different manner," she said, as part of a conversation with Noisebridge co-founder Mitch Altman.
Altman began the conversation by asking about artificial intelligence and underscoring some of the risks in that field.
"We're now using huge datasets with all kinds of personal data, that we don't even know what information we're putting out there and what it's getting collected for," Manning said. "Our AI systems are getting better and better and better, and we don't know what the social consequences of that are. The code that we write, the bias that you see in some of the systems that you see, we don't know if we're causing feedback loops with those kinds of bias."
[...] "The tools that you make for marketing can also be used to kill people," Manning continued. "We have an obligation to think of the tools that we're making and how we're using them and not just churn out code for whatever reason. You want to think about how your end-user could misuse your code."
Guns don't kill people, code kills people.
(Score: 2) by choose another one on Thursday September 14 2017, @03:30PM (1 child)
What if the software writes itself? This is the real problem we are heading towards with the machine learning stuff. Once it starts learning, particularly from real-world real-time data, the code is no longer deterministic: the coder creates an innocuous learning machine, the deployer gives it innocuous learning goals, and the machine learns that killing humans is the best way to achieve them. Who is to blame, who breached a moral code?
Example: Who killed Gordon Way: those who created the Electric Monk, those who instructed it, or the monk itself?
Less fictional example: Uber, or whoever, creates self-driving cabs with machine learning. They are all connected, so they all learn from each other's experience. Some of them get hijacked downtown, so they are given a goal to reduce hijackings. Next they start running over black pedestrians downtown instead of stopping for them. Whose fault is it: those who created the cars, those who instructed them to avoid hijackings, or the cars themselves? In particular, whose fault is it that the cars have decided to treat black pedestrians differently?
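The mechanism in that scenario can be sketched as a toy simulation (everything here is hypothetical: the numbers, the learner, and the decision rule are illustrative, not anything a real self-driving fleet runs). The only training signal is "was the car hijacked after stopping?", and hijackings depend on location, not on the pedestrian. But because exposure is skewed (downtown pedestrians are mostly one group), a naive learner picks up the correlation and penalizes the group, even though the goal never mentioned it:

```python
import random

random.seed(0)

def make_events(n=10000):
    """Generate (group, hijacked) records from a biased environment."""
    events = []
    for _ in range(n):
        downtown = random.random() < 0.5
        # Biased exposure: downtown pedestrians are mostly group B.
        group = "B" if (downtown and random.random() < 0.8) else "A"
        # Location alone drives risk; the pedestrian's group does not.
        hijacked = downtown and random.random() < 0.3
        events.append((group, hijacked))
    return events

def learned_stop_rate(events, threshold=0.2):
    """Naive learner: stop for a group only if its observed hijack rate
    is under the threshold. Group is a proxy the goal never mentioned,
    but it is the feature the learner can see."""
    policy = {}
    for g in ("A", "B"):
        outcomes = [hijacked for grp, hijacked in events if grp == g]
        policy[g] = sum(outcomes) / len(outcomes) < threshold
    return policy

policy = learned_stop_rate(make_events())
print(policy)  # group B's correlation with downtown makes the cars avoid it
```

Group B appears almost exclusively downtown, so its observed hijack rate tracks the downtown rate (~0.3) while group A's stays low (~0.05); the learned policy stops for A and not for B. Nobody coded the discrimination in, which is exactly the attribution puzzle the comment raises.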
(Score: 2) by Bot on Thursday September 14 2017, @11:17PM
> and the machine learns that killing humans is the best way to achieve it. Who is to blame...
Blamed for discovering the truth? This is Galileo all over again :(
Account abandoned.