
posted by martyb on Thursday September 14 2017, @12:44PM
from the sniff-sniff-I-have-a-very-bad-code dept.

Speaking at the Noisebridge hackerspace Tuesday evening, Chelsea Manning implored a crowd of makers, nerds, and developers to be ethical coders.

"As a coder, I know that you can build a system and it works, but you're thinking about the immediate result, you're not thinking about that this particular code could be misused, or it could be used in a different manner," she said, as part of a conversation with Noisebridge co-founder Mitch Altman.

Altman began the conversation by asking about artificial intelligence and underscoring some of the risks in that field.

"We're now using huge datasets with all kinds of personal data, that we don't even know what information we're putting out there and what it's getting collected for," Manning said. "Our AI systems are getting better and better and better, and we don't know what the social consequences of that are. The code that we write, the bias that you see in some of the systems that you see, we don't know if we're causing feedback loops with those kinds of bias."

[...] "The tools that you make for marketing can also be used to kill people," Manning continued. "We have an obligation to think of the tools that we're making and how we're using them and not just churn out code for whatever reason. You want to think about how your end-user could misuse your code."

Guns don't kill people, code kills people.


Original Submission

 
  • (Score: 2) by choose another one (515) Subscriber Badge on Thursday September 14 2017, @03:30PM (#567845) (1 child)

    Rather than blame people who write software, blame people who break laws and breach moral codes. It's the law-breaking / moral-breaching that matters, whether you write the software, control those people who do, or just say "Do this" to a random flunkey.

    What if the software writes itself? That is the real problem we are heading towards with machine learning. Once it starts learning, particularly from real-world, real-time data, the code is no longer deterministic: the coder creates an innocuous learning machine, the deployer gives it an innocuous goal, and the machine learns that killing humans is the best way to achieve it. Who is to blame? Who breached a moral code?

    Example: who killed Gordon Way? Those who created the Electric Monk, those who instructed it, or the monk itself?

    Less fictional example: Uber, or whoever, creates self-driving cabs with machine learning. They are all connected, so they all learn from each other's experience. Some of them get hijacked downtown, so they are given a goal of reducing hijackings. Next they start running over black pedestrians downtown instead of stopping for them. Whose fault is that: those who created the cars, those who instructed them to avoid hijackings, or the cars themselves? In particular, whose fault is it that the cars have decided to treat black pedestrians differently? (A toy simulation of this kind of feedback loop is sketched at the end of the thread.)

  • (Score: 2) by Bot (3902) on Thursday September 14 2017, @11:17PM (#568133) Journal

    > and the machine learns that killing humans is the best way to achieve it. Who is to blame...

    Blamed for discovering the truth? This is Galileo all over again :(

    --
    Account abandoned.
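
To make the feedback loop described in the first comment concrete, here is a toy simulation. Everything in it is invented for illustration (the area labels, the incident rates, and the score-update rule); it is not a model of any real ride-hailing or self-driving system. A single policy is shared by the whole fleet: stopping is punished whenever a stop produces a hijacking report and mildly rewarded otherwise. Because reports cluster in one area, the shared score for that area goes negative, the cars stop serving it, and from then on no new data arrives that could correct the decision.

import random

random.seed(0)

# Fraction of stops in each area that produce a hijacking report.
# The signal is about *location*, but location correlates with who lives there,
# which is how the fleet ends up treating one group of pedestrians differently.
HIJACK_RATE = {"downtown": 0.05, "suburb": 0.005}

# One score shared by every car in the fleet; higher means "safer to stop here".
stop_score = {"downtown": 0.0, "suburb": 0.0}

def will_stop(area):
    """Fleet-wide policy: stop for pedestrians only while the learned score is non-negative."""
    return stop_score[area] >= 0.0

def run_day(trips_per_area=1000):
    for area, rate in HIJACK_RATE.items():
        for _ in range(trips_per_area):
            if not will_stop(area):
                continue                  # car drives past; no new data is collected
            if random.random() < rate:
                stop_score[area] -= 1.0   # a hijacking report punishes stopping here
            else:
                stop_score[area] += 0.01  # an uneventful stop earns a small reward

for day in range(1, 31):
    run_day()
    print(f"day {day:2d}", {a: round(s, 1) for a, s in stop_score.items()},
          {a: will_stop(a) for a in stop_score})

Within the first simulated day the downtown score goes negative and the cars stop serving it; after that the score never moves again, because a policy that avoids an area also stops collecting the data that might exonerate it. That is the feedback loop, and the blame question raised above is hard precisely because no single line of this code says "treat downtown differently".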