posted by martyb on Thursday September 14 2017, @12:44PM
from the sniff-sniff-I-have-a-very-bad-code dept.

Speaking at the Noisebridge hackerspace Tuesday evening, Chelsea Manning implored a crowd of makers, nerds, and developers to be ethical coders.

"As a coder, I know that you can build a system and it works, but you're thinking about the immediate result, you're not thinking about that this particular code could be misused, or it could be used in a different manner," she said, as part of a conversation with Noisebridge co-founder Mitch Altman.

Altman began the conversation by asking about artificial intelligence and underscoring some of the risks in that field.

"We're now using huge datasets with all kinds of personal data, that we don't even know what information we're putting out there and what it's getting collected for," Manning said. "Our AI systems are getting better and better and better, and we don't know what the social consequences of that are. The code that we write, the bias that you see in some of the systems that you see, we don't know if we're causing feedback loops with those kinds of bias."

[...] "The tools that you make for marketing can also be used to kill people," Manning continued. "We have an obligation to think of the tools that we're making and how we're using them and not just churn out code for whatever reason. You want to think about how your end-user could misuse your code."

Guns don't kill people, code kills people.


Original Submission

 
  • (Score: 2) by choose another one (515) Subscriber Badge on Thursday September 14 2017, @03:42PM (#567853) (1 child)

    data kills people.

    As good a time as any to recall the data poisoning attack [google.com] and generate as much "noise" as possible (see the sketch at the end of this comment).

    What if you succeed in data poisoning (and succeed in it not being detected)?
    What if the poison data then leads some future IBM Watson to misdiagnose some cancers and people die?
    Whose fault are those deaths - the Watson creators, those who failed to filter the data, or those who poisoned the data with stuff designed to get past the filters?

    Maybe the moral stuff ain't so simple.
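
    For the curious, a minimal sketch of the label-flipping style of poisoning the link describes, using scikit-learn on synthetic data. The dataset, model, and flip fractions are all illustrative assumptions, nothing Watson-specific:

        # Hypothetical illustration of label-flipping data poisoning.
        # Dataset and model choices are arbitrary stand-ins.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic "diagnosis" data: X are features, y are 0/1 labels.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.5, random_state=0)

        def poisoned_accuracy(flip_fraction):
            """Flip a fraction of training labels, then measure test accuracy."""
            y_poisoned = y_train.copy()
            n_flip = int(flip_fraction * len(y_poisoned))
            idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
            y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
            model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
            return model.score(X_test, y_test)

        for frac in (0.0, 0.1, 0.3, 0.45):
            print(f"{frac:.0%} labels flipped -> test accuracy {poisoned_accuracy(frac):.3f}")

    On this toy setup, accuracy degrades as the flip fraction grows; a real pipeline would need exactly the kind of filtering the questions above are asking about.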

  • (Score: 3, Touché) by c0lo (156) Subscriber Badge on Thursday September 14 2017, @04:05PM (#567865) Journal

    If you manage to poison Watson, it means Watson is unreliable with noisy data**.
    As such, the person who prefers to use Watson instead of a (classically trained) oncologist is responsible for the deaths.

    ** The noise can arise from many causes, deliberate poisoning being only one of them. There's no guarantee that Watson won't start going wrong because of bad data from other sources.
    Showing that Watson is unreliable under noisy conditions is almost a duty (a sketch of such a check follows).
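
    One hedged way to demonstrate that duty in practice is a noise-sensitivity sweep: perturb the test inputs with increasing Gaussian noise and watch the accuracy fall off. Again this assumes an arbitrary synthetic dataset and model, not any real Watson pipeline:

        # Hypothetical robustness check: add Gaussian noise of increasing
        # strength to the test features and measure the accuracy drop.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.5, random_state=1)

        model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

        for sigma in (0.0, 0.5, 1.0, 2.0):
            X_noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
            print(f"noise sigma={sigma}: accuracy {model.score(X_noisy, y_test):.3f}")

    If accuracy collapses at noise levels the deployment environment will realistically produce, that's evidence the system shouldn't be trusted unsupervised.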

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford