
posted by mrpg on Wednesday December 20 2017, @10:40PM
from the if-!white-then-kick() dept.

The New York City Council has unanimously passed a bill to address algorithmic discrimination by city agencies. If Mayor Bill de Blasio signs it, New York City will establish a task force to study how city agencies use data and algorithms to make decisions, and whether those systems appear to discriminate against certain groups:

The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for "targeting services" or "imposing penalties upon persons or policing" and make them available for "self-testing" by the public. At a hearing at City Hall in October, representatives from the mayor's office expressed concerns that this mandate would threaten New Yorkers' privacy and the government's cybersecurity.

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city's forensic methods, including controversial tools that the chief medical examiner's office crime lab has used for difficult-to-analyze samples of DNA. As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Immerman (3985) on Wednesday December 20 2017, @11:13PM (#612607) (3 children)

    >this mandate would threaten New Yorkers' privacy

    How exactly would exposing the algorithms threaten anyone's privacy, unless they were tuned to specific individuals? I'd go so far as to say the only "privacy" threatened is that of the people trying to hide their abuses behind "the algorithm said ..." - either because their biases were encoded in the algorithm, or because it said no such thing, but they're accustomed to lying about it to justify their misdeeds. We can only hope that a whole lot of officer privacy gets violated in prison if it's discovered that they were flat-out lying about individuals being red-flagged by the software.

    That's the "nice" thing about drug dogs - easy to train them to "point" on subtle commands, or to lie and claim that they were pointing to subtly for bystanders to notice, and really hard to prove the truth.

  • (Score: 1, Funny) by Anonymous Coward on Wednesday December 20 2017, @11:28PM

    by Anonymous Coward on Wednesday December 20 2017, @11:28PM (#612611)

    import java.util.regex.Pattern;
    Pattern p = Pattern.compile("berg$");

  • (Score: 5, Interesting) by frojack (1554) on Wednesday December 20 2017, @11:51PM (#612619) (1 child)

    That's the "nice" thing about drug dogs - easy to train them to "point" on subtle commands,

    You'd better hope there is an algorithm to study, because if a finger twitch is all it takes to trigger a dog, how little does it take to trigger an artificial intelligence system to pick Citizen A over Citizen B in any given situation?

    Even with the best of intentions, machine learning will adopt human biases. [theverge.com]

    When AI becomes common, or so-called expert systems that learn by being fed many examples come into play, you might not have anything more to study than a finger twitch.

    The problem with AI is that you may never have a fixed algorithm to publish: the system is learning every day, discarding parameters that are weak or outdated. How do you publish that?

    The concept of an algorithm may seem out of date in the world of big-data analysis. But I, for one, would rather have it done by human-coded algorithms (software) that can be inspected and changed than by some AI bot in the back room.

    I can see why it's not easy to publish algorithms. They are often spread all over a software system: bits of logic here and there that affect the services delivered may be located in many different subroutines. Some may even be externalized in black-box queries against databases beyond the control of the agency involved.

    I've had to write easily auditable code for some government systems my company developed. It became necessary over time to concentrate all decisions into one heavily documented module, which received everything that was known about Citizen A as input and output Decision 1.
    But that module could also show its work as a stream of sub-steps, calculations, inputs, and intermediate results, each sub-step citing chapter and verse of the regulations (a rough sketch of the shape follows below).

    Maintaining the citation code took way more time than maintaining the decision code. But that process meant the agency stopped losing "fair hearing" appeals.

    It cost a lot to get to that level of transparency. And a lot of smartass obfuscated code got rewritten in the process.
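    A rough Java sketch of that shape (every name, formula, and regulation number here is hypothetical, invented purely for illustration; the point is only that each sub-step records its result alongside the rule said to authorize it):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // One audit-trail entry: the sub-step performed, its input, its result,
    // and the regulation cited as authorizing it.
    record AuditStep(String step, Object input, Object result, String citation) {}

    class EligibilityDecider {
        private final List<AuditStep> trail = new ArrayList<>();

        // Everything known about the citizen goes in; one decision comes out.
        String decide(Map<String, Object> facts) {
            int income = (int) facts.get("monthlyIncome");
            int household = (int) facts.get("householdSize");

            // Sub-step 1: income ceiling for this household size (invented formula).
            int ceiling = 1500 + 400 * (household - 1);
            trail.add(new AuditStep("income ceiling", household, ceiling,
                    "Reg. 18-3.2(a) (hypothetical)"));

            // Sub-step 2: compare reported income against the ceiling.
            boolean eligible = income <= ceiling;
            trail.add(new AuditStep("income vs. ceiling", income, eligible,
                    "Reg. 18-3.2(b) (hypothetical)"));

            return eligible ? "APPROVED" : "DENIED";
        }

        // "Show its work": the stream of sub-steps for a fair-hearing appeal.
        List<AuditStep> auditTrail() { return trail; }

        public static void main(String[] args) {
            EligibilityDecider decider = new EligibilityDecider();
            System.out.println(decider.decide(
                    Map.<String, Object>of("monthlyIncome", 1900, "householdSize", 3)));
            decider.auditTrail().forEach(System.out::println);
        }
    }

    The decision logic is two lines; the citation bookkeeping is most of the code, which matches the maintenance cost described above.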

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 2) by nobu_the_bard (6373) on Thursday December 21 2017, @01:34PM (#612770)

      This is actually why the military has resisted using fully autonomous weapons systems. The complexity of the AIs involved makes it hard to explain why a system did what it did when used in the field.