
SoylentNews is people

posted by mrpg on Wednesday December 20 2017, @10:40PM   Printer-friendly
from the if-!white-then-kick() dept.

The New York City Council has unanimously passed a bill to address algorithmic discrimination by city agencies. If signed by Mayor Bill de Blasio, New York City will establish a task force to study how city agencies use data and algorithms to make decisions, and whether systems appear to discriminate against certain groups:

The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.

A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for "targeting services" or "imposing penalties upon persons or policing" and to make them available for "self-testing" by the public. At a hearing at City Hall in October, representatives from the mayor's office expressed concerns that this mandate would threaten New Yorkers' privacy and the government's cybersecurity.

The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city's forensic methods, including controversial tools that the chief medical examiner's office crime lab has used for difficult-to-analyze samples of DNA. As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.


Original Submission

 
  • (Score: 3, Insightful) by jelizondo on Wednesday December 20 2017, @11:53PM (4 children)

    by jelizondo (653) Subscriber Badge on Wednesday December 20 2017, @11:53PM (#612623) Journal

    I hope Mr. de Blasio signs the bill. It is very easy to bias an algorithm and then blame “the computer” for any negative results; without transparency, it becomes a powerful weapon wielded by unknown persons.

    Most would think the Mayor would wield this weapon, but I’m more afraid of an obscure programmer with an agenda hiding behind “cybersecurity” and “privacy”.

  • (Score: 4, Insightful) by bob_super on Thursday December 21 2017, @12:04AM (1 child)

    by bob_super (1357) on Thursday December 21 2017, @12:04AM (#612628)

    > it is very easy to bias an algorithm and then blame “the computer” for any negative results. Without transparency it becomes a powerful weapon, wielded by unknown persons.

    But but but Diebold told me the voting machines are the future...

    • (Score: 3, Insightful) by frojack on Thursday December 21 2017, @12:11AM

      by frojack (1554) on Thursday December 21 2017, @12:11AM (#612633) Journal

      Even Diebold's machines had no agenda, and held no malice.

      --
      No, you are mistaken. I've always had this sig.
  • (Score: 2) by legont on Thursday December 21 2017, @02:25AM (1 child)

    by legont (4179) on Thursday December 21 2017, @02:25AM (#612682)

    Unfortunately, it is impossible to find out why a neural network made a certain decision. The same is largely true of a human, but a human can be put in prison for a bad decision. This effectively outlaws AIs until the concept of an AI penitentiary can be worked out.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 2) by fyngyrz on Thursday December 21 2017, @06:20PM

      by fyngyrz (6567) on Thursday December 21 2017, @06:20PM (#612875) Journal

      Seems to me that any computed selection of people / families for jobs, services and the like ought to:

      • Be a human-readable / comprehensible algorithm, never an impenetrable mass of ML node weights
      • Be one algorithm that takes in all the parameters and outputs one answer and a list of decision points/weights
      • Be available to the public in every case, whether governmental or corporate
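      The kind of transparent, single-algorithm selection described above could look something like this minimal sketch (purely hypothetical: the parameter names, weights, and threshold are invented for illustration and do not correspond to any real city system):

      ```python
      def score_applicant(params, weights, threshold=1.0):
          """Return one answer plus a full trace of decision points/weights.

          params  -- dict of input name -> numeric value
          weights -- dict of input name -> weight applied to that value
          """
          trace = []
          total = 0.0
          for name, value in params.items():
              w = weights.get(name, 0.0)  # parameters with no listed weight contribute nothing
              contribution = w * value
              trace.append((name, value, w, contribution))
              total += contribution
          # One algorithm, one answer, and every decision point visible for audit.
          return total >= threshold, trace

      # Hypothetical usage: every line of the trace can be checked by hand.
      decision, trace = score_applicant(
          {"income_ratio": 0.6, "years_resident": 0.8},
          {"income_ratio": 1.0, "years_resident": 0.5},
      )
      ```

      Because the trace lists each input, its weight, and its contribution, anyone can recompute the outcome by hand, which is exactly what an impenetrable mass of ML node weights does not allow.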

      And just as an aside (a bit lengthy, but...), the bit about an algorithm being the "proprietary and copyrighted statistical tool owned by the City of New York" strikes me as a positive indicator that the City of New York is a corrupt governmental entity. Because copyright is not a tool of government to use against the people, and in fact exists as an outgrowth of the constitution's intent to promote the development of science, etc., for the good of all. In the case of an entity owned by the people developing something, seems to me the people own it – not the governmental entity. There's a fair amount of precedent supporting such a view.

      Furthermore, the FST is, purportedly anyway, a tool to improve the life of the people. Withholding it seems like a fine way to do the exact opposite.