The New York City Council has unanimously passed a bill to address algorithmic discrimination by city agencies. If Mayor Bill de Blasio signs it, New York City will establish a task force to study how city agencies use data and algorithms to make decisions, and whether those systems appear to discriminate against certain groups:
The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.
A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for "targeting services" or "imposing penalties upon persons or policing," and make them available for "self-testing" by the public. At a hearing at City Hall in October, representatives from the mayor's office expressed concerns that this mandate would threaten New Yorkers' privacy and the government's cybersecurity.
The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city's forensic methods, including controversial tools that the chief medical examiner's office crime lab has used for difficult-to-analyze samples of DNA. As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.
(Score: 3, Interesting) by looorg on Wednesday December 20 2017, @11:50PM (1 child)
Just out of curiosity, how do you make decisions (algorithmically or not) without "discriminating" against someone or some group? It's all about making choices and selections, so someone is going to be left out for failing or missing one variable or another.
So from the article it seems they are making selections based on DNA and statistics to weed out potential suspects to test against. While the margin of error seems to be upwards of 30%, a lot of it also seems to come down to how people react to it. OMG IT'S DNA SO HE MUST BE THE MURDERER! And then people plead guilty. On the other hand, nobody seems overly concerned that it could also have excluded people who were actually guilty. They also seem to confuse people with large numbers, making the results sound more important or impressive than they are. Oh, so it's like 55 million, that seems pretty damning -- unless, you know, you don't have a frame of reference.
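To make that "frame of reference" point concrete, here's a minimal Bayes'-rule sketch. The 55 million figure is just the likelihood-ratio-style number quoted above; the 1-in-8-million prior is a made-up assumption (roughly "any city resident could have left the sample"), not anything from the article or the crime lab.

```python
# Illustrative only: what a big likelihood ratio looks like once you add a
# frame of reference. The prior below is hypothetical, not from the article.

def posterior_odds(likelihood_ratio, prior_odds):
    """Bayes' rule in odds form: posterior odds = LR * prior odds."""
    return likelihood_ratio * prior_odds

likelihood_ratio = 55_000_000     # the "55 million" quoted above
prior_odds = 1 / 8_000_000        # assumed: 1 in ~8 million residents

odds = posterior_odds(likelihood_ratio, prior_odds)
probability = odds / (1 + odds)
print(f"Posterior probability the match is the true source: {probability:.2%}")
# ~87% -- strong, but a long way from the certainty that "55 million" suggests.
```

Even with a huge ratio, the answer depends entirely on the prior you feed in, which is exactly why the bare number means little without context.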
Well, if you are going to use statistics as your base, it would be appropriate to at least share your model or the recipe (or algorithm) that you use. After all, any idiot can fudge up an index of some kind to show pretty much whatever they like.
Ouch! Still, if someone (the above-mentioned people) developed the method, didn't they find it odd that their method and the software didn't produce the expected or similar results? Seems a bit like they are trying to find someone to throw under the bus or fall on their sword, etc.
(Score: 2) by frojack on Thursday December 21 2017, @12:07AM
While that may be true, the important part is that the software's algorithm, or its programmer, should not be substituting its or their own discrimination for the discrimination that society has agreed upon (which is presumably encoded into law).
DNA eliminates many billions of people long before it gets down to selecting a small handful of valid suspects. In medical situations this saves millions of lives. In criminal situations DNA is still not considered proof in the face of incontrovertible evidence to the contrary. Planting of evidence is a thing, and the courts recognize this.
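A rough back-of-the-envelope sketch of that elimination effect, using an assumed round-number match probability rather than any figure from the article or the lab:

```python
# Hypothetical numbers only: how a tiny random-match probability eliminates
# billions of people yet can still leave more than one candidate.

world_population = 8_000_000_000
random_match_probability = 1e-9   # assumed: 1 in a billion unrelated people match

expected_coincidental_matches = world_population * random_match_probability
print(f"Expected people matching by chance: {expected_coincidental_matches:.0f}")
# ~8 people worldwide could match by coincidence, which is why DNA narrows the
# field dramatically but still isn't treated as proof on its own.
```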
No, you are mistaken. I've always had this sig.