The Ministry of Justice is developing a system that aims to 'predict' who will commit murder, as part of a "data science" project using sensitive personal data on hundreds of thousands of people.
So, an AI-fueled Minority Report. Never one to shy away from a dystopian future, they apparently want to try to re-create Minority Report with AI. Instead of the three mutants predicting the future, they'll have a machine. Apparently there are indicators of murderers: things to be predicted.
https://www.statewatch.org/news/2025/april/uk-ministry-of-justice-secretly-developing-murder-prediction-system/
https://www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill
This discussion was created by hubie (1068).
(Score: 4, Troll) by Rosco P. Coltrane on Friday April 11, @10:11PM (3 children)
The AI will overwhelmingly predict upcoming crimes committed by black people in poor neighborhoods.
Not a lot of difference from current police work, mind you...
(Score: 2) by Mykl on Saturday April 12, @04:51AM (1 child)
I understand why this comment is marked Troll, but there is a real possibility that the AI may predict crimes in line with demographic representation of crime today, where some groups are over-represented in the statistics. This will no doubt create huge controversy and may in fact sink the scheme faster than any complaints about overreach or infringement of rights.
(Score: 4, Insightful) by Rosco P. Coltrane on Saturday April 12, @11:44AM
I don't.
It's been proven time and time again that AI is biased towards the white bros who trained it, and it's well known that the police, too, don't behave the same way depending on the shade of your skin, to put it kindly.
Where's the trolling? All I said was purely factual - if a little sarcastic.
(Score: 0) by Anonymous Coward on Saturday April 12, @02:58PM
Right, it will... at first. And then it'll be recalibrated by our favorite postmodern intersectionalists to lose its "bias".
The new model will root out "extremists" or those likely to post offensive things online while turning a blind eye to other crimes.
It will work just like their current justice system. Take pictures in a tourist area and get a talking-to... based on ethnicity and skin color, in the opposite way you'd normally expect.
(Score: 4, Insightful) by hendrikboom on Friday April 11, @10:43PM (4 children)
It's true that most crime-prediction systems are seriously flawed.
But there is still serious research that can be done here: finding methods of collecting and analysing data that avoid all the ways bias poisons it.
(Score: 5, Interesting) by SomeGuy on Friday April 11, @11:34PM (2 children)
Yea, the flaw is that one is not (or should not be) a criminal until they actually, you know, commit a crime.
If the magic AI says you will be a criminal, then what are they going to do? Lock you up for a crime you have not committed? Send you a coupon for mental health counseling?
At best, something like this might be able to predict a location where they can step up police presence. Perhaps predict a riot, or dig up information about where a specific crime might be committed.
Of course, the way something like this would really be used is just to manufacture "evidence" to justify unwarranted searches.
(Score: 0) by Anonymous Coward on Saturday April 12, @03:01PM
>If the magic AI says you will be a criminal, then what are they going to do?
Surveil your ass. Wait for you to slip up. No need to manufacture evidence; everyone violates some law at some point.
(Score: 0) by Anonymous Coward on Monday April 14, @09:07AM
Nowadays it's easy to put child porn on someone's computer/phone.
So what if you deny it? Who is going to believe you?
The FBI has a track record of manufacturing terrorists...
(Score: 2) by hendrikboom on Friday April 18, @09:36PM
There's a serious problem with positive feedback here. Surveilling groups that are predicted to have high crime results in those groups being detected as criminals at a higher rate than nonsurveilled groups. This, fed back into AI training data, will yield increased surveillance recommendations.
Finding ways to compensate for this feedback effect is the kind of research needed to achieve fairness.
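Here's a toy simulation of that loop, a hypothetical sketch in Python (the groups, rates, and update rule are all made up for illustration, not taken from any real system). Both groups offend at exactly the same underlying rate, but an offence is only recorded where someone is watching, and each "retraining" round shifts surveillance toward the group with more recorded crime:

    import random

    # Toy model: identical true offence rates, surveillance-dependent detection.
    TRUE_RATE = 0.05        # same underlying offence rate in both groups
    POPULATION = 10_000     # people per group
    STEP = 0.05             # how far each "retraining" shifts surveillance
    ROUNDS = 12

    random.seed(1)
    watch = {"A": 0.55, "B": 0.45}   # group A starts slightly over-policed

    for rnd in range(1, ROUNDS + 1):
        detected = {}
        for group in watch:
            # Same expected number of true offences in each group...
            offences = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
            # ...but an offence is only recorded if it happens under watch.
            detected[group] = sum(random.random() < watch[group] for _ in range(offences))

        # "Retraining": the model sees more recorded crime in the more-watched
        # group and shifts surveillance further toward it.
        hot = max(detected, key=detected.get)
        cold = min(detected, key=detected.get)
        shift = min(STEP, watch[cold])   # clamp so shares stay within [0, 1]
        watch[hot] += shift
        watch[cold] -= shift

        print(f"round {rnd:2d}: detected={detected}, "
              f"watch A={watch['A']:.2f} B={watch['B']:.2f}")

Run it and group A's watch share ratchets from 0.55 up to 1.00 within a few rounds while group B's recorded crime falls to zero, even though the true rates never differed. Compensating for that lock-in, e.g. by normalizing detections against surveillance intensity before retraining, is the kind of research the parent describes.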
(Score: 4, Funny) by Gaaark on Saturday April 12, @03:02AM
Rent is too high.
Food prices are too high.
Can't afford a car to get to a job.
Don't have any money, but still have to eat and live.
Prime suspect? Everyone but the 1%...soon...
The lyrics seem to be about someone we know...
--- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
(Score: 2) by DadaDoofy on Tuesday April 15, @05:02PM
This article is a great example of what AI is being sold as, versus what it actually is.
AI is the ultimate useful idiot. The people who decide which narratives the LLMs are trained on get to play God with what AI believes in, advocates, and even fights for. After Musk fixed Twitter and Trump was re-elected President, they realized the messy act of censoring the ideas people aren't supposed to hear is, at best, an expensive and imperfect game of whack-a-mole. With LLM training, 100% of the "disinformation", "wrong ideas", and anything else that contradicts official narratives can simply be filtered out of the data. That's why such a considerable effort is being undertaken to replace humans with "ideologically pure" AI in roles where conscious thought is required.
Of course, the whole scam depends on the public believing AI is an impartial, apolitical arbiter of the truth. Even though the trillions spent indicate the all-in nature of their bet, I'm skeptical the con can be successfully pulled off, particularly in the US, where the freedom of speech to expose it is alive and well.
For it t