posted by hubie on Friday April 05, @05:43AM

'AI-enhanced' Video Evidence Got Rejected in a Murder Case Because That's Not Actually a Thing

The AI hype cycle has dramatically distorted views of what's possible with image upscalers:

A judge in Washington state has blocked video evidence that's been "AI-enhanced" from being submitted in a triple murder trial. And that's a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

Judge Leroy McCullough in King County, Washington wrote in a new ruling that the AI tech used "opaque methods to represent what the AI model 'thinks' should be shown," according to a new report from NBC News Tuesday. And that's a refreshing bit of clarity about what's happening with these AI tools in a world of AI hype.

"This Court finds that admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable process used by the AI model," McCullough wrote.

[...] The rise of products labeled as AI has created a lot of confusion among the average person about what these tools can really accomplish. Large language models like ChatGPT have convinced otherwise intelligent people that these chatbots are capable of complex reasoning when that's simply not what's happening under the hood. LLMs are essentially just predicting the next word they should spit out to sound like a plausible human. But because they do a pretty good job of sounding like humans, many users believe they're doing something more sophisticated than a magic trick.
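The "predicting the next word" idea can be sketched with a toy bigram model. This is a deliberately crude, invented illustration (the tiny corpus and the `generate` helper are made up for this sketch, and real LLMs use neural networks trained on vastly more data), but it shows the basic mechanism of repeatedly emitting a statistically likely next token:

```python
from collections import defaultdict, Counter

# Toy corpus; a real LLM trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: the crudest possible stand-in
# for an LLM's learned next-token distribution.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def generate(word, n=4):
    """Greedily emit the most frequent next word, n times."""
    out = [word]
    for _ in range(n):
        if word not in nxt:
            break  # dead end: no observed continuation
        word = nxt[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # produces plausible-sounding word salad
```

The output reads like language because it statistically resembles the training text, not because anything "understood" it, which is the article's point.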

And that seems like the reality we're going to live with as long as billions of dollars are getting poured into AI companies. Plenty of people who should know better believe there's something profound happening behind the curtain and are quick to blame "bias" and guardrails being too strict. But when you dig a little deeper you discover these so-called hallucinations aren't some mysterious force enacted by people who are too woke, or whatever. They're simply a product of this AI tech not being very good at its job.

Washington Judge Bans Use of AI-enhanced Video as Trial Evidence

Washington judge bans use of AI-enhanced video as trial evidence:

A judge in Washington banned the use of videos enhanced by artificial intelligence (AI) as evidence in the trial of a man who is accused of killing three people.

The ruling, signed Friday by King County Superior Court Judge Leroy McCullough, may be a first-of-its-kind ruling as AI technology emerges in the courtroom. It was first reported by NBC News.

[...] Lawyers for Joshua Puloka attempted to introduce cellphone video evidence enhanced by AI. Prosecutors said there's no legal precedent for using the technology in court, the outlet reported.

Puloka, who is charged in the Sept. 26, 2021, killings, has claimed self-defense. He opened fire at the La Familia Sports Pub and Lounge in Des Moines, near Seattle, killing three people and wounding two.

[...] The shooting was caught on cellphone video, and his lawyers wanted to enhance the video. They asked a video production editor to use software that can "supercharge" video, NBC News reported.

The prosecutor's office said the enhanced video contained images that were "inaccurate, misleading, and unreliable." Experts said the software is meant to make video more visually appealing but may not reflect the truth.

The ruling comes as lawmakers across the country contend with the emerging capabilities and accessibility of AI technology. Just last week, the White House released its first government-wide policy that hopes to mitigate the risks of AI.

Last October, President Biden signed a sweeping executive order on the technology, but McCullough's ruling suggests that more policy will be necessary as AI becomes more powerful and widely used.

You can prove ANYTHING with AI enhancement!


Original Submission #1 | Original Submission #2

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by pTamok on Friday April 05, @06:46AM (12 children)

    by pTamok (3042) on Friday April 05, @06:46AM (#1351737)

Does this mean that pictures taken with iPhones are inadmissible as evidence, because they are processed in-phone with 'AI' to generate a pleasing image?

    I really hope so, for all the reasons the judge stated.

    • (Score: 4, Informative) by RamiK on Friday April 05, @11:45AM (1 child)

      by RamiK (1813) on Friday April 05, @11:45AM (#1351747)

      Depends on what they're doing. Samsung's AI, for instance, identifies blurry images of the moon and fills in the missing details: https://www.engadget.com/samsung-explains-its-fake-moon-photos-170233896.html [engadget.com]

      Obviously that level of "enhancement" shouldn't be admissible in court.

      --
      compiling...
      • (Score: 2) by Freeman on Friday April 05, @02:18PM

        by Freeman (732) on Friday April 05, @02:18PM (#1351759) Journal

        Hey, crime shows have done this for decades! It's just a natural evolution of the technology paired with the natural deterioration of society.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by looorg on Friday April 05, @11:53AM (5 children)

      by looorg (578) on Friday April 05, @11:53AM (#1351748)

Is that iPhone specific tho, or isn't that something that all smartphone cameras, or just any digital camera, do these days? They try to stabilize the image, cause people have shaky hands and things move, or things are out of focus or somewhat blurry, and they try to fix those by various means, or adjust brightness and contrast etc. to make the image more "pleasing" or crisp or colourful.

      That said sure it's probably not a long way off before it tries to use other filters and things to spice things up and eventually perhaps even showing things that are not there. So it's a short skip and a jump to just completely doctored or altered images to show "reality" the way they think it should have been.

      • (Score: 5, Interesting) by Rich on Friday April 05, @12:32PM (1 child)

        by Rich (945) on Friday April 05, @12:32PM (#1351752) Journal

I saw a photo from a car meet with an interesting AI glitch: I was at max zoom to see what cars there were in the back row of the parking lot. One of the cars was an AI hallucination: The front looked like that of a flat mid-engined car with a fastback (Type 79 Esprit), but the rear was shooting-brake style (Type 75 Elite). I wondered where the bloody camera AI thought the engine would be in that one. ;)

        From memory, I'd estimate the pixel size of the car at maybe 40x20, so that AI clearly went beyond getting rid of the Bayer pattern and other sensor artifacts and hallucinated at a larger scope to add sharpness from things it knew or which were elsewhere in the picture.

        • (Score: 4, Touché) by JoeMerchant on Friday April 05, @03:26PM

          by JoeMerchant (3937) on Friday April 05, @03:26PM (#1351764)

          > that AI clearly went beyond getting rid of the Bayer pattern and other sensor artifacts and hallucinated at a larger scope to add sharpness from things it knew or which were elsewhere in the picture.

          In essence, the AI image becomes a more human-like "eye witness" - filling in details from memory and prejudices.

          --
          🌻🌻 [google.com]
      • (Score: 2) by sjames on Friday April 05, @05:55PM (1 child)

        by sjames (2882) on Friday April 05, @05:55PM (#1351770) Journal

        Even overly creative de-blurring can be questionable if the evidence is a small bit of the image blown up. For example, doesn't that look like YOUR RING? You don't have to push all that many pixels around to turn one ring into another, especially if AI is filling in details.

        • (Score: 2) by kazzie on Sunday April 07, @06:25AM

          by kazzie (5309) Subscriber Badge on Sunday April 07, @06:25AM (#1351915)

          Pushing pixels isn't enough, you need to throw the ring into the fires of Orodruin instead.

      • (Score: 1) by pTamok on Friday April 05, @06:28PM

        by pTamok (3042) on Friday April 05, @06:28PM (#1351774)

I think there is a difference in type of image processing between stabilization and fiddling with gamma, white balance, brightness, and contrast, versus filling in details that don't exist - a 'best guess' at what might be there. Is it 'on the balance of probabilities', or 'beyond reasonable doubt', that the imaginary details are correct?

An iPhone can take multiple images and stitch them together to generate a picture where, after movement of the subject, people can end up with six fingers, or with an image in a mirror not reflecting the position of the subject. People can be unaware of this - there is a reasonably well-known picture of a person trying on a bridal dress with reflections in multiple mirrors, where the reflections do not correspond with the position of the subject, from just such processing.

        AI interpolation has no place in a courtroom.

    • (Score: 2) by aafcac on Friday April 05, @12:04PM (2 children)

      by aafcac (17646) on Friday April 05, @12:04PM (#1351749)

Probably not, you'd just have to have the photographer testify that the images are representative. It's not really that big of a deal, as you already needed to have somebody testify about how the image was acquired in the first place. I remember sitting on jury duty when the lawyers had dueling photos of the same item and were trying to influence us over the actual size of it. In retrospect, it was kind of odd that neither photograph had a measuring tape or anything of that nature to demonstrate the actual size.

      • (Score: 2) by sjames on Friday April 05, @06:03PM (1 child)

        by sjames (2882) on Friday April 05, @06:03PM (#1351771) Journal

Given how easily witnesses' memories can be confounded by something as simple as saying "Didn't he have a beard?" right as the witnesses are trying to remember every detail for the police, I can easily see how simply reviewing a photo from an iPhone could convince a potential witness that they saw whatever the AI dreamed up.

        • (Score: 2) by aafcac on Friday April 05, @08:15PM

          by aafcac (17646) on Friday April 05, @08:15PM (#1351786)

Possibly, but it's less likely than the alternative. If a photograph is the basis, there's no good reason not to retain the basic photo alongside the AI bits on top. I do think there will be a solution, but it's unlikely that these will be completely inadmissible unless they are more than just a bit altered.

    • (Score: 2) by Tork on Friday April 05, @06:31PM

      by Tork (3914) Subscriber Badge on Friday April 05, @06:31PM (#1351775)

      Does this mean that pictures taken with iPhones are inadmissible as evidence: because they are processing in-phone with 'AI' to generate a pleasing image?

      Not necessarily. iPhone's image processing is about taking a zillion frames and using AI to find which portions of the image are the best exposed. Now that does mean things like someone's reflection in a mirror might have a different pose than the person themselves as the image was recomposed from different fractions of a second. But what it doesn't mean is that the photo was invented, like in the Samsung AI moon example another poster mentioned.

      That said, will there be legal nitpickery? You betcha.

      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
  • (Score: 3, Interesting) by canopic jug on Friday April 05, @10:25AM (5 children)

    by canopic jug (3949) Subscriber Badge on Friday April 05, @10:25AM (#1351744) Journal

    What is the deal with the court's misapplication of the word "enhanced", when what is being addressed is clearly AI-assisted forgery and doctoring? It's good that the court has addressed the problem but they need to call it what it is.

    "As the jury can plainly see in the photo evidence labelled Exhibit 18d, the suspect clearly has six fingers on the one hand and two and a half on the other. My client, as you can see from his hands here in the court, has five fingers on each hand like us, he is a five-man like us, and is therefore innocent and has not tasted blood."

    --
    Money is not free speech. Elections should not be auctions.
    • (Score: 2) by aafcac on Friday April 05, @01:30PM

      by aafcac (17646) on Friday April 05, @01:30PM (#1351755)

      Regardless of whether it's enhanced, forged or doctored, there's a requirement to have an expert to attest to the specifics of how the image was captured, what processing was done and anything else between when the photo was taken and what's being presented in court. Depending upon how the image was enhanced, that isn't necessarily an issue, but if new details are being added and nobody can be sure if they impact what the image is showing, that shouldn't ever be admissible.

    • (Score: 2) by JoeMerchant on Friday April 05, @01:51PM

      by JoeMerchant (3937) on Friday April 05, @01:51PM (#1351756)

      There is a slippery slope... When two or more enhancement algorithms are available to choose from, the adversarial process guarantees that a (good) litigator will choose the algorithm that backs their position most strongly.

      With AI, there are infinite algorithms that take input of what you want the "evidence" to show, basically turning the evidence into fiction.

      --
      🌻🌻 [google.com]
    • (Score: 4, Insightful) by tangomargarine on Friday April 05, @02:11PM (2 children)

      by tangomargarine (667) on Friday April 05, @02:11PM (#1351757)

      I'd phrase it as "Judge Bans Tampering With Evidence".

      Which, like...yeah, duh? WTF thought this was a good idea in the first place??

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 2) by Freeman on Friday April 05, @02:23PM (1 child)

        by Freeman (732) on Friday April 05, @02:23PM (#1351760) Journal

        CSI TV show fans?

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
        • (Score: 2) by tangomargarine on Friday April 05, @03:21PM

          by tangomargarine (667) on Friday April 05, @03:21PM (#1351762)

          Usually it's also a pretty big plot point on cop shows that the detective who tampers with evidence is far beyond the pale on his own vigilante trip though.

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
  • (Score: 5, Touché) by namefags_are_jerks on Friday April 05, @11:30AM (2 children)

    by namefags_are_jerks (17638) on Friday April 05, @11:30AM (#1351746)

    So...how many convictions from CSI are now going to appeal?

    • (Score: 3, Funny) by JoeMerchant on Friday April 05, @12:55PM (1 child)

      by JoeMerchant (3937) on Friday April 05, @12:55PM (#1351754)

      >applying an AI filter can give them access to secret visual data.

      Well, obviously, anybody who watched CSI: anything knows that!

      --
      🌻🌻 [google.com]
      • (Score: 2) by sjames on Friday April 05, @06:08PM

        by sjames (2882) on Friday April 05, @06:08PM (#1351772) Journal

        Currently exercising my finger. If I press the "enhance" button enough, I'm sure I can find Bigfoot!

        Next up, carbon dating by continuously zooming and enhancing then visually counting the heavy looking carbon atoms. What could go wrong?
