Christopher (Chris) Alan Pelkey had two tours of duty in Iraq and one in Afghanistan behind him when, on a Saturday afternoon in November, the driver of a car stopped behind him at a red light started honking. The veteran sergeant got out of his car and walked back to the other car with his hands held up, as if to ask what the problem was. Then three shots rang out.
Now, three years later, his family has used generative AI to let him deliver a victim impact statement at the sentencing in his own murder case. The generated video is worth watching [bbc.com].
I have difficulty interpreting this. The video strikes me as eerily touching, yet it is not hard to foresee how this kind of video could also be used to sway a judge or jury toward much harsher penalties.
The video also reminded me of the character Caleb in the science fiction series Westworld (seasons 3 and 4). Caleb is an army veteran who, as therapy, is paired with an AI version of his army buddy who was killed in action. Instead of having a liberating effect, though, the AI buddy seems to hold him back, keeping him dependent on the service.
And then there's the video of a recent interview [youtube.com] that Mark Zuckerberg gave at Stripe Sessions, wearing Meta's AI-connected Ray-Ban glasses, which he claimed are selling by the millions. In it he talks about how personal AI will be Meta's focus and what that will actually mean (at 14'), and, connected to that, the need to invest long-term in glasses as the ultimate devices for AI (24'): seeing what you see, hearing what you hear.
Existing AI ethics rules and frameworks [acm.org] focus on data privacy and legal responsibility. Maybe we should start thinking early, here, about the potential sociological and psychological impacts as well.