Samir Chopra at The Nation proposes that we treat algorithms as legal agents of the companies that deploy them. In effect, we'd treat computer programs as people, too.
From the article:
I suggest we fit artificial agents like smart programs into a specific area of law, one a little different from that which makes corporations people, but in a similar spirit of rational regulation. We should consider programs to be legal agents--capable of information and knowledge acquisition like humans--of their corporate or governmental principals. The Google defense--your privacy was not violated because humans didn't read your e-mail--would be especially untenable if Google's and NSA's programs were regarded as their legal agents: by agency law and its central doctrine of respondeat superior (let the master answer), their agents' activities and knowledge would become those of their legal principal, and could not be disowned; the artificial automation shield between the government (the principal) and the agent (the program) would be removed.
If such a position were adopted, it could significantly affect the permissibility of scanning emails for targeted advertising, or ISPs' ability to perform deep packet inspection.
(Score: 2) by khchung on Friday June 06 2014, @02:07AM
"Guns don't kill people, PEOPLE kill people"
Haven't we heard this often enough by now? We don't need to treat guns as people when we hold the person pulling the trigger responsible, so why do we need to treat programs as people to hold Google responsible?
"The Google defense--your privacy was not violated because humans didn't read your e-mail" is obviously PR doublespeak meant to confuse you. Nobody would think this logic holds water if you applied it to any other situation:
- Virus writer: I didn't trash your computer, the virus did it!
- Car driver: I didn't hit you, the car did! (maybe that would work if it were an autonomous car...)