Samir Chopra at The Nation proposes that we treat algorithms as agents of the companies that deploy them. In effect, treat computer programs as people, too.
From the article:
I suggest we fit artificial agents like smart programs into a specific area of law, one a little different from that which makes corporations people, but in a similar spirit of rational regulation. We should consider programs to be legal agents--capable of information and knowledge acquisition like humans--of their corporate or governmental principals. The Google defense--your privacy was not violated because humans didn't read your e-mail--would be especially untenable if Google's and NSA's programs were regarded as their legal agents: by agency law and its central doctrine of respondeat superior (let the master answer), their agents' activities and knowledge would become those of their legal principal, and could not be disowned; the artificial automation shield between the government (the principal) and the agent (the program) would be removed.
If such a position were adopted, it could significantly affect the permissibility of scanning emails for targeted advertising, or ISPs' ability to perform deep packet inspection.
(Score: 3, Insightful) by jimshatt on Thursday June 05 2014, @07:50PM
OTOH, suppose you have a rash or an STD or whatever you're ashamed of. You can go to your doctor, but he's a friend of the family, so you don't really want him to know. The alternative is some smart vending machine that scans your rash and dispenses medication. This machine is owned by the same doctor, and he gets the dollar bills you put into it. The doctor doesn't invade your privacy, because only the computer program knows.