Samir Chopra at The Nation proposes that we treat algorithms as agents of the companies that deploy them. In effect, treat computer programs as people, too.
From the article:
I suggest we fit artificial agents like smart programs into a specific area of law, one a little different from that which makes corporations people, but in a similar spirit of rational regulation. We should consider programs to be legal agents--capable of information and knowledge acquisition like humans--of their corporate or governmental principals. The Google defense--your privacy was not violated because humans didn't read your e-mail--would be especially untenable if Google's and NSA's programs were regarded as their legal agents: by agency law and its central doctrine of respondeat superior (let the master answer), their agents' activities and knowledge would become those of their legal principal, and could not be disowned; the artificial automation shield between the government (the principal) and the agent (the program) would be removed.
If such a position were adopted, it could significantly affect the permissibility of scanning emails for targeted advertising, as well as ISPs' ability to perform deep packet inspection.
(Score: 2) by Ken_g6 on Thursday June 05 2014, @10:50PM
If computer programs are my agents, then am I legally liable for my bugs? I don't like this plan.
But wait! If computer programs are like slaves, then if I sold a program - rather than licensing it - would that absolve me of liability? Could this then get rid of all those awful license agreements? I'm starting to like this plan.
(Score: 1) by khedoros on Friday June 06 2014, @01:16AM