Saturday, January 9, 2010

Legal Subjectivity and Artificial Intelligence

A recent article in the Globe and Mail, co-authored by Princeton University bioethics professor Peter Singer and Warsaw-based independent researcher Agata Sagan and entitled "When Robots Want Rights", raises questions about the legal status and subjectivity of robots and other forms of artificial intelligence. Namely, if and when such "life forms" achieve "consciousness" - that is to say, "consciousness" that human society recognizes as such - what might be the legal, social and cultural benchmarks for determining this?

Singer and Sagan express reasonable doubts about whether such artificial life forms could acquire independent legal status beyond their present status as mere property, even if they were able to demonstrate that they had feelings. This doubt rests in part on the experience of animals, which are sentient forms of life yet are still considered property. With respect to artificial life forms, there might also be questions about whether their feelings were genuine or simply feelings they were programmed to experience. The authors state:

The hard question, of course, is how we could tell that a robot really was conscious and not just designed to mimic consciousness. Understanding how the robot had been programmed would provide a clue: Did the designers write the code to provide only the appearance of consciousness? If so, we would have no reason to believe that the robot was conscious.

But if the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.

In various films, books, television shows and even songs, forms of artificial intelligence have become important characters. How has the concept of legal autonomy for such life forms filtered through these cultural narratives?

A number of science fiction films have anticipated a future involving advanced forms of artificial intelligence achieving a high degree of sentience - that is, the ability to feel and perceive subjectively. In the science fiction realm this means the ability of artificial life forms to attain certain human-like qualities and to express, amongst other things, desire and insight. The Star Wars films are a perfect example: the droids display a range of emotions, attitudes, personalities and wit, and, it would seem, an ability to experience pain upon destruction or injury. Still, they remain the property of their respective owners and are subject to those owners' abuse or benevolence.

According to cognitive scientist Steve Torrance (referred to in Singer and Sagan's article), if such conscious artificial life forms are not accepted as part of a moral (read: human) community, the possibility for abuse is great. Membership in such communities is often socially and culturally contingent, regardless of one's status as human, animal or artificial life (one can hardly forget that some human beings were historically deemed property capable of being bought and sold - such individuals were deemed to lack legal autonomy or subjectivity).

Perhaps one of the most striking instances of an artificially intelligent life form achieving independent legal status within the narrative of a film or television show is the character Data in the (now defunct) television series Star Trek: The Next Generation and its subsequent films. In the Star Trek universe, Data is the property of Starfleet, the military branch of the United Federation of Planets. In an episode entitled "The Measure of a Man", Data is confronted with being dismantled so that a Starfleet officer and scientist, Maddox, may learn certain aspects of Data's functioning. Data refuses because he believes Maddox would not know how to perform the procedure correctly and with sufficient care, which would endanger Data's ability to operate in the same capacity once reassembled. When Starfleet Command orders Data to submit to the dismantling, he considers whether resigning his commission would free him from the obligation to obey. Once again he is informed that, as property of Starfleet, he must submit to its whims. Subsequently, at the urging of Data's commanding officer and friend, Captain Jean-Luc Picard, a hearing is convened to determine Data's legal status - is he merely property to be disposed of at Starfleet's will, or is he a form of life worthy of an autonomous legal status?

Substantively, the presiding tribunal officer holds that Data is a sentient being given that he satisfies two of three criteria for determining sentience. From his depiction on screen, Data clearly has intelligence and is self-aware. The tribunal officer further concludes that it is unnecessary to determine whether Data has "consciousness" - the third criterion - as, in the officer's view, this often refers to the spiritual notion of whether an individual possesses a soul (notice here the limited definition ascribed to consciousness in comparison with the broader understanding expressed by Singer and Sagan). Given that there is no available evidence or judicial standard to assess whether human or other anthropomorphic life (on the show) possesses this characteristic, it is deemed inapplicable. Thus, Data's legal status is transformed from a type of chattel to that of an autonomous being.

Concurrent with the arguments about whether Data, as an android, is capable of (legally determinable) sentience, there is the asserted notion that to rule against Data's claim for legal autonomy would be tantamount to endorsing slavery. This is raised by the character Guinan, played by (African-American actress) Whoopi Goldberg, who tries to persuade Picard (Data's impromptu legal counsel for the purpose of these proceedings) that the enslavement of a whole class of such life forms presents a significant moral problem for a Federation that imagines itself as guided by benevolent and progressive ideals. A whole host of characters on the show might equally have raised this valid point, yet it is probably made more poignant (from the perspective of viewers) when delivered by Goldberg, a representative of a class of people within United States society whose ancestors were subjected to slavery and deemed chattel by the United States Supreme Court in Scott v. Sandford, 60 U.S. (19 How.) 393 (1857) [Dred Scott]. Guinan becomes a conduit transmitting the shameful legacy of the Dred Scott decision into the discourse surrounding Data's worth as a sentient being. In a sense, Data's trial is a science-fictional repudiation of the Supreme Court's decision (although in the real world that repudiation was at least formally performed by the Thirteenth and Fourteenth Amendments).

Data's value as a sentient being is also signaled in the series by his ability to fit into the moral community of the USS Enterprise. Although at times awkward in his quest to be more human, Data is a trusted member of the officer class who engages in combat as part of, and for the benefit of, the crew and who also intermingles socially with other officers through poker games and other activities.

As an example of the intersection of law and popular culture, "The Measure of a Man" attempts to address two legal concepts that touch upon Data's legal subjectivity. The first is the ability of an artificially intelligent life form to achieve (human-like) sentience, as expressed by self-awareness and demonstrable intelligence. The second is that to deny Data legal autonomy is, in effect, to legally endorse slavery over a recognized form of life. The latter point, of course, only has resonance if we agree that Data is indeed sentient. But as Singer would point out (returning to our own temporal and terrestrial space), animals are similarly sentient yet are clearly still deemed chattel. What perhaps carries Data over the threshold into legal autonomy is his membership and participation in the moral community and, more to the point, that community's willingness to allow him entry. It probably doesn't hurt that Data in most respects looks like a human.

If artificially intelligent life forms are to graduate from forms of personal property to subjects with legal autonomy, "The Measure of a Man" suggests that Data might serve as the minimum normative benchmark for doing so.
