David J. Gunkel
Philosopher and Advisor, Department of Communication, Northern Illinois University
Artificial intelligence (AI) and other forms of seemingly autonomous technology present us with a unique and epoch-defining challenge. On the one hand, they are designed and manufactured technological artifacts. They are things. And like any of the other things that we encounter and use each and every day, they are objects with instrumental value. Yet on the other hand, these things are not quite like other things. They seem to have social presence, they are able to talk and interact with us, and many are designed to mimic or simulate the capabilities and behaviors commonly associated with human or animal intelligence.
So are these technological innovations just things or objects that we can use or even abuse as we decide and see fit? Or is it the case that AI can or even should be something like a person—that is, another subject who would need to be recognized as a kind of socially significant other with some claim on us? These questions, which have long been a staple of science fiction, are no longer a matter of fictional speculation. They are science fact and represent a very real legal and philosophical dilemma.
For me, the problem here is not with the technology of AI; it is with our limited moral and legal categories, specifically the person/thing dichotomy that organizes everything in this domain. Consequently, my solution is neither to restrict AI to the category of thing nor to extend to it the status of person (as we have previously done for other artificial entities like the corporation). What we need is a new moral and legal ontology—one that can respond to and take responsibility for things that are neither mere things nor persons. It is this challenge that I have called “the Machine Question” and have investigated in a trilogy of books published by the MIT Press.