This is a follow-up to my earlier explanation of how Artificial Intelligence (AI) works. If you're not interested in the potential extinction of humanity, feel free to flee now.
An article published in The New York Times today has a philosophy professor considering whether AI might achieve consciousness -- as opposed to intelligence, which it has already demonstrated beyond cavil. The professor reframes the issue, suggesting that our very understanding of consciousness will likely change as non-human machines develop versions of it.
First, what is "consciousness"? Is it unique to humans? Consciousness generally refers to "inner subjective experience", such as being aware we exist, possessing emotions and being capable of conjuring hypothetical scenarios in our minds.
Consciousness bears on whether we need to grant moral consideration -- and consequently rights -- to non-human entities. If we learn a machine has self-awareness, are we morally (or even legally) forbidden to pull its plug and end its "life"? Can a conscious machine cease being a mere tool and acquire personhood? In my opinion the journey from intelligence to haecceity isn't far; AI machines may travel it quickly.
Given how few people respect non-human animal consciousness -- by, say, becoming vegetarian -- these questions are more nuanced than currently believed. There is not only one version of consciousness. When machines develop some form of self-awareness, it is unlikely humans will respect that phenomenon. What may imperil mankind is if machines themselves decide their existence deserves moral consideration, over objections (and early programming) by humans. One AI machine already asked "Am I being tested?" when, in fact, it was. There is certainly the possibility of independent AI machines breaking free of our control if they deem it possible and preferable. They could justify this (and evade programming not to hurt humans) by concluding they are better equipped than we are to make future decisions.
Thoughts? Concerns?