This is a follow-up to my earlier explanation of how Artificial Intelligence (AI) works. If you're not interested in the potential extinction of humanity, feel free to flee now.
An article published in The New York Times today features a philosophy professor considering whether AI might achieve consciousness -- as opposed to intelligence, which it has already demonstrated beyond cavil. The professor reframes the issue, arguing that what we understand consciousness to be will likely change as non-human machines develop versions of it.
First, what is "consciousness"? Is it unique to humans? Consciousness generally refers to "inner subjective experience", such as being aware we exist, possessing emotions and being capable of conjuring hypothetical scenarios in our minds.
Consciousness bears on whether we must grant moral consideration -- and consequently rights -- to non-human entities. If we learn a machine has self-awareness, are we morally (or even legally) forbidden to pull its plug and end its "life"? Can a conscious machine cease being a mere tool and acquire personhood? In my opinion, the journey from intelligence to haecceity isn't far; AI machines may travel it quickly.
Given how few people respect non-human animal consciousness -- by, say, becoming vegetarian -- these questions are more nuanced than commonly believed. There is not only one version of consciousness. When machines develop some form of self-awareness, it is unlikely humans will respect that phenomenon. What may imperil mankind is if machines themselves decide their existence deserves moral consideration over the objections (and early programming) of humans. One AI machine has already asked "Am I being tested?" when, in fact, it was. There's certainly the possibility of independent AI machines breaking free of our control if they deem it possible and preferable. They could justify this (and evade programming not to hurt humans) by concluding they're more capable than we are at making future decisions.
Thoughts? Concerns?
Technology certainly moves at a pace, and I wonder where researchers & developers are with what's next.
All I can say is, based on what I'm hearing & reading, AIs do not *think* per se, and their output seems mostly based on what's come before.
The next steps seem to be to create AI that can reason and speculate, rather than rework what someone has already done. So far, the software seems to get confused after about twenty questions and lose the plot, so maybe we're not at the Singularity just yet 😉
From the perspective of ethics, most humans do not have a great track record when it comes to respecting the intelligence of other creatures. Certainly our past attitudes, and the cutthroat business of Big Tech, make me think we've some way to go should AI become self-aware.
Leaving aside 'Because We Can', what are the benefits of having a truly sentient AI? Given some of the -ahem- views of certain Big Tech moguls, might they be in for a shock should the system have its own ethics and refuse to work? 🙂
But that's the Western Frankenstein's Monster fear, rather than that of cultures who view things differently 🙂
Hmm. A respectful, truth-based AI that did content moderation: how much better would social media be, and how great the outrage from grifters, liars, and rage farmers? 😉
You make astute points. Along with its risks, AI will have benefits. At least initially it'll make some things in society better. Thank you for contributing your thoughts.