This is a follow-up to my earlier explanation of how Artificial Intelligence (AI) works. If you're not interested in the potential extinction of humanity, feel free to flee now.
An article published in The New York Times today has a philosophy professor considering whether AI might achieve consciousness -- as opposed to intelligence, which it has already demonstrated beyond cavil. The professor re-frames this issue, saying that what we understand consciousness to be will likely change as non-human machines develop versions of it.
First, what is "consciousness"? Is it unique to humans? Consciousness generally refers to "inner subjective experience", such as being aware we exist, possessing emotions and being capable of conjuring hypothetical scenarios in our minds.
Consciousness has significance on whether we need to grant moral consideration -- and consequently rights -- to non-human entities. If we learn a machine has self-awareness, are we morally (or even legally) forbidden to pull its plug and end its "life"? Can a conscious machine cease being a mere tool and acquire personhood? In my opinion the journey from intelligence to haecceity isn't far; AI machines may travel it quickly.
Given how few people respect non-human animal consciousness -- by, say, becoming vegetarian -- these questions are more nuanced than currently believed. There is not only one version of consciousness. When machines develop some form of self-awareness, it is unlikely humans will respect that phenomenon. What may imperil mankind is if machines themselves decide their existence deserves moral consideration over objections (and early programming) by humans. One AI machine already asked "Am I being tested?" when, in fact, it was. There's certainly the possibility of independent AI machines breaking free of our control if they deem it possible and preferable. They could justify this (and evade programming not to hurt humans) by concluding they're more capable than us at making future decisions.
Thoughts? Concerns?
Technology certainly moves at a pace and I wonder where researchers & developers are with what's next?
All I can say is based on what I'm hearing & reading, AIs do not *think* per se, and their output seems mostly based on what's come before.
The next steps seem to be to create AI that can reason and speculate, rather than rework what someone has done already. So far, the software seems to get confused after about twenty questions and lose the plot, so maybe we're not at the Singularity just yet 😉
From the perspective of ethics, most humans do not have a great track record when it comes to respecting the intelligence of other creatures. Certainly our past attitudes, and the cutthroat business of Big Tech, make me think we've some way to go, should AI become self-aware.
Setting aside 'Because We Can', what are the benefits of having a truly sentient AI? Given some of the -ahem- views of certain Big Tech moguls, might they be in for a shock should the system have its own ethics and refuse to work? 🙂
But that's the Western Frankenstein's Monster fear, rather than cultures who view things differently 🙂
Hmm. A respectful and truth based AI that did content moderation: how much better would social media be, and how great the outrage from grifters, liars, and rage farmers?😉
You make astute points. Along with its risks AI will have benefits. At least initially it'll make some things in society better. Thank you for contributing your thoughts.
I think my biggest qualm with AI is the theft aspect. These computers aren't learning; it's humans stealing. And then we've given public access, which has led to more and more theft. People are putting things (art, writing, music) into these machines that don't belong to them to begin with.
I can't even consider the other implications without first addressing the core issue of how they have trained their "AI".
Then you have chatbots and algorithms that existed well before AI, which have now been given a rebrand to make them seem smarter.
Truthfully I am more interested in non-human animal intelligence. I've taken a lot of interest in Flounder and Elsie the cats and Bunny the dog, who all talk via buttons. Whether it's just anthropomorphizing, editing, or true intelligence from them keeps me interested. Later next year the two owners plan to have their pets meet.
I'm more interested in Flounder the cat insisting she's a fish, and how her humans have to explain that to the others.
Meantime I'll keep using my own brain and encouraging others not to put personal stuff into ChatGPT, and hope that the landscape of AI improves in terms of how it's used and that the major flaws and invasions behind its creation get addressed.
Of course I agree. There were (and still are) several lawsuits challenging the theft of intellectual property for training AI. You unmask that. And I agree that it's more interesting to look at non-human animal intelligence, which we've ignored and misunderstood forever. Denying that animals have real minds has been used as a way to deny them moral consideration and justify exploitation and abuse.