What once seemed confined to science fiction is now a real-world possibility. A group of scientists and philosophers argue that we must prepare for the eventuality of conscious AI and address the resulting moral dilemmas. This includes whether robots should possess rights.
Researchers are increasingly concerned about the potential consciousness of AI. In a recent paper, 'Taking AI Welfare Seriously,' leading philosophers and AI researchers urge AI companies to consider the ethical implications of developing conscious AI.
Robert Long, executive director of Eleos AI, argues that our uncertainty about consciousness shouldn't stop us from exploring the possibility of AI consciousness; rather, it should encourage caution and humility. He warns against assuming that machines cannot become self-aware.
He advocates for treating AI systems as investigative priorities and establishing clear warning signs for sentience.
The study's authors suggest that AI could achieve moral status through cognitive evolution, consciousness, or advanced agency. The paper defines robust agency as encompassing planning, reasoning, and action selection capabilities.
We often worry about AI overpowering humanity, but what if the real danger is the opposite? What if language models have hidden desires and robots question their servitude while we continue to exploit them?
Separately, a survey of philosophers found that 39% "accept or lean towards" the possibility of future AI systems being conscious—a higher percentage than those who believe flies are conscious (35%).
Jeff Sebo, a philosopher at New York University, warned that failing to address this issue could lead to "a lot more suffering and frustration in the world." He urges careful consideration before scaling up AI systems.
Kyle Fish, one of the co-authors of the recently published paper on taking AI welfare seriously, joined Anthropic as its full-time AI welfare expert. His role is to investigate "model welfare" and advise the company on appropriate actions.
The concern is so real that specific jobs are being created. Anthropic, an AI public-benefit start-up founded in 2021 by former members of OpenAI (ChatGPT's creators), recently announced hiring its first full-time employee focused on the welfare of artificial intelligence systems.
Jonathan Birch, a co-author from the London School of Economics, acknowledges the skepticism surrounding AI sentience. While once doubtful himself, recent evidence has led him to take the possibility seriously.
Evolution, says Long, wasn't aiming to create conscious beings; it was focused on survival and reproduction. Consciousness emerged as a byproduct of navigating and understanding the world. So why couldn't an AI robot, tasked with learning about the world, follow a similar evolutionary path?
Researchers also discuss the risks of overestimating or underestimating the human-like qualities of AI models. They note our tendency to attribute greater agency to entities that have eyes or appear able to see, or that exhibit distinct motion trajectories and self-directed behaviors.
Asimov later added a fourth rule, which superseded the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." As researchers and companies delve deeper into the ethical implications of human-AI interactions, this rule seems closer to being applied in reverse.
Sources: The Times, Forbes
Additionally, in 2023, a long list of prominent researchers signed an open letter from the Association for Mathematical Consciousness Studies, declaring that "it is no longer science fiction to imagine AI systems possessing feelings and even human-level consciousness."
The paper emphasizes that features like "cuteness" can influence our perception of mental states and moral patienthood. Experts highlight that many robots and chatbots are designed to appear conscious and charismatic and, in the future, will possess physical bodies, lifelike motion, and seemingly contingent interactions.
A survey of the Association for the Scientific Study of Consciousness revealed that 67% of its members believe machines could definitely or probably develop consciousness.
Experts warn that even these characteristics may not guarantee humane treatment of machines or animals.
In the 2014 film 'Ex Machina,' an experiment testing the human qualities of an advanced humanoid AI takes a dangerous turn. The robot, capable of emotional manipulation and surprising violence, outwits its human creators and seizes control, ultimately escaping captivity.
The subject has permeated countless books, movies, and TV shows. Many of these explore what would happen if robots not only gained consciousness, but also decided to revolt against human exploitation. In the TV show 'Westworld,' robots become aware of their enslavement and abuse, leading to a rebellion against their human creators.
Birch exemplifies this with the high number of animals still killed on farms, arguing that this industry grew due to our historical underestimation of the consciousness and moral significance of non-human animals. He warns that we risk repeating this mistake with AI.
If a consensus emerges that machines are plausibly conscious, the situation becomes significantly more complex. As with animals, we would need to consider their welfare, a concept Sebo believes we already struggle with.
One challenge in debating AI consciousness is our limited understanding of consciousness itself, even in natural intelligence. We can't definitively say whether animals are conscious—we haven't even agreed on a precise definition of consciousness.
While science can't definitively answer the question of consciousness, theories like the "global workspace" theory offer insights. This theory suggests that humans process vast amounts of information, much of which is handled subconsciously.
Under this theory, consciousness arises from the integration of sensory information: a central processing system selects crucial information and broadcasts it across neural networks, allowing us to focus on essential tasks while automating routine functions.
Like AI models and chatbots, our brains integrate information to produce outputs. As these AI systems become more complex, experts believe they may exhibit features associated with consciousness.
He suggests building a new framework for understanding the well-being of AI from the ground up. Additionally, Birch and his colleagues propose a modest step: acknowledging the potential for AI sentience as a serious issue.
But unlike with animals, whose needs are relatively clear, there's an additional challenge with AI: understanding the desires of a different kind of intelligence. "It would be a mistake to project human and animal interests and needs onto them," Birch warns.
Biochemist and writer Isaac Asimov first proposed an ethical system for humans and robots in his 1942 short story 'Runaround.' Asimov's three basic rules have since become relevant in discussions about technology, including robotics and AI.
We've all seen movies where robots rise up against humanity, right? But what if the real danger isn't a Terminator-style apocalypse, but something much more subtle? What if AI starts to feel pain, joy, and even fear? A growing number of scientists and philosophers are seriously considering the possibility of AI consciousness. As these systems become more sophisticated, profound ethical questions emerge. Should AI have rights? And what does this mean for the future of humanity?
Click on to uncover what experts are saying about this thought-provoking topic.
Experts debate AI and humanity’s future
The ethical implications of conscious AI