© Getty Images
0 / 30 Fotos
Villains and victims
- We often worry about AI overpowering humanity, but what if the real danger is the opposite? What if language models have hidden desires and robots question their servitude while we continue to exploit them?
© Getty Images
1 / 30 Fotos
Rise of the robots
- The subject has permeated countless books, movies, and TV shows. Many of these explore what would happen if robots not only gained consciousness, but also decided to revolt against human exploitation. In the TV show 'Westworld,' robots become aware of their enslavement and abuse, leading to a rebellion against their human creators.
© NL Beeld
2 / 30 Fotos
Rise of the robots
- In the 2014 film 'Ex Machina,' an experiment testing the human qualities of an advanced humanoid AI takes a dangerous turn. The robot, capable of emotional manipulation and surprising violence, outwits its human creators and seizes control, ultimately escaping captivity.
© NL Beeld
3 / 30 Fotos
Equal rights?
- What once seemed confined to science fiction is now a real-world possibility. A group of scientists and philosophers argue that we must prepare for the eventuality of conscious AI and address the resulting moral dilemmas. This includes whether robots should possess rights.
© Getty Images
4 / 30 Fotos
Action needed
- Jeff Sebo, a philosopher at New York University, warns that failing to address this issue could lead to "a lot more suffering and frustration in the world." He urges careful consideration before scaling up AI systems.
© Getty Images
5 / 30 Fotos
Complicated consciousness
- One challenge in debating AI consciousness is our limited understanding of consciousness itself, even in natural intelligence. We can't definitively say whether animals are conscious—we haven't even agreed on a precise definition of consciousness.
© Getty Images
6 / 30 Fotos
Be humble
- Robert Long, executive director of Eleos AI, argues that our uncertainty about consciousness shouldn't stop us from exploring the possibility of AI consciousness. Instead, it should encourage caution and humility, and he warns against assuming that machines can never become self-aware.
© Getty Images
7 / 30 Fotos
Learning and growing
- Evolution, says Long, wasn't aiming to create conscious beings; it was focused on survival and reproduction. Consciousness emerged as a byproduct of navigating and understanding the world. So why couldn't an AI robot, tasked with learning about the world, follow a similar evolutionary path?
© Getty Images
8 / 30 Fotos
High probability
- A survey of the Association for the Scientific Study of Consciousness revealed that 67% of its members believe machines could definitely or probably develop consciousness.
© Getty Images
9 / 30 Fotos
More conscious than other species
- Separately, a survey of philosophers found that 39% "accept or lean towards" the possibility of future AI systems being conscious—a higher percentage than those who believe flies are conscious (35%).
© Getty Images
10 / 30 Fotos
Beyond sci-fi
- Additionally, in 2023, a long list of prominent researchers signed an open letter from the Association for Mathematical Consciousness Studies, declaring that "it is no longer science fiction to imagine AI systems possessing feelings and even human-level consciousness."
© Getty Images
11 / 30 Fotos
How is it possible?
- While science can't definitively answer the question of consciousness, theories like the "global workspace" theory offer insights. This theory suggests that humans process vast amounts of information, much of which is handled subconsciously.
© Shutterstock
12 / 30 Fotos
How is it possible?
- In this view, consciousness arises when a central processing system integrates sensory information, selects what is most important, and broadcasts it across neural networks, allowing us to focus on essential tasks while routine functions run automatically.
© Shutterstock
13 / 30 Fotos
How is it possible?
- Like AI models and chatbots, our brains integrate streams of information to produce an output; in our case, that output includes conscious experience. As these AI systems become more complex, some experts believe they may come to exhibit features associated with consciousness.
© Shutterstock
14 / 30 Fotos
Taking it seriously
- Researchers are increasingly concerned about the potential consciousness of AI. In a recent paper, 'Taking AI Welfare Seriously,' leading philosophers and AI researchers urge AI companies to consider the ethical implications of developing conscious AI.
© Getty Images
15 / 30 Fotos
Signs of consciousness
- The study's authors suggest that AI could achieve moral status through cognitive evolution, consciousness, or advanced agency. The paper defines robust agency as encompassing planning, reasoning, and action selection capabilities.
© Getty Images
16 / 30 Fotos
Experts believe in it
- Jonathan Birch, a co-author from the London School of Economics, acknowledges the skepticism surrounding AI sentience. Once doubtful himself, he says recent evidence has led him to take the possibility seriously.
© Getty Images
17 / 30 Fotos
Warning signs
- He advocates treating AI systems as a priority for investigation and establishing clear warning signs of possible sentience.
© Getty Images
18 / 30 Fotos
Welfare of other beings
- If a consensus emerges that machines are plausibly conscious, the situation becomes significantly more complex. As with animals, we would need to consider their welfare, a concept Sebo believes we already struggle with.
© Getty Images
19 / 30 Fotos
Human-like condition
- Researchers also discuss the risks of overestimating or underestimating the human-like qualities of AI models. They note our tendency to attribute greater agency to entities that have eyes or appear able to see, or that move in distinct, self-directed ways.
© Getty Images
20 / 30 Fotos
Led by emotion?
- The paper emphasizes that features like "cuteness" can influence our perception of mental states and moral patienthood. Experts highlight that many robots and chatbots are designed to appear conscious and charismatic, and that future systems will also have physical bodies, lifelike motion, and seemingly contingent interactions.
© Getty Images
21 / 30 Fotos
Cuteness conundrum
- Experts warn, however, that such characteristics do not guarantee humane treatment, whether the beings in question are machines or animals.
© Getty Images
22 / 30 Fotos
Repeating mistakes
- Birch points to the vast number of animals still killed on farms, arguing that the industry grew in part because we historically underestimated the consciousness and moral significance of non-human animals. He warns that we risk repeating this mistake with AI.
© Getty Images
23 / 30 Fotos
What would they want?
- But unlike with animals, whose needs are relatively clear, there's an additional challenge with AI: understanding the desires of a very different kind of intelligence. "It would be a mistake to project human and animal interests and needs onto them," Birch warns.
© Getty Images
24 / 30 Fotos
Open mind
- He suggests building a new framework for understanding the well-being of AI from the ground up. Additionally, Birch and his colleagues propose a modest step: acknowledging the potential for AI sentience as a serious issue.
© Getty Images
25 / 30 Fotos
Job market
- The concern is so real that specific jobs are being created. Anthropic, an AI public-benefit start-up founded in 2021 by former members of OpenAI (ChatGPT's creators), recently announced hiring its first full-time employee focused on the welfare of artificial intelligence systems.
© Getty Images
26 / 30 Fotos
Full-time job
- Kyle Fish, one of the co-authors of the recently published paper on taking AI welfare seriously, joined as a full-time AI welfare expert. His role is to investigate "model welfare" and advise companies on appropriate actions.
© Getty Images
27 / 30 Fotos
Human-robot basic rules
- Biochemist and writer Isaac Asimov first proposed an ethical system for humans and robots in his 1942 short story 'Runaround.' His Three Laws of Robotics have since become a touchstone in discussions about technology, including robotics and AI.
© Getty Images
28 / 30 Fotos
Do no harm
- He later added a fourth law, often called the Zeroth Law, which takes precedence over the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." As researchers and companies delve deeper into the ethical implications of human-AI interactions, this rule seems ever closer to being applied in reverse. Sources: (The Times) (Forbes)
See also: AI capabilities—predicting deaths and understanding thoughts
© Getty Images
29 / 30 Fotos
Experts debate AI and humanity’s future
The ethical implications of conscious AI
© Getty Images
We've all seen movies where robots rise up against humanity, right? But what if the real danger isn't a Terminator-style apocalypse, but something much more subtle? What if AI starts to feel pain, joy, and even fear? A growing number of scientists and philosophers are seriously considering the possibility of AI consciousness. As these systems become more sophisticated, profound ethical questions emerge. Should AI have rights? And what does this mean for the future of humanity?
Click on to uncover what experts are saying about this thought-provoking topic.