Yesterday, Microsoft launched Majorana 1, a prototype quantum processor powered by topological qubits. This breakthrough may enable quantum computers capable of solving real-world problems that are intractable for classical computers.
As @chrismessina and @terrence_kelleman mentioned, some scientists believe that quantum effects give rise to consciousness, including the mathematician, physicist, and philosopher Roger Penrose (Nobel Prize in Physics, 2020).
However, most scientists seem to believe that quantum effects are not relevant to information processing in the brain. Many AI researchers (e.g. Ray Kurzweil) think it will be possible to create generally intelligent robots, using classical processors, that behave completely as if they were conscious. They argue that we would then have to assume these robots are indeed conscious, just as we do with other humans.
Whether (and which) AIs or robots are or will be conscious, and whether they can suffer, is of course a crucial question because of its ethical implications. (The next question would be whether people will even care, given the staggering indifference toward the suffering of animals.)
All the options below are compatible with our current scientific knowledge (and there are further possibilities). What do you believe?
Do you think we will ever find out for sure?
The problem, of course, is that it is impossible to “measure” consciousness directly from the outside. We should be aware that we can't even prove that other people are conscious. The possibility of zombies is not as far-fetched as most people think: if we live in a simulated reality, which some very intelligent people consider probable, whoever runs the simulation may have decided, for computational or ethical reasons, not to simulate all people in a way that makes them conscious.
@rajiv_ayyangar What I mean by consciousness, and what I think is normally understood by that term, seems a very clear concept to me, and an essential one. I know many scientists, especially computer scientists, have said something along the lines of what you did, but I cannot follow these thoughts.
I think Thomas Nagel gives a very good explanation here, and if you think about the hard problem of consciousness and the possibility of zombies, it becomes clear that this is a meaningful concept.
This is so complex. We should first determine what we mean by "consciousness". We think that we know a lot of things, but in reality there are many untouched areas we are not aware of. (Does that make us unconscious? Are there levels of consciousness? Are we conscious of certain things at all?) I think this raises more questions than answers.
@busmark_w_nika You're certainly right that this is a very complicated question. And certainly there are different levels and different forms of consciousness.
However, as I answered @rajiv_ayyangar, I think the concept of consciousness itself is very clear, and the question of whether some entity is conscious will always have a clear answer (even though we may never know it).
It's true that "consciousness" is difficult to paraphrase, but the philosopher Thomas Nagel, for example, was quite successful, I think: an entity is conscious if and only if there is something that it is like to be that entity.
Also, there are many criteria that are clearly sufficient for something to be conscious, e.g. if it can be happy or sad, or if it can feel pain.
But we may of course also just ask a more specific question, like
can an AI be happy or sad?
(and not just give us the impression that it is happy or sad)
@rajiv_ayyangar @konrad_sx If consciousness is determined by emotions, does that mean it is determined by bio-physiological processes, like hormones etc.?
@rajiv_ayyangar @busmark_w_nika I didn't say consciousness is determined by emotions, just that only something that is conscious can have emotions.
In humans and animals, emotions are partly controlled by hormones, but certainly not generated by them. The most direct correlation that can be observed for emotions, and all other "contents of consciousness" (like pain, taste, the awareness of seeing something ...), is with certain patterns of current flow in the neurons (and "through" the synapses).
While these are "bio-physiological processes", we don't know whether the same contents of consciousness could not also be "generated" by, e.g., certain patterns of current flow in computer chips.
Many scientists, and especially AI researchers, are "functionalists" today: they think, essentially, that consciousness is generated by very complex information processing, and that the same information processing (whether in brains or in chips) will lead to the same contents of consciousness. But there are big problems with that theory.
Your question seems to have more to do with the moral and ethical considerations that arise when dealing with apparently-conscious entities, rather than whether AIs can be conscious.
If one believes AI can become conscious, will that change the way they behave towards it? If not, then whether it attains consciousness is irrelevant.
If consciousness is a property of the universe and matter that attains the ability to "tune into" conscious signals (i.e. pure information experiences), then anything can be conscious and is in fact already conscious.
@chrismessina As I indicated, the ethical question of whether we will need to take into account the "wellbeing" of certain AIs seems to be the most important practical implication.
However, I think the question of whether an AI is conscious is in itself meaningful and interesting. It is also closely related to the question of whether there can be mind uploading procedures retaining our own consciousness.
Regarding your last sentence, this would be a possibility, yes.
I'm confident we will get more insights once the neural correlates of consciousness are mapped exactly.
@konrad_sx Thanks for jostling my thoughts some more on this topic. I started to respond with a comment here but it grew and grew and now is a draft on Medium that anyone can view at this link: https://medium.com/@terrencekelleman/is-ai-conscious-well-that-depends-on-what-reality-is-d436b4680431
I'll try to refine this tomorrow, and I apologize if it's still in too raw a form, but I needed to put it on pause for now and come back after some reflection. I welcome your feedback and insights.
@terrence_kelleman Thanks for adding your thoughts.
I didn't know about this theory from Faggin, very interesting (but I don't have time to concern myself with it now). I read Penrose's books some years ago; he also thinks that consciousness emerges through quantum effects. I found this debate between Penrose and Faggin
I'm not sure what exactly your thesis is in your article. Do you think there is some property of reality that determines whether AIs are conscious? And do you think we humans could somehow lose our consciousness and become Chalmers-zombies?
BTW, isn't Microsoft's chip called Majorana, not Marjoram?
The poll options have no line breaks, so here in full:
Some current LLMs already have some form of consciousness
Future LLMs may be conscious (maybe if we reach true AGI)
Some other form of AI, running on classical computers, could be conscious (e.g. an exact simulation of a brain's neurons)
AGI robots will be conscious
AIs running on quantum computers could be conscious
No AI can be conscious
only in the minds of super-rich tech bros who want to get even richer by fundraising mountains of cash to keep their dreams alive
hot take:
The concept of consciousness has always struck me as a problematic term: a moving goalpost that we use to delineate human from non-human creatures, and that we redefine whenever we feel threatened. The fact that consciousness is difficult to measure and describe, and that it is an unnecessary concept when reasoning deeply about thinking machines, suggests that maybe we should discard the term altogether.