Frantisek Baluska, Arthur Reber, Lars Chittka, Michael Hendricks, Tomoko Ohyama, Reuven Dukas

Moderator: Professor, McGill University
Speaker: Frantisek Baluska, University of Bonn, Institute for Cellular and Molecular Botany
Speaker: Arthur Reber, Adjunct Professor, University of British Columbia
Lars Chittka: Professor of Sensory and Behavioural Ecology, Queen Mary University of London
Reuven Dukas: Professor, McMaster University
Robotics: McGill University
My question is for both Prof. Baluska and Prof. Reber. Last week, Prof. Hendricks defined sentience from a computational perspective as a top-down process generating predictions of expected sensory input that meet the bottom-up sensory input in a comparison process (Prof. Hendricks can correct me if I got that wrong). I'm curious whether that definition can also apply to plants or single-celled organisms. Thanks!
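To make the comparison process concrete for myself, here is a minimal toy sketch in Python. It is entirely my own construction, not Prof. Hendricks' actual model: the signal, the learning rate, and the update rule are all assumptions. The point is just that a top-down prediction of the sensory input meets the bottom-up signal, and the mismatch, not the raw input, drives the internal update.

```python
import numpy as np

# Toy predictive-comparison loop (my own illustration, not Prof. Hendricks'
# model): a top-down prediction of the next sensory sample is compared with
# the bottom-up input, and the prediction error updates the expectation.

rng = np.random.default_rng(0)

prediction = 0.0      # top-down expectation of the sensory signal
learning_rate = 0.1   # how strongly prediction error updates the expectation

for t in range(200):
    # Bottom-up input: a noisy signal whose true mean shifts halfway through.
    true_mean = 1.0 if t < 100 else 3.0
    sensory_input = true_mean + rng.normal(0.0, 0.2)

    # Comparison process: top-down prediction meets bottom-up input.
    prediction_error = sensory_input - prediction

    # The error, not the raw input, drives the internal update.
    prediction += learning_rate * prediction_error

print(f"final prediction ~ {prediction:.2f}")  # tracks the new mean (~3.0)
```

If that is all the definition requires, then a chemical network in a plant or a single cell that implements the same error-correcting loop would seem to qualify, which is exactly what I am asking about.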
Greg Dudek's robots have bottom-up and top-down processes too: Are they therefore sentient?
I don't know! My intuition says: NO, robots are not (yet) sentient. I guess this is where the debate will lead at some point! Or it could be, as many mentioned, a collection of such abilities: not a single one that is sufficient, but many that are necessary. I could imagine one such simple definition coming from every level and every approach to cognition (physical, algorithmic, computational, behavioral, etc.). What do you think, Professor Harnad? Would that be a good way, conceptually, to investigate the "hard problem": as a multi-level phenomenon?
I'm not sure I understand what the problem of the emergentist dilemma is. Regarding the problem of defining what life is, for example, we must establish some kind of criterion. Saying that everything is living isn't a solution; it avoids the main problem of defining what is a living thing and what is not. I could say the same about sentience.
For some of you, a machine that could update its way of flexibly responding to environmental change could not be considered as having any consciousness, mental state, or even representation. So I wonder: in what sense are we allowed to attribute some kind of umwelt, sensory environment, or "self-centered perceived world" to organisms that have such a "cognitive ability", such as plants, molds, and mushrooms, or to a machine that has it?
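To pin down what I mean by "updating its way of flexibly responding", here is a toy Python agent, purely my own illustration: the two-response environment, the learning rate, and the exploration rate are all assumptions. It re-learns its response when the environment flips the contingency, which is the kind of flexibility I have in mind.

```python
import random

# Toy "flexible responder" (my own illustration): an agent that tracks which
# of two responses pays off, and re-adapts when the environment flips the rule.

random.seed(1)
values = [0.0, 0.0]   # estimated payoff of response 0 and response 1
alpha = 0.2           # learning rate

def environment(action, t):
    # Response 1 is rewarded for the first half, response 0 afterwards.
    correct = 1 if t < 500 else 0
    return 1.0 if action == correct else 0.0

for t in range(1000):
    # Mostly exploit the better-valued response, occasionally explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = values.index(max(values))
    reward = environment(action, t)
    values[action] += alpha * (reward - values[action])  # learn from feedback

print("final value estimates:", [round(v, 2) for v in values])
# After the flip, the agent's preference migrates to response 0.
```

Slime molds and plants arguably show this kind of re-learning chemically, which is why I ask where the umwelt talk starts being justified.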
Some of your thoughts seem to suggest that to be conscious of a perception, or to have a meaningful mental state about something, this conscious being or mental state should be able to do something more than produce a simple probabilistic response to a stimulus, allowing for the possibility of some kind of unexpected, flexible response. What is this "something more" that makes a mental state relevant and properly mental? Personally, I think we can't avoid the emergentist problem.
To stay on the subject: does flexibly responding to environmental change mean feeling? Do Greg Dudek's robots therefore feel?
(There is no "emergentism." But there is feeling, which is presumably an evolved biological trait, and has some adaptive function. Some organisms feel: how? why? That is the hard problem. If all organisms feel, we still face the hard problem, even if only once.)
I didn't say that flexibly responding to environmental change means feeling, but that, following the discussions, it seems to me that the only thing that could look like a proof of sentience relies on flexibility, or on some capability to produce an unexpected, flexible response. If feeling is something more than just perceiving and responding to a stimulus, a sentient being should be able to respond to a stimulus differently than via a mere reflex. This should be particularly true if feeling is an evolved biological trait, selected for its adaptive function over and above mere stimulus-response reflexes. However, even if there is feeling, and especially since it is difficult to prove, nothing allows us to say that sentience is not a by-product of the evolution of the cognitive mechanisms of perception and information processing that lead to behavioral responses.
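The contrast I am drawing could be sketched like this, a toy of my own and not anyone's theory: a reflex maps the same stimulus to the same response every time, while a non-reflex response is modulated by internal state and stimulus history (the stimuli, thresholds, and responses below are invented for illustration).

```python
# Toy contrast (my own, not anyone's theory): a fixed reflex versus a
# response modulated by internal state and stimulus history.

def reflex(stimulus):
    # Same stimulus, same response, every single time.
    return "withdraw" if stimulus == "heat" else "ignore"

class FlexibleResponder:
    def __init__(self):
        self.recent_heat = 0  # internal state: recent exposure to heat

    def respond(self, stimulus):
        if stimulus == "heat":
            self.recent_heat += 1
            # History changes the response: habituate after repeated exposure.
            return "withdraw" if self.recent_heat < 3 else "tolerate"
        self.recent_heat = max(0, self.recent_heat - 1)
        return "ignore"

agent = FlexibleResponder()
print([reflex("heat") for _ in range(4)])         # always 'withdraw'
print([agent.respond("heat") for _ in range(4)])  # habituates to 'tolerate'
```

Of course, nothing in the flexible version feels anything either, which is my point: flexibility may be necessary evidence, but it cannot be sufficient.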
Saying it's a by-product (or epiphenomenon) doesn't solve the hard problem either; it just avoids it, pretending it doesn't matter.
We have this mysterious thing (just stop, take a deep breath, feel how it feels to be there and to realize you're not just an inert object) that we can't study empirically (all we can study are its neural correlates, which seem perfectly explainable without it: that, in fact, is artificial intelligence's ultimate goal).
This is a phenomenon for which we are now 100% positive there has got to be a purely natural, monistic explanation. It seems to be present in an ABUNDANCE of species (strongly suggesting an adaptive value that we can't pinpoint), but no matter how we think about it (without taking shortcuts), it seems to have no functional use (as if the neurons in your brain could do without it and do their job exactly the same). Said otherwise: when you "choose" to move an object from point A to point B, you feel like your "will" is the causal force that did it. But pure physics could explain it completely without that. The cause is not really you; the way the matter inside and outside of you interacted already explains it causally (this may give you a little headache, but try to grasp it fully before you move on).
Synthesizing cognition (AI's goal), but without feeling (because we have no idea how to isolate it anyway; we're struggling to understand what it is), would explain everything that's happening in the brain... without solving the Hard Problem.
This is, I guess, why it's called Hard. Hard as in "seemingly unsolvable"... :)
To make it worse, even if we build a robot that can *replicate* human cognition perfectly, we will have no way of being sure this human-like robot is NOT sentient. We'll just be naturally pushed to assume it is because it will act like any other normal human cognizer, and people will be scratching their heads in denial, trying to figure out where/what its "soul" is.
Religions "answer" the Hard Problem with dualism. But we know from lesion studies that no matter what you break, aspects of consciousness are altered or disappear... Alzheimer's isn't a supernatural disease; I'm pretty sure any mildly educated person would agree. Dualism is out of the question, and those who cling to it would have to explain how something physical (neurons) affects something invisible and undetectable (I hope you can see how dualism needlessly complicates the issue rather than simplifying it).
So what is it we're trying to study, then? Is it even there? But it's got to be, because I'm conscious... and I'm pretty sure you are as well, and most of the people I've ever met were too!
Both talks were very interesting, but I think both researchers engage in anthropomorphism when they attribute feeling to their research subjects (plants and microbes). An important and similar point was also raised in both talks: feeling is a property of living things. I agree that, with our current technologies, this is the case. However, if we developed advanced robots (with very good sensors, nothing "biological", and an information-processing system) capable of an interpretable and subjective sensorimotor experience, then there would be grounds to consider whether or not to attribute feeling to them. If such a robot could then make decisions autonomously, it would definitely become a candidate for feeling without being alive.
Also, I gathered during the panel that some AI designers are trying to integrate "feeling" (or an impression of "feeling") into their machines in order to perfect them. I think they should definitely focus on the robot's capacities instead, feeling being very hard to integrate (and perhaps even useless).
We don't "attribute" sentience. Organisms either feel or they don't. (No one knows how or why; most think it is generated by neural activity of some kind.)
Today's robots don't feel. Do cells? If so, it can't be because they have any of the capacities that today's robots already have.
According to Dr. Reber's theory, all organisms are sentient. The problem is to draw a line around those that are able to suffer, because we must at least reduce suffering whenever we can, according to the precautionary principle. There is no way we can prevent every organism from killing another to eat, for example. Dr. Hendricks reminds us that if everything is sentient, a problem remains. The solution is a matter of how sentience is defined, more or less subjectively. Ignoring the "hardware" of a biological system is a mistake, because that system is a form of hardware (for example, you don't even have to be alive to have some kind of homeostasis in your body). All members of the panel seem to agree that we must avoid the unscientific arrogance, or mistake, of adjusting the notion of consciousness to our own field and work in order to generate a single or unified tool. Dr. Harnad reminds us of an important fact: mental states aren't computation; the only question is "does it feel or not?" when we're talking about another species.
My question is for Professor Reber. If we look at the four criteria established for the presence of a mind (learning, memory, decision making, and communication), it seems that culture would be a great candidate. Does culture have a mind? You said that you don't believe in such a thing as a collective mind. Why dismiss it?