Blog Archive

Saturday, June 23, 2018

Michael Hendricks: Integrating Action and Perception in a Small Nervous System (Wednesday, June 27 2pm)

Michael Hendricks (Speaker)
Professor, McGill University


Jonathan Birch (Moderator)
London School of Economics

Nematodes (roundworms) are separated from mammals by 600 million years of evolution and have major differences in the organization of their nervous system. They are also extraordinarily successful—4 out of 5 animals on Earth are nematodes. Despite possessing a brain many orders of magnitude smaller than that of vertebrates, they have been able to solve the fundamental behavioural problems of survival and reproduction. Caenorhabditis elegans has just 302 neurons, but can explore, learn, avoid pathogens and predators, and mate. It inhabits a sensory world where our basic intuitions about modalities and representations break down. Even so, fundamental circuits related to self-monitoring and egocentric representations of behaviour contribute to sensory processing, suggesting a deep origin for the types of feedback and internal representations that are thought to give rise to awareness and the complex cognitive abilities of larger animals.
Hendricks, M., Ha, H., Maffey, N., & Zhang, Y. (2012). Compartmentalized calcium dynamics in a C. elegans interneuron encode head movement. Nature, 487(7405), 99–103.
Bargmann, C. I., & Marder, E. (2013). From the connectome to brain function. Nature Methods, 10(6), 483–490.

28 comments:

  1. My question disappeared when I hit publish (maybe it will appear twice; sorry for that), but the essence was this: I understand that nematodes have chemo- and thermo-sensory neurons that help produce the dorsal and ventral motor bends. My reasoning on learning this was that it seems a bit like a "proprioceptive sensory input": the nematode "knows" that its body is oriented one way and needs to bend the other way to move forward. I find that really complex for so small a nervous system, with those "inner representations". So, is it possible that those movements are simply reflexes (as in a newborn infant, for example)?

    1. Always save your comment in a text file until you see it appear and stay.

      The problem is usually fixed if you log out of all your email accounts first, then log in only to the one you are using to post.

  2. In this talk you (Hendricks) tied consciousness to an experience of individuation and selfhood that you said could be produced by an organism's being able to differentiate between itself and its (necessary for this process) external surroundings. While I think it's a very interesting idea when it comes to selfhood and identity, I agree with Harnad that I don't see the necessary connection between these attributes and sentience.

    In response to questions about sentience, you said that it's easy to project human emotion onto other organisms. I was wondering what methodology you would use to distinguish projection from a genuine perception of other animals' feelings.

    A final note: I noticed choice and reflex kept being used as almost mutually exclusive categories, but don't they also often function as multiple ways, or paradigms, of describing the same phenomena without necessarily negating each other?

    1. Hi Jeremy, great questions. The relationship between a sense of self, consciousness, and sentience is not entirely clear to me. If we take sentience as the capacity to feel, the question is then "feel what?" My intuition is that just as animals inhabit vastly different umwelts or sensory worlds, they likely inhabit very different "feeling" worlds. Many of these aspects of sentience might be shared, while others are different and, just like senses we don't have, unimaginable to us.

      Back to my talk -- I think the kinds of internal representations or models I talked about are necessary for some form of conscious awareness, and therefore for sentience, but not sufficient.

      Projection/perception is a hard question. We rely on empathy to understand the feelings of other humans and animals. With other humans, we have good reasons to believe their feelings are broadly similar to our own... with animals, perhaps we can assume that more closely related ones are more similar, but we don't know. The neuroscience of consciousness is largely about observing correlations between large-scale brain activity and measurable or reportable aspects of consciousness.

      Choice/reflex. That's a nice way of putting it; yes, I think our "real" behaviors--walking down the street, catching a ball--are complex interactions between intentional and reflexive actions.

    2. This comment has been removed by the author.

    3. Sentience, consciousness, and feeling may very well have had the same referent in the individual minds of each person who tried to establish one of them as THE best word to use. Some went on to call it subjective experience, awareness, qualia, just making the dialogue more obscure. At the very end of this summer school, a lawyer even mentioned the word “soul” but then backtracked, saying it was just meant metaphorically... :) During one of his classes, Stevan listed over twenty of those synonyms to try to make us more aware of the linguistic chaos we’ve got to solve before we can start having an interesting conversation about what it is that we're hoping to understand.

      All these words lead equally back to the same annoying problems: perhaps most notably, having to figure out who, or what kind of “autonomous central processing unit”, that “conscious subject” “representing” the “felt experiences” could be. How does THAT thing work, and is *that* any different (if so, how?) from what would propel an autonomous robot that could do the same things (i.e., pass the Turing test)? It really seems that unless we give up on monism, we end up with infinite regress, homuncular non-explanations of what the cognizer is (versus its “physical” properties). I think Prof. Hendricks' approach is careful and wise in that he proposes we leave the infinitely complex aside and focus on the (hard enough as it is) inner workings of simpler beings. Interestingly, unlike so many who fail even to see it, he does acknowledge the "Hard Problem".

      In class, I argued that one possible root of this general “failing to see” among cognitive scientists could be that the language we have to describe what we study limits our scope. Our ancestors were hard-core dualists, and as hard as we try not to let it affect our reasoning abilities, their thinking shows through in our sentences... like this classic: “I said to myself”. What “self”? Who’s this “I”? What does the “my” in “myself” even refer to? Hopefully English, the language of science, can sooner rather than later evolve into something better suited to a monistic worldview... Until then, keeping this in mind (as much as is cognitively possible) may help avoid such dead-ends.

      Thanks Mr Hendricks for one of the most interesting talks (and talked about) of this summer school! Many of us will definitely be following the progress on nematodes.

  3. During the talk, I couldn't help but think that we should stop trying to reverse-engineer the brain if we can't even reverse-engineer a "simple" nematode; as Mr. Hendricks said, the most sophisticated software is nowhere near as complex as these worms. I'm fascinated by human cognition, but I feel that only by first reverse-engineering "less complex" forms of cognition could we stand a chance later on of answering the easy problem.

    1. Yes! I often say this to people to be provocative--if we can't figure out the worm, we can give up on everything else. Others have asked: what would it cost to "solve the worm"? Measure/describe everything we can about all of its genes, their expression, metabolism, nervous system, muscles, etc., under a range of realistic conditions. I don't know the answer, but consider that I don't think we've even "solved" single cells; it's a ways off!

  4. Some interesting links I’ve consulted to better understand or to elaborate on the presentation:
    https://www.ncbi.nlm.nih.gov/books/NBK116086/ (neurogenesis of Caenorhabditis elegans)
    http://www.wormatlas.org/index.html (a complete website about Caenorhabditis elegans)
    I’ve noticed a mistake just above on the website: it is the information for this morning’s presentation (Mr Ophir’s) that appears instead of Mr Hendricks’s. I didn’t have time to note the references for some of the studies (other than his own) that Mr Hendricks talked about, so I can’t find them. Would it be possible to have these references, including the studies on the underlying processes of schizophrenia? Thanks.

    1. A good review article on corollary discharge in general:
      https://www.ncbi.nlm.nih.gov/pubmed/18641666

      Papers on links between CD and schizophrenia:
      original 1978 proposal: https://www.ncbi.nlm.nih.gov/pubmed/734369
      more fleshed out: https://www.ncbi.nlm.nih.gov/pubmed/10448443

      experimental papers
      https://www.ncbi.nlm.nih.gov/pubmed/12027049
      https://www.ncbi.nlm.nih.gov/pubmed/18450174

    2. Thanks for the correction, Alexandra! Fixed.

    3. Thanks to both of you for the complementary articles (Mr Hendricks) and the correction (Dr Harnad)!

  5. Hello, I have a keen interest in molecular biology. That said, I wonder: are there studies of nematodes with a mutated RIA, looking at survival rates as a function of the movements they make?

    1. Hi Talar,

      We've done a number of manipulations on RIA, including killing it, mutating genes important to specific functions, and blocking its ability to release neurotransmitter. Animals survive fine (in the lab) but lose the ability to steer. (They also can no longer learn to avoid pathogenic bacteria, but that appears to be a separate function.) I think if animals were in a more challenging environment than the lab, loss of RIA would likely be fatal.

  6. It strikes me that such small creatures could have a certain conception of the self. It’s as if they have the same essential traits as more developed organisms, but in a more primordial form. Thus there’s no emergence of any abstract structure, only a bunch of traits set on a continuum, traits that are indistinctly present but that perform variably (of course the brain of the worm can’t do what the brain of the human does). It seems as if there aren’t different brains, each with its own essential possibilities, but rather THE brain in its general form, plus accidents (evolution/adaptation). Thus the brain seems to be effective because of a redundant form/configuration, independently of size or complexity.

    In this general form, the brain produces sensing, and you said that all sensing is active sensing. However, I doubt that all sensing is the same, which leads to the idea of “potential sensing”. From an evolutionary perspective, and even with your definition of what consciousness should be, it seems that some sensing is perhaps “in potentiality” (to use a useful Aristotelian term), given that all animal species are subject to contingency and accidents in their traits, but that these differences share a substantial form in which they all take root. Your presentation convinced me that “emergence”, as we sometimes use it to talk about consciousness for example, isn’t that clear a concept. In fact, it means pretty much nothing. But the idea of a continuum of sensing, or of consciousness, or of the sense of self, has much more explanatory power.

    1. Interesting thoughts! They touch on a lot of things I think about. Yes, I believe that evolutionary happenstance is a major player in how our (and other) brains work. As I like to put it, "Evolution is just a bunch of stuff that happened." It does not describe a system with formal properties. Likewise, our brains are a hopeless jumble of widgets, kludges, and constraints--nothing is optimized; it's all just good enough to hang together. It does seem like there is some tendency toward complexity; on the other hand, animals with much simpler nervous systems than ours still dominate the earth and likely will after we're gone.

      On emergence, I agree. Of course, more complex systems have properties that simpler (or constituent) systems don't, but that doesn't mean there has to be a discrete phase change at which a property "emerges." I think most of the brain traits we're interested in are continuous, quantitative traits.

  7. At the beginning of his talk, Michael Hendricks seemed to say that the brain tries to anticipate the world, and that the "input-output" model is too simplistic given the way cognition and action are always, in one way or another, bound together. So I was wondering whether the speaker would endorse the idea that the brain is in some sense a probabilistic prediction machine, as the neuroscientists Karl Friston and Anil Seth maintain (a theory they call "predictive processing"). I found that the beginning of the presentation seemed consistent with this idea, which similarly holds that the brain is always actively trying to predict what is out there in the world.

    1. That sounds right to me... unfortunately, despite considerable effort, I can never figure out what Friston is talking about! I do resist the idea of generalizing what the brain does to mathematical formalisms, though they are perhaps descriptive. The first reason is that I think it is less interesting to figure out what the brain is doing than to figure out how it does it. Maybe the brain does Bayesian computations; it can add 1+1; it can "do calculus" to catch a ball--none of that helps us figure out how the brain works. It is the implementation that, to me, is the subject of neuroscience. The flip side is that, of course, theory sometimes helps us guess what implementations we might be looking for, but from my perspective the vast majority of understanding in neuroscience has come "bottom up" from data and not "top down" from theory.

      The second reason I'm wary is that you don't evolve to implement a mathematical theorem; you evolve to survive. The fact that animal behavior matches a theoretical prediction only tells us something if it doesn't also match many alternative theories. This is very hard, because it is usually easy to come up with many theories that match behavioral data + noise (there's always noise). And survival is not just about optimal behavioral decision making (nothing in biology is optimal), but about thousands of constraints from anatomy, development, metabolism, genetics, cell biology, etc. The brain is situated within these constraints, and it shows. Our eye is wired backwards, we are stuck with legacy systems in the spinal cord, midbrain, etc., and all we can do is add more neurons in the suburbs of the frontal cortex...

      I'm on a tangent... my suspicion is that Friston is right on some level, but I am not sure that level has a lot to say about how the brain works.

    2. The "easy problem" is to explain causally how organisms are able to do all that they can do. Biophysical dynamics and algorithms are fine for explaining all of that causally.

      For those organisms for whom it feels like something to be able to do (some of) what they can do -- i.e., for those that have some sort of felt distinction between voluntary and involuntary movement -- the "hard problem" is to explain the causal role (if any) of that feeling.

      For this distinction to exist in nematodes, they would have to feel (something): Do they? (Btw, sensations too, can be felt, if there is feeling.)

    3. This comment has been removed by the author.

  8. Do your nematodes show a negative priming effect, or some kind of behavioral inhibitory persistence?

  9. An interesting essay about feeling/suffering from the perspective of a tiny brain, and the relative degree of consciousness:
    http://reducing-suffering.org/is-brain-size-morally-relevant/

  10. I was very surprised to learn that such a nervous system exists in a worm of such a small size, and that mechanisms of chemical, thermal and mechanical sensation, as well as of locomotion, are present. Caenorhabditis elegans can live up to 20 days, and some mutations can allow it to live up to 100 days. I would like to know whether a genetic map has been developed for this worm. Have the genes whose mutations are responsible for its longevity been identified?
    Thank you.

    1. Hi Amandine, you might find this article interesting. Although it deals mostly with expanding the lifespan of yeast through genetic modifications, if you search through the abstract (Ctrl+F) you will find no fewer than 8 references to nematodes... :)

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3696189/

  11. All the lectures of the 2018 summer school were very enriching, and I learned a lot of new things. This one truly fascinated me, in that I had no idea such a small thing could have such a complex nervous system. Even though they are smaller, they seem to have traits similar to those of larger organisms, but to lesser degrees. The fact that the worms can react to stimuli and make different choices says it all, for me. Why, then, deny them sentience?

  12. Thank you very much for the talk. During the evening workshop, Mr Birch argued that we should apply the precautionary principle to animal sentience, which would translate into setting a lower criterion for attributing sentience to a living organism. Would the presence of an integrating structure, such as the RIA in nematodes, be sufficient to attribute sentience to it?

  13. I found this lecture especially interesting because it approaches the question of sentience from a different point of view. Indeed, according to Mr. Hendricks, the central question is not whether animals should be considered sentient beings, but rather to what degree they are.

    Thus, we see that even the simplest behaviours produced by such primitive nervous systems require representations of the self (internal representation and self-monitoring mechanisms). We therefore tend to expect to find such representations everywhere, whether motor, representational or symbolic, because our brain is constantly creating copies, predictions and models of what it perceives, simply in order to make sense of the surrounding world and thus produce behaviours and reactions to those circumstances. It is this phenomenon, more complex and more integrative in humans, that could explain the difference in sentience compared with non-humans.

  14. The idea of degrees of sentience is raised here. It seems to me that sensitivity is relative to the individual; that is, it must be weighted so as to influence the organism in the right way. If an apple falls on a bird's head, the bird will probably feel fairly great suffering (the apple can do significant damage to a bird). If the same apple falls on an elephant's head, I am not sure the elephant would even be bothered.

    I do not think we can quantify degrees of sentience absolutely, only relative to the organism and with respect to certain things.

    Should we rely on the number of neurons when it comes to sentience? I do not think so, since we do not know how sentience is generated.


Note: Only a member of this blog may post a comment.