Hello hello hello. I was so excited and grateful for the opportunity to give a talk at this 2020 NeurIPS workshop on computational and biological reinforcement learning. My name is Shakir Mohamed, I use the pronouns he/they, and I am a researcher at DeepMind in London. I’ve had a curiosity about the role of pain in learning for many years, and this invitation was exactly the excuse I needed to both study and write about what is the title of this talk: pain and machine learning. This work is a collaboration with Daniel Ott, a final-year PhD candidate in the Department of History and Philosophy of Science at the University of Cambridge. Whether virtually or in person, never underestimate the power of workshops like this. Daniel and I met at a workshop very similar to this one, more than a year and a half ago, exchanged emails, and kept in touch. When the organisers sent their invitation to speak at this workshop, I immediately wrote to Daniel to see if he wanted to collaborate on this together. Many months later, here I am today: a dream come true, a new collaboration alive, a short paper written, and lots of new ideas and plans on the horizon. So thank you again to the organisers for the invitation, and especially to you for giving some of your watch-time to this talk.
Most people will have experienced and can describe the phenomenon of pain: the experience of stubbing your toe on the door, of injury after falling from a bike, of the body during recovery following surgery, or the pain that follows the loss of a limb.
Pain is a universal feature of human experience. Cross-culturally, pain is the primary reason people seek clinical intervention. The treatment of chronic pain costs European healthcare systems nearly €200 billion and in the United States up to $635 billion, consistently appearing as the most expensive ailment to treat in these biomedical systems. For patients, pain is often cited as the leading factor in diminished quality of life and the primary cause of lost years of productive life.
Given how far reaching pain is, it would seem that pain must have some important role in biological learning, so what I hope we can explore together is the idea that pain can inspire us and inform how we think about the task of developing machine learning systems.
I’d like to emphasise a bit more why I think the study of pain is important for machine learning. Let’s conceptually categorise the study of pain as either scientific pain, as it is studied scientifically, clinically and biologically, or folk pain, as it is understood amongst people in everyday experience. Both views are relevant to machine learning research. The scientific view, which is our main interest for this workshop, provides insights and inspirations into alternative approaches for sensing, learning and generalisation. The folk view leads to insights from how pain is understood across cultures and how normative views of pain perception are established and perpetuated across society. Although I won’t say any more about this, the folk view allows pain’s social and cultural dimensions to open up important questions about ethics, participation, respect, and equity.
There are many themes here to explore, of philosophy and taxonomy, of sensing and generalisation, of science and society, and all are emerging topics in the shifting landscape of our field for fair, responsible, ethical, decolonial machine learning research, and to which the study of pain can provide new insights.
Pain drives action and behaviour that seeks to maintain the integrity of the body when acting in the world. Pains can be short in duration, or appear throughout a person’s lifetime; pains are subjective, and can also exist without a physical stimulus. The complexity of pain makes it difficult to define in simple terms, especially since even this short exploration of pain’s characteristics shows that pain needs a definition that is multi-layered, and one not tied to physical stimuli.
The most widely-used definition from The International Association for the Study of Pain (IASP) describes pain as ‘an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage’.
There is surprisingly little consensus about what it is we are referring to when we speak of pain. A large note followed the clinical definition of pain I just quoted, as an addendum. It’s worthwhile reading this out loud.
‘Pain is always subjective; It is unquestionably a sensation in a part or parts of the body, but it is also always unpleasant and therefore also an emotional experience; Biologists recognize that those stimuli which cause pain are liable to damage tissue; Many people report pain in the absence of tissue damage or any likely pathophysiological cause and that usually this happens for psychological reasons; and there is usually no way to distinguish their experience from that due to tissue damage if we take the subjective report.’
These clauses result in compounding problems.
- For researchers, pain having both an identifiable biological cause, yet also existing as a purely subjective state, makes it difficult to quantify pain experiences. The measurement of pain is one of the field’s holy grails.
- For clinicians, having no way to distinguish between psychologically derived pain and physically derived pain makes effective treatment difficult.
- Finally, for patients, persistent pain conditions and the inability to adequately treat, or explain, their experiences often results in mental health comorbidities that compound their presenting illnesses.
This difficulty of definition makes this a good point to briefly review an important debate in the philosophy of pain. Already in the 1970s, people were asking whether we could ever create a machine that feels pain. In his famous paper ‘Why you can’t make a computer that feels pain’, Daniel Dennett poses a thought experiment about building a machine with a biologically-inspired pain system. His task was two-fold: to both develop a better type of computer system, and to better understand the underlying neurobiology of pain states. Given all the complexity we just encountered, pain states without injury, and injury without pain states, as well as other pain conditions brought on through pharmaceutical interventions, Dennett eventually concludes this task to be impossible. This is not because a machine could not feel pain, but because there is no one coherent pain concept to design into it. This starts the tradition known as pain eliminativism, which argues that because the concept of pain fails to refer to anything empirical, we would be better served by eliminating it: removing it from our usage and discovering other vocabularies to serve in its place. As we do more work on pain in machine learning, we too will enter this debate. Rather than accepting elimination, I think we should see this as an opportunity: pain highlights a discrepancy between pain perceptual states and environmental knowledge, and we can ‘rescue’ pain by explaining this discrepancy.
Models of Pain
Since I haven’t yet said anything about reinforcement learning, you might implicitly have in your mind that pain is simply another signal, and that this is how pain fits into what we already know. Although natural and relevant, I’d like to caution against assuming this default view, because it is one of many possible views. So let’s dig a bit deeper into these different views of pain. I’d like to explore three models of pain.
The first is the sensation model. In this model, pain simply is the painful sensation. Pain is not paired with any representation of a physical state or stimulus, but is seen to occur in correlation with them. This is simple and intuitive, but not a great model, for the simple reason that there seems to be no single, homogeneous pain presented clinically. The McGill Pain Questionnaire, the most prominent subjective pain scale used clinically, comprises a list of over 50 terms to identify salient features of one’s pain experience to aid in diagnostic practice, each informing a unique aetiology for pain. Were pain a single sensory feature, as this model proposes, such an extensive list of descriptors would not be necessary to differentiate both symptoms and underlying causes.
The second model is the representational model, and is one that will be more familiar to us. We can develop either a sensation, perception, or emotional theory of pain. What is common among all representational theories is the argument that pain is a representation, or abstraction, of a perceptual feature of one’s environment or body. But again, we have evidence that departs from this model. For example, phantom limb pain and referred pain, where the body location the pain is meant to represent either does not exist (because of amputation) or is only indirect (as with heart attacks), suggest that pain does not conclusively represent a physical feature of our body or environment.
The third model is the motivational model. Here, pain is a request or command to protect a part of your body. Again, this has its limitations. People commonly experience pain past the point where any behavioural intervention affects the initial cause. There are also examples of pains that are actively sought out, either through treatments such as acupuncture or in masochistic practices, which seem to push against the notion that pain is only a motivator against certain behaviours.
So now comes one of the key points I want to leave with you. I’d like to pose the possibly contentious claim, mimicking the contention by Ann-Sophie Barwich, that “theories of perception suffer from one fundamental flaw: they are theories of vision.” So, have we in machine learning fallen into the same trap, overfitting to our understanding of vision? It seems that we too have tied so much of how we think about learning to the detection of perceptual objects that are then bound to states, which then informs action. So our key proposal is to switch our frame of view to one of perception as situational assessment. By this we mean that instead of focussing on perception as identifying perceptual objects, and as working as separate systems, we can think of perception as always combining multiple sources of contextual information to form final perceptual states. This is not very different to how we think of perception algorithmically, except that this view is unique in that it allows for painful situations to be explained and allows us to provide an object-less account of sensory states. Like so many other experiences, pain is a multimodal approach to perception and can be involved in learning without requiring an association with discrete environmental knowledge.
So what we need is a distributed situational assessment model of pain to help us inform new ways of learning. Let’s briefly look at two proposals that will be natural to us in machine learning: pain as inference, and pain as reward.
The pain as inference model treats pain as a Bayesian application of learning, where an agent only has indirect knowledge of their internal states and environment and must infer final states on the basis of ambiguous and incomplete information. Pain is considered an active predictor of future bodily states, as well as an assessor of current afferent information, and the Bayesian updating transforms multimodal prior experiences into future assessments.
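To make this inference view a little more concrete, here is a minimal sketch, assuming (my simplification, not part of any established pain model) a conjugate Gaussian setting: the agent holds a prior belief over a scalar bodily state and updates it with a noisy afferent observation, so the posterior is a precision-weighted blend of prior expectation and incoming evidence.

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """One conjugate Gaussian update: combine a prior belief over a
    bodily state with a noisy afferent observation.

    The gain k weights the observation by its reliability relative
    to the prior, so unreliable afferent signals barely move belief."""
    k = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var
```

With an equally reliable prior and observation, the posterior sits halfway between them; a very noisy afferent channel leaves the prior almost unchanged, which is one way to read the model's claim that prior multimodal experience shapes future assessments.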
A second view takes up our first guess about pain: thinking of pain as an internal reward signal, or a general type of cumulant, used within a risk-averse intrinsic motivation system. This model integrates sensory and nociceptive information. Assessment is not about the specifics of a physical object within the environment, but about the situation through which the afferent information was acquired. The resultant pain signal is the primary teaching signal that drives the formation of values and behaviours.
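One illustrative way to get risk-aversion out of a reward-like pain signal is a risk-sensitive temporal-difference update in which negative prediction errors (outcomes worse than expected) are amplified and positive ones attenuated. This is a sketch of a standard risk-sensitive TD idea, not the specific model referred to in the talk; the parameter values are arbitrary.

```python
def risk_averse_td(values, s, s_next, reward, alpha=0.1, gamma=0.9, kappa=0.5):
    """One risk-sensitive TD(0) step on a dict of state values.

    kappa in (0, 1) skews learning: surprising bad outcomes (delta < 0)
    are weighted by (1 + kappa), surprising good ones by (1 - kappa),
    so learned values become pessimistic, i.e. risk-averse."""
    delta = reward + gamma * values[s_next] - values[s]
    weight = (1 + kappa) if delta < 0 else (1 - kappa)
    values[s] += alpha * weight * delta
    return values
```

Under this rule, a single painful outcome moves a state's value three times as far as an equally surprising rewarding one (with kappa = 0.5), which is one algorithmic reading of pain as a privileged, asymmetric teaching signal.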
Both these views are incomplete in some ways and don’t yet have all their details worked out. We can dig deeper by considering three areas of pain learning: single-exposure pain learning (what we usually call one-shot learning), the generalisability of pain experiences to novel stimuli (what we usually refer to as transfer learning), and the ability to socially transfer acquired pain knowledge (what we usually refer to as imitation learning). All three of these are learning abilities we have with other perceptual systems, but taken together, using a view of situational assessment, pain shows that we can provide a view on these approaches to learning that is not tied to object recognition in an environment.
So this is where a second key point I’d like to raise enters: that machine learning can be a tool with which to provide some of the details we are currently missing. I like to use Marr’s levels of analysis as a method for thinking through these problems. In the interest of time, let’s discuss just the case of single-exposure learning.
At the computational level: the ability to pair pain perception with harmful stimuli quickly and efficiently is one of the primary features of pain systems. There is recent evidence showing that Drosophila, using odour-and-food associative conditioning, demonstrate single-trial learning for both reward and punishment. For rewards, we see learning when an odour is paired with a fructose reward. For punishments, we also see learning when an odour is paired with a high-concentration salt solution. Interestingly, single-trial learning was not demonstrated for aversive quinine solutions.
The ability to learn from a single exposure to the salt solution, but not from quinine, is thought to result from the activation of a multimodal pain system through micro-wounds, emphasising modality-specific pain learning from punishment, rather than from any and all negative or aversive stimuli.
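A toy way to express this gating idea in code: association strength jumps to its maximum after a single odour pairing only when a multimodal ‘pain gate’ (standing in here for the micro-wound signal from high-salt exposure) is active; for merely unpleasant stimuli like quinine, learning stays incremental. The gate, the rates, and the binary framing are all illustrative assumptions of mine, not a model from the Drosophila literature.

```python
def associate(strength, stimulus_paired, pain_gate, slow_rate=0.05):
    """Update an odour->avoidance association in [0, 1].

    pain_gate=True models the multimodal pain signal: a single pairing
    saturates the association (one-shot learning). Without the gate,
    the association creeps up by a small fraction of the remaining gap."""
    if not stimulus_paired:
        return strength  # no pairing on this trial, nothing to learn
    if pain_gate:
        return 1.0  # single-trial learning when the pain system engages
    return strength + slow_rate * (1.0 - strength)  # slow, quinine-like learning
```

The point of the caricature is that the same associative machinery produces qualitatively different learning curves depending on whether the pain system is recruited, matching the salt-versus-quinine dissociation described above.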
As you might have experienced yourself, the rapid onset of pain learning following a single exposure also occurs in humans, and can provide lifelong behavioural adaptations. When rapid encoding of environmental stimuli occurs with pain, it typically also invokes a fear and emotional response, thereby enlisting structures of the memory system. This rapid-onset learning is an important survival mechanism and is a fundamental feature of pain behaviour across the animal kingdom.
Now the implementation level: the rapid encoding of learned pain experiences is accomplished in part through the recruitment of the hippocampus. Stimuli that present unexpectedly with pain result in larger activation in the hippocampus, amongst other regions. This hippocampal activation is thought to underscore its role as a situational comparator, where actual states are compared to expected states, with errors (or novel stimuli) resulting in encoded memory formation. Also related to pain learning are an aversive or fear circuit of the amygdala-striatal system, and the lateralised consummatory system. This all points to a detailed knowledge base of pain neurobiology that we have available to work from.
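The comparator role described here can be caricatured as a simple threshold rule: a large enough mismatch between expected and actual states, co-occurring with pain, triggers one-shot memory encoding. Everything in this sketch, from the scalar state encoding to the threshold value, is an illustrative assumption rather than a claim about hippocampal computation.

```python
def should_encode(expected_state, actual_state, pain_present, threshold=0.5):
    """Hippocampal-comparator caricature: compare expected and actual
    states; a surprising mismatch that coincides with pain signals
    that this situation should be committed to memory."""
    prediction_error = abs(actual_state - expected_state)
    return prediction_error > threshold and pain_present
```

Expected stimuli, even painful ones, fall below threshold and are not re-encoded; it is the conjunction of surprise and pain that drives the rapid memory formation described above.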
Yet, while we have made progress explaining the computational and the implementation levels, I’ve not described the algorithmic level, because what constitutes the algorithmic level for these pain problems is still missing. Helping to fill in this missing algorithmic level is where I think we have many contributions from machine learning to make.
At this point, we’ve reached the end of our time together. If I’ve accomplished my aims, then you will move on from this video with two takeaways. Firstly, you will have a brief view into the world of pain research and the many dimensions it takes, whether in technical, social, or sociotechnical domains. And secondly, you will see the opportunity for new research in both biological and computational learning by filling in the missing algorithmic level, using the idea of situational assessment as a guide.
The human pain system is a highly refined, skilled learning apparatus essential for our survival, as well as a mediator of many complex social and cultural traditions. People lacking pain systems often live tragically short lives, unable to learn vital protective information about their environment. And the malfunctioning of the pain system is one of the primary healthcare burdens felt worldwide. Despite the importance of pain to human lives, machine learning has not yet used pain as a source of algorithmic inspiration, despite the long tradition of mutual exchange between machine learning and neuroscience. This is an idea I’ve been curious to explore for many years, and I’m so grateful to be able to share this initial exploration with you. And I hope to soon learn from how you might interpret the role of pain in machine learning. Thank you.
To close, please join with me in reading a poem. This poem gives a sense of how deeply mystical pain is in its many facets, and is a way to express the ideas we’ve been exploring a little differently. It is perhaps one of the most widely recognised poems about pain.
On Pain by Kahlil Gibran
And a woman spoke, saying, Tell us of Pain.
And he said:
Your pain is the breaking of the shell that encloses your understanding.
Even as the stone of the fruit must break, that its heart may stand in the sun, so must you know pain.
And could you keep your heart in wonder at the daily miracles of your life your pain would not seem less wondrous than your joy;
And you would accept the seasons of your heart, even as you have always accepted the seasons that pass over your fields.
And you would watch with serenity through the winters of your grief.
Much of your pain is self-chosen.
It is the bitter potion by which the physician within you heals your sick self.
Therefore trust the physician, and drink his remedy in silence and tranquility:
For his hand, though heavy and hard, is guided by the tender hand of the Unseen,
And the cup he brings, though it burn your lips, has been fashioned of the clay which the Potter has moistened with His own sacred tears.
As a short postscript here are a few resources that might be of relevance:
- The paper we wrote as a companion to this talk called Pain and Machine Learning with my amazing collaborator Daniel Ott, and with many references.
- The Routledge Handbook of Philosophy of Pain is an edited volume covering pain across disciplines. You’ll recognise the missing algorithmic level from the contents.
- Daniel Dennett’s 1978 paper, ‘Why you can’t make a computer that feels pain’.
- A paper on opportunities to support clinical research called Machine learning in pain research.
- A paper taking the opposite direction of pain for learning, Pain: A precision signal for RL and Control.
Thank you again.