Did you know that your parents’ genes might be competing inside you? In my article for the October issue of The Psychologist, a magazine published by the British Psychological Society, I discuss this fascinating phenomenon and its potential origins in non-monogamous mating behaviours. Check out my piece: The genetic battle of the sexes. I would love to hear your thoughts!
At this point, you might already be familiar with the visual illusion which, according to Time Magazine, is here to break the internet. Exactly 12 small black circles are scattered around this image – but the tantalising bit is that you can only see a few of them at any one time. As you move your eyes across the picture, it might strike you that circles you were just looking at suddenly vanish from sight, replaced by a few new ones that were previously invisible. This so-called extinction illusion, which was in fact first published in a research journal back in 2000, gained immense popularity last Sunday when the Japanese psychologist Akiyoshi Kitaoka shared it on his Facebook page.
While visual illusions are fascinating to look at, what I find most interesting about them is that they only exist in the first place because they are an artefact of the way our visual system is organised. Thus, behind every great visual illusion is an even greater neurobiological reason why we see it the way we do. So what is it that prevents us from seeing more than a few dots in the extinction illusion? To begin understanding the basics, let’s take a look at how information is transmitted from the eye to the brain.
Lining the back of our eye is the retina – a thin sheet of tightly packed cells, called photoreceptors. Whenever these cells capture photons of light that enter the eye, they send continuous chemical signals to receiving cells in the retina in order to inform them that they have detected something in the tiny area of the visual field that is under their surveillance.
While it might be intuitive to assume that the brain receives constant pixel-by-pixel updates of the light patterns that hit the eye as we scan our environments, this is not the case. If it were, then the eye would just be creating an exact replica of the external world and transmitting a photograph to the brain. We would then need to ask ourselves – is there a little man inside our visual brain, keeping an eye on the constant flux of images being projected from the retina and deciding how to interpret them? This absurd idea is something that the American philosopher Daniel Dennett refers to as the problem of the Cartesian Theatre (discussed briefly in this YouTube video).
On a fundamental level, the visual system cannot possibly be of any use to us if it merely reconstructs the light patterns that fall on the eye, for the perusal of the brain. It needs to use these patterns to extract information about the external world, which requires the initial light-detection signals from the photoreceptors to undergo some transformations before reaching the brain. The illustration below explains one of the primary principles of retinal processing that holds the key to the extinction illusion.
So what is the purpose of lateral inhibition in the retina? Let’s consider what kind of stimuli are optimal for activating this bipolar cell. If a diffuse light uniformly activates photoreceptors both in the centre and the surround of this bipolar cell’s receptive field, then the outcome will be a weak signal (since the positive effect of the central inputs will be largely cancelled out by the negative effect of inputs from the surround). On the other hand, the bipolar neuron is more likely to become excited and send a robust signal if it receives a strong input from photoreceptors in the centre of its receptive field and a weak signal from the surrounding area. And there you have it… a dot on a contrasting background is just perfect. Lateral inhibition means that retinal neurons like bipolar cells prefer receiving inputs that contain contrast, which is why it’s thought to be one of the most fundamental mechanisms that enables our visual system to be sensitive to dots and edges (which usually sit at points of change between light and dark regions). While the dots found at the intersection points of the extinction illusion are technically the ideal stimulus for our retinas, one other property of the visual system prevents all the dots scattered across the image from being simultaneously visible.
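The centre–surround arithmetic described above can be sketched as a toy calculation. This is purely illustrative – the weights and the simple subtraction are assumptions for demonstration, not physiological values:

```python
# Toy model of lateral inhibition: a cell's response is excitation from
# photoreceptors in the centre of its receptive field minus inhibition
# contributed by photoreceptors in the surround.
# (Illustrative weights only, not physiological values.)

def cell_response(centre_inputs, surround_inputs, inhibition=0.5):
    """Centre excites, surround inhibits (lateral inhibition)."""
    excitation = sum(centre_inputs)
    suppression = inhibition * sum(surround_inputs)
    return excitation - suppression

# Diffuse light: centre and surround equally active -> signals cancel out.
uniform = cell_response(centre_inputs=[1.0, 1.0],
                        surround_inputs=[1.0, 1.0, 1.0, 1.0])

# A bright dot confined to the centre, quiet surround -> robust signal.
# (For the illusion's black dots on a light background, an OFF-centre
# cell does the equivalent job with the signs flipped.)
dot = cell_response(centre_inputs=[1.0, 1.0],
                    surround_inputs=[0.0, 0.0, 0.0, 0.0])

print(uniform)  # 0.0 -> diffuse light largely cancels out
print(dot)      # 2.0 -> contrast drives a strong response
```

The point of the sketch is simply that the same amount of light in the centre produces a strong or weak signal depending on what the surround is doing – contrast, not brightness, is what these cells report.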
That property is convergence of information. A bipolar cell collects its signals from multiple photoreceptors, and researchers know that how widely the cell extends its network to collect information differs massively between different parts of the retina. As an example, if we were to examine the centre of the human retina, we would find that a bipolar cell often receives its central input from just one photoreceptor, and its surrounding input from a couple more. In contrast, a bipolar cell in the more peripheral regions of the retina might collect signals from 25 photoreceptors before joining signals with 5000 other bipolar cells when communicating with the retinal ganglion cell (which, in turn, transmits information to the brain). This means that a single retinal ganglion cell in the periphery of our retina can collect information from up to 75,000 photoreceptors! This allows it to keep watch over a much larger area of the visual world than cells that pool information from a much smaller number of receptors (hence, cells in the periphery are said to have large receptive fields). A further perk of such great convergence is that these cells are exquisitely sensitive to light, since they can add up weak signals from thousands of receptors. This is primarily why astronomers have historically preferred to look for faint stars by directing their gaze slightly away from the star, so that its light falls on the periphery of the retina – a technique known as averted vision.
The downside of the vast convergence of photoreceptor signals that takes place in the periphery of the retina is exactly why the extinction illusion works the way it does. As neurons collect information from more photoreceptors and grow larger receptive fields, they become less capable of resolving small details. Thus, the brain is unlikely to be informed of the presence of small objects in the visual periphery – something that is responsible for the fact that the dots you see in the extinction illusion are never far away from the centre of your gaze.
To better understand this, take a look at the illustrations below and imagine that you are the visual brain whose task is to read and interpret the signals being transmitted to you from the retina as the eye is scanning the extinction illusion image. First, let’s examine the signals produced by a patch of retina close to the centre of the eye, where bipolar and retinal ganglion cells pool information from very small numbers of receptors and thus have small receptive fields (the centres and surrounds of their receptive fields are outlined by solid circles).
Here, you can see that when a small black dot on a lighter background happens to fall on the receptive field centre of one of these cells, it provokes a robust excitatory signal.
If this dot happens to be located elsewhere (below), then it will likely stimulate the receptive field of another cell that keeps a slightly different area of the visual world under surveillance. As the brain reads the signals arriving from this patch of the retina in these two cases, it has reason to believe in the existence of two small stimuli in different locations. Thus, it provides you with a conscious experience of these dots as distinct entities.
On the other hand, let’s briefly consider the types of signals the visual brain would receive from a more peripheral region of the retina, where retinal neurons collect their inputs from vast numbers of photoreceptors. Here, the centre of one such cell’s receptive field is outlined with a red circle (the surround extends far beyond the image). When a small dot falls within the centre of this cell’s large receptive field, it will likely provoke a signal.
However, the vast size of this receptive field means that if another dot were to occur in the vicinity of the first one (below), there is a high probability that it would also stimulate the same receptive field’s centre.
In the case of these two dots, the visual brain is receiving the same signal from the same retinal ganglion cell. This means that the information it has access to does not allow it to reasonably infer how many dots there are, or where exactly they sit within that area of the visual world. Thus, as receptive fields of neurons in the retina grow larger, their signals give the brain less and less certainty as to what exactly is happening and where. When such certainty about visual events is lacking, there is no reason for them to arise in our conscious experience. Instead, the brain appears to provide us with an experiential ‘filler’ (you don’t exactly go around seeing ‘uncertainty’) and some level of ignorance regarding just how poor our spatial vision is in the eye’s periphery.
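This loss of resolution can be sketched in a few lines of code. The one-dimensional ‘retina’ and the binary any-dot-in-my-pool rule below are simplifying assumptions (real ganglion cells sum graded signals), but they capture why a large pool cannot tell one dot from two:

```python
# Toy sketch of why large receptive fields cannot resolve nearby dots.
# A "retina" is a list of photoreceptor activations (1 = dot present).
# Each cell reports whether ANY photoreceptor in its pool detected a dot,
# so pooling many receptors collapses one dot and two dots into the same
# signal. (Purely illustrative, not a physiological model.)

def responses(retina, pool_size):
    """Split the retina into receptive fields of `pool_size` receptors
    and return one binary signal per cell."""
    return [int(any(retina[i:i + pool_size]))
            for i in range(0, len(retina), pool_size)]

one_dot  = [0, 0, 1, 0, 0, 0, 0, 0]
two_dots = [0, 0, 1, 0, 1, 0, 0, 0]

# Central retina: tiny pools -> the two patterns produce different signals,
# so the brain can infer two distinct dots.
print(responses(one_dot, 1))   # [0, 0, 1, 0, 0, 0, 0, 0]
print(responses(two_dots, 1))  # [0, 0, 1, 0, 1, 0, 0, 0]

# Peripheral retina: one big pool -> both patterns look identical
# to the brain, so the second dot is invisible as a separate object.
print(responses(one_dot, 8))   # [1]
print(responses(two_dots, 8))  # [1]
```

Nothing about the dots changes between the two cases; only the pooling does, and with it the brain’s ability to tell them apart.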
Thus, as you move your eyes across the extinction illusion, small black dots dip in and out of your conscious awareness as the periphery of your retina quickly becomes ‘blind’ to the existence of those dots that might have been visible to you when you were gazing directly at them.
* The person behind this article and its illustrations is Lev Tankelevitch, a current Neuroscience PhD student at Oxford University who is really excited about science communication. References to the scientific studies that back up his points are linked in the text; clicking them opens the original reports in a separate tab.
When I was twelve I visited my grandmother in Germany. She lived in Konstanz, a small university city lying along a picturesque bank of Lake Constance, a name that coincidentally alludes to the point of this story. On one of the islands dotting this lake, we happened upon a quaint restaurant offering variations on the traditional German meat-and-potatoes fare. Although I foolishly did not remember the name of the restaurant, the meal left such an indelible impression on me that when I returned to Konstanz six years later, I knew that I had to find this place at all costs. When I stepped onto the island again, I began to search the narrow cobbled streets. To my surprise, I managed to find the place rather quickly, using a mysteriously intuitive sense of direction which I had somehow retained from my initial visit, years earlier. What did this navigational sense really consist of, and how did I learn it?
In an attempt to answer these questions, one set of researchers based in Paris staged a fascinating, Inception-like version of my own learning experience. Instead of the island on Lake Constance, they used a circular open field, about one metre in diameter. Instead of the hearty German grub, they provided electrical brain stimulation. Likewise, the protagonists in their story were not hungry adolescent boys, but rather a group of mice. The heart of their study, though, lies in the idea that the mice likely had no awareness of their own learning process, for the simple fact that most of it occurred as they soundly slept in their cages.
There is a seahorse-shaped structure lodged deep in the brain of mice and humans alike called the hippocampus. Brain cells, or neurons, in the hippocampus are thought to represent a kind of map of the space around us. Work conducted in the 1970s by John O’Keefe (which earned him a share of the 2014 Nobel Prize in Physiology or Medicine, and has been built upon by countless others since) has shown that when a mouse is in a particular location, certain neurons get excited and fire away. That is, each location in the surrounding area corresponds to a set of neurons which represent that location, as a sort of coordinate. Considered together, these “place cells” are thought to make up a map of the environment, presumably allowing the animal to navigate to the mouse equivalent of homely German restaurants.
Although it is accepted that place cells form a map of space, it is less clear whether mice actually use these maps to navigate and how they might do this. This uncertainty arises because previous studies have looked at place cells at the same time as mice are navigating their environments. This is a paradoxically frustrating situation: a mouse’s current location always matches the firing of the associated place cells. This is what’s expected, of course, but it also leads to an alternative and relatively more boring explanation that place cells reflect a simple “you are here” signal, rather than a full map that animals actively use to find their way around. It is the age-old problem haunting science: correlation is not the same as causation. To determine that place cell maps have a causal influence on mouse navigation, it is necessary to first interfere with place cell activity, and then see if navigation changes accordingly. To do this, the Paris team decided to interfere with place cells as the mice slept, a time during which they would certainly not be navigating their environments – except maybe in their dreams.
This last point is critical to the Paris study. Earlier work has shown that place cells spontaneously activate during sleep in a rhythmic pattern. The presumption is that they are “replaying” the spatial map that the animal has learned while awake, and thereby solidifying it for future visits. The team in Paris manipulated this replay process in real time as the mice slept, and then observed whether this would have any effect on where the mice navigated once they awoke. In essence, they wanted to test if this intervention could implant into a mouse’s mind a ‘memory’ of visiting a particular place.
But first, they had to find some place cells to work with. To do this, they recorded from cells in the hippocampus as mice ran around a small circular field, and observed the bustling neuronal activity. If a cell reliably fired when the mouse was in a particular place in the circular field, the team could be confident that this was a place cell which represented that particular location.
Next, they had to find a way to reliably influence the carefully orchestrated replay process that these place cells would engage in when the animals later fell asleep. Since it is unclear exactly how place cells work together during replay, directly tinkering with them may be too messy an affair, and could lead to uninterpretable results. Instead, the Paris team made use of the brain’s natural teacher and reinforcer – the dopamine system – to bias the place cell map toward an arbitrary but real location (that is, to “convince” these neurons that one place was special). It is as if you became certain, overnight, that precisely down the street there exists a fantastical German restaurant serving the best food you’ve never tried.
To achieve this, the mice were implanted with stimulating electrodes in a bundle of nerve fibres in the brain creatively named the medial forebrain bundle, or MFB for short. Electrical stimulation of the MFB causes the release of dopamine. Dopamine is often simplistically and misleadingly called the brain’s “pleasure” chemical, but what it really does, at least in part, can be more accurately described as reinforcing behaviour. Events or actions that lead to rewarding outcomes (like eating food!) cause the release of dopamine, which acts to reinforce those events or actions in the brain and thereby ensures that they are marked as important for the future. Here’s a classic demonstration from the 1950s: if given a chance to press a button which triggers MFB stimulation, and therefore dopamine release, animals learn to press this button without end. What this illustrates is that a rewarding outcome like a delicious German meal is not actually necessary to learn anything, as long as you have the associated release of dopamine.
In the current study, mice didn’t have to press a button to trigger stimulation of the MFB. Instead, stimulation was dependent on the spontaneous activation of the place cells that the researchers had selected earlier, when the mice were exploring their circular field. That is, as the mice slept, every time a specific place cell fired, stimulation of the MFB would follow immediately. It’s as if, while flipping through a photo album on repeat, you highlighted one particular photo over and over again, marking it as especially memorable.
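The closed-loop logic of this experiment can be sketched roughly as follows. The event stream, cell names, and timings here are hypothetical illustrations, not the team’s actual recording setup:

```python
# Minimal sketch of closed-loop, spike-triggered stimulation:
# watch the stream of spikes recorded during sleep and trigger MFB
# stimulation whenever the chosen place cell fires.
# (Hypothetical data format; real systems do this in hardware/real time.)

def run_closed_loop(spike_events, target_cell):
    """spike_events: list of (time_ms, cell_id) tuples, in time order.
    Returns the times at which stimulation would be delivered."""
    stimulation_times = []
    for time_ms, cell_id in spike_events:
        if cell_id == target_cell:
            # In the real experiment this would command the stimulator;
            # here we simply log when stimulation would occur.
            stimulation_times.append(time_ms)
    return stimulation_times

# A toy night of sleep: spikes from three cells, one of which ("cell_A")
# represents the location the researchers chose to reinforce.
sleep_spikes = [(10, "cell_A"), (25, "cell_B"), (40, "cell_A"), (55, "cell_C")]
print(run_closed_loop(sleep_spikes, target_cell="cell_A"))  # [10, 40]
```

The key design point, as described above, is that the contingency runs on the cell’s spontaneous replay rather than on anything the animal does: only the chosen place cell’s spikes are ever paired with stimulation.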
Now, if a place cell represents a particular location in a mouse’s environment, and its replay of this particular spot during sleep can be reinforced through stimulation of the MFB, then perhaps it is possible that mice would come to prefer this particular location once they wake up and return to the field? That is exactly what the Paris team found. Before the stimulation trick, the mice explored each location of the field with equal curiosity or boredom. They showed no preference for any given location, as they had no good reason to do so. But after pairing the place cell activity during sleep with MFB stimulation, the mice immediately darted for the location which was represented by the chosen place cell, and spent more time there than anywhere else in the field. Something in each mouse’s brain, presumably some communication between its place cells and dopamine system, was telling it that this location was especially important. The team in Paris had not only demonstrated that mice indeed rely on their place cell maps to navigate, but had also provided further evidence that replay during sleep is important for reviewing and cementing these maps for future use.
Of course, after a few trips to that memorable location yielded neither a good German restaurant nor any further dopamine reinforcement, the mice quickly abandoned their preference. This only goes to further demonstrate the continuously evolving nature of place cell maps. And this is what makes my own memory of that restaurant’s location so fascinating: I had learned it after a single visit, and retained it for years. What enables such persistence? Perhaps it is the fact that the continued and detailed existence of our memories so often relies on external memory storage, in the form of photos and stories that we share with others.
* References to the scientific studies that back up my points are linked in the text; clicking them opens the original reports in a separate tab.
In a recent US survey conducted by the Substance Abuse and Mental Health Services Administration, roughly 16% of adults reported some form of drug use. One thing that is quite clear, if we compare these figures to rates of psychiatric diagnoses, is that only a portion of those who have experience with drugs go on to develop full-blown addiction. This raises the question – why is that? Is it possible that some people really are more immune than others to the risks of substance abuse?
The idea that some individuals have addictive personalities harks back to old-school personality psychology, which classically considers those with impulsive traits and sensation-seeking tendencies (e.g. interests in activities like extreme outdoor sports or swinger parties) to be at greater risk of developing drug addiction than other individuals. Currently, we have good reason to believe that this idea might be grounded in some biological truth – one that doesn’t only apply to humans. Already two decades ago, researchers studying the behaviours of rats observed remarkable variability in their tendency to become addicted to psychostimulants like methamphetamine and cocaine. It turned out that, given equal and unrestricted access to an addictive drug, not all rats were equally likely to develop a compulsive habit. Interestingly, a rat’s level of trait impulsivity (that is, a life-long tendency rather than a temporary state of impulsiveness) has been found to predict how likely the animal is to develop an addiction if exposed to cocaine. This raises the question of whether the ‘addictive personality’ is rooted in biology, and whether rats can genuinely shed some light on it.
While the idea that rats have something akin to elements of personality might be surprising, it probably shouldn’t be. It is to be expected that natural variation in the genetic makeup of complex animals causes the brains of different individuals to develop in slightly diverse ways, making them produce behaviours that might be characteristic of some individuals, but not others. Ultimately, this means that some individuals may be biologically predisposed to be more anxious, risk-taking, aggressive, or perhaps addiction-prone. Evidence from research labs tells us that it is indeed possible to produce rats with what appear to be ‘addictive personalities’ – something that was used in a recently published set of experiments by researchers from the University of Michigan.
Their study looked into the genetic factors that might contribute to some rats being at greater risk of developing full-blown cocaine addiction. They started by breeding two colonies of rats that were expected to differ in their natural tendency to become addicted to drugs like cocaine. Specifically, rats in one colony were bred to be highly active and explorative when placed in new environments, while rats from the other colony were less explorative than average. Since high sensation-seeking is a personality trait classically associated with an increased likelihood of becoming drug-addicted, the logic behind breeding these colonies was to create two groups of rats that were, to some extent, ‘biologically destined’ to either struggle with drug addiction or be more resistant to it. This was a reasonable possibility, as the same researchers had previously found that the highly explorative rats were more likely than the less explorative ones to start taking cocaine in the first place. The fact that these rats might also have inbuilt differences in their long-term addictiveness could now be used to examine aspects of their biology that might underpin these differences. In other words – if some rats are more likely than others to become hooked on drugs like cocaine, what might be so special about them?
Measuring addictive behaviours in rats
In this experiment, both rat colonies underwent extensive training during which they learned that if they used their noses to poke a particular spot in their chamber, they would get an instantaneous infusion of cocaine into the bloodstream. During the first 20 seconds that the rats were experiencing their high, the spot in the wall that they had poked would light up – this would become relevant later when the researchers tried to assess addictiveness. It turned out that on multiple behavioural measures, the highly explorative rats indeed showed substantially stronger signs of having long-term addictive tendencies. How was this tested?
Firstly, the researchers examined how desperate the rats became at seeking out cocaine even when it was no longer an option for them. Throughout the course of training, the researchers switched on a ‘house light’ on the top of the chamber to indicate to the rats that they were free to approach the spot that resulted in cocaine delivery. On the occasions that this light was switched off, cocaine became unavailable no matter how vigorously the rats would poke the spot – something that they managed to learn quite quickly, as they essentially stopped approaching the area.
This gradually became an issue for the highly explorative rats. As the experiment progressed, these animals became less and less capable of refraining from poking the spot even when the light was switched off. I should note that these rats had initially learned perfectly well that cocaine was unavailable on these occasions, since earlier in the experiment they stopped paying attention to the spot for the duration of the light-off period. But the longer they had spent receiving the drug, the more their initial self-restraint broke down. Interestingly, only rats from this colony seemed to make this transition to compulsive cocaine use, as the non-explorative rats were consistently able to lay off the drug-delivery spot during the light-off period.
Next, the researchers found that the highly explorative rats were also abnormally sensitive to relapse after a period of abstinence. This was tested by forcing rats from both colonies to go cold-turkey for a month, and subsequently placing them in the same chamber where they would normally be given cocaine. To make this environment all the more similar to times when the rats were free to obtain the drug, the house-light was switched on (as if signalling that cocaine was available). Furthermore, every time the rats used their noses to poke the spot in the wall that used to trigger cocaine infusion, that spot would light up as it had always done during training.
To capture the essence of the situation in fewer words: everything that the rats were experiencing during this test was the same as during their cocaine-taking days, apart from the fact that performing the action that normally resulted in cocaine delivery was now entirely futile. This test of so-called cue-induced reinstatement is quite important in drug addiction research, because it is thought to capture a core feature of drug addiction – namely our tendency to be triggered into old habits by elements of our environment (or ‘cues’) that were previously associated with taking drugs. This is perhaps the single most common problem for recovering drug addicts coming out of rehab, as going back to the same group of friends, or to the same social venues with the same music and smells as before, often makes the temptation to go back to taking drugs too strong to resist. Indeed, one large-scale study has found that roughly a quarter of rehab patients continue taking cocaine on a weekly basis following treatment, while a slightly smaller fraction of patients develop habits severe enough to drive them back into another rehab programme. Thus, it’s reasonable to suppose that measuring rat behaviour during the cue-induced reinstatement test could give us some insights into behaviours that have something in common with recovering cocaine addicts going back to their old environment.
Under these conditions, the highly explorative rats turned out to be substantially more susceptible to relapse than rats from the other colony. This was evident in the fact that these rats reverted back to poking the spot normally associated with cocaine roughly four times more often than the other rats, even though their attempts were clearly futile.
Measuring gene activity in the brains of addiction-prone rats
All in all, the behavioural profile of the highly explorative rats suggested that they had something akin to an addictive personality. For the sake of simplicity, I will now refer to this group of rats as addiction-prone, and the other group as addiction-resistant. To understand this phenomenon from a biological perspective, the researchers examined the brains of addiction-prone and addiction-resistant rats who either never got to experience cocaine, or consumed it for a prolonged period of time. The experimenters particularly focused on a cluster of neurons called the nucleus accumbens – an essential component of the brain’s reward system that receives a large share of the dopamine released in response to rewarding events such as sugar, money, sex, and drugs like cocaine and amphetamines. Researchers have known for decades that this neurochemical plays a critical role in triggering the sensation of ‘wanting’ something, even in the absence of enjoyment. Given this evidence, it’s to be expected that the dopamine system also has something to do with the development of the pathological ‘wanting’ that characterises addiction. Thus, the researchers looked into whether brain cells from the nucleus accumbens of the relatively addiction-prone rats might differ from those of the more addiction-resistant animals in terms of the activity of certain genes.
How do we measure gene activity? One commonly used method involves looking at the amount of messenger RNA (mRNA) that can be detected in a brain region of interest. This is because whenever a gene is being actively used to produce protein, its DNA is first used to make strands of a molecule called mRNA, which acts as a ‘portable script’ containing the instructions, or code, for the production of a specific protein (explanation below). It is thus often assumed that the more copies of a particular mRNA we find inside a cell, the more actively that gene is being used to make protein.
Using this mRNA-based measure, the experimenters found that two genes in particular appeared to show different levels of activity in the studied part of the brain’s reward system in addiction-prone rats compared to their more addiction-resistant peers.
Clues from the dopamine system
Firstly, addictive rats with no experience of cocaine turned out to have significantly weaker activity of a gene that is used to produce a particular type of dopamine receptor, known as the D2 receptor. We know that receptors are tiny proteins that pepper the membranes of brain cells and work by capturing molecules of neurochemicals that are being released by other neurons, and triggering reactions to these signals. Receptors are the foundation of the ability of brain cells to communicate with each other through chemical signals, and sometimes they are also found in locations that allow them to have slightly different functions. This is the case for D2 receptors, which are often anchored to the endings of dopamine-releasing cells. There, they keep track of levels of dopamine in their surroundings and accordingly use this information to suppress further dopamine release from the neuron and prevent neurochemical build-up.
The fact that the researchers observed low levels of D2 gene activity in the nucleus accumbens of addiction-prone rats, before they had ever experienced cocaine, tells us that the reward system of their brain is likely producing fewer of these receptors compared to the more addiction-resistant rats. As a consequence, the brains of rats susceptible to addiction might be intrinsically less capable of keeping a lid on the release of dopamine that is triggered by substances like cocaine, which allows the neurochemical to reach high levels and perhaps to have a greater influence on the brain cells that it targets. The fact that this was observed in animals who hadn’t even experienced cocaine might go a long way towards explaining why these animals are inherently more sensitive to the addictive effects of this drug (which is widely known to trigger dopamine release). Furthermore, this abnormality of the dopamine system might also partially lie at the root of why these rats have explorative personalities, as you might expect brains that are more sensitive to rewards to become more active at pursuing exciting and rewarding experiences.
In a further turn of events, the researchers uncovered something interesting about the histone proteins that ‘package’ the DNA in some parts of the D2 gene in the brains of addiction-prone rats. That is, these proteins carried a particular chemical mark (H3K9me3) that is considered to permanently suppress gene activity. The mark achieves this by causing the proteins to wrap around the DNA so tightly that the gene becomes inaccessible to the components of the cell that are required to ‘read’ it out and produce protein (explanation below).
As a result, the presence of this suppressive mark in parts of the D2 gene in addiction-prone rats means that this gene is essentially unreadable, forced into a state of long-term hibernation (which explains why this gene was found to have low levels of activity). The fact that this is found in rats with no actual experience of cocaine suggests that this situation is already in place early on in life, which might be one factor predisposing these animals to respond to cocaine differently from their addiction-resistant peers. Interestingly enough, rats with the greatest attachment between the suppressive chemical mark and the D2 gene turned out to be the ones who were most likely to relapse after a period of abstinence from cocaine.
Links to a gene… with links to anxiety
The experiment also revealed that the brains of addiction-prone rats have substantially higher activity of the gene coding for the protein FGF2 (fibroblast growth factor 2). This effect was apparent both in individuals that had never experienced cocaine and in those with extensive exposure, raising the possibility that high activity of the FGF2 gene is an inherent characteristic of brains that are more likely to become addicted.
What do we know about the FGF2 protein? It is not obvious why it would be associated with cocaine addiction, since it appears to be generally involved in the proliferation (multiplication) and maturation of brain cells. What is perhaps more intriguing about the involvement of this gene is that it is also associated with anxiety disorders. Specifically, researchers have found that rats with heightened levels of anxiety show weaker activity of the FGF2 gene, and that this anxious phenotype can be reversed by treating rat pups with high doses of FGF2 early in life. The same treatment, which results in higher levels of FGF2, also enhances the likelihood that rats given the opportunity to take cocaine will go on to develop addiction. It appears that naturally high levels of FGF2 gene activity in highly explorative rats might predispose them to addiction from the very beginning of their lives.
While I admit this to be quite speculative, it’s tempting to consider the intriguing possibility that anxiety and proneness to addiction might in some ways be opposing ends of a biological spectrum. It is possible that those genetic factors which predispose individuals to high levels of anxiety also serve to protect them from the risks of drug addiction. Irrespective of whether this is true, the evidence is quite clear that some animals, including humans, really might be inherently more likely to develop substance addiction than others, and that genetic factors have a role to play in this story.
* Scientific study references that back up my points can be clicked on in the text to open the original reports in a separate tab. They can also be found at the bottom of the article.

Many of us have at some point heard the story of dolphins sleeping with one brain hemisphere at a time. This partial sleep phenomenon, whereby some regions of the brain appear to exhibit properties of wakefulness while others submerge themselves into deeper sleep (referred to as slow-wave sleep), is actually quite common across the animal kingdom. Many aquatic mammals, birds, and possibly reptiles can sleep with one eye open, while the hemisphere connected to that eye remains in a wake-like state, as if on the lookout for threat.
Indeed, the consensus amongst biologists appears to be that partial brain sleep serves an essential safeguarding purpose, allowing animals to remain vigilant of their surroundings and capable of reacting to potential threats at short notice. This is supported by observations of groups of wild ducks, where individuals lying on the edges of a sleeping flock have a habit of keeping an eye open, directed away from the group centre – where predators are most likely to emerge. Researchers have found that this tendency declines significantly towards the centre of the flock, where roughly 88% of ducks indulge in sleeping with both eyes shut, compared with only 69% of individuals on the edge of the group.
Furthermore, recording these ducks’ electrical brain activity revealed that, while the hemisphere connected to the closed eye exhibits classic slow-wave activity (a hallmark of deep sleep), this phenomenon is significantly weaker in the hemisphere that receives inputs from the open eye. This indicates that, in one-eye-open sleepers, one hemisphere remains in a ‘quiet waking state’. This, it turns out, enables surprisingly quick escape reflexes! When ducks sleeping with one eye open were shown videos of expanding images to mimic the approach of a predator, it took them on average only 165 milliseconds to make a sudden dash for safety.
It’s clear that preventing the entire brain from becoming consumed by deep sleep can pay off when animals live in risky environments. Recently, the journal Current Biology published evidence that a similar phenomenon might actually be happening in our own brains the first time we sleep in a new and unfamiliar place. Under these circumstances, humans have a well-documented tendency to take a little longer to doze off and to have more fragmented sleep than normal. The new findings indicate that this first-night effect might be underpinned by the fact that parts of our left hemisphere remain on night watch when we fall asleep somewhere new. Ultimately, degraded sleep quality is likely a small price to pay for having a brain that temporarily remains more alert just when it counts the most – that is, when we close our eyes and minds in new and uncertain places, full of potential threats.
In this experiment, researchers examined the brain activity and eye movement patterns of their participants during the first and second nights of sleeping in an unfamiliar room, using recording electrodes placed at various sites on the scalp and around the eyes.
Their analyses revealed that the typical sleep disturbances of the first night were associated with some peculiar effects in the so-called default mode network of the brain. This term has been given to a collection of brain regions that are consistently found to be active whenever people lying in a brain scanner aren’t asked to do anything in particular and are left to their own devices. Researchers don’t pretend to know what exactly human minds get up to when there is nothing to do – they could be contemplating life, thinking about what’s for dinner, or mentally replaying an awkward conversation from earlier. But whatever the mental state is, the fact that it appears to be associated with activity in the same set of brain regions across many brain scanning experiments has given rise to the idea that these regions represent a fundamental network that acts as some sort of scaffold for our thought processes. So what happened to the default mode network when participants entered a state of deep sleep for the first time in a new place?
It turned out that this network produced substantially weaker slow-wave activity in the left hemisphere compared to the right. This asymmetry indicated that parts of the left hemisphere remained in a much lighter sleep state than the rest of the brain, and this was not without its consequences. The experimenters found that the less slow-wave activity an individual’s left hemisphere produced compared to the right, the longer they took to fall asleep. This indicates that our tendency to have more fragmented sleep in new places might have something to do with the fact that parts of the left hemisphere maintain a certain level of vigilance even as the rest of the brain plunges into deep sleep.
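To make the left–right comparison concrete: differences like this are commonly expressed as a normalised asymmetry index computed from each hemisphere’s slow-wave (delta) power. The sketch below is a hypothetical illustration of that general approach – the function name and values are mine, not the study’s exact computation:

```python
def asymmetry_index(left_swa: float, right_swa: float) -> float:
    """Normalised difference in slow-wave activity (SWA) between hemispheres.

    Positive values mean weaker SWA on the left - i.e. a
    'lighter-sleeping' left hemisphere.
    """
    return (right_swa - left_swa) / (right_swa + left_swa)

# Example: the left hemisphere produces 20% less delta power than the right.
print(asymmetry_index(left_swa=80.0, right_swa=100.0))  # ≈ 0.11
```

The finding described above would then translate into a simple prediction: the larger this index on the first night, the longer the person takes to fall asleep.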
This phenomenon likely served a critical threat-monitoring purpose at times when humans lived in far less sheltered environments. Consistent with this possibility, the study found that individuals’ left hemispheres (and the individuals themselves) were surprisingly reactive to unexpected sounds that occurred when they first entered deep sleep in the new place.
To show this, the researchers used a so-called oddball paradigm, which involves playing a sequence of identical beeps roughly one second apart, peppered with the occasional ‘oddball’ beep that stands out from the rest in terms of pitch.
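The logic of that stimulus sequence can be sketched in a few lines of Python. This is only an illustration of the protocol’s structure – the pitches, the proportion of oddballs, and the function name are my own assumptions, not parameters reported in the study:

```python
import random

def oddball_sequence(n_beeps: int = 200, oddball_prob: float = 0.1,
                     standard_hz: int = 1000, oddball_hz: int = 2000,
                     seed: int = 0) -> list[int]:
    """Return a list of beep pitches (Hz): mostly identical 'standard'
    tones, with occasional higher-pitched 'oddball' tones mixed in.
    In the experiment, beeps are played roughly one second apart.
    """
    rng = random.Random(seed)  # fixed seed keeps the sequence reproducible
    return [oddball_hz if rng.random() < oddball_prob else standard_hz
            for _ in range(n_beeps)]

seq = oddball_sequence()
print(seq.count(2000), "oddballs out of", len(seq), "beeps")
```

The rarity of the oddball is the whole point: because the deviant tone is unpredictable, the size of the brain’s response to it indexes how alert the listener is.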
Such rare and unexpected inputs normally produce an electrical brain response whose strength depends on the alertness of the listener. Now, we know that information from one ear is primarily received by the hemisphere on the opposite side of the brain. This allows us to reason that playing such a sequence of beeps separately into the right and left ears of sleeping individuals could help us test just how alert the left hemisphere is compared to the right, which shows greater levels of slow wave activity – a hallmark of deep sleep. As you might have already expected, the experimenters found that the strength of the brain response to oddball beeps was substantially boosted in the ‘light sleeper’ left hemisphere compared to the right. This effect was seen exclusively on the first night that individuals slept in the new place – by the second night, any difference in sound reactivity between the left and right hemispheres was eradicated.
It turned out that the left hemisphere’s heightened sensitivity to unexpected sounds on the first night had some important consequences for behaviour. Before going to bed, some participants were asked to lightly tap their fingers whenever they heard a sound in their sleep. The researchers found that these individuals were significantly more likely and quicker to wake up upon hearing an oddball beep on their first night in the new place compared to the subsequent night – an effect that was even stronger for beeps that were played to the right ear, which sends signals mainly to the ‘light sleeping’ left hemisphere. The fastest awakenings were observed in individuals whose left hemispheres produced the weakest slow wave activity compared to the right.
When we look at this evidence, it appears likely that our tendency to struggle with dozing off and to experience shallow sleep when we first settle in new places might be a by-product of the fact that parts of our left hemisphere don’t enter a state of deep sleep even when the rest of our brain does. This allows some brain regions to remain more responsive to our surroundings, poised to trigger reactions at a moment’s notice when we make ourselves most vulnerable by shutting off our consciousness in unfamiliar and possibly dangerous environments. After all, until very recently humans didn’t have beds to sleep in. In this respect, maybe we’re not so different from ducks.

References
- Tamaki, M. et al. (2016). Night watch in one brain hemisphere during sleep associated with the first-night effect in humans. Current Biology.
- Buckner, R. L. et al. (2008). The brain’s default network: anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences.
- Manoach, D. S. and Stickgold, R. (2016). Sleep: keeping one eye open. Current Biology Dispatches.
- Rattenborg, N. et al. (1999). Half-awake to the risk of predation. Nature.
- Rattenborg, N. et al. (2000). Behavioral, neurophysiological and evolutionary perspectives on unihemispheric sleep. Neuroscience & Biobehavioral Reviews.
- Siegel, J. M. (2005). Clues to the functions of mammalian sleep. Nature.
- Tamaki, M. et al. (2005). Examination of the first-night effect during the sleep-onset period. Sleep.
In 1965, a law was introduced to ban all shipments of the potent psychedelic drug LSD to the US from Sandoz Laboratories, where the drug had first been synthesised by the Swiss chemist Albert Hofmann. Underground factories, largely based in California, continued to feed demand for the drug, but the ban made a massive dent in the freedom of scientific enquiry into psychedelics, causing a stagnation that has lasted several decades.
Last week, however, saw the publication of the first ever brain scans made of individuals high on LSD. What these brain scans might mean, and how the media may have distorted their meaning, is the subject of my first piece as contributing writer to the recently launched digital magazine PrimeMind.
* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to open the original reports in a separate tab.
The question of whether we share some of our emotional experiences, such as anxiety, with other species is fascinating for many reasons. Firstly, it intrinsically touches many of us because it forces us to assess our uniqueness as members of the animal kingdom. But aside from these philosophical issues, examining the extent of human-ness in animal experience is important when we consider whether studying animals allows us to learn something about the human condition, including normal mental processes as well as psychiatric illnesses. Our tendency to view most of our mental states as fabrications of highly advanced human brains makes the idea that animals might be experiencing something similar a constantly debated topic. I suppose on an intuitive level, the idea of an anxious fly is especially absurd.
However, a study recently published in the journal Current Biology suggests otherwise. Its findings indicate that the tiny creature might have an unexpected capacity to be anxious, which could actually be used to uncover something about the biology of our own anxiety (10). This is not to say that flies experience the rumination and discomfort that humans can feel in social situations, at work, or when otherwise stressed. Rather, there appears to be a fundamental property of the fly’s behaviour that can be defined as anxiety, and can be dissected for a better understanding of the underpinnings of this phenomenon throughout the animal kingdom.
Clearly, we can’t think like a fly. So how could we measure its anxiety, or that of any other animal for that matter? Over the past few decades, researchers interested in anxiety have come to rely primarily on rodents, using a few classic tests now considered the gold standard for measuring anxiety (14, 15). Let’s look briefly into these before examining the fly. In one of the most popular tests, the elevated plus maze, a rodent is placed in a plus-shaped maze that is raised above the ground. While two of the maze’s arms are open walkways, the other two have surrounding walls that shelter the animal from the lights in the room, as well as from the view of the drop below.

Why might this test be useful for measuring anxiety? We know that mice strongly prefer being sheltered in the dark – presumably because this bears some similarity to their natural burrow homes in the wild. In contrast, they avoid brightly lit open spaces that are perhaps more likely to be associated with danger, and have a general dislike of heights. Thus, the idea behind the elevated plus maze is that by placing a mouse into an environment offering both shelter and an open path, it can be made to waver between its innate fear of exposed places and its interest in the exploration offered by the open walkway. Ultimately, anxiety is thought to win when it compels the animal to spend more time avoiding the exposed areas of the maze in favour of the dark, comforting shelter.
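Scoring on a test like this ultimately reduces to how the animal divides its time between the maze’s zones. The sketch below is a hypothetical illustration of that core measure – the function name and the ‘open’/‘closed’ labels are mine, not taken from any particular study:

```python
def fraction_in_open_arms(samples: list[str]) -> float:
    """'samples' is a series of position labels recorded at a fixed rate,
    each either 'open' or 'closed'. A lower fraction of 'open' samples is
    typically read as more anxiety-like behaviour.
    """
    if not samples:
        raise ValueError("no position samples")
    return sum(s == "open" for s in samples) / len(samples)

# A mouse tracked for ten samples, mostly hugging the sheltered arms:
print(fraction_in_open_arms(["closed"] * 8 + ["open"] * 2))  # 0.2
```

In practice, researchers also count entries into each arm and use automated video tracking, but a time fraction like this is the heart of the measure.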
And how do we know that this way of testing behaviour actually reflects a mouse’s anxiety, as opposed to just being a highly artificial measure derived from an unnatural situation? In fact, how do we know a mouse can feel anxious altogether? This is, of course, an essential question. If animals are to be used to ‘model’ human conditions such as anxiety, and investigate their biological causes as well as possible treatments, then we need to ensure that the tasks we are using to supposedly measure emotions such as anxiety do pick up on something legitimate.
One of the most common ways to verify that a particular test is indeed looking at something akin to anxiety involves examining how a mouse’s behaviour on the test is affected by medications typically used to treat anxiety in humans. Such drugs include benzodiazepines such as Diazepam (marketed as Valium), and anxiolytics acting on the serotonin neurochemical system, such as sertraline and fluoxetine (aka Prozac). The logic underpinning this drug test is quite simple: if a mouse’s behaviour on a test can be changed by classic anti-anxiety, or anxiolytic, treatments, then there is good reason to believe that the test might be tapping into a psychological state that has something in common with human anxiety. When it comes to the elevated plus maze, for instance, researchers have repeatedly found that mice given typical anxiolytic medications spend less of their time avoiding the exposed walkway of the maze, and are quicker to enter it than mice given a placebo (6, 11). Of course, no test could ever give us total insight into the inner workings of a mouse’s mind and verify that it truly experiences anxiousness. However, the fact that anxiolytic drugs seem to alleviate mouse behaviours considered indicative of anxiety makes a convincing case that mice are capable of emotional states that are, on a fundamental level, similar to human anxiety.
Another test commonly used to measure anxiety in mice – the open field test – involves placing them in a large brightly lit enclosure and examining their initial exploration patterns. Here, mice tend to avoid the central area of the enclosure and spend most of their time in contact with the walls, as if seeking their refuge. This tendency, called thigmotaxis (from Greek ‘thigma’ – to touch), is alleviated by several anxiolytic medications, which has led researchers to assume that this phenomenon might reflect a rodent’s state of anxiety (12).
It turns out that flies, much like mice, also spend most of their time pacing close to the walls when placed inside an enclosure. In fact, the recent report in Current Biology reveals that flies are remarkably similar to rodents when we trace the movements they make in an enclosure over a period of time! What’s more, flies are similarly affected by stress, as they spend even more time closer to the walls if they have endured extended social isolation or physical restraint.
But would it be a stretch to suppose that such wall following behaviour might be underpinned by anxiety? Dismissing the possibility out of intuitive disbelief would be quite unscientific, so the researchers put it to the test with several experiments.
First, they investigated whether a dose of the drug Diazepam (aka Valium) – a tranquiliser commonly used to treat anxiety, agitation, and insomnia in humans – could have an influence on the wall following behaviour of flies. In previous experiments, this drug had been found to decrease the apparent anxiousness that rodents have about venturing away from walls in enclosures (3). It turned out that in flies too, Diazepam markedly reduced the tendency to follow the walls. Is it possible that we are witnessing an emotional state that is in some way comparable across species?
Digging deeper, the researchers uncovered some fundamental parallels between the genetics of human anxiety and anxious-like behaviours in rodents and flies. In one of their experiments, they engineered flies that mimicked one of the most talked-about genotypes in the human population associated with an increased risk of developing anxiety (7, 8, 9). The diagram below explains the basics of this genotype.
In 2003, a group of researchers who followed the lives of over 1,000 individuals published a famous report suggesting that those with one or more copies of the short version of the serotonin transporter gene have relatively low emotional resilience compared to others (8). Specifically, experiencing life stress was found to make these individuals substantially more likely to develop anxiety or depression symptoms than those who had inherited two copies of the long version of the gene, which is associated with higher levels of serotonin transporter production. I should mention that this finding has not gone uncontested: it has stimulated hundreds of follow-up studies, some of which failed to support the genotype’s risky reputation (4). Despite this, researchers acknowledge that there is reason to believe that reduced function of the serotonin transporter makes a significant, albeit small, contribution to individuals’ chances of developing anxiety at some point in their lives (9). Interestingly, eliminating the activity of the serotonin transporter in mice, also known as ‘knocking it out’, seems to make mice behave more anxiously on the various tests I have described (5). It seems that this genotype has a role to play in the emotional states of several species.
But does this anxiety-risk genotype have a similar effect in flies? Indeed, it does. Researchers behind the recent publication in Current Biology found that artificially reducing the production of the serotonin transporter in flies further strengthened their tendency to follow walls in enclosures. These results provide us with some food for thought, as wall following might reflect a mental state of anxiety in a creature we probably didn’t imagine having any room for emotions!
The wall following phenomenon of the fly might be an ‘ancient’ form of anxiety that serves as the original blueprint for anxious experiences in species separated by as much as hundreds of millions of years of evolution. It’s possible that higher animals like us have elaborated on this rudimentary emotional scaffold by building ourselves an intricate social world that brings with it an immense diversity of situations that can provoke worry. And in spite of the vast complexity of our subjective anxieties, the core of this experience might be more shared across the species than we suspect.
Indeed, there is good reason to believe that this should be the case. The ability to learn about situations associated with threat and produce emotional reactions that discourage us from approaching such situations is undoubtedly critical for survival. In light of this, researchers have repeatedly argued that the brain mechanisms that underpin the ability to be anxious must have evolved in the earliest mammalian species, if not in even more primitive organisms (13). Ultimately, the existence of some rudimentary commonality between human and fly anxiety might mean that flies could make useful subjects of further study in anxiety research. This could open the door to quicker development of potential new treatments, as well as genetic screens aiming to identify possible new anxiety risk genes.
- Canli, T. and Lesch, K. (2007). Long story short: the serotonin transporter in emotion regulation and social cognition. Nature Neuroscience 10, 1103-1109.
- Eisenberg, D. T. A. and Hayes, M. G. (2011). Testing the null hypothesis: comments on ‘Culture-gene coevolution of individualism-collectivism and the serotonin transporter gene’. Proceedings of the Royal Society 278, 329-332.
- Griebel, G. and Holmes, A. (2013). 50 years of hurdles and hope in anxiolytic drug discovery. Nature Reviews Drug Discovery 12, 667-687.
- Gustavsson, J. P. et al. (1999). No association between serotonin transporter gene polymorphisms and personality traits. American Journal of Medical Genetics 88, 430-436.
- Holmes, A. et al. (2003). Mice lacking the serotonin transporter exhibit 5-HT(1A) receptor-mediated abnormalities in tests for anxiety-like behavior. Neuropsychopharmacology 12, 2077-2088.
- Kurt, M. et al. (2000). The effects of sertraline and fluoxetine on anxiety in the elevated plus-maze test in mice. Journal of Basic and Clinical Physiology and Pharmacology 11, 173-180.
- Lesch, K. P. et al. (1996). Association of anxiety-related traits with a polymorphism in the serotonin transporter gene regulatory region. Science 274, 1527-1531.
- Caspi, A. et al. (2003). Influence of life stress on depression: moderation by a polymorphism in the 5-HTT gene. Science 301, 386-389.
- McGuffin, P. et al. (2011). The truth about genetic variation in the serotonin transporter gene and response to stress and medication. British Journal of Psychiatry 198, 424-427.
- Mohammad, F. et al. (2016). Ancient anxiety pathways influence Drosophila defense behaviors. Current Biology 26, 981-986.
- Pellow, S. et al. (1985). Validation of open: closed arm entries in an elevated plus-maze as a measure of anxiety in the rat. Journal of Neuroscience Methods 14, 149-167.
- Prut, L. and Belzung, C. (2003). The open field as a paradigm to measure the effects of drugs on anxiety-like behaviors: a review. European Journal of Pharmacology 463, 3-33.
- Steimer, T. (2011). Animal models of anxiety disorders in rats and mice: some conceptual issues. Dialogues in Clinical Neuroscience 13, 495-506.
- Varty, G. B. et al. (2002). The gerbil elevated plus-maze I: behavioural characterization and pharmacological validation. Neuropsychopharmacology 27, 357-370.
- Walf, A. A. and Frye, C. A. (2007). The use of the elevated plus maze as an assay of anxiety-related behavior in rodents. Nature Protocols 2, 322-328.
This week, my guest article on the neuroscience education site Knowing Neurons examines why drugs that enhance levels of the neurochemical dopamine make people more impulsive – that is, more likely to choose short-term gratification over long-term benefit. This concerns both recreational drugs such as cocaine and alcohol, and medical treatments for Parkinson’s disease, which either boost dopamine production (e.g. L-DOPA) or stimulate dopamine receptors (e.g. apomorphine). In my piece, I also explore the scientific evidence that individuals who are innately more impulsive than others might have stable, life-long differences in various components of their brains’ dopamine systems.
* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.
The term birdbrained has long been a synonym for stupid. It betrays our ingrained assumption that birds are mindless pecking machines whose brains, and the thoughts they produce (if any), couldn’t even begin to rival ours in complexity.
This long-held view is partially inspired by the fact that we see fundamental differences in the structures of mammalian and bird brains. Since we know ourselves to be intelligent animals, it’s tempting to suppose that only the mammalian brain design is capable of producing complex thoughts and behaviours. This might be why many of us are captivated by the occasional news story describing researchers’ discovery of yet another surprisingly complex behaviour in laboratory birds. Crows that make innovative tools (2). Scrub jays that make plans for the future (1). Ravens that seem to contemplate the contents of others’ minds (3). We have an undeniable fascination with seeing traces of human-like abilities in animals, and as more evidence rolls in, perhaps it’s time to ask what all of it might mean. Have we been wrong about bird brains? Could intelligent bird behaviour be underpinned by true mental complexity, or do we just read observations through an anthropomorphic lens and delude ourselves into thinking that something human-like might be going on in the bird’s mind? Let’s start with the brain…
Revising a century of misconceptions about the bird brain
I suppose in comparison to most animal species, the organisation of the mammalian brain does indeed inspire awe. In primates, just over 70% of total brain mass is occupied by the cerebral cortex. This is a 2-3 mm thick sheet of densely packed brain cells, or neurons, which coats the entire exterior of our brains and folds into a multitude of grooves that maximise the brain tissue which can fit into our skulls. Most of the cortex, that is the evolutionarily recent neocortex, is divided into six clearly defined layers, characterised by the different types of neurons which populate them and the nature of connections they have with other brain cells.
To see examples of the layered structure of the mammalian cortex, let’s look at two images taken from mice, shown below. To obtain the left image, researchers injected developing mouse embryos with genes which resulted in separate groups of cells producing differently coloured fluorescent proteins. When these cell groups develop into distinct types of neurons and migrate to their resident locations within the cortex, we can see that they primarily inhabit distinct layers. In the right image, we see a clear layered separation between the neurons that receive signals from the mouse’s whiskers, as well as the fibres along which the information is transmitted (green), and neurons that communicate back down (orange).
The layered structure of the cortex appears to be a universal feature of all mammalian brains – one that is proposed to have considerable advantages. Firstly, such segregation likely increases the efficiency with which neurons can perform their functions, and communicate to their target neurons. Why is that? It’s quite reasonable to assume that those brain cells which have access to similar information by virtue of having similar external connections (eg. to the whiskers) must have similar jobs to do, and benefit from being able to closely communicate with each other. Thus, allowing neurons with similar properties to work in proximity with one another would reduce the need for chaotic winding connections between them and boost the speed with which information can be transformed and transmitted elsewhere. Perhaps unsurprisingly, the high degree of organisation in the mammalian brain makes it tempting to assume that the layered cortex is the cradle of sophistication, both for psychological processes and behaviours.
This view has not worked in favour of birds who, in contrast to mammals, don’t have a layered cortex. Instead, their brains are nucleated – that is, their neurons are organised in clusters (nuclei), and thus lack the structure we instinctively associate with complex function. Furthermore, one feature that has perhaps prevented bird brains from being recognised as capable of intelligence is their superficial resemblance to the so-called basal ganglia of the mammalian brain (7). The basal ganglia are a group of deep-brain cell clusters that partially underpin our ability to generate voluntary movements and acquire motor habits, such as driving, riding a bike, or playing piano (skills we often ascribe to ‘muscle memory’).
You might notice in the image above that these clusters aren’t directly connected to each other. They are interrupted by a major pathway of nerve fibres, called the internal capsule, which acts as a highway for transmitting information to and from the cortex. The running of these fibres between the cell clusters gives the basal ganglia a somewhat stripy appearance that happens to be superficially similar to the bird brain. This led the influential 19th century German anatomist, Ludwig Edinger, to assume that bird brains were primarily constructed of tissue that is related in origin and function to the mammalian basal ganglia (7). He proposed a map of the bird brain, in which most areas were named with variations of the root word ‘striatum’ (from Latin ‘striatus’ – striped). As this view became gospel, it added fuel to the already existing assumption that birds were, at most, excellent pecking machines, with the brains suited to having a repertoire of motor skills, but little capacity for thought.
This view, which dominated for about a century, has recently been revised. Researchers found that the profile of gene activity throughout much of the bird brain makes it highly likely that it actually derives from a region of the embryonic brain (pallium) that gives rise to the layered mammalian cortex (7,16). In light of these discoveries, the modern map of the avian brain reveals just how much brain territory once assumed to be striatal (green) is actually consumed by pallium-derived brain tissue (blue).
Thus, our brains and bird brains appear to be closer evolutionary relatives than we had anticipated. Researchers propose that we might have shared a common ancestral structure that was originally nucleated – much like bird brains today. In this view, the cortical layers of mammalian brains are a more recent development, likely selected because this arrangement endows the system with greater efficiency (7, 8, 10). It’s quite likely that this change represents a substantial biological upgrade, affording the mammalian brain an unprecedented level of capacity. And yet researchers have found some remarkable displays of intelligence in certain bird species which, in some respects, seem to place them on an equal footing with primates. Could this indicate that a layered organisation does not hold the exclusive key to brainpower? Apparently clever bird behaviour raises the possibility that we are witnessing the result of convergent evolution, in which two distinct brain designs have independently arrived at intelligent solutions for producing complexity. To explore this, let’s look at some tantalising evidence that has convinced some researchers that birds might be capable of mentalising – that is, comprehending the contents of others’ minds.
‘I know what you’re thinking… Or do I?’
During courtship, male Eurasian jays like to feed their female love interests – something that researchers have used as an opportunity to test whether a male could adjust his choice of food for his mate depending on what her current preference might be (11). In one experiment, researchers manipulated which food a female might prefer on a given day by satiating her with large amounts of a particular food, assuming that this would make her temporarily tired of it and crave something different. The logic of this assumption held up: a female that had just been fed plenty of waxworm larvae and was then given a choice between more waxworm or mealworm larvae consistently preferred the mealworm. In essence, novelty excites. But would her male suitor be able to infer something about her desire after watching her being fed by the researchers?
When it came down to it, the male consistently tried to feed his female with foods that differed from what he had recently seen her eating. Importantly, what he chose for his female was unrelated to what he would personally choose for himself, which makes for a truly convincing argument that he was deliberately catering to the assumed preference of his romantic interest.
The capacity to behave in a way that indicates an understanding that other individuals have their own desires is actually not clearly observed in human children until roughly the age of 18 months. We know this because of experiments in which children are asked to choose one of two food items to give an individual who seems to clearly hate one and enjoy the other (13). These studies have found that children below about 18 months of age consistently give the individual food items that they themselves prefer, with no regard for the individual’s apparent preference. Aside from this, some bird species appear to possess a social skill that humans don’t master until about four years of age – the ability to understand and exploit the fact that others might hold false beliefs. In other words, some birds appear to be skilled liars.
Deceptive intentions are apparent in the food hoarding rituals of birds such as crows and Eurasian jays, who employ various strategies to minimise the risk that the food they hide underground or in a burrow will get stolen. When hiding, or caching, their food in the presence of observers, they tend to wait for the onlookers to become distracted before placing the food in its hiding spot. Sometimes, these birds actually return to the site on a later occasion and re-cache their food in privacy if they did end up being watched when hiding it the first time (6). The fact that these birds go to such lengths to safeguard their hiding spots from others raises the possibility that they, on some level, understand that others might have the intention to steal.
Some researchers aren’t convinced that this evidence calls for explanations which grant birds almost human-like reasoning abilities (14). In the world of science, it’s frequently assumed that the simplest explanation is likely to be the truest. As such, resorting to anthropomorphic interpretations of seemingly clever animal behaviour is perhaps more wishful thinking than science, as it’s not the simplest possible explanation. This school of thought argues that when we interpret intelligent behaviour, there’s no need to fall back on assumptions of actual mental processes inside an animal’s head. Such a pessimistic, or simply scientifically sound, perspective (depending on your own philosophy) has earned these researchers the nickname ‘killjoys’ in academic discussions of animal intelligence (4).
In an experiment that offered solid support to the ‘killjoys’, researchers found that ‘virtual’ scrub jays could actually mimic the food caching habits of real-life jays, in the absence of any mental capacity to acknowledge another bird’s intention to steal (17). The computer model of the scrub jay was programmed to follow particular simple rules of behaviour including i) a preference to hide food away from other birds and ii) a tendency to cache and re-cache food more often when stressed. The assumption that stress stimulates such behaviour is quite valid, as we have evidence that birds implanted with pellets releasing the stress hormone corticosterone tend to hide and recover food at higher rates than normal (12). Using this computer model, researchers found that their virtual scrub jays returned to relocate their food in privacy after being watched simply because the presence of an onlooker during the initial hiding event stressed them out! In one sweep, this publication seemed to obviate any need to suppose that jays might be aware of other birds’ intentions. But there’s one thing that the computer model was missing…
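The two behavioural rules attributed to the virtual jays can be captured in a few lines of code. The sketch below is a minimal illustration of that style of model, not the published simulation itself: the class name, stress increments, and re-caching probability are all hypothetical values chosen for clarity. The point it demonstrates is that a cacher which merely tracks its own stress, with no representation of other birds’ intentions, still ends up relocating food after being watched.

```python
import random

class VirtualJay:
    """Rule-based cacher: tracks only its own stress level,
    with no model of what other birds know or intend."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.stress = 0.0

    def cache(self, observed):
        # Rule i/ii: being watched while caching raises stress;
        # caching alone lets stress decay. (Increments are illustrative.)
        if observed:
            self.stress += 0.5
        else:
            self.stress = max(0.0, self.stress - 0.1)

    def will_recache(self):
        # Rule ii: the probability of returning to move the food
        # grows with stress – no mentalising involved.
        p = min(1.0, self.stress)
        return self.rng.random() < p

watched = VirtualJay(seed=1)
alone = VirtualJay(seed=1)
for _ in range(3):
    watched.cache(observed=True)   # an onlooker is present each time
    alone.cache(observed=False)    # caches in privacy each time
print(watched.will_recache(), alone.will_recache())  # → True False
```

After three observed caching events the watched bird’s stress saturates, so it reliably returns to relocate its food, while the unwatched bird never does – re-caching ‘as if’ it feared theft, purely as a by-product of stress.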
Researchers have pointed out that scrub jays relocate their food in privacy only if they have themselves, in the past, stolen food that was hidden away by others (6). In contrast, those who had never stolen weren’t particularly nervous about being watched when burying their food. In my mind, it’s quite telling that personally experiencing the intention to steal is necessary for these birds to feel nervous about being watched. It’s difficult to resist the conclusion that jays might actually be projecting their experiences onto others to infer their intentions.
Why is anthropomorphism so unorthodox?
The world of birds is brimming with evidence that these animals toy with others’ intentions and beliefs. Occasionally, crows make false hiding spots that are either empty or contain inedible items such as stones (9). In some cases, when they are being watched or followed, they feign interest in sites that they know to be empty, distracting potential thieves from real food sites (5). Interestingly, the ability to deliberately misinform doesn’t fully develop in humans until roughly four years of age. When toddlers are given an object and asked to ensure that an individual doesn’t find it, most of them don’t seem to understand the fact that deceptive ploys can be used to manipulate the internal state of the human seeking that item and trick them into looking elsewhere. In contrast, children older than four are known to be capable of effectively and intentionally laying false trails (15). Once this ability to misinform is observed, researchers are quite confident that they are witnessing the development of a complex social skill – the understanding that others have minds of their own. In light of this, I wonder why some researchers are so reluctant to accept the possibility of such mental prowess in birds that produce these exact same deceptive behaviours.
Claiming that animal behaviours might be underpinned by complex mental processes is largely seen as anthropomorphic – a sinful tendency to see human-like abilities in animals. But why does the possibility of human-like skills in other species seem so illogical? I suspect that the apparent ‘wrongness’ of anthropomorphism might be rooted in our implicit belief that our own mental abilities are a metaphorical leap over an abyss separating humans from other animals. The idea that animals might have traces of human-like mental capacities is often considered some sort of last-resort ‘magical’ explanation for apparently intelligent behaviour. Of course, we are almost certainly the most intelligent species – but is it possible that the pedestal we have built ourselves is too high? After all, both a four-year-old and a crow can deceive another person, and yet only the human is assumed to do so because they understand that others have minds that can be misinformed. If we believe in evolutionary continuity between species, perhaps we should reconsider viewing human-like capacity as an unattainable benchmark for other animals. In the words of Charles Darwin, ‘the difference between man and the higher animals is one of degree and not of kind’. Perhaps I am a romantic when it comes to interpreting clever bird behaviour. Of course, it’s purely my take, and I am curious to hear your own opinions!
P.S. For those who are interested in further reading, here is a fascinating publication in the journal Science, describing the African fork-tailed drongo, which obtains almost a quarter of its daily meals by producing false alarm cries of other species to divert them from their food.
- Clever Eurasian jays plan for the future. BBC Nature News.
- Caught on tape! Wild crows use tiny cameras to film themselves using tools. LA Times.
- Researchers find birds can theorize about the minds of others, even those they cannot see. Phys.org.
- Balter, M. (2012). ‘Killjoys’ challenge claims of clever animals. Science 335.
- Bugnyar, T. and Kotrschal, K. (2003). Leading a conspecific away from food in ravens (Corvus corax)? Animal Cognition 7, 69-76.
- Emery, N. J. and Clayton, N. S. (2001). Effects of experience and social context on prospective caching strategies by scrub jays. Nature 414, 443-446.
- Emery, N. J. and Clayton, N. S. (2005). Evolution of the avian brain and intelligence. Current Biology 15: R946.
- Finlay, B. L. et al. (1991). The Neocortex: Ontogeny and Phylogeny. Springer: US.
- Heinrich, B. (1999). Mind of the Raven. Harper Collins: US.
- Karten, H. J. (1991). Homology and evolutionary origins of the ‘neocortex’. Brain and Behavioral Evolution 38, 264-272.
- Ostojić, L. et al. (2013). Evidence suggesting that desire-state attribution may govern food sharing in Eurasian jays. Proceedings of the National Academy of Sciences USA 110, 4123–4128.
- Pravosudov, V. V. (2003). Long-term moderate elevation of corticosterone facilitates avian food-caching behaviour and enhances spatial memory. Proceedings of the Royal Society: Biological Sciences 270, 2599-2604.
- Repacholi, B. M. and Gopnik, A. (1997). Early reasoning about desires: evidence from 14- and 18-month olds. Developmental Psychology 33, 12-21.
- Shettleworth, S. J. (2010). Clever animals and killjoy explanations in comparative psychology. Trends in Cognitive Sciences 14, 477-481.
- Sodian, B. et al. (1991). Early deception and the child’s theory of mind: false trails and genuine markers. Child Development 62, 468-483.
- The Avian Brain Nomenclature Consortium (2005). Avian brains and a new understanding of vertebrate brain evolution. Nature Reviews Neuroscience 6, 151-159.
- Van der Vaart, E. et al. (2012). Corvid re-caching without ‘theory of mind’: a model. PLoS ONE.