Intelligent birds: what do we make of the evidence that they can ‘read’ minds, and are we just anthropomorphising?

* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.

The term ‘birdbrained’ has long been synonymous with ‘stupid’. It gives away our ingrained assumption that birds are mindless pecking machines whose brains, and the thoughts they produce (if any), couldn’t even begin to rival ours in terms of complexity.

This long-held view is partially inspired by the fact that we see fundamental differences in the structures of mammalian and bird brains. Since we know ourselves to be intelligent animals, it’s tempting to suppose that it is exclusively the mammalian brain design that is capable of producing complex thoughts and behaviours. This might be why many of us are captivated by the occasional news story describing researchers’ discovery of yet another surprisingly complex behaviour in laboratory birds. Crows that make innovative tools (2). Scrub jays that make plans for the future (1). Ravens that seem to contemplate the contents of others’ minds (3). We have an undeniable fascination with seeing traces of human-like abilities in animals, and as more evidence rolls in, perhaps it’s time to ask what all of it might mean. Have we been wrong about bird brains? Could intelligent bird behaviour be underpinned by true mental complexity, or do we just read into observations through an anthropomorphic lens and delude ourselves into thinking that something human-like might be going on in the bird’s mind? Let’s start with the brain…

Revising a century of misconceptions about the bird brain

I suppose in comparison to most animal species, the organisation of the mammalian brain does indeed inspire awe. In primates, just over 70% of total brain mass is occupied by the cerebral cortex. This is a 2-3 mm thick sheet of densely packed brain cells, or neurons, which coats the entire exterior of our brains and folds into a multitude of grooves that maximise the brain tissue which can fit into our skulls. Most of the cortex, that is the evolutionarily recent neocortex, is divided into six clearly defined layers, characterised by the different types of neurons which populate them and the nature of connections they have with other brain cells.

To see examples of the layered structure of the mammalian cortex, let’s look at two images taken from mice, shown below. To obtain the left image, researchers injected developing mouse embryos with genes which resulted in separate groups of cells producing differently coloured fluorescent proteins. When these cell groups develop into distinct types of neurons and migrate to their resident locations within the cortex, we can see that they primarily inhabit distinct layers. In the right image, we see a clear layered separation between the neurons that receive signals from the mouse’s whiskers, as well as the fibres along which the information is transmitted (green), and neurons that communicate back down (orange).

Bird1

The layered structure of the cortex appears to be a universal feature of all mammalian brains – one that is proposed to have considerable advantages. Firstly, such segregation likely increases the efficiency with which neurons can perform their functions and communicate with their target neurons. Why is that? It’s quite reasonable to assume that those brain cells which have access to similar information by virtue of having similar external connections (eg. to the whiskers) must have similar jobs to do, and benefit from being able to closely communicate with each other. Thus, allowing neurons with similar properties to work in proximity with one another would reduce the need for chaotic winding connections between them and boost the speed with which information can be transformed and transmitted elsewhere. Perhaps unsurprisingly, the high degree of organisation in the mammalian brain makes it tempting to assume that the layered cortex is the cradle of sophistication, both for psychological processes and behaviours.

This view has not worked in favour of birds who, in contrast to mammals, don’t have a layered cortex. Instead, their brains are nucleated – that is, their neurons are organised in clusters (nuclei), and thus lack the structure we instinctively associate with complex function. Furthermore, one feature that has perhaps prevented bird brains from being recognised as capable of intelligence is their superficial resemblance to the so-called basal ganglia of the mammalian brain (7). The basal ganglia are a group of deep-brain cell clusters that partially underpin our ability to generate voluntary movements and acquire motor habits, such as driving, riding a bike, or playing piano (skills we often ascribe to ‘muscle memory’).

Bird2

You might notice in the image above that these clusters aren’t directly connected to each other. They are interrupted by a major pathway of nerve fibres, called the internal capsule, which acts as a highway for transmitting information to and from the cortex. The running of these fibres between the cell clusters gives the basal ganglia a somewhat stripy appearance that happens to be superficially similar to that of the bird brain. This led the influential 19th-century German anatomist Ludwig Edinger to assume that bird brains were primarily constructed of tissue that is related in origin and function to the mammalian basal ganglia (7). He proposed a map of the bird brain in which most areas were named with variations of the root word ‘striatum’ (from Latin ‘striatus’ – striped). As this view became gospel, it added fuel to the already existing assumption that birds were, at most, excellent pecking machines, with brains suited to a repertoire of motor skills but little capacity for thought.

This view, which dominated for about a century, has recently been revised. Researchers found that the profile of gene activity throughout much of the bird brain makes it highly likely that it actually derives from a region of the embryonic brain (pallium) that gives rise to the layered mammalian cortex (7,16). In light of these discoveries, the modern map of the avian brain reveals just how much brain territory once assumed to be striatal (green) is actually consumed by pallium-derived brain tissue (blue).

Bird3

Thus, our brains and bird brains appear to be closer evolutionary relatives than we had anticipated. Researchers propose that we might have shared a common ancestral structure that was originally nucleated – much like bird brains today. In this view, the cortical layers of mammalian brains are a more recent development, likely selected because such layering endows the system with greater efficiency (7,8,10). It’s quite likely that this change represents a substantial biological upgrade, affording the mammalian brain an unprecedented level of capacity. And yet researchers have found some remarkable displays of intelligence in certain bird species which, in some respects, seem to place them on equal footing with primates. Could this indicate that a layered organisation does not hold the exclusive key to brainpower? Apparently clever bird behaviour raises the possibility that we could be witnessing the result of convergent evolution, in which two distinct brain designs have both arrived at intelligent solutions to producing complexity. To explore this, let’s look at some tantalising evidence that has convinced some researchers that birds might be capable of mentalising – that is, comprehending the contents of others’ minds.

‘I know what you’re thinking… Or do I?’

During courtship, male Eurasian jays like to feed their female love interests – something that researchers have used as an opportunity to test whether a male could adjust his choice of food for his mate depending on what her current preference might be (11). In one experiment, researchers manipulated which food a female might prefer on a given day by satiating her with large amounts of a particular food, assuming that this would make her temporarily tired of it and crave something different. The underlying logic of this assumption is quite solid: a female that had just been fed plenty of waxworm larvae and was then given a chance to choose between more waxworm or mealworm larvae consistently preferred mealworm. In essence, novelty excites. But would her male suitor be able to infer something about her desire after watching her being fed by the researchers?

Bird4

When it came down to it, the male consistently tried to feed his female with foods that differed from what he had recently seen her eating. Importantly, what he chose for his female was unrelated to what he would personally choose for himself, which makes for a truly convincing argument that he was deliberately catering to the assumed preference of his romantic interest.

The capacity to behave in a way that indicates an understanding that other individuals have their own desires is actually not clearly observed in human children until roughly the age of 18 months. We know this because of experiments in which children are asked to choose one of two food items to give an individual who seems to clearly hate one and enjoy the other (13). These studies have found that children below about 18 months of age consistently give the individual food items that they themselves prefer, with no regard for the individual’s apparent preference. Aside from this, some bird species appear to possess a social skill that humans don’t master until about four years of age – the ability to understand and exploit the fact that others might hold false beliefs. In other words, some birds appear to be skilled liars.

Deceptive intentions are apparent in the food hoarding rituals of birds such as crows and Eurasian jays, who employ various strategies to minimise the risk that the food they hide underground or in a burrow will get stolen. When hiding, or caching, their food in the presence of observers they tend to wait for them to become distracted before going through with placing the food in its hiding spot. Sometimes, these birds actually return to the site on a later occasion and re-cache their food in privacy if they did end up being watched when hiding it the first time (6). The fact that these birds go to such lengths to safeguard their hiding spots from others raises the possibility that they, on some level, understand that others might have the intention to steal.

Some researchers aren’t convinced that this evidence calls for explanations which grant birds almost human-like reasoning abilities (14). In the world of science, it’s frequently assumed that the simplest explanation is likely to be the truest. As such, resorting to anthropomorphic interpretations of seemingly clever animal behaviour is perhaps more wishful thinking than science, as it’s not the simplest possible explanation. This school of thought prefers to argue that when we interpret intelligent behaviour, there’s no need to fall back on assumptions of actual mental processes inside an animal’s head. Such a pessimistic, or just scientifically sound, perspective (depending on your own philosophy) has earned these researchers the nickname ‘killjoys’ in academic discussions of animal intelligence (4).

In an experiment that offered solid support to the ‘killjoys’, researchers found that ‘virtual’ scrub jays could actually mimic the food caching habits of real-life jays, in the absence of any mental capacity to acknowledge another bird’s intention to steal (17). The computer model of the scrub jay was programmed to follow particular simple rules of behaviour including i) a preference to hide food away from other birds and ii) a tendency to cache and re-cache food more often when stressed. The assumption that stress stimulates such behaviour is quite valid, as we have evidence that birds implanted with pellets releasing the stress hormone corticosterone tend to hide and recover food at higher rates than normal (12). Using this computer model, researchers found that their virtual scrub jays returned to relocate their food in privacy after being watched simply because the presence of an onlooker during the initial hiding event stressed them out! In one sweep, this publication seemed to obviate any need to suppose that jays might be aware of other birds’ intentions. But there’s one thing that the computer model was missing…
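To make the flavour of this rule-based explanation concrete, here is a minimal sketch of such a ‘virtual jay’ (my own illustration in Python, not the published model – the rules are paraphrased from (17), and every number is invented):

```python
import random

class VirtualJay:
    """A toy cacher with no beliefs about other minds - only a stress
    level that rises when it is watched."""

    def __init__(self):
        self.stress = 0.0
        self.caches = []

    def cache(self, site, observer_present):
        # Rule i): prefer privacy; being watched raises stress.
        if observer_present:
            self.stress += 1.0
        self.caches.append({"site": site, "watched": observer_present})

    def maybe_recache(self):
        # Rule ii): the more stressed the bird, the more likely it is
        # to dig up food and move it - no mind-reading involved.
        moved = []
        for c in self.caches:
            if random.random() < min(1.0, 0.1 + 0.4 * self.stress):
                c["site"] += "_new"
                moved.append(c["site"])
        self.stress = max(0.0, self.stress - 0.5)  # stress decays with time
        return moved

jay = VirtualJay()
jay.cache("oak_roots", observer_present=True)    # watched -> stressed
jay.cache("flower_pot", observer_present=False)
print(jay.maybe_recache())
```

A bird following these two rules re-caches more after being watched simply because being watched made it stressed – reproducing the ‘suspicious-looking’ behaviour without any theory of mind.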

Researchers have pointed out that scrub jays relocate their food in privacy only if they have themselves stolen food hidden away by others in the past (6). In contrast, those that had never stolen weren’t particularly nervous about being watched when burying their food. In my mind, it’s quite telling that personally experiencing the intention to steal is necessary for these birds to feel nervous about being watched. It’s difficult to resist the conclusion that jays might actually be projecting their experiences onto others to infer their intentions.

Why is anthropomorphism so unorthodox?

The world of birds is brimming with evidence that these animals toy with others’ intentions and beliefs. Occasionally, crows make false hiding spots that are either empty or contain inedible items such as stones (9). In some cases, when they are being watched or followed, they feign interest in sites that they know to be empty, distracting potential thieves from real food sites (5). Interestingly, the ability to deliberately misinform doesn’t fully develop in humans until roughly four years of age. When toddlers are given an object and asked to ensure that an individual doesn’t find it, most of them don’t seem to understand the fact that deceptive ploys can be used to manipulate the internal state of the human seeking that item and trick them into looking elsewhere. In contrast, children older than four are known to be capable of effectively and intentionally laying false trails (15). Once this ability to misinform is observed, researchers are quite confident that they are witnessing the development of a complex social skill – the understanding that others have minds of their own. In light of this, I wonder why some researchers are so reluctant to accept the possibility of such mental prowess in birds that produce these exact same deceptive behaviours.

Claiming that animal behaviours might be underpinned by complex mental processes is largely seen as anthropomorphic – a sinful tendency to see human-like abilities in animals. But why does the possibility of human-like skills in other species seem so illogical? I suspect that the apparent ‘wrongness’ of anthropomorphism might be rooted in our implicit belief that our own mental abilities are a metaphorical leap over an abyss separating humans from other animals. The idea that animals might have traces of human-like mental capacities is often considered some sort of last-resort ‘magical’ explanation for apparently intelligent behaviour. Of course, we are almost certainly the most intelligent species – but is it possible that the pedestal we have built ourselves is too high? After all, both a four year old and a crow can deceive another person, and yet only the human is assumed to do so because they understand that others have minds that can be misinformed. If we believe in evolutionary continuity between various species, perhaps we should reconsider viewing human-like capacity as an unattainable benchmark for other animals. In the words of Charles Darwin, ‘the difference between man and the higher animals, is one of degree and not kind’. Perhaps I am a romantic when it comes to interpreting clever bird behaviour. Of course, it’s purely my take, and I am curious to hear your own opinions!

P.S. For those who are interested in further reading, here is a fascinating publication in the journal Science, describing the African fork-tailed drongo, which obtains almost a quarter of its daily meals by producing false alarm cries of other species to divert them from their food.

References

  1. Clever Eurasian jays plan for the future. BBC Nature News.
  2. Caught on tape! Wild crows use tiny cameras to film themselves using tools. LA Times.
  3. Researchers find birds can theorize about the minds of others, even those they cannot see. Phys.org.
  4. Balter, M. (2012). ‘Killjoys’ challenge claims of clever animals. Science 335.
  5. Bugnyar, T. and Kotrschal, K. (2003). Leading a conspecific away from food in ravens (Corvus corax)? Animal Cognition 7, 69-76.
  6. Emery, N. J. and Clayton, N. S. (2001). Effects of experience and social context on prospective caching strategies by scrub jays. Nature 414, 443-446.
  7. Emery, N. J. and Clayton, N. S. (2005). Evolution of the avian brain and intelligence. Current Biology 15: R946.
  8. Finlay, B. L. et al. (1991). The Neocortex: Ontogeny and Phylogeny. Springer: US.
  9. Heinrich, B. (1999). Mind of the Raven. Harper Collins: US.
  10. Karten, H. J. (1991). Homology and evolutionary origins of the ‘neocortex’. Brain and Behavioral Evolution 38, 264-272.
  11. Ostojić, L. et al. (2013). Evidence suggesting that desire-state attribution may govern food sharing in Eurasian jays. Proceedings of the National Academy of Sciences USA 110, 4123–4128.
  12. Pravosudov, V. V. (2003). Long-term moderate elevation of corticosterone facilitates avian food-caching behaviour and enhances spatial memory. Proceedings of the Royal Society: Biological Sciences 270, 2599-2604.
  13. Repacholi, B. M. and Gopnik, A. (1997). Early reasoning about desires: evidence from 14- and 18-month olds. Developmental Psychology 33, 12-21.
  14. Shettleworth, S. J. (2010). Clever animals and killjoy explanations in comparative psychology. Trends in Cognitive Sciences 14, 477-481.
  15. Sodian, B. et al. (1991). Early deception and the child’s theory of mind: false trails and genuine markers. Child Development 62, 468-483.
  16. The Avian Brain Nomenclature Consortium (2005). Avian brains and a new understanding of vertebrate brain evolution. Nature Reviews Neuroscience 6, 151-159.
  17. Van der Vaart, E. et al. (2012). Corvid re-caching without ‘theory of mind’: a model. PLoS ONE.

Capturing a beautiful neuron

Today, I thought I’d share this beautiful 3-D reconstruction of two embracing neurons in the mouse brain, made by my labmate Julian Bartram. For those who are interested, I will briefly explain why images like this are useful (it’s not all about aesthetics!) and how they can be made in the lab.

Entorhinal cortex neuron

Researchers often record from the membranes of neurons to examine how they respond to various events such as exposure to particular drugs. As an example, it could be interesting to investigate how brain cells are influenced by substances such as picrotoxin, which is known to trigger epileptic seizures in animals, as studying this might offer some insights into how epilepsy affects the human brain. For this purpose, researchers might spend several weeks or months recording from the membranes of various neurons in brain slices taken from mice, while applying droplets of picrotoxin to the tissue.

After each recording, it can be immensely useful to verify the identity of the cell from which the information was collected. Is this a cell which releases the neurochemical dopamine? Or perhaps a neuron which sends signals using molecules of GABA – a chemical which inhibits other cells? Does this neuron have connections only with its immediate neighbours, or does it have long branches which project to a distant brain region, allowing the neuron to have a greater sphere of influence? Answering these questions can be important because the brain is chock full of neurons varying in size, shape, and structure, which makes each cell type quite distinct in terms of the functions it fulfils. The image below is just a snapshot of the astonishing diversity of brain cells we can find in the mammalian brain.

Cell diversity

Sometimes, when we insert an electrode into a brain slice to record from a neuron (shown below), we are essentially fishing, as it’s unclear what type of cell we are probing.

Patching

In fact, most often when we examine a brain slice through the microscope to aim for a neuron, they all look pretty much like blobs, as you can see in the image below, which I took last year in a lab where I previously worked.

Mypatch

Thus, producing detailed images of the neurons from which we collect information can be important, since in times of uncertainty, they can help us identify cells by examining their structure. So how is this done?

Before diving into the brain slice with a micro-pipette to make a recording, the pipette is filled with a fluid which contains a particular molecule – most often, biocytin. This means that once the micro-pipette reaches the chosen neuron and pricks its membrane to begin recording its activity, the liquid starts leaking into the cell, filling all of its branches within minutes. When the recording is completed, the brain slice is bathed in a liquid containing molecules of a protein which latch onto the biocytin found inside the neuron. Importantly, each molecule of this protein comes with a so-called fluorophore – a chemical compound which can be forced to glow with a coloured light if it is excited by light of a different colour. What we now have is a neuron that is filled with a potentially fluorescent chemical – all we need to do is excite it and make snapshots of the light that it releases to begin making an image of the cell.

One type of microscope which is quite popular for doing this is the two-photon microscope, which works roughly like this:

twophoton

And voilà – when you stack together the images produced by the fluorescence emitted from within the neuron across multiple depth planes, you end up with a detailed 3-D reconstruction like the one my labmate produced! Before we go on with our days, let’s just have a moment of appreciation for one of the most beautiful and complex types of neuron: the Purkinje cell of the cerebellum, a region in the back of our brain which is responsible, in part, for coordinating movements.

Purkinje
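P.S. For the programmers among you: the ‘stacking’ step mentioned above is conceptually very simple. Here is a rough sketch in Python (the plane data, image sizes and threshold below are placeholders, not values from a real scan):

```python
import numpy as np

# Pretend each depth plane of a two-photon scan is a 2-D array of
# fluorescence intensities (here: random placeholder data).
planes = [np.random.rand(512, 512) for _ in range(40)]

# Stack the planes along a new depth axis: shape (z, y, x).
volume = np.stack(planes, axis=0)

# Keep only the brightest voxels - those most likely to lie inside
# the biocytin-filled neuron (the threshold is arbitrary here).
z, y, x = np.nonzero(volume > 0.999)

# These coordinates are the raw material for a 3-D rendering of the
# cell, e.g. with matplotlib or dedicated reconstruction software.
print(f"{len(z)} candidate voxels out of {volume.size}")
```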

How does alcohol influence the mind? A look into the brain and its genes.

* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.

The holiday season is over, and in the words of Sean Connery, the damage is done. But aside from the myriad ways in which alcohol harms brain function, what most of us perhaps find more interesting is the question of alcohol’s effect on subjective experience. What events does alcohol consumption produce in the brain that might explain why this substance makes us feel the way it does – relaxed, numb to the senses, and uncoordinated? And where does the risk of becoming hooked come from?

Why does alcohol sedate, numb and make us uncoordinated?

The relaxing effect of alcohol is primarily underpinned by the fact that ethanol molecules interact with a class of so-called GABAA receptors – proteins which are embedded in the membranes of brain cells, or neurons (11). Most often, these receptors are activated by the brain’s own neurochemical GABA (short for Gamma Amino-Butyric Acid), which inhibits the ability of neurons to become excited by signals being sent by other cells. As a result, affected neurons become unlikely to accept other neurons’ attempts to communicate, and thus temporarily become ‘deaf’ to incoming information.

The power to suppress other neurons makes GABA-releasing cells fundamentally important in the workings of the brain. Since they are capable of precisely timing when other neurons can and cannot receive signals from various regions of the brain, many neuroscientists consider these cells the architects of neuronal communication (14). So it’s probably unsurprising that substances which influence GABAA receptors interfere with the brain and, by extension, our subjective experience. This is where alcohol comes in.

Experiments have revealed that alcohol strongly enhances the suppressive effect of certain GABAA receptors (1, 3, 11, 16). This occurs because ethanol molecules are able to attach themselves to specific pockets in the receptor, which subtly changes its shape and allows GABA molecules to fit more snugly into the receptor when they are released from communicating cells.

Pic1

This means that, as ethanol floats around and interacts with a neuron’s GABAA receptors, it both enhances the probability that these receptors will become activated and increases the period of time over which they will remain active (32). Ultimately, this produces more powerful suppression of the affected neuron.
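As a crude back-of-the-envelope illustration (my own toy numbers, not measurements from the cited studies), you can think of one receptor’s total suppressive effect as the product of how likely it is to open and how long it stays open – both of which ethanol nudges upward:

```python
def inhibitory_effect(open_probability, mean_open_time_ms, conductance_pS=30.0):
    """Crude proxy for one GABA-A receptor's suppressive effect:
    P(open) x open duration x channel conductance."""
    return open_probability * mean_open_time_ms * conductance_pS

# Hypothetical numbers, for illustration only:
baseline = inhibitory_effect(open_probability=0.4, mean_open_time_ms=25.0)
with_ethanol = inhibitory_effect(open_probability=0.5, mean_open_time_ms=35.0)

print(f"Inhibition boosted ~{with_ethanol / baseline:.2f}x by ethanol")
# Each factor rises only modestly, but their product rises by ~75%.
```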

The image below comes from an experiment in which researchers recorded from the membranes of mouse neurons while applying droplets of GABA combined with various doses of ethanol (1).

Pic2

It’s clear that stimulating GABA receptors with the natural neurochemical has a negative influence on this neuron’s level of excitability (making it less likely to produce a signal). But quite importantly, we can see that this suppressive effect is boosted by alcohol, more so with increasing concentrations. While alcohol can’t activate GABAA receptors alone, we have solid evidence that it acts in concert with the brain’s own neurochemical to silence neurons more strongly than natural conditions permit. This effect was found in 70% of the studied neurons (1). Indeed, GABAA receptors are some of the most common receptors in the mammalian nervous system, which means that the suppressive effect of alcohol is bound to be quite a common phenomenon.

Now let’s imagine that this occurs all over the brain as we have a drink. Brain regions which normally receive information from the skin about harmful events (eg. dangerous temperatures or cuts) would become slightly less receptive to such signals, which likely numbs our perception of pain (17). Neurons that normally receive constant updates from the retina, ears, tongue, and skin would also become more silent, as signals travelling from these senses become somewhat subdued by the indifference of the many cells which are under the suppressive influence of GABAA receptors modified by ethanol. The senses become a bit blunted and less reliable, as perhaps evidenced by the ‘alcohol jacket’, which the Urban Dictionary elegantly defines as ‘a non-tangible source of warmth, deriving from the mass consumption of alcohol’. Indeed, the brain generally becomes a quieter place under the influence of ethanol. The PET scan below shows how the consumption of glucose (the brain’s energy source) goes downhill as we enter a state of intoxication.

Pic3

The effects of alcohol are especially potent in the cerebellum – a region at the back of the brain which is particularly enriched in the GABAA receptor types that are maximally sensitive to ethanol (18).

Pic4

We know that the cerebellum is critical for coordinating movements. Damage to this region makes individuals less able to control their eye movements, produce the complex tongue and lip movement sequences required for speech, regulate their posture, and correctly estimate the distance their limbs need to travel to reach objects (29). This is why cerebellar patients may have slurred speech, a strange manner of walking, and a tendency to slightly miss items they aim to grab with their hands. If any of these symptoms sound familiar to you, it’s because they also emerge when communication between neurons in the cerebellum becomes muffled when alcohol reaches the brain (10). This is why asking individuals to walk in a straight line or touch their nose is commonly used as a quick test of intoxication.

Even rats experience disorientation at the hands of ethanol meddling with cerebellar GABAA receptors. Interestingly, researchers have found that rats with an extremely low tolerance for alcohol have genetic mutations (DNA changes) which increase the sensitivity of GABAA receptors of the cerebellum to the modifying effects of alcohol (25). Essentially, these rats’ brains undergo more widespread and potent inhibition during alcohol consumption, causing a more dramatic deterioration of motor skills. Such mutations likely also affect how humans handle their drinks.

The key to understanding why alcohol also slows our reflexes lies with the spinal cord. This part of the nervous system generates most of our reflexes, from rapid withdrawal of the hand when we touch something hot or sharp, to the classic knee jerk reflex. When it comes to the effects of alcohol on reflexes, the culprits are receptors which are activated by glycine – the dominant inhibiting neurochemical of our spinal cord. We know that glycine receptors are essential for regulating the intensity of our reflexes, as individuals with genetic mutations which disturb these receptors tend to have highly exaggerated reflexes (12). We also know that the suppressive effect of glycine receptors is boosted by the presence of ethanol similarly to how this occurs with GABAA receptors (2, 9, 13). Now let’s picture an intoxicated person encountering a situation that triggers a reflex. The spinal cord instantly attempts to adjust how strongly this person’s muscles will react by releasing glycine, which gently constrains the muscles by stimulating the glycine receptors on neurons which connect to them. However, the boost that ethanol gives to these suppressive glycine receptors means that the muscles become silenced all too intensely and the reflex is ultimately subdued (35).

Perhaps more interesting is the fact that glycine receptors are also found in the brain (4, 24), where alcohol causes them to produce some important psychological effects. In part, this occurs because glycine interferes with the dopamine neurochemical system, which might shed some light on why people can become hooked on alcohol. Importantly, it might also hold the key to future drug treatments for alcohol addiction.

Why do we crave alcohol? A story of dopamine and genes.

We know that dopamine is critical for inducing addiction and the desperation that comes with it to get hold of more of a rewarding substance. When rats are given an opportunity to press a lever resulting in direct injections of cocaine into their own brains, they go on binges during which they press compulsively until they are exhausted. Such injections are known to immediately increase dopamine levels in the nucleus accumbens (NA) – a deep-brain cluster of cells which is a critical component of the brain’s reward system (30).

Pic5.png

That dopamine provokes cravings is widely accepted. It compels individuals to seek the thing which provided that dopamine spike in the first place, often regardless of whether they actually enjoy the thing they crave (7). Researchers have known for a while that ethanol consumption increases dopamine levels in the nucleus accumbens (8, 19). Dopamine appears to be quite important here, since preventing this neurochemical from influencing neurons, by blocking dopamine receptors, makes rats less likely to choose to drink more alcohol (27). Over a decade ago, researchers found that this dopamine rush appears to be largely underpinned by the effects of alcohol on the glycine receptors which are clustered around neurons in the brain’s reward centres (24).

This was discovered when researchers observed that the massive dopamine release that occurred when rats were given alcohol was prevented by a drug which blocked glycine receptors. On the other hand, dripping glycine directly into the brain (which naturally stimulated these receptors) boosted the intoxicated dopamine burst. The fact that activating glycine receptors in the nucleus accumbens excited dopamine release seems a bit of a mystery, given what we know about glycine’s inhibiting effect on neurons. To understand roughly how researchers attempted to explain this somewhat surprising finding, let’s look inside the connections between neurons in the brain’s reward centres:

Pic6

Pic7

Glycine is by no means the only neurochemical system known to stimulate dopamine release when we are intoxicated (21, 22, 36). However, it’s increasingly considered to be a key to understanding how alcohol influences the brain both moments after we have a drink and in the long term. Indeed, repeated consumption of alcohol pushes the brain to adjust, and researchers have so far found that this involves altering the activity of an astounding 168 genes involved in several neurochemical systems, including glycine (33).

Repeated dopamine bursts produced during chronic alcohol intake also suppress the activity of a particular gene producing the critical dopamine receptor, D2 (26). This receptor is found in the endings of dopamine-releasing cells, where it keeps a lid on neurochemical release to prevent build-up. The fact that repeated alcohol exposure reduces production of this receptor means that over time, drinking might interfere with the brain’s normal brakes on dopamine release. In rats, disrupting the workings of D2 receptors is known to speed up the development of addiction to substances ranging from cocaine to food (6). All of this points to a somewhat worrying reality – in consuming alcohol, people might be making their brains increasingly sensitive to the experience of reward that comes with substances like alcohol, drugs and food, which gradually makes people more desperate to seek them out.

The importance of glycine receptors in this addictive chain of events has given rise to a new generation of potential drug treatments for alcoholism. Researchers recently found that treating alcohol-addicted rats with a drug that blocks glycine receptors reduced their preference for booze over water, as well as prevented drinking binges which tend to occur after a period of alcohol deprivation (20). Similar drugs have been found to reverse the worrying changes in activity of about a third of the genes known to be affected by chronic drinking (33). In the next few years, more research will reveal how we can harness our knowledge of the glycine system to tackle the problematic effects of alcohol.

Wineeg

References

  1. Aguayo, L. G. (1990). Ethanol potentiates the GABAa-activated Cl current in mouse hippocampal and cortical neurons. European Journal of Pharmacology 187, 127-130.
  2. Aguayo, L. G. et al. (1996). Potentiation of the glycine-activated Cl- current by ethanol in cultured mouse spinal neurons. Journal of Pharmacology and Experimental Therapeutics 279, 1116–1122.
  3. Aguayo, L. G. and Pancetti, F. C. (1994). Ethanol modulation of the gamma-aminobutyric acidA- and glycine-activated Cl- current in cultured mouse neurons. Journal of Pharmacology and Experimental Therapeutics 270, 61–69.
  4. Baer K. et al. (2009). Localisation of glycine receptors in the human forebrain, brainstem, and cervical spinal cord: an immunohistochemical review. Frontiers in Molecular Neuroscience.
  5. Bean, B. P. (2007). The action potential in mammalian central neurons. Nature Reviews Neuroscience 8, 451-465.
  6. Bello, E. P. et al. (2011). Cocaine supersensitivity and enhanced motivation for reward in mice lacking dopamine D2 autoreceptors. Nature Neuroscience 14, 1033-1038.
  7. Berridge, K. C. et al. (2009). Dissecting components of reward: ‘liking’, ‘wanting’, and learning. Current Opinion in Pharmacology 9, 65-73. (Accessible and interesting).
  8. Blomqvist, O. et al. (1993). The mesolimbic dopamine-activating properties of ethanol are antagonized by mecamylamine. European Journal of Pharmacology 249, 207–213.
  9. Burgos, C. F. et al. (2015). Ethanol effects on glycinergic transmission: From molecular pharmacology to behaviour responses. Pharmacological Research 101, 18-29.
  10. Carta, M. et al. (2004). Alcohol enhances GABAergic transmission to cerebellar granule cells via an increase in Golgi cell excitability. Journal of Neuroscience 24, 3746-3751.
  11. Davies, M. (2003). The role of GABAA receptors in mediating the effects of alcohol in the central nervous system. Journal of Psychiatry and Neuroscience 28, 263-274.
  12. Davies, J. S. et al. (2010). The glycinergic system in human startle disease: a genetic screening approach. Frontiers in Molecular Neuroscience.
  13. Engblom, A. C. and Akerman, K. E. (1991). Effect of ethanol on gamma-aminobutyric acid and glycine receptor-coupled Cl- fluxes in rat brain synaptoneurosomes. Journal of Neurochemistry 57, 384–390.
  14. Farrant, M. and Nusser, Z. (2005) Variations on an inhibitory theme: phasic and tonic activation of GABAA receptors. Nature Reviews Neuroscience 6, 218-229.
  15. Fibiger, H. C. et al. (1987). The role of dopamine in intracranial self-stimulation of the ventral tegmental area. Journal of Neuroscience 7, 3888-3895.
  16. Grobin, A. C. et al. (1998). The role of GABAA receptors in the acute and chronic effects of ethanol. Psychopharmacology 139, 2-19.
  17. Gu, L. et al. (2015). Pain inhibition by optogenetic activation of specific anterior cingulate cortical neurons. PLoS One:10.
  18. Hanchar, H. J. et al. (2005). Alcohol-induced motor impairment caused by increased extrasynaptic GABA(A) receptor activity. Nature Neuroscience 8, 339-345.
  19. Koob, G. F. et al. (1994). Alcohol, the reward system and dependence. EXS 71, 103–114.
  20. Li, Y. et al. (2014). Brucine suppresses ethanol intake and preference in alcohol-preferring Fawn-Hooded rats. Acta Pharmacologica Sinica 35, 853-861.
  21. Lof, E. et al. (2007). Nicotinic acetylcholine receptors in the ventral tegmental area mediate the dopamine activating and reinforcing properties of ethanol cues. Psychopharmacology 195, 333-343.
  22. Lovinger, D. M. (1999). The role of serotonin in alcohol’s effects on the brain. Current Separations 18:1.
  23. Molander, A. et al. (2006). The glycine reuptake inhibitor ORG25935 decreases ethanol intake and preference in male Wistar rats. Alcohol & Alcoholism 42, 11-18.
  24. Molander, A. and Söderpalm, B. (2005). Accumbal strychnine-sensitive glycine receptors: access point for ethanol to the brain reward system. Alcoholism: Clinical and Experimental Research 29, 27-37.
  25. Olsen, R. W. et al. (2010). GABAA receptor subtypes: the ‘one glass of wine’ receptors. Alcohol 41, 201-209.
  26. Pascual, M. et al. (2009). Repeated alcohol administration during adolescence causes changes in the mesolimbic dopaminergic and glutamatergic systems and promotes alcohol intake in the adult rat. Journal of Neurochemistry 108, 920-931.
  27. Rassnick, S. et al. (1992). Oral ethanol self-administration in rats is reduced by the administration of dopamine and glutamate receptor antagonists into the nucleus accumbens. Psychopharmacology 109, 92-98.
  28. Rudolph, U. and Antkowiak, B. (2004). Molecular and neuronal substrates for general anaesthetics. Nature Reviews Neuroscience 5, 709-720.
  29. Schmahmann, J. D. (2004). Disorders of the cerebellum: ataxia, dysmetria of thought, and the cerebellar cognitive affective syndrome. Journal of Neuropsychiatry and Clinical Neuroscience 16:3.
  30. Stuber, G. D. et al. (2005). Rapid dopamine signalling in the nucleus accumbens during contingent and noncontingent cocaine administration. Neuropsychopharmacology 30, 853-863.
  31. Torben-Nielsen, B. and De Schutter, E. (2014). Context-aware modelling of neuronal morphologies. Frontiers in Neuroanatomy 8:92.
  32. Tatebayashi, H. et al. (1998). Alcohol modulation of single GABA(A) receptor-channel kinetics. Neuroreport 9, 1769-1775.
  33. Vengeliene, V. et al. (2010). Glycine transporter-1 blockade leads to persistently reduced relapse-like alcohol drinking in rats. Biological Psychiatry 68, 704-711.
  34. Volkow, N. D. et al. (2006). Low doses of alcohol substantially decrease glucose metabolism in the human brain. NeuroImage 29, 295-301.
  35. Williams, K. L. et al. (1995). Glycine enhances the central depressant properties of ethanol in mice. Pharmacology, Biochemistry and Behaviour 50, 199-205.
  36. Wu, J. et al. (2014). Neuronal nicotinic acetylcholine receptors are important targets for alcohol reward and dependence. Acta Pharmacologica Sinica 35, 311-315.


The mystery of tetrachromacy: If 12% of women have four cone types in their eyes, why do so few of them actually see more colours?

* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.

Tetra1

In these paintings, Australian artist Concetta Antico aims to capture her extraordinary visual experiences, which she describes as consisting of a mosaic of vibrant colours. In an interview with the BBC, Concetta reflected on the sight of a pebble pathway, which most people perceive as grey: ‘The little stones jump out at me with oranges, yellows, greens, blues and pinks’ (1).

In 2012, a genetic analysis confirmed that Concetta’s enhanced colour vision can be explained by a genetic quirk that causes her eyes to produce four types of cone cells, instead of the regular three which underpin colour vision in most humans. Four cones give Concetta the potential for what researchers call tetrachromacy (from Greek ‘tetra’ – four, and ‘khrōma’ – colour), instead of normal trichromatic colour vision (from Greek ‘tria‘– three). This means that her eyes can enjoy a diversity of colours that is about 100 times greater than what is accessible to the rest of us.

While tetrachromacy is so rare that it makes headlines every time a new case emerges, it might come as a surprise that women with four cone types in their retinas are actually more common than we think. Researchers estimate that they represent as much as 12% of the female population (4). So why aren’t we surrounded by women with extraordinary colour vision? Researchers have found that only a small fraction of women who possess an extra cone type actually get to enjoy more colours. So what does it take to be a true tetrachromat? How does the human retina come to produce four cone types, and why does it only concern women? More importantly, why don’t all women fulfil their genetic potential? And how do we find the special women who do?

The fourth cone – science fiction? 

The three cone types that most of us have in our retinas allow us to see millions of colours. Each cone’s membrane is packed with molecules, called opsins, which absorb lights of some wavelengths and cause the cone to send electrical signals to the brain. The opsin molecules vary between the three cone types, so that each type is sensitive to different wavelengths from the visible spectrum (3). Together, these cone cells allow the brain to identify the wavelengths of light that our eyes encounter – colour is experienced as a way of registering this information in our consciousness.

Trichro

Individuals who happen to be born with a fourth cone type containing a new light-absorbing opsin molecule technically have the potential to distinguish a greater number of wavelengths, and thus perceive more colours. So are these extra colours like something taken from a sci-fi movie?

So far, there are no documented cases of humans with a fourth cone that captures light beyond the wavelength range of 400-700 nanometres, which is the normal visible spectrum. Thus, the existence of four cones isn’t quite the epic sci-fi scenario in which the eye becomes a hybrid between a human and some other species, like a bee or snake that can see ultraviolet light (7,8). Instead, the most common cause of a fourth cone is a subtle change in DNA sequence (mutation), inherited in one of the already existing genes for the light-absorbing opsin molecules that fill either the M- or L- cones. The result is a human eye with slightly superhuman abilities within the visible spectrum.

The genetic origins of four-cone retinas

An extra cone might come about if a mutation in one of the opsin genes affects the physical structure of the resulting opsin molecule in a way which influences its sensitivity to light. This change can essentially create a new cone type, because cone cells which contain the altered molecule react differently to various wavelengths compared to cones which contain the original opsin made from the non-mutated gene.

Since the M- and L- cone opsin genes are located on the X-chromosome, only women could possibly enjoy the benefits of such a mutation. A male inherits only one X-chromosome. Thus, if the single X-chromosome he receives from his mother carries a change in the M-cone opsin gene, his retina will ultimately produce three cone types: normal S-cones with opsins from a gene on chromosome 7, and regular L-cones as well as abnormal M-cones containing mutated opsins from the same X-chromosome. This man would be classified as an anomalous trichromat since, like in most humans, his three cone types allow him to experience roughly the same number of colours, albeit slightly differently.

A woman, on the other hand, has the potential to produce four cone types because she inherits two X-chromosomes. So if one of them contains a mutated opsin gene, she will have one X-chromosome to provide the normal M- and L-cone opsins, and an additional chromosome to produce the mutated ‘new’ opsin. The illustration below provides some more detail.

Tetragenes

As mentioned, researchers estimate that women born with four cones are quite common, while the actual capacity to see more colours is exceptionally rare. So how do we objectively test whether women with four cones experience a greater range of colours? And once we identify those who indeed see more hues, how do we explain why some, but not others, can enjoy the genetic potential of tetrachromacy?
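For the genetically inclined, the inheritance arithmetic above can be spelled out in a few lines of Python (a deliberately oversimplified sketch: one relevant opsin gene per X-chromosome, and no recombination):

```python
import itertools

mother_x = ["X_mut", "X_norm"]   # carrier mother: one mutated X-chromosome
father_x = ["X_norm"]            # father passes his X to daughters only

# Sons receive one maternal X plus the paternal Y.
sons = [(x, "Y") for x in mother_x]
# Daughters receive one maternal X plus the paternal X.
daughters = list(itertools.product(mother_x, father_x))

print("Sons:", sons)            # half inherit X_mut -> anomalous trichromats
print("Daughters:", daughters)  # X_mut carriers keep a normal X too, giving
                                # them the raw material for four cone types
```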

Testing for tetrachromacy with different colours designed to seem identical to the rest of us

Researchers aiming to investigate how many women actually have superior colour vision first need to fish for potential tetrachromats in the massive human population. Since women with four cones have one mutated X-chromosome, they have a 50% chance of passing that X-chromosome to their sons. This makes them much more likely than other women to have sons who are anomalous trichromats, which I described earlier. Researchers use this when seeking candidates for tetrachromacy, as they advertise for female participants whose sons have colour vision anomalies (4). The next challenge is figuring out how to objectively measure these women’s visual abilities. Where do we even begin searching for hues that seem identical to us but might seem distinct to tetrachromats? This challenge is by no means trivial – if we were to test for tetrachromacy by asking women if they see differences between randomly selected colour mixtures, we’d have a ridiculously long experiment.

Conveniently enough, the anomalous trichromats born to these women provide a useful starting point. While they are poorer than most people at discriminating some colours that seem obviously different to us (which is why they are often considered ‘colour-deficient’), they can in fact distinguish some colours that we perceive as identical (2). Researchers assume that if a woman with four cones sees extra colours, they must be the same ones that her sons see, given that her retina possesses the same mutated cone type (although the mother also has a fourth cone type and thus avoids the impairment her sons have with some other colours).

The surprising existence of extra colours that are visible to anomalous trichromats means that we can test for tetrachromacy by asking women if they see differences between colours that appear identical to normal trichromats, but seem different to their sons. How do we design these colours? For starters, we can use valuable findings from scientific experiments.

In 1992, researchers used bits of human DNA to produce the S-, M- and L-cone opsins inside cells and study their reactions to lights of different wavelengths (5). This experiment showed that we can easily calculate the signal that each cone type will produce when stimulated with various wavelengths. As an example, let’s take the M-cone, shown below.

Tetra4

Knowing what we do about how different cones respond to various lights, we can design mixtures of wavelengths that would produce the exact same signals across the three cone types in the normal human eye, but not in the eye of an anomalous trichromat. These mixtures would seem identical to an individual with three regular cone types, but not to one with a mutated cone.  Here’s a scenario where a normal trichromat can’t see the difference between two physically distinct colours, while an anomalous trichromat can.

Let’s start with the normal trichromat:

Tetra5

The signals that the regular cones ultimately produce when stimulated with 590 nm light are exactly the same as for a mixture of 540 nm + 670 nm light! When the brain receives these identical signals, it has no way of distinguishing between the two types of light, and the trichromat perceives them as identical.

Now let’s look at an anomalous trichromat who has a mutated M-cone with a light sensitivity profile that, compared to the original M-cone, falls slightly closer to the regular L-cone.

Tetra6

Notice that the signals produced by these three cone types are quite different for 590 nm light and the mixture of 540 nm + 670 nm lights. This means that the anomalous trichromat’s brain can sense a distinction between these two types of light, and so the man himself can experience the difference in colour. As mentioned, this man’s mother has the same mutated M-cone in addition to three regular cones, making these types of colour mixtures ideal for testing if she can experience more colours.
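We can replay this logic numerically. In the sketch below (mine, not the researchers’), each cone’s sensitivity curve is approximated by a Gaussian with rough, made-up peaks and widths, and a cone’s ‘signal’ is simply its summed response to the wavelengths present:

```python
import numpy as np

def sensitivity(wavelength, peak, width=30.0):
    """Gaussian stand-in for a cone sensitivity curve (illustrative only;
    real human cone spectra have different shapes)."""
    return np.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

def cone_signals(lights, peaks):
    """Response of each cone to a list of (wavelength, intensity) pairs."""
    return np.array([sum(i * sensitivity(w, p) for w, i in lights)
                     for p in peaks])

normal = [440.0, 540.0, 565.0]                 # rough S, M, L peaks
target = cone_signals([(590.0, 1.0)], normal)  # pure 590 nm light

# Solve for 540 nm and 670 nm intensities that reproduce the M- and
# L-cone signals (the S-cone barely responds at these long wavelengths;
# the toy curves demand an unrealistically bright 670 nm component).
A = np.array([[sensitivity(540.0, 540.0), sensitivity(670.0, 540.0)],
              [sensitivity(540.0, 565.0), sensitivity(670.0, 565.0)]])
i540, i670 = np.linalg.solve(A, target[1:])
mixture = [(540.0, i540), (670.0, i670)]

print(np.round(target, 2))                         # e.g. [0.   0.25 0.71]
print(np.round(cone_signals(mixture, normal), 2))  # identical -> a metamer

# For an anomalous trichromat whose M-cone is shifted towards the L-cone,
# the same two lights no longer produce matching signals:
shifted = [440.0, 555.0, 565.0]
print(np.round(cone_signals([(590.0, 1.0)], shifted), 2))
print(np.round(cone_signals(mixture, shifted), 2))  # M-cone signal differs
```

The same pair of lights is indistinguishable to the ‘normal’ set of curves but clearly distinguishable to the shifted set – exactly the property the test colours exploit.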

This is exactly what researchers did in 2010 (4). They presented women with pairs of colour mixtures designed to appear identical to regular trichromats, but which their anomalous trichromat sons could distinguish. They were then asked to rate how similar these mixtures appeared on a scale of 1 to 10, and their answers were compared to those of normal trichromats’ mothers, who were unlikely to have four cones.

Here, the first signs appeared that four cones don’t automatically grant you superior colour vision. The mothers of regular trichromats and most mothers of anomalous trichromats behaved similarly in this experiment. The similarity ratings they gave to various pairs of colour mixtures on one occasion were not the same ones they gave when asked about the same pairs some other time. These women seemed to be giving pretty random responses, making it doubtful that any of them really saw differences between the colour mixtures. Genetic analyses confirmed that at least seven of the nine anomalous trichromats’ mothers did in fact have four distinct cone types in their retinas. And yet, their colour vision wasn’t any better than that of women with three cones. Quite the enigma.

Only one of the seven women with four cones behaved as if she actually perceived differences between the colour mixtures that were invisible to everyone apart from her sons. For any given pair of colour mixtures that she was asked to rate in terms of similarity, she gave the same number when asked on separate occasions. She clearly wasn’t just picking a random number every time, but seemed to actually see the colour differences. What makes her different from the other women with four cone types?

If having four cone types isn’t enough, what does it take to see more colours?

When it comes to genetic mutations, some are insignificant, as they produce molecules that differ only slightly, or not at all, from those made by non-mutated genes. Other mutations can have a dramatic effect on the structure of the protein that a gene goes on to produce.  With opsin genes, some mutations cause massive shifts in the light sensitivity of the resulting opsin molecule, while other mutations make a smaller difference.

The challenge for most women with four cones is that their extra cone is simply not different enough from an already existing cone type to be useful to the brain. Let’s look at two women with four cone types.

Candidates

The light sensitivity profile of the first woman’s extra cone overlaps heavily with the profile of the normal L-cone. So, when her retina is stimulated by lights of different wavelengths, the signals that the fourth cone sends to the brain don’t really differ from what the L-cone already provides. Remember – the only way cones allow us to see colours is by sending the brain different signals for different wavelengths. If cone signals remain the same for various wavelengths, how could the brain, and so the brain’s owner, possibly see a difference?  Unfortunately, this woman’s fourth cone is so similar to the L-cone that the visual system doesn’t even notice its existence.

On the other hand, the light sensitivity profile of the second woman’s extra cone is comfortably couched between the normal M- and L-cone profiles. This cone is different enough from the rest that when the retina is stimulated by lights of various wavelengths, all four cone types produce different signals. This fourth cone becomes useful for discriminating more wavelengths, and its owner might see 100 times more colours than the rest of us. This is exactly what researchers found with the only true tetrachromat they discovered in their experiment. Analyses of the opsin genes on her X-chromosomes revealed that the light sensitivity of her fourth cone type was ideally separated from the neighbouring M- and L-cones by a comfortable 12 nanometres (4)!  In most other candidates, the fourth cone was too similar to the closest existing cone, making it incapable of enhancing colour vision.
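Sticking with the toy Gaussian curves from the earlier sketch, it’s easy to see why peak separation is everything (the offsets below mirror the 12 nm figure, but the curves themselves are still invented):

```python
import numpy as np

def sensitivity(wavelength, peak, width=30.0):
    return np.exp(-((wavelength - peak) ** 2) / (2 * width ** 2))

wavelengths = np.arange(520.0, 621.0, 10.0)  # sample the green-red range

for offset in (2.0, 12.0):
    l_cone = sensitivity(wavelengths, 565.0)
    fourth = sensitivity(wavelengths, 565.0 - offset)
    # How differently do the two cones respond across these wavelengths?
    gap = float(np.max(np.abs(l_cone - fourth)))
    print(f"fourth cone {offset:>4} nm from L-cone -> max signal gap {gap:.2f}")

# A ~2 nm offset leaves the fourth cone merely echoing the L-cone;
# a ~12 nm offset gives the brain genuinely new signals to compare.
```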

Ultimately, experiments teach us that cones are necessary tools for seeing colour. But if one tool is no different from the next, the brain simply discards it and settles for what it has. Of the millions of women in the world whose eyes have four cone types, only a few will have won the ‘ideal’ mutation lottery that allows them to experience a seashore of colours like the tetrachromat artist Concetta Antico.

Tetra8

P.S. If you are interested in learning more about how regular trichromatic vision works, have a look at my previous article.

References

  1. BBC article: The women with superhuman vision. September, 2014.
  2. Bosten, J. M. et al. (2005). Multidimensional scaling reveals a color dimension unique to ‘color-deficient’ observers. Current Biology 15, R950.
  3. Hofer, H. et al. (2005). Organization of the human trichromatic cone mosaic. Journal of Neuroscience 25, 9669-9679.
  4. Jordan, G. et al. (2010). The dimensionality of color vision in carriers of anomalous trichromacy. Journal of Vision 10.
  5. Merbs, S. L. and Nathans, J. (1992). Absorption spectra of human cone pigments. Nature 356, 433-435.
  6. Ray, P. F. et al. (1997). XIST expression from the maternal X chromosome in human male preimplantation embryos at the blastocyst stage. Human Molecular Genetics 6, 1323-1327.
  7. Sillman, A. J. et al. (1999). The photoreceptors and visual pigments in the retina of a boid snake, the ball python (Python regius). Journal of Experimental Biology 202, 1931-1938.
  8. Townson, S. M. et al. (1998). Honeybee blue- and ultraviolet-sensitive opsins: cloning, heterologous expression in Drosophila, and physiological characterization. Journal of Neuroscience 18, 2412-2422.


How do our colour-blind cones achieve colour vision, and how does this explain the existence of three primaries?

* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.

Everywhere we go, our experience of colour is fundamentally made up of mixtures of three primary hues. A printer generates all the colours we see just by blending three coloured inks in varying quantities – these are the colours we see on anything from photographs and shampoo bottle labels, to book covers and juice boxes. Each patch of colour on a TV display or computer screen is actually a tiny cluster of three dots – blue, green, and red – which can produce all the colours visible to the human eye by varying the relative intensities of blue, green, and red light that they emit.

Image1

So why is it that specifically three primaries are so central to the way we see and create colour?  After all, primaries don’t exist in the natural world, and the number three has no particular meaning. Instead, all the objects and surfaces that surround us reflect lights of millions of different wavelengths, of which our eyes can only see a tiny fraction.

Image2

Why is this visible range so limited compared to the true physical diversity of light, and how does this relate to the apparent existence of three primaries?

The wavelengths of light that bounce off the objects we encounter can tell us something useful about those objects, so it’s a safe bet that the small spectrum of light visible to us is the one containing the most informative wavelengths. For example, as an apple ripens, and thus becomes more nutritious, it undergoes chemical changes that make its skin reflect more light of longer wavelengths (around 550-700 nanometres) than before.

If your eye has a way of telling apart the wavelengths of light reflected by fruit skins as they ripen, then you can visually experience the fruit’s transition from unripe to ripe as a difference in colour. This allows you to decide whether a fruit is edible without having to smell or taste it! Animals that benefit from this information (primarily fruit-eaters) have evolved visual systems dedicated to discriminating wavelengths precisely within this useful spectrum of light. Amongst them, the eyes of humans and many primates native to Africa and Asia identify different wavelengths using three types of specialised cone cells (shown below), which allow us to see hues ranging from blue to red.

[Image: the sensitivity curves of the three human cone types]

Knowing this, it might be tempting to assume that the existence of three primaries in our colour vision can be explained by each of our cones seeing a primary colour (blue, green, or red) and the brain somehow blending these signals into a colour experience. In truth, this isn’t quite accurate, because a single cone sees no colour at all. In fact, it’s completely colour-blind. Understanding how cones work, and how multiple cone types work together to overcome each other’s colour-blindness, is key to understanding why all the colours we see can be made with three primaries.

How cone cells work, and why they’re actually colour-blind

The retina, located at the back of the eye, is densely packed with millions of light-sensitive cells – known as rods and cones – which pick up light that enters the eye and begin the process of vision.

These cells are light-sensitive because their membranes are packed with molecules, called opsins, which absorb light as it hits the retina. This light absorption triggers a series of events inside the cell, which regulate how much neurochemical the cell releases to communicate with other cells in the retina, which in turn signal to the brain. This enables the brain to deduce how much light is stimulating the eye.

Importantly, the light-absorbing opsin molecules found in cones aren’t equally sensitive to all wavelengths of light in the visible spectrum. Their likelihood of absorbing a packet of light (or photon) and triggering a signal to the brain differs when we look at light of different wavelengths. As an example, let’s take the S-cone, which responds to visible light of the shortest wavelengths that are associated with indigo and violet colour experiences.

The image below shows a curve describing how the S-cone’s sensitivity changes depending on the wavelength of light reaching the retina. We can see that the cone is most sensitive to light with a wavelength of about 455 nanometres, which means that the opsin molecules in this cone are most likely to absorb light and produce a neurochemical signal to the brain when stimulated by this particular wavelength. The cone is less sensitive to other wavelengths: the further a light’s wavelength is from the cone’s preferred 455 nm, the lower the probability that the cone will absorb and react to a packet of this light. To activate the cone and trigger a signal to the brain, lights of these less preferred wavelengths need to be more intense (more packets of light need to be present).

[Image: the S-cone’s sensitivity curve across the visible spectrum]

But here the cone faces a serious dilemma. What if we first stimulate it with some 455 nm light, to which it is most sensitive, and then with twice as much 470 nm light, to which it is half as sensitive? The cone will produce the exact same signal in both cases. When the brain receives these identical signals, it has no way of working out which wavelength of light activated the cone, and so it will see the two lights as identical! This phenomenon is called metamerism.
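
To make the dilemma concrete, here is a minimal sketch in Python. The Gaussian sensitivity curve and its 30 nm width are illustrative assumptions rather than real S-cone data; the point is only that intensity can always compensate for sensitivity:

```python
import numpy as np

# Toy model of a single cone: a Gaussian stand-in for its sensitivity
# profile (an illustrative assumption, not real cone data).
def sensitivity(wavelength_nm, peak_nm=455.0, width_nm=30.0):
    return np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

# A cone's signal scales with both its sensitivity and the light's intensity.
def cone_signal(wavelength_nm, intensity, peak_nm=455.0):
    return intensity * sensitivity(wavelength_nm, peak_nm)

# One unit of the preferred 455 nm light...
signal_a = cone_signal(455.0, intensity=1.0)

# ...versus 470 nm light made just intense enough to compensate
# for the cone's lower sensitivity at that wavelength.
boost = 1.0 / sensitivity(470.0)
signal_b = cone_signal(470.0, intensity=boost)

print(signal_a, signal_b)  # 1.0 1.0 -- the cone cannot tell the lights apart
```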

No matter which cone cell we look at, hundreds of different wavelengths can emulate one another by tricking the cone into producing the same signal, simply by changing in intensity. And whenever a cone sends the brain a different signal, the brain has no way of knowing whether the change was caused by the light having a different wavelength or simply becoming more or less intense. A cone’s signal thus tells you nothing about wavelength – it is effectively colour-blind. Even a million cone cells would still be colour-blind if all of them had the same light sensitivity profile, because they would all respond to different wavelengths in exactly the same way, and thus be equally tricked by different lights. While the colour-blindness of cones might sound surprising, researchers have known for decades that the extremely rare individuals with only one cone type are usually completely colour-blind (5). An individual whose retina contains only M-cones (previously called green cones) wouldn’t actually see the world in shades of green, but in greyscale. This makes it difficult to say that a single cone in our retina represents a primary colour.

Why colour vision requires at least two cone types

If the retina contains two cone types, each sensitive to a different part of the visible spectrum, then the colour-blindness of each individual cone type becomes less problematic. This is because light of each wavelength causes the two cone types to produce different signals, as shown below.

[Image: sensitivity curves of two cone types covering different parts of the visible spectrum]

Now, if the eye encounters the two lights that tricked the first cone into producing an identical signal, the second cone produces different signals for them. By looking at how much stronger cone 1’s signal is than cone 2’s – that is, by taking the ratio of the two signals – we can easily tell the difference between the eye receiving one packet of 455 nm light and receiving twice as much 470 nm light.
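
Continuing the toy sketch from above (and assuming a second hypothetical cone peaking at 535 nm), the ratio of the two cones’ signals depends only on wavelength, because intensity scales both signals equally and cancels out:

```python
import numpy as np

def sensitivity(wavelength_nm, peak_nm, width_nm=30.0):
    return np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

def signal_ratio(wavelength_nm, intensity):
    s1 = intensity * sensitivity(wavelength_nm, 455.0)  # cone 1
    s2 = intensity * sensitivity(wavelength_nm, 535.0)  # cone 2
    return s1 / s2  # intensity appears in both signals, so it cancels

# The two lights that fooled the single cone now give different ratios.
print(signal_ratio(455.0, 1.0))    # ~35.0 for the dim 455 nm light
print(signal_ratio(470.0, 1.133))  # ~9.2 for the brighter 470 nm light
```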

In fact, this appears to be how the visual system identifies lights of different wavelengths, as the retina harbours specialised cells that calculate the ratios of signals produced by different cone types (1). For anyone interested in understanding this in more detail, the image below shows how such a ratio might be calculated using two hypothetical cone types (yellow and green).

[Image: how the ratio of two cone types’ signals might be computed by specialised retinal cells]

Since this ratio of cone signals is communicated to the brain, it is the main source of information the brain has for identifying the wavelengths of light stimulating the eye. This ability to discriminate different wavelengths allows the brain to produce a basic form of colour vision, which is impossible with just one cone type.

With two cone types, there is no single wavelength that can be mistaken for another. That’s great, but a two-cone system is still far from perfect: for any single wavelength, we can find several mixtures of two wavelengths that produce the same cone signal ratio as that single wavelength. The brain then cannot tell the difference between the single wavelength and the mixtures, so they all appear identical in colour, despite being physically different! Let’s look at an example below.

[Image: a single wavelength and a two-wavelength mixture producing the same signals in both cone types]
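
In the same toy model, such a confusable mixture can be constructed explicitly: we solve a small linear system for the intensities of two fixed wavelengths (440 nm and 490 nm here, chosen arbitrarily) so that both cones respond exactly as they would to a single 460 nm light:

```python
import numpy as np

def sensitivity(wavelength_nm, peak_nm, width_nm=30.0):
    return np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

cone_peaks = [455.0, 535.0]   # the two hypothetical cone types
mixture_wls = [440.0, 490.0]  # the two wavelengths we are allowed to mix
target_wl = 460.0             # the single wavelength to imitate

# Cone signals add across wavelengths, so matching the target is a
# 2x2 linear system: one equation per cone type.
A = np.array([[sensitivity(wl, p) for wl in mixture_wls] for p in cone_peaks])
target = np.array([sensitivity(target_wl, p) for p in cone_peaks])
a, b = np.linalg.solve(A, target)

print(a, b)                           # ~1.05 units of 440 nm, ~0.11 units of 490 nm
print(A @ np.array([a, b]), target)   # identical cone signals: a perfect metamer
```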

Thus, in any visual system with two cone types, the colour experience produced by light of any single wavelength can be precisely matched by several mixtures of two wavelengths. Such a visual system requires only two primary colours to reproduce all the hues it can see. Eyes with more cone types can distinguish a greater diversity of light mixtures, and are thus capable of experiencing a greater number of colours. Now let’s examine our own visual system, where the retina is a mosaic of millions of densely packed cone cells belonging to three types, each with a slightly different light sensitivity profile.

[Image: the human retinal cone mosaic and the sensitivity curves of the three cone types]

This three-cone system, known as trichromatic (from the Greek tria – three, and khrôma – colour), is at the root of our ability to experience millions of hues. It allows the brain to distinguish lights of single wavelengths and two-wavelength mixtures that would confuse a visual system possessing only two cone types. However, the reason for three primary colours becomes evident when we consider the fundamental limitation of trichromatic vision.

A mixture of just three wavelengths is enough to fool our brains into perceiving it as identical to a single wavelength

Just as a two-cone system perceives plenty of single wavelengths and two-wavelength mixtures as identical, the three-cone system is confused by mixtures of three wavelengths. For every wavelength of visible light, there are multiple three-wavelength mixtures that produce the exact same ratio of signals across our three cone types. As a consequence, the brain, which only receives information about these signal ratios, cannot tell apart the various lights that originally prompted the cones to respond. This means there is a vast range of physically distinct wavelengths, and mixtures of three or more wavelengths, that we perceive as identical in colour.

In light of this, the existence of three primary colours is ultimately rooted in our brain’s inability to discriminate light mixtures more complex than three wavelengths – mixing three colours is enough to reproduce every visible hue. This is exactly what colour printers and computer screens exploit, generating colours by blending three coloured inks or lights. More primaries are unnecessary, since our retinas couldn’t pick up the difference anyway.
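
The same logic can be pushed all the way to a screen-like demonstration. In the toy model below (cone peaks and primary wavelengths are again illustrative choices, not real human values), a broadband spectrum containing hundreds of wavelengths is matched exactly, as far as the three cone signals are concerned, by just three narrow primaries:

```python
import numpy as np

def sensitivity(wavelength_nm, peak_nm, width_nm=30.0):
    return np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

wavelengths = np.arange(400.0, 701.0, 1.0)  # a dense sweep of the visible range
cone_peaks = [455.0, 535.0, 565.0]          # three hypothetical cone types

def cone_responses(spectrum):
    """Each cone's total signal: its sensitivity summed over the whole spectrum."""
    return np.array([np.sum(spectrum * sensitivity(wavelengths, p))
                     for p in cone_peaks])

# Target: a flat, broadband 'white-ish' light containing 301 wavelengths.
target = cone_responses(np.ones_like(wavelengths))

# Three narrow primaries, roughly 'blue', 'green', and 'red'.
primaries = [460.0, 530.0, 600.0]
A = np.array([[sensitivity(p_wl, peak) for p_wl in primaries]
              for peak in cone_peaks])
intensities = np.linalg.solve(A, target)

print(intensities)               # three positive primary intensities
print(A @ intensities - target)  # ~zero: the cones, and hence the brain,
                                 # cannot tell the mixture from the original
```

Three numbers suffice because the brain only ever receives three cone signals; any spectrum, however complex, is collapsed into that three-dimensional summary.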

It is important to keep in mind that the three primaries rule only the world of colour that we humans create for each other when we paint, print, and display on screens. In the natural world, surfaces reflect an enormous variety of light mixtures composed of numerous wavelengths that we could not even begin to imagine perceiving. While a trichromat might see the bark of the tree below as subtle variations of beige and pink, rare individuals with four cone types in their retinas (such as Concetta Antico, the artist who painted this image) can distinguish patches of bark that reflect distinct mixtures of multiple wavelengths. To such eyes, tree bark comes alive as a flickering blend of orange, green, brown, and violet patches that the rest of us simply cannot see. The colour world of these humans is composed of four primaries, and in my next article, I examine the fascinating phenomenon of the four-cone observer.

[Image: tree bark painted by tetrachromat artist Concetta Antico]

References

  1. Diller, L. et al. (2004). L and M cone contributions to the midget and parasol ganglion cell receptive fields of macaque monkey retina. Journal of Neuroscience 24, 1079-1088.
  2. Dudley, R. (2004). Ethanol, fruit ripening, and the historical origins of human alcoholism in primate frugivory. Integrative and Comparative Biology 44, 315-323.
  3. Hofer, H. et al. (2005). Organization of the human trichromatic cone mosaic. Journal of Neuroscience 25, 9669-9679.
  4. Regan, B. C. et al. (2001). Fruits, foliage and the evolution of primate colour vision. Philosophical Transactions of the Royal Society of London B 356, 229-283.
  5. Weale, R. A. (1953). Cone-monochromatism. Journal of Physiology 121, 548-569.

Is serotonin the happy brain chemical, and do depressed people just have too little of it?

* Scientific study references are indicated in (). These can be found at the bottom of the article, and can be clicked on to access the original reports.

Serotonin is somewhat of a media darling, often nicknamed ‘the happy brain chemical’. Articles such as ‘Hacking into your happy chemicals’ and ‘Chemicals that activate happiness, and how to gamify them’ (1,2) offer tips on ways to boost happiness by increasing serotonin levels, using anything from yoga to bananas.

This line of reasoning tends to underpin the claim that individuals with depression have a ‘chemical imbalance’ in their brains – a serotonin deficiency – and that their mood can get back on track with drugs that increase serotonin levels. While this story is quite convenient, it simply does not make sense.

Serotonin is fundamentally very much like the other 100+ neurochemicals that are found in the nervous system. It is a molecule that brain cells, or neurons, release in order to communicate with other cells by stimulating their receptor molecules. The ultimate purpose of any neurochemical signal is to carry some form of information. For instance, the amount of neurochemical released by photoreceptor cells in the retinas of our eyes tells the brain something about the lighting in our environment. In light of this, we might question the purpose of having a generic ‘feel-good chemical’ such as serotonin. Of course, you could argue that a neurochemical that makes us feel good when we have done something beneficial for survival is an extremely useful teaching signal. This point is valid enough, but there is no evidence that serotonin directly underpins happiness.

How we know that serotonin doesn’t make you happy

Many studies have explored whether rapidly lowering serotonin levels makes healthy individuals feel less happy, and they have consistently found that it does not. One common method used for this purpose is acute tryptophan depletion (ATD), which temporarily lowers brain serotonin levels by 50-90% over the course of several hours. Participants in ATD studies are given a drink containing a variety of essential amino acids, but no tryptophan – the amino acid that the central nervous system uses to produce serotonin. The consumed amino acids compete with the relatively fewer tryptophan molecules for access to the brain from the bloodstream, and naturally win. As a result, serotonin synthesis goes down and levels of the neurochemical drop dramatically within 5-7 hours. Using this method, researchers find that healthy individuals deprived of serotonin don’t actually report feeling any less happy (9,20).

We also have no evidence that increasing brain serotonin levels actually improves people’s mood. Researchers have studied the effects of giving healthy individuals single doses of common anti-depressants, such as citalopram. This drug works by blocking the serotonin transporter – a tiny pump that rapidly takes serotonin back into the cell that originally released it, shutting off the neurochemical signal. Ultimately, citalopram and similar drugs, called selective serotonin reuptake inhibitors (SSRIs), enable serotonin to float around in synapses (the spaces between neurons) for longer periods of time.

One study using citalopram found that temporarily raising individuals’ serotonin levels did not actually increase their reported level of happiness (11). The evidence that anti-depressants don’t automatically make people feel happier might sound surprising – after all, that is on some level what we expect these drugs to do. But it ties into a well-known clinical problem: most depressed individuals taking SSRIs don’t actually feel better until roughly six weeks after beginning treatment, and around 40% don’t improve at all (5). In essence, lower serotonin levels don’t necessarily make you sadder, and medications that increase serotonin levels don’t necessarily provide a ‘happy fix’. So what is serotonin doing in the brain, if not making you feel good?

An alternative view: serotonin regulates learning

Currently, we have solid evidence that serotonin broadly regulates how the brain learns about the environment. For example, researchers have found that serotonin may be responsible for enabling animals that have lost one sense (say, through blindness) to become more sensitive to information coming from another sense.

One brain region where this serotonin-driven effect has been found is the whisker cortex of rodents. The cortex is the folded external structure of the brain, composed of several layers of tightly packed cells, shown in the image below in a darker purple.

[Image: mammalian brains with the cortex shown in darker purple]

The function of a particular bit of cortex depends on where its cells get their information. For example, neurons in the cortical region at the back of our heads receive signals triggered by light activating cells in our retinas – thus, this cortex processes, and allows us to experience, visual information. As the name suggests, the whisker cortex of the rodent brain is a region that receives touch information from the whiskers as the animal explores its environment.

One study published several years ago found that when juvenile rats spent most of their time in the dark (where vision was of little use), their whisker cortex became more sensitive to touch information coming from the whiskers than when these rats also had plenty of visual information (13). In the dark, serotonin-producing neurons released more of the neurochemical onto cells in the whisker cortex, which ultimately made them more easily excited when information arrived from nerves attached to the whiskers. How did serotonin achieve this?

Let’s look closely at a synapse – the connecting space between neurons – in this region. In the picture below, we have one neuron arriving from the whiskers and another receiving and passing on its information. Receiving cells typically contain a variety of receptors that respond to the neurochemicals released by communicating neurons. Most commonly, their membranes are peppered with receptors for the neurotransmitter glutamate (marked in yellow and brown in the image below), which excites cells and brings them closer to passing on the signals they receive. They also house particular serotonin receptors (marked in red), which receive signals from serotonin-releasing neurons that find their way into the whisker cortex. When the rats lived in the dark, strong stimulation of these serotonin receptors activated diverse molecules inside the cells, which transported and inserted additional glutamate receptors into their membranes. Remember that glutamate is a neurochemical that excites receiving cells. The result of this mass insertion of glutamate receptors was that, in the future, the changed neurons became more sensitive to stimulation of the whiskers!

[Image: a whisker-cortex synapse, with glutamate receptors in yellow and brown and serotonin receptors in red]

Essentially, serotonin encouraged the whisker cortex to learn that it was better for the rats to focus more strongly on touch when visual information was unavailable to help them explore their surroundings. This effect is quite unrelated to happiness, and reveals that serotonin has a much wider role in enabling brain cells to learn from the environment.

Serotonin might actually be helping us learn to be anxious 

One brain structure where this kind of learning bears on depression and anxiety disorders is the amygdala – a collection of nuclei (cell clusters) buried deep in the brain. Here, serotonin plays its role in a fascinating way.

Researchers consider the amygdala to be the primary storage site for memories of negative and dangerous events, since both animals and humans with amygdala damage show remarkable difficulty learning that particular stimuli are associated with upsetting consequences, such as electric shocks, loud noises, or bad social encounters (10,14). We can thus view anxiousness, which is linked to amygdala activity, as a useful emotional response that encourages us to avoid situations we have learnt to be potentially threatening. Indeed, patients with amygdala damage generally fail to experience anxiety or discomfort about things that humans normally find uncomfortable or threatening – spiders, snakes, or confrontational situations. One patient in a case study published several years ago looked back at an experience of being held at knifepoint, and described feeling essentially no discomfort at the time (10).

Serotonin is known to have a powerful effect on fear learning in the amygdala. Decades ago, researchers observed that when rodents learn that particular sounds predict unpleasant outcomes, certain amygdalar neurons begin responding more strongly to signals arriving from the auditory system (18). In essence, these animals’ experiences teach them that auditory information can predict whether something threatening might occur, and amygdalar neurons become sensitive to such information when it arrives. Serotonin is known to be partially responsible for the physiological changes that underpin this learning effect (8,19), which means it also influences how well the animal itself learns to expect and avoid the bad outcome.

Indeed, when it comes to the anxiety and avoidance that animals show once they have learnt to fear something, the impact of serotonin is quite profound. Studies have found that increasing brain serotonin levels in rats with single doses of SSRI anti-depressants, such as citalopram or fluoxetine (aka Prozac), actually increases how intensely the animals express their fear (4). This doesn’t fit with serotonin’s reputation as the happy neurochemical; instead, it suggests that serotonin teaches us to feel anxious in bad situations.

Depressed and anxious people might simply be too good at learning about bad outcomes

Over the past few decades, researchers have been debating whether mental health issues such as depression and anxiety might fundamentally be disorders of learning, rather than outcomes of a ‘chemical imbalance’ that requires correction by a serotonin boost. Specifically, certain individuals with atypical serotonin system function (which might be caused by genetic factors or stressful lives) may be at risk of developing depression or anxiety because they are too good at learning about negative outcomes, and are thus more likely to conclude that the world is a bad place when they experience negative life events. One of the most talked-about studies in the psychiatric literature supports this possibility (6). It found that individuals with a particular genotype affecting the serotonin system were more likely than others to develop depression or anxiety only if they had experienced stressful life events, such as childhood abuse, unemployment, or the loss of a loved one. Clearly, having an atypical serotonin system alone wasn’t enough – it had to be combined with negative experiences.

This perspective might explain why treatment with SSRI anti-depressants doesn’t tend to increase happiness until weeks after depressed patients begin taking medication. Serotonin changes produced by the drug, combined with appropriate therapy, may allow patients to learn that the world is not such a bad place, rather than simply making them happy. One route would be to make negative emotional events less effective at exciting emotional brain regions such as the amygdala, so that individuals learn less from, and gradually take less notice of, such events. Learning of this kind takes time.

Similarly, the learning perspective on serotonin might go some way towards explaining why some depressed patients do not improve at all under SSRI treatment. Taking the pills in the absence of improved life conditions or appropriate therapy might in fact make it easier for already depressed individuals to ‘learn’ more about the negative events surrounding them, and consequently feel worse.

Finally, we are all still wondering – are the learning effects that likely contribute to depression and anxiety produced by low serotonin levels after all? Is there any truth to the serotonin deficit hypothesis? Alas, the evidence is unclear. Many studies indicate that depressed patients have lowered serotonin synthesis in their brains, so in this respect there does seem to be reason to believe that such individuals have serotonin deficiencies. However, it has also been found that single doses of SSRI anti-depressants increase tissue serotonin levels primarily in the short term, while chronic treatment (>21 days) either has no effect on serotonin levels or may actually reduce them by 30-40% in multiple brain regions (12,16). Given that improvements in depressive or anxious symptoms tend to appear weeks after an individual commences drug treatment, we appear to be treating the supposedly serotonin-deprived depressive state by lowering serotonin levels. Of course, this doesn’t quite make sense – something researchers are currently well aware of. It goes to show that we are still far from solving the mystery of serotonin. But we can say with some certainty that serotonin is not simply the happy neurochemical.

References

  1. http://technologyadvice.com/podcast/blog/activate-chemicals-gamify-happiness-nicole-lazzaro/
  2. http://theutopianlife.com/2014/10/14/hacking-into-your-happy-chemicals-dopamine-serotonin-endorphins-oxytocin/
  3. Albright, M. J. et al. (2007). Increased thalamocortical synaptic response and decreased layer IV innervation in GAP-43 knockout mice. Journal of Neurophysiology 98, 1610-1625.
  4. Burghardt, N. S. et al. (2007). Acute selective serotonin reuptake inhibitors increase conditioned fear expression: blockade with a 5-HT(2C) receptor antagonist. Biological Psychiatry 62, 1111-1118.
  5. Carvalho, A. F. et al. (2007). Augmentation strategies for treatment-resistant depression: A literature review. Journal of Clinical Pharmacy and Therapeutics 32, 415-428.
  6. Caspi, A. et al. (2003). Influence of life stress on depression: moderation by a polymorphism in the 5-HTT gene. Science 301, 386-389.
  7. Castren, E. (2005). Is mood chemistry? Nature Reviews Neuroscience 6, 241-246.
  8. Chen, A. et al. (2003). Serotonin type II receptor activation facilitates synaptic plasticity via N-methyl-D-aspartate-mediated mechanism in the rat basolateral amygdala. Neuroscience 119, 53-63.
  9. Evers, E. A. et al. (2005). Effects of a novel method of acute tryptophan depletion on plasma tryptophan and cognitive performance in healthy volunteers. Psychopharmacology 178, 92-99.
  10. Feinstein, J. S. et al. (2011). The human amygdala and the induction and experience of fear. Current Biology 21, 34-38.
  11. Harmer, C. J. et al. (2003). Acute SSRI administration affects the processing of social cues in healthy volunteers. Neuropsychopharmacology 28, 148-152.
  12. Muck-Seler, D. et al. (1996). Influence of fluoxetine on regional serotonin synthesis in the rat brain. Journal of Neurochemistry 67, 2434-2442.
  13. Jitsuki, S. et al. (2011). Serotonin mediates cross-modal reorganization of cortical circuits. Neuron 69, 780-792.
  14. Kazama, A. M. et al. (2012). Effects of neonatal amygdala lesions on fear learning, conditioned inhibition, and extinction in adult macaques. Behavioral Neuroscience 126, 392-403.
  15. Lesch, K. and Waider, J. (2012). Serotonin in the modulation of neural plasticity and networks: implications for neurodevelopmental disorders. Neuron 76, 175-191.
  16. Nakayama, K. et al. (2003). Possible alteration of tryptophan metabolism following repeated administration of sertraline in the rat brain. Brain Research Bulletin 59, 293-297.
  17. Rakic, P. (2009). Evolution of the neocortex: a perspective from developmental biology. Nature Reviews Neuroscience 10, 724-735.
  18. Rogan, M. T. et al. (1997). Fear conditioning induces associative long-term potentiation in the amygdala. Nature 390, 604-607.
  19. Tran, L. et al. (2013). Depletion of serotonin in the basolateral amygdala elevates glutamate receptors and facilitates fear-potentiated startle. Translational Psychiatry 3, e298.
  20. Van der Veen, F. M. et al. (2007). Effects of acute tryptophan depletion on mood and facial emotion perception related brain activation and performance in healthy women with and without a family history of depression. Neuropsychopharmacology 32, 216-224.