This is the first fortnightly column I’ll be writing for The Conversation, a creative commons news and opinion website that launched today. The site has been set up by a number of UK universities and bodies such as the Wellcome Trust, Nuffield Foundation and HEFCE, following the successful model of the Australian version of the site. Their plan is to unlock the massive amount of expertise held by UK academics and inject it into the public discourse. My plan is to give some critical commentary on headlines from the week's news which focus on neuroscience and psychology. If you've any headlines you'd like critiquing, let me know!
A picture of a large pair of eyes triggers feelings of surveillance in potential thieves, making them less likely to break the rules.

What they actually did
Researchers put signs with a large pair of eyes and the message “Cycle thieves: we are watching you” by the bike racks at Newcastle University.
They then monitored bike thefts for two years and found a 62% drop in thefts at locations with the signs. There was a 65% rise in thefts from locations on campus without signs.

How plausible is it?
A bunch of studies have previously shown that subtle cues suggesting surveillance can alter moral behaviour. The classic example is the amount church-goers contribute to the collection dish.
This research fits within the broad category of findings which show our decisions can be influenced by aspects of our environment, even those which shouldn’t logically affect them.
The signs are being trialled by Transport for London, and are a good example of the behavioural “nudges” promoted by the Cabinet Office’s (newly privatised) Behavioural Insights Team. Policy makers love these kinds of interventions because they are cheap. They aren’t necessarily the most effective way to change behaviour, but they have a neatness and “light touch” which means we’re going to keep hearing about this kind of policy.

Tom’s take
The problem with this study is that the control condition was not having any sign above bike racks – so we don’t know what it was about the anti-theft sign that had an effect. It could have been the eyes, or it could have been the message “we are watching you”. Previous research, cited in the study, suggests both elements have an effect.
The effect is clearly strong across locations, but it doesn’t seem to persist over time. Thieves simply moved their thefts to nearby locations without signs – suggesting that any feeling of being watched didn’t linger. We should be careful about assuming that anything was working via the unconscious or irrational part of the mind.
If I were a bike thief and someone was kind enough to warn me that some bikes were being watched, and (by implication) others weren’t, I would rationally choose to do my thieving from an unwatched location.
Another plausible interpretation is that bike owners who were more conscious about security left their bikes at the signed locations. Such owners might have better locks and other security measures. Careless bike owners would ignore the signs, and so be more likely to park at unsigned locations and subsequently have their bikes nicked.

Read more
Nettle, D., Nott, K., & Bateson, M. (2012) “Cycle Thieves, We Are Watching You”: Impact of a Simple Signage Intervention against Bicycle Theft. PLoS ONE, 7(12), e51738.
Tom Stafford does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
The Boston Globe has a short but fascinating interview on the history of swearing where author Melissa Mohr describes how the meaning of the act of swearing has changed over time.
IDEAS: Are there other old curses that 21st-century people would be surprised to hear about?
MOHR: Because [bad words] were mostly religious in the Middle Ages, any part of God’s body you could curse with. God’s bones, nails, wounds, precious heart, passion, God’s death—that was supposedly one of Queen Elizabeth I’s favorite oaths.
IDEAS: Have religious curses like that lost their power as the culture becomes increasingly secular?
MOHR: We still use them a lot, but we just don’t think of them as bad words. They’re very mild. If you look at lists of the top 25 swear words, I think “Jesus Christ” often makes it in at number 23 or something….The top bad words slots are all occupied by the racial slurs or obscene—sexually or excrementally—words…
IDEAS: Are blasphemy, sexuality, and excrement the main themes all over the world?
MOHR: As far as I know, they’re mostly the same with a little bit of regional variation. In Arab and Spanish-speaking Catholic countries, there’s a lot of stuff about mothers and sisters. But it’s pretty much the same.
Interestingly, there is good evidence that swear words are handled differently by the brain than non-swear words.
In global aphasia, a form of almost total language impairment normally caused by brain damage to the left hemisphere, affected people can still usually swear despite being unable to say any other words.
Author Melissa Mohr has just written a book called Holy Sh*t: A Brief History of Swearing which presumably has plenty more for swearing fans.
Here’s my BBC Future column from last week. It’s about the so-called Autonomous Sensory Meridian Response, which didn’t have a name until 2010 and I’d never heard of until 2012. Now, I’m finding out that it is surprisingly common. The original is here.
It’s a tightening at the back of the throat, or a tingling around your scalp, a chill that comes over you when you pay close attention to something, such as a person whispering instructions. It’s called the autonomous sensory meridian response, and until 2010 it didn’t exist.
I first heard about the autonomous sensory meridian response (ASMR) from British journalist Rhodri Marsden. He had become mesmerised by intentionally boring videos he found on YouTube, things like people explaining how to fold towels, running hair dryers or role-playing interactions with dentists. Millions of people were watching the videos, reportedly for the pleasurable sensations they generated.
Rhodri asked my opinion as a psychologist. Could this be a real thing? “Sure,” I said. If people say they feel it, it has to be real – in some form or another. The question is what kind of real is it? Are all these people experiencing the same thing? Is it learnt, or something we are born with? How common is it? Those are the kind of questions we’d ask as psychologists. But perhaps the most interesting thing about the ASMR is what happened to it before psychologists put their minds to it.
Presumably the feeling has existed for all of human history. Each person discovered the experience, treasured it or ignored it, and kept the feeling to themselves. That there wasn’t a name for it until 2010 suggests that most people who had this feeling hadn’t talked about it. It’s amazing that it got this far without getting a name. In scientific terms, it didn’t exist.
But then, of course, along came the 21st Century and, like they say, even if you’re one in a million there’s thousands of you on the internet. Now there’s websites, discussion forums, even a Wikipedia page. And a name. In fact, many names – “Attention Induced Euphoria”, “braingasm”, or “the unnamed feeling” are all competing labels that haven’t caught on in the same way as ASMR.
This points to something curious about the way we create knowledge, illustrated by a wonderful story about the scientific history of meteorites. Rocks falling from the sky were considered myths in Europe for centuries, even though stories of their fiery trails across the sky, and actual rocks, were widely, if irregularly, reported. The problem was that the kind of people who saw meteorites and subsequently collected them tended to be the kind of people who worked outdoors – that is, farmers and other country folk. You can imagine the scholarly minds of the Renaissance didn’t weigh their testimonies too heavily. Then in 1794 a meteorite shower fell on the town of Siena in Italy. Not only was Siena a town, it was a town with a university. The testimony of the townsfolk, including well-to-do church ministers and tourists, was impossible to deny, and the reports were written up in scholarly publications. Siena played a crucial part in the process of myth becoming fact.
Where early science required authorities and written evidence to turn myth into fact, ASMR shows that something more democratic can achieve the same result. Discussion among ordinary people on the internet provided validation that the unnamed feeling was a shared one. Suddenly many individuals who might have thought of themselves as unusual were able to recognise that they were a single group, with a common experience.
There is a blind spot in psychology for individual differences. ASMR has some similarities with synaesthesia (the merging of the senses where colours can have tastes, for example, or sounds produce visual effects). Both are extremes of normal sensation, which exist for some individuals but not others. For many years synaesthesia was a scientific backwater, a condition viewed as unproductive to research, perhaps just the product of people’s imagination rather than a real sensory phenomenon. This changed when techniques were developed that precisely measured the effects of synaesthesia, demonstrating that it was far more than people’s imagination. Now it has its own research community, with conferences and papers in scientific journals.
Perhaps ASMR will go the same way. Some people are certainly pushing for research into it. As far as I know there are no systematic scientific studies on ASMR. Since I was quoted in that newspaper article, I’ve been contacted regularly by people interested in the condition and wanting to know about research into it. When people hear that their unnamed feeling has a name they are drawn to find out more, they want to know the reality of the feeling, and to connect with others who have it. Something common to all of us wants to validate our inner experience by having it recognised by other people, and in particular by the authority of science. I can’t help – almost all I know about ASMR is in this column you are reading now. For now all we have is a name, but that’s progress.
I’ve got an article in today’s Observer about how disaster response mental health services are often based on the erroneous assumption that everyone needs ‘treatment’ and often rely on single-session counselling, which may do more harm than good.
Unfortunately, the article has been given a rather misleading headline (‘Minds traumatised by disaster heal themselves without therapy’) which suggests that mental health services are not needed. This is not the case and this is not what the article says.
What it does say is that the common idea of disaster response is that everyone affected by the tragedy will need help from mental health professionals when only a minority will.
It also says that aid agencies often use single-session counselling, which has been found to raise the risk of long-term mental health problems. This stems from an understandable desire to ‘do something’ but this motivation is not enough to actually help.
Disaster, war, violence and conflict raise the number of mental health problems in the affected population. The appropriate response is to build or enhance high-quality, long-term, culturally relevant mental health services – not to parachute in counsellors to run one-off sessions.
Link to article on disaster response psychology in The Observer.
Quick links from the past week in mind and brain news:
I can’t recognise my own face! In my case, it’s because the Botox has worn off but for the person described in the New Scientist article it’s because of prosopagnosia.
The Guardian reports that the UK Government’s ‘Nudge Unit’ is set to become a commercial service. Nudge mercenaries!
A greater use of “I” and “me” as a mark of interpersonal distress. An interesting study covered by the BPS Research Digest.
Pacific Standard has an interesting piece about gun registers, felons and interrupting the contagion of gun violence.
Brain Voodoo Goes Electric. The mighty Neuroskeptic on how a previously common flaw in fMRI brain imaging research may also apply to EEG and MEG ‘brain wave’ studies.
A Médecins Sans Frontières psychologist writes about her work in the Syrian armed conflict.
The latest social priming evidence and replication story at Nature causes all sorts of academic acrimony. The fun’s in the comments section.
Slate asks Is Psychiatry Dishonest? And if so, is it a noble lie?
With all the ‘everyone will be traumatised and needs to see a psychologist’ nonsense to hit the media after the Boston bombing, this interview with Boston psychiatry prof Terence Keane gets it perfectly. Recommended.
The two ‘All in the Mind’ programmes have each just started a new series.
BBC Radio 4′s All in the Mind has just started a new series with the first programme including end-of-the-world hopefuls and psychologist and journalist Christian Jarrett.
ABC Radio National’s All in the Mind new series has also just begun – kicking off with a programme on the social brain.
BBC Radio 4′s brilliant online sociology series The Digital Human started a new series a few weeks ago.
The latest Nature NeuroPod just hit the wires a few days ago.
The Neuroscientists Talk Shop podcast is technical but ace and has a big back catalogue.
Any mind and brain podcasts you’re into at the moment? Add them in the comments.
In a potentially seismic move, the National Institute of Mental Health – the world’s biggest mental health research funder – has announced, only two weeks before the launch of the DSM-5 diagnostic manual, that it will be “re-orienting its research away from DSM categories”.
In the announcement, NIMH Director Thomas Insel says the DSM lacks validity and that “patients with mental disorders deserve better”.
This is something that will make very uncomfortable reading for the American Psychiatric Association as they trumpet what they claim is the ‘future of psychiatric diagnosis’ only two weeks before it hits the shelves.
As a result the NIMH will now be preferentially funding research that does not stick to DSM categories:
Going forward, we will be supporting research projects that look across current categories – or sub-divide current categories – to begin to develop a better system. What does this mean for applicants? Clinical trials might study all patients in a mood clinic rather than those meeting strict major depressive disorder criteria. Studies of biomarkers for “depression” might begin by looking across many disorders with anhedonia or emotional appraisal bias or psychomotor retardation to understand the circuitry underlying these symptoms. What does this mean for patients? We are committed to new and better treatments, but we feel this will only happen by developing a more precise diagnostic system.
As an alternative approach, Insel suggests the Research Domain Criteria (RDoC) project, which aims to uncover what it sees as the ‘component parts’ of psychological dysregulation by understanding difficulties in terms of cognitive, neural and genetic differences.
For example, difficulties with regulating the arousal system might be equally as involved in generating anxiety in PTSD as generating manic states in bipolar disorder.
Of course, this ‘component part’ approach is already a large part of mental health research but the RDoC project aims to combine this into a system that allows these to be mapped out and integrated.
It’s worth saying that this won’t be changing how psychiatrists treat their patients any time soon. DSM-style disorders will still be the order of the day, not least because a great deal of the evidence for the effectiveness of medication is based on giving people standard diagnoses.
It is also true that RDoC is currently little more than a plan – a bit like a Mars mission: you can see how it would be feasible but actually getting there seems a long way off. In fact, until now, the RDoC project has largely been considered an experimental exercise in thinking up alternative approaches.
The project was partly thought to be radical because it has many similarities to the approach taken by scientific critics of mainstream psychiatry who have argued for a symptom-based approach to understanding mental health difficulties that has often been rejected by the ‘diagnoses represent distinct diseases’ camp.
The NIMH has often been one of the most staunch supporters of the latter view, so the fact that it has put the RDoC front and centre is not only a slap in the face for the American Psychiatric Association and the DSM, it also heralds a massive change in how we might think of mental disorders in decades to come.
Link to NIMH announcement ‘Transforming Diagnosis’.
Last week’s column for BBC Future describes a neat social psychology experiment from an unlikely source. Three evolutionary psychologists reasoned that claims that we automatically categorise people by ethnicity must be wrong. Here’s how they set out to prove it. The original column is here.
For years, psychologists thought we instantly label each other by ethnicity. But one intriguing study proposes this is far from inevitable, with obvious implications for tackling racism.
When we meet someone we tend to label them in certain ways. “Tall guy” you might think, or “Ugly kid”. Lots of work in social psychology suggests that there are some categorisations that spring faster to mind. So fast, in fact, that they can be automatic. Sex is an example: we tend to notice if someone is a man or a woman, and remember that fact, without any deliberate effort. Age is another example. You can see this in the way people talk about others. If you said you went to a party and met someone, most people wouldn’t let you continue with your story until you said if it was a man or a woman, and there’s a good chance they’d also want to know how old they were too.
Unfortunately, a swathe of evidence from the 1980s and 1990s also seemed to suggest that race is an automatic categorisation, in that people effortlessly and rapidly identified and remembered which ethnic group an individual appeared to belong to. “Unfortunate”, because if perceiving race is automatic then it lays a foundation for racism, and appears to put a limit on efforts to educate people to be “colourblind”, or put aside prejudices in other ways.
Over a decade of research failed to uncover experimental conditions that could prevent people instinctively categorising by race, until a trio of evolutionary psychologists came along with a very different take on the subject. Now, it seems only fair to say that evolutionary psychologists have a mixed reputation among psychologists. As a flavour of psychology it has been associated with political opinions that tend towards the conservative. Often, scientific racists claim to base their views on some jumbled version of evolutionary psychology (scientific racism is racism dressed up as science, not racism based on science, in case you wondered). So it was a delightful surprise when researchers from one of the world centres for evolutionary psychology intervened in the debate on social categorisation, by conducting an experiment they claimed showed that labelling people by race was far less automatic and inevitable than all previous research seemed to show.
The research used something called a “memory confusion protocol”. This works by asking experiment participants to remember a series of pictures of individuals, who vary along various dimensions – for example, some have black hair and some blond, some are men, some women, etc. When participants’ memories are tested, the errors they make reveal something about how they judged the pictures of individuals – what sticks in their mind most and least. If a participant more often confuses a black-haired man with a blond-haired man, it suggests that the category of hair colour is less important than the category of gender (and similarly, if people rarely confuse a man for a woman, that also shows that gender is the stronger category).
Using this protocol, the researchers tested the strength of categorisation by race, something all previous efforts had shown was automatic. The twist they added was to throw in another powerful psychological force – group membership. People had to remember individuals who wore either yellow or grey basketball shirts, and whose pictures were presented alongside statements indicating which team they were in. Without the shirts, the pattern of errors were clear: participants automatically categorised the individuals by their race (in this case: African American or Euro American). But with the coloured shirts, this automatic categorisation didn’t happen: people’s errors revealed that team membership had become the dominant category, not the race of the players.
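The error analysis at the heart of the memory confusion protocol can be sketched in a few lines of Python. This is purely an illustrative toy, not the researchers’ actual analysis: the speaker numbers, team assignments and list of misattribution errors below are all invented.

```python
# Toy sketch of the "memory confusion protocol" error analysis:
# misattributions within a category (e.g. same team) versus between
# categories reveal which category dominates how people encode individuals.
from collections import Counter

def confusion_counts(errors, attributes):
    """Classify each misattribution as within- or between-category.

    errors: list of (true_speaker, guessed_speaker) pairs
    attributes: dict mapping speaker id -> category label
    """
    counts = Counter()
    for true, guessed in errors:
        key = "within" if attributes[true] == attributes[guessed] else "between"
        counts[key] += 1
    return counts

# Hypothetical data: speakers 1-4 wear yellow shirts, 5-8 grey shirts.
team = {s: ("yellow" if s <= 4 else "grey") for s in range(1, 9)}
# Hypothetical participant errors (statement truly by X, attributed to Y).
errors = [(1, 2), (1, 3), (5, 6), (5, 2), (7, 8), (3, 4)]

counts = confusion_counts(errors, team)
print(counts)  # far more within-team than between-team confusions
```

If most confusions fall within a team rather than across teams, team membership is the stronger category for these (fictional) participants – the same inference the real study drew about race versus coalition.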
It’s important to understand that the memory test was both a surprise – participants didn’t know it was coming up – and an unobtrusive measure of racial categorising. Participants couldn’t guess that the researchers were going to make inferences about how they categorised people in the pictures – so if they didn’t want to appear to perceive people on the basis of race, it wouldn’t be clear how they should change their behaviour to do this. Because of this we can assume we have a fairly direct measure of their real categorisation, unbiased by any desire to monitor how they appear.
So despite what dozens of experiments had appeared to show, this experiment created a situation where categorisation by race faded into the background. The explanation, according to the researchers, is that race is only important when it might indicate coalitional information – that is, whose team you are on. In situations where race isn’t correlated with coalition, it ceases to be important. This, they claim, makes sense from an evolutionary perspective. For most of our ancestors, age and gender would be important predictors of another person’s behaviour, but race wouldn’t – since most people lived in areas with no differences as large as the ones we associate with “race” today (a concept, incidentally, which has little currency among human biologists).
Since the experiment was published, the response from social psychologists has been muted. But supporting evidence is beginning to be reported, suggesting that the finding will hold. It’s an unfortunate fact of human psychology that we are quick to lump people into groups, even on the slimmest evidence. And once we’ve identified a group, it also seems automatic to jump to conclusions about what its members are like. But this experiment suggests that although perceiving groups on the basis of race might be easy, it is far from inevitable.
We tend to think of Prozac as the first ‘fashionable’ psychiatric drug but it turns out popular memory is short because a tranquilizer called Miltown hit the big time thirty years before.
This is from a wonderful book called The Age of Anxiety: A History of America’s Turbulent Affair with Tranquilizers by Andrea Tone and it describes how the drug became a Hollywood favourite and even inspired its own cocktails.
Miltown was frequently handed out at parties and premieres, a kind of pharmaceutical appetizer for jittery celebrities. Frances Kaye, a publicity agent, described a movie party she attended at a Palm Springs resort. A live orchestra entertained a thousand-odd guests while a fountain spouted champagne against the backdrop of a desert sky. As partiers circulated, a doctor made rounds like a waiter, dispensing drugs to guests from a bulging sack. On offer were amphetamines and barbiturates, standard Hollywood party fare, but guests wanted Miltown. The little white pills “were passed around like peanuts,” Kaye remembered. What she observed about party pill popping was not unique. “They all used to go for ‘up pills’ or ‘down pills,’” one Hollywood regular noted. “But now it’s the ‘don’t-give-a-darn-pills.’”
The Hollywood entertainment culture transformed a pharmaceutical concoction into a celebrity fetish, a coveted commodity of the fad-prone glamour set. Female entertainers toted theirs in chic pill boxes designed especially for tranquilizers, which became, according to one celebrity, as ubiquitous at Hollywood parties as the climatically unnecessary mink coat…
Miltown even inspired a barrage of new alcoholic temptations, in which the pill was the new defining ingredient. The Miltown Cocktail was a Bloody Mary (vodka and tomato juice) spiked with a single pill, and a Guided Missile, popular among the late night crowd on the Sunset Strip, consisted of a double shot of vodka and two Miltowns. More popular still was the Miltini, a dry martini in which Miltown replaced the customary olive.
Andrea Tone’s book is full of surprising snippets about how tranquilisers and anti-anxiety drugs have affected our understanding of ourselves and our culture.
It’s very well researched and manages to hit that niche of being gripping for the non-specialist while being extensive enough that professionals will learn a lot.
Link to details for The Age of Anxiety book.
Quick links from the past week in mind and brain news:
Psychiatry needs its Higgs boson moment, says an article in New Scientist which describes some interesting but disconnected findings suggesting it ain’t going to get it soon.
Wall Street Journal has an overenthusiastic article on how advances in genetics and neuroscience are ‘revolutionizing’ our understanding of violent behavior. Not quite but not a bad read in parts.
A new series of Mind Changers, BBC Radio 4’s wonderful series on key studies in psychology, has just started. Streamed only, because the BBC think radio simulations are cute.
Reuters reports that fire kills dozens in Russian psychiatric hospital tragedy.
Author and psychologist Charles Fernyhough discusses how neuroscience is dealt with in literary fiction in a piece for The Guardian.
Nature profiles one of the few people doing gun violence research in the US – the wonderfully named emergency room doctor Garen Wintemute.
The Man With Uncrossed Eyes. Fascinating case study covered by Neuroskeptic.
Wired reports that scientists have built a baseball-playing robot with 100,000-neuron fake brain. To the bunkers!
“Let’s study Tamerlan Tsarnaev’s brain” – The now seemingly compulsory article that argues for some sort of pointless scientific investigation after some horrible tragedy appears in the Boston Globe. See also: Let’s study the Newtown shooter’s DNA.
Wired report from a recent conference on the medical potential of psychedelic drugs.
Adam Phillips, one of the most thoughtful and interesting of the new psychoanalyst writers, is profiled by Newsweek.
For my recent Observer article I discussed how genetic findings are providing some of the best evidence that psychiatric diagnoses do not represent discrete disorders.
As part of that I spoke to Michael Owen, a psychiatrist and researcher based at Cardiff University, who has been leading lots of the rethink on the nature of psychiatric disorders.
As a young PhD student I sat in on lots of Prof Owen’s hospital ward rounds and learnt a great deal about how science bumps up against the real world of individuals’ lives.
One of the things that most interested me about Owen’s work is that, back in the day, he was working towards finding ‘the genetics of’ schizophrenia, bipolar and so on.
But since then he and his colleagues have gathered a great deal of evidence that certain genetic differences raise the chances of developing a whole range of difficulties – from epilepsy to schizophrenia to ADHD – rather than these differences being associated with any one disorder.
As many of these genetic changes can affect brain development in subtle ways, it is looking increasingly likely that genetics determines how sensitive we are to life events as the brain grows and develops – suggesting a neurodevelopmental theory of these disorders that considers both neurobiology and life experience as equally important.
I asked Owen several questions for the Observer article but I couldn’t reproduce the answers in full, so I’ve reproduced them below as they’re a fascinating insight into how genetics is challenging psychiatry.
I remember you looking for the ‘genes for schizophrenia’ – what changed your mind?
For most of our genetic studies we used conventional diagnostic criteria such as schizophrenia, bipolar disorder and ADHD. However, what we then did was look for overlap between the genetic signals across diagnostic categories and found that these were striking. This occurred not just for schizophrenia and bipolar disorder, which to me as an adult psychiatrist who treats these conditions was not surprising, but also between adult disorders like schizophrenia and childhood disorders like autism and ADHD.
What do the current categories of psychiatric diagnosis represent?
The current categories were based on the categories in general use by psychiatrists. They were formalized to make them more reliable and have been developed over the years to take into account developments in thinking and practice. They are broad groupings of patients based upon the clinical presentation, especially the most prominent symptoms, and other factors such as age at onset and course of illness. In other words they describe syndromes (clinically recognizable features that tend to occur together) rather than distinct diseases. They are clinically useful in so far as they group patients in regard to potential treatments and likely outcome. The problem is that many doctors and scientists have come to assume that they do in fact represent distinct diseases with separate causes and distinct mechanisms. In fact the evidence, not just from molecular genetics, suggests that there is no clear demarcation between diagnostic categories in symptoms or causes (genetic or environmental).
There is an emerging belief, stimulated by recent genetic findings, that it is perhaps best to view psychiatric disorders more in terms of constellations of symptoms and syndromes, which cross current diagnostic categories, and to view these in dimensional terms. This is reflected by the inclusion of dimensional measures in DSM-5, which, it is hoped, will allow these new views to stimulate research and to be developed based on evidence.
In the meantime the current categories, slightly modified, remain the focus of DSM-5. But I think that there is a much greater awareness now that these are provisional and will be replaced when the weight of scientific evidence is sufficiently strong.
The implications of recent findings are probably more pressing for research where there is a need to be less constrained by current diagnostic categories and to refocus onto the mechanisms underlying symptom domains rather than diagnostic categories. This in turn might lead to new diagnostic systems and markers. The discovery of specific risk genes that cut across diagnostic groupings offers one approach to investigating this that we will take forward in Cardiff.
There is a lot of talk of endophenotypes and intermediate phenotypes that attempt to break down symptoms into simpler forms of difference and dysfunction in the mind and brain. How will we know when we have found a valid one?
Research into potential endophenotypes has clear intuitive appeal but I think interpretation of the findings is hampered by a couple of important conceptual issues. First, as you would expect from what I have already said, I don’t think we can expect to find endophenotypes for a diagnostic group as such. Rather we might expect them to relate to specific subcomponents of the syndrome (symptoms, groups of symptoms etc).
Second, the assumption that a putative endophenotype lies on the disease pathway (ie is intermediate between say gene and clinical phenotype) has to be proved and cannot just be assumed. For example there has been a lot of work on cognitive dysfunction and brain imaging in psychiatry and widespread abnormalities have been reported. But it cannot be assumed that an individual cognitive or imaging phenotype lies on the pathway to a particular clinical disorder or component of the disorder. This has to be proven either through an intervention study in humans or model systems (both currently challenging), or statistically, which requires much larger studies than are usually undertaken. I think that many of the findings from imaging and cognition studies will turn out to be part of the broad phenotype resulting from whatever brain dysfunction is present and not on the causal pathway to psychiatric disorder.
Using the tools of biological psychiatry you have come to a conclusion often associated with psychiatry’s critics (that the diagnostic categories do not represent specific disorders). What reactions have you encountered from mainstream psychiatry?
I have found that most psychiatrists working at the front line are sympathetic. In fact, psychiatrists already treat symptoms rather than diagnoses. For example, they will consider prescribing an antipsychotic if someone is psychotic regardless of whether the diagnosis is schizophrenia or bipolar disorder. They also recognize that many patients don’t fall neatly into current categories. For example, many patients have symptoms of both schizophrenia and bipolar disorder, sometimes at the same time and sometimes at different time points. Also, patients who fulfill diagnostic criteria for schizophrenia in adulthood often have histories of childhood diagnoses such as ADHD or autistic spectrum disorder.
The inertia comes in part from the way in which services are structured. In particular, the distinction between child and adult services has many justifications, but it leads to patients with long-term problems being transferred to a new team at a vulnerable age, receiving different care and sometimes a change in diagnosis. Many of us now feel that we should develop services that span late childhood and early adulthood to ensure continuity over this important period. There are also international differences. So in the US, mood disorders (including bipolar) are often treated by different doctors in different clinics to schizophrenia.
There is also a justifiable unwillingness to discard the current system until there is strong evidence for a better approach. The inclusion of dimensional measures in DSM-5 reflects the acceptance of the psychiatric establishment that change is needed and acknowledges the likely direction of travel. I think that psychiatry’s acknowledgment of its diagnostic shortcomings is a sign of its maturity. Psychiatric disorders are the most complex in medicine and some of the most disabling. We have treatments that help some of the people some of the time and we need to target these to the right people at the right time. By acknowledging the shortcomings of our current diagnostic categories we are recognizing the need to treat patients as individuals and the fact that the outcome of psychiatric disorders is highly variable.
Matter magazine has an amazing article about the world of underground surgery for healthy people who feel that their limb is not part of their body and needs to be removed.
The condition is diagnosed as body integrity identity disorder, or BIID, but a whole range of interests and behaviours are associated with it, and people with the desire often do not feel it is a disorder in itself.
Needless to say, surgeons have not been lining up to amputate completely healthy limbs but there are clinics around the world that do the operations illegally.
The Matter article follows someone as they obtain one of these procedures and discusses the science of why someone might feel so uncomfortable about having a working limb they were born with.
But there is a particularly eye-opening section about the first scientific article to discuss the condition, published in 1977.
One of the co-authors of the 1977 paper was Gregg Furth, who eventually became a practising psychologist in New York. Furth himself suffered from the condition and, over time, became a major figure in the BIID underground. He wanted to help people deal with their problem, but medical treatment was always controversial — often for good reason. In 1998, Furth introduced a friend to an unlicensed surgeon who agreed to amputate the friend’s leg in a Tijuana clinic. The patient died of gangrene and the surgeon was sent to prison. A Scottish surgeon named Robert Smith, who practised at the Falkirk and District Royal Infirmary, briefly held out legal hope for BIID sufferers by openly performing voluntary amputations, but a media frenzy in 2000 led British authorities to forbid such procedures. The Smith affair fuelled a series of articles about the condition — some suggesting that merely identifying and defining such a condition could cause it to spread, like a virus.
Undeterred, Furth found a surgeon in Asia who was willing to perform amputations for about $6,000. But instead of getting the surgery himself, he began acting as a go-between, putting sufferers in touch with the surgeon.
Link to Matter article on the desire to be an amputee.
Short answer: surprisingly little. Continuing the theme of revisiting classic experiments in psychology, last week’s BBC Future column was on Tajfel’s Minimal Group Paradigm. The original is here. Next week we’re going to take this foundation and look at some evolutionary psychology of racism (hint: it won’t be what you’d expect).
How easy is it for the average fair-minded person to form biased, preconceived views within groups? Surprisingly easy, according to psychology studies.
One of the least charming but most persistent aspects of human nature is our capacity to hate people who are different. Racism, sexism, ageism, it seems like all the major social categories come with their own “-ism”, each fuelled by regrettable prejudice and bigotry.
Our tendency for groupness appears to be so strong there seems little more for psychology to teach us. It’s not as if we need it proven that favouring our group over others is a common part of how people think – history provides all the examples we need. But one psychologist, Henri Tajfel, taught us something important. He showed exactly how little encouragement we need to treat people in a biased way because of the group they are in.
Any phenomenon like this in the real world comes entangled with a bunch of other, complicating phenomena. When we see prejudice in the everyday world it is hard to separate out psychological biases from the effects of history, culture and even pragmatism (sometimes people from other groups really are out to get you).
As a social psychologist, Tajfel was interested in the essential conditions of group prejudice. He wanted to know what it took to turn the average fair-minded human into their prejudiced cousin.
He wanted to create a microscope for looking at how we think when we’re part of a group, even when that group has none of the history, culture or practical importance that groups normally do. To look at this, he devised what has become known as the “minimal group paradigm”.
The minimal group paradigm works like this: participants in the experiment are divided into groups on some arbitrary basis. Maybe eye-colour, maybe what kind of paintings they like, or even by tossing a coin. It doesn’t matter what the basis for group membership is, as long as everyone gets a group and knows what it is. After being told they are in a group, participants are divided up so that they are alone when they make a series of choices about how rewards will be shared among other people in the groups. From this point on, group membership is entirely abstract. Nobody else can be seen, and other group members are referred to by an anonymous number. Participants make choices such as “Member Number 74 (group A) to get 10 points and Member 44 (group B) to get 8 points”, versus “Member Number 74 (group A) to get 2 points and Member 44 (group B) to get 6 points”, where the numbers are points which translate into real money.
You won’t be surprised to learn that participants show favouritism towards their own group when dividing the money. People in group A were more likely to choose the first option I gave above, rather than the second. What is more surprising is that people show some of this group favouritism even when it ends up costing them points – so people in group B sometimes choose the second option, or options like it, even though it provides fewer points than the first option. People tend to opt for the maximum total reward (as you’d expect from the fair-minded citizen), but they also show a tendency to maximise the difference between the groups (what you’d expect from the prejudiced cousin).
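The two strategies described above – maximising the total reward versus maximising the gap between the groups – can be sketched as simple scoring rules. The point pairs below are invented for illustration and are not Tajfel’s actual allocation matrices:

```python
# Each option allocates (in-group points, out-group points) to anonymous members.
# This pair is constructed so the two strategies pull in opposite directions.
choice = [(11, 17), (7, 1)]

def joint_profit(option):
    """Fair-minded strategy: maximise the total points handed out."""
    return option[0] + option[1]

def in_group_advantage(option):
    """Biased strategy: maximise the in-group's lead over the out-group."""
    return option[0] - option[1]

fair_pick = max(choice, key=joint_profit)          # most points overall
biased_pick = max(choice, key=in_group_advantage)  # fewer points, bigger lead
```

Here the fair-minded rule picks (11, 17), while the in-group-favouring rule picks (7, 1) – the in-group member gets fewer points in absolute terms, but beats the out-group member, which is exactly the pattern some of Tajfel’s participants showed.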
The effect may be small, but this is a situation where the groups have been plucked out of the air by the experimenters. Every participant knows which group he or she is in, but they also know that they weren’t in this group before they started the experiment, that their assignment was arbitrary or completely random, and that the groups aren’t going to exist in any meaningful way after the experiment. They also know that their choices won’t directly affect them (they are explicitly told that they won’t be given any choices to make about themselves). Even so, this situation is enough to evoke favouritism.
So, it seems we’ll take the most minimal of signs as a cue to treat people differently according to which group they are in. Tajfel’s work suggests that in-group bias is as fundamental to thinking as the act of categorisation itself. If we want to contribute to a fairer world we need to be perpetually on guard to avoid letting this instinct run away with itself.
In 1983, psychiatrist Giles Brindley demonstrated the first drug treatment for erectile dysfunction in uniquely memorable fashion. He took the drug and demonstrated his stiff wicket to the audience mid-way through his talk.
Scientific journal BJU International has a pant-wettingly hilarious account of the events of that day which made both scientific and presentation history.
Professor Brindley, still in his blue track suit, was introduced as a psychiatrist with broad research interests. He began his lecture without aplomb. He had, he indicated, hypothesized that injection with vasoactive agents into the corporal bodies of the penis might induce an erection. Lacking ready access to an appropriate animal model, and cognisant of the long medical tradition of using oneself as a research subject, he began a series of experiments on self-injection of his penis with various vasoactive agents, including papaverine, phentolamine, and several others. (While this is now commonplace, at the time it was unheard of). His slide-based talk consisted of a large series of photographs of his penis in various states of tumescence after injection with a variety of doses of phentolamine and papaverine. After viewing about 30 of these slides, there was no doubt in my mind that, at least in Professor Brindley’s case, the therapy was effective. Of course, one could not exclude the possibility that erotic stimulation had played a role in acquiring these erections, and Professor Brindley acknowledged this.
The Professor wanted to make his case in the most convincing style possible. He indicated that, in his view, no normal person would find the experience of giving a lecture to a large audience to be erotically stimulating or erection-inducing. He had, he said, therefore injected himself with papaverine in his hotel room before coming to give the lecture, and deliberately wore loose clothes (hence the track-suit) to make it possible to exhibit the results. He stepped around the podium, and pulled his loose pants tight up around his genitalia in an attempt to demonstrate his erection.
At this point, I, and I believe everyone else in the room, was agog. I could scarcely believe what was occurring on stage. But Prof. Brindley was not satisfied. He looked down sceptically at his pants and shook his head with dismay. ‘Unfortunately, this doesn’t display the results clearly enough’. He then summarily dropped his trousers and shorts, revealing a long, thin, clearly erect penis. There was not a sound in the room. Everyone had stopped breathing.
But the mere public showing of his erection from the podium was not sufficient. He paused, and seemed to ponder his next move. The sense of drama in the room was palpable. He then said, with gravity, ‘I’d like to give some of the audience the opportunity to confirm the degree of tumescence’. With his pants at his knees, he waddled down the stairs, approaching (to their horror) the urologists and their partners in the front row. As he approached them, erection waggling before him, four or five of the women in the front rows threw their arms up in the air, seemingly in unison, and screamed loudly. The scientific merits of the presentation had been overwhelmed, for them, by the novel and unusual mode of demonstrating the results.
The screams seemed to shock Professor Brindley, who rapidly pulled up his trousers, returned to the podium, and terminated the lecture. The crowd dispersed in a state of flabbergasted disarray. I imagine that the urologists who attended with their partners had a lot of explaining to do. The rest is history. Prof Brindley’s single-author paper reporting these results was published about 6 months later.
I’ve got an article in The Observer on how some of the best evidence against the idea that psychiatric diagnoses like ‘schizophrenia’ describe discrete ‘diseases’ comes not from the critics of psychiatry, but from medical genetics.
I found this a fascinating outcome because it puts both sides of the polarised ‘psychiatry divide’ in quite an uncomfortable position.
The “mental illness is a genetic brain disease” folks find that their evidence of choice – molecular genetics – has undermined the validity of individual diagnoses, while the “mental illness is socially constructed” folks find that the best evidence for their claims comes from neurobiology studies.
The evidence that underlies this uncomfortable position comes from recent findings that genetic risks originally thought to be specific to individual diagnoses turn out to be risks for a whole load of later difficulties – from epilepsy to schizophrenia to learning disability.
In other words, the genetic risk seems to be for neurodevelopmental difficulties but if and how they appear depends on lots of other factors that occur during your life.
The neurobiological evidence has not ‘reduced’ human experience to chemicals, but shown that individual life stories are just as important.
Here’s my column for BBC Future from last week. It was originally titled ‘Why money can’t buy you happiness’, but I’ve just realised that it would be more appropriately titled if I used a “won’t” rather than a “can’t”. There’s a saying that people who think money can’t buy happiness don’t know where to shop. This column says, more or less, that knowing where to shop isn’t the problem, it’s shopping itself.
Hope a lottery win will make you happy forever? Think again, evidence suggests a big payout won’t make that much of a difference. Tom Stafford explains why.
Think a lottery win would make you happy forever? Many of us do, including a US shopkeeper who just scooped $338 million in the Powerball lottery – the fourth largest prize in the game’s history. Before the last Powerball jackpot in the United States, tickets were being snapped up at a rate of around 130,000 a minute. But before you place all your hopes and dreams on another ticket, here’s something you should know. All the evidence suggests a big payout won’t make that much of a difference in the end.
Winning the lottery isn’t a ticket to true happiness, however enticing it might be to imagine never working again and being able to afford anything you want. One study famously found that people who had big wins on the lottery ended up no happier than those who had bought tickets but didn’t win. It seems that as long as you can afford to avoid the basic miseries of life, having loads of spare cash doesn’t make you very much happier than having very little.
One way of accounting for this is to assume that lottery winners get used to their new level of wealth, and simply adjust back to a baseline level of happiness – something called the “hedonic treadmill”. Another explanation is that our happiness depends on how we feel relative to our peers. If you win the lottery you may feel richer than your neighbours, and think that moving to a mansion in a new neighbourhood would make you happy, but then you look out of the window and realise that all your new friends live in bigger mansions.
Both of these phenomena undoubtedly play a role, but the deeper mystery is why we’re so bad at knowing what will give us true satisfaction in the first place. You might think we should be able to predict this, even if it isn’t straightforward. Lottery winners could take account of the hedonic treadmill and social comparison effects when they spend their money. So why, in short, don’t they spend their winnings in ways that buy happiness?
Picking up points
Part of the problem is that happiness isn’t a quality like height, weight or income that can be easily measured and given a number (whatever psychologists try and pretend). Happiness is a complex, nebulous state that is fed by transient simple pleasures, as well as the more sustained rewards of activities that only make sense from a perspective of years or decades. So, perhaps it isn’t surprising that we sometimes have trouble acting in a way that will bring us the most happiness. Imperfect memories and imaginations mean that our moment-to-moment choices don’t always reflect our long-term interests.
It even seems like the very act of trying to measure it can distract us from what might make us most happy. An important study by Christopher Hsee of the University of Chicago’s business school and colleagues showed how this could happen.
Hsee’s study was based around a simple choice: participants were offered the option of working at a 6-minute task for a gallon of vanilla ice cream reward, or a 7-minute task for a gallon of pistachio ice cream. Under normal conditions, less than 30% of people chose the 7-minute task, mainly because they liked pistachio ice cream more than vanilla. For happiness scholars, this isn’t hard to interpret – those who preferred pistachio ice cream had enough motivation to choose the longer task. But the experiment had a vital extra comparison. Another group of participants were offered the same choice, but with an intervening points system: the choice was between working for 6 minutes to earn 60 points, or 7 minutes to earn 100 points. With 50–99 points, participants were told they could receive a gallon of vanilla ice cream. For 100 points they could receive a gallon of pistachio ice cream. Although the actions and the effects are the same, introducing the points system dramatically affected the choices people made. Now the majority chose the longer task and earned the 100 points, which they could spend on the pistachio reward – even though the same proportion (about 70%) still said they preferred vanilla.
Based on this and other experiments, Hsee concluded that participants were maximising their points at the expense of maximising their happiness. The points are just a medium – something that allows us to get the thing that will create enjoyment. But because the points are so easy to measure and compare – 100 is obviously much more than 60 – this overshadows our knowledge of what kind of ice cream we enjoy most.
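The medium-maximisation result above can be sketched as a toy model. The minutes and points follow the description of the experiment; the liking scores are invented for illustration (roughly 70% of participants preferred vanilla):

```python
# Toy model of Hsee's ice-cream choice: values of "liking" are invented.
options = {
    "vanilla":   {"minutes": 6, "points": 60,  "liking": 0.7},
    "pistachio": {"minutes": 7, "points": 100, "liking": 0.3},
}

# What would maximise enjoyment: pick the flavour you like most.
by_liking = max(options, key=lambda o: options[o]["liking"])

# What most participants did once points were introduced: pick the bigger number.
by_points = max(options, key=lambda o: options[o]["points"])
```

The two rules pick different flavours: judged by liking, the choice is vanilla; judged by the easily compared points, it flips to pistachio – the medium overshadowing the end it is supposed to serve.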
So next time you are buying a lottery ticket because of the amount it is paying out, or choosing wine by looking at the price, or comparing jobs by looking at the salaries, you might do well to remember to think hard about how much the bet, wine, or job will really promote your happiness, rather than simply relying on the numbers to do the comparison. Money doesn’t buy you happiness, and part of the reason for that might be that money itself distracts us from what we really enjoy.
In real life the institution was Oregon State Hospital and the article is accompanied by a slide show of images from the hospital and museum.
The piece also mentions some fascinating facts about the film – not least that some of the actors were actually genuine employees and patients in the hospital.
But the melding of real life and art went far beyond the film set. Take the character of John Spivey, a doctor who ministers to Jack Nicholson’s doomed insurrectionist character, Randle McMurphy. Dr. Spivey was played by Dr. Dean Brooks, the real hospital’s superintendent at the time.
Dr. Brooks read for the role, he said, and threw the script to the floor, calling it unrealistic — a tirade that apparently impressed the director, Milos Forman. Mr. Forman ultimately offered him the part, Dr. Brooks said, and told the doctor-turned-actor to rewrite his lines to make them medically correct. Other hospital staff members and patients had walk-on roles.
Link to NYT article ‘Once a ‘Cuckoo’s Nest,’ Now a Museum’.
Andrea Letamendi is a clinical psychologist who specialises in the treatment and research of traumatic stress disorders but also has a passionate interest in how psychological issues are depicted in comics.
She puts her thoughts online in her blog Under the Mask which also discuss social issues in fandom and geek culture.
I’ve always been of the opinion that comics are far more psychologically complex than they’re given credit for. In fact, one of my first non-academic articles was about the depiction of madness in Batman.
It’s also interesting that comics are now starting to explicitly address psychological issues. It’s not always done entirely successfully, it has to be said.
Darwyn Cooke’s Ego storyline looked at Batman’s motivations through his traumatic past, but it shifts between subtle brilliance and clichés about mental illness in a slightly unsettling way.
Andrea Letamendi has a distinctly more nuanced take, however, and if you would like to know more about her work with superheroes do check the interview on Nerd Span.
Oliver Sacks has just published an article on ‘Hallucinations of musical notation’ in the neurology journal Brain that recounts eight cases of illusory sheet music escaping into the world.
The article makes the interesting point that the hallucinated musical notation is almost always nonsensical – either unreadable or not describing any listenable music – as described in this case study.
Arthur S., a surgeon and amateur pianist, was losing vision from macular degeneration. In 2007, he started ‘seeing’ musical notation for the first time. Its appearance was extremely realistic, the staves and clefs boldly printed on a white background ‘just like a sheet of real music’, and Dr. S. wondered for a moment whether some part of his brain was now generating his own original music. But when he looked more closely, he realized that the score was unreadable and unplayable. It was inordinately complicated, with four or six staves, impossibly complex chords with six or more notes on a single stem, and horizontal rows of multiple flats and sharps. It was, he said, ‘a potpourri of musical notation without any meaning’. He would see a page of this pseudo-music for a few seconds, and then it would suddenly disappear, replaced by another, equally nonsensical page. These hallucinations were sometimes intrusive and might cover a page he was trying to read or a letter he was trying to write.
Though Dr. S. has been unable to read real musical scores for some years, he wonders, as did Mrs. J., whether his lifelong immersion in music and musical scores might have determined the form of his hallucinations.
Sadly, the article is locked behind a paywall. However, you can always request it via the #icanhazpdf hashtag on Twitter.
Link to locked article on ‘Hallucinations of musical notation’.
A new artform has emerged – the post-mortem neuroportrait. Its finest subject, Phineas Gage.
Gage was a worker extending the tracks of the great railways until he suffered the most spectacular injury. As he was setting a gunpowder charge in a rock with a large tamping iron, the powder was lit by an accidental spark. The iron was launched through his skull.
He became famous in neuroscience because he lived – rare for the time – and had psychological changes as a result of his neurological damage.
His story has been better told elsewhere but the interest has not died – studies on Gage’s injury have continued to the present day.
There is a scientific veneer, of course, but it’s clear that the fascination with the freak Phineas has its own morbid undercurrents.
The first such picture was constructed with nothing more than pen and ink. Gage’s doctor, John Harlow, sketched his skull, which Harlow had acquired after the patient’s death.
This Gage is forever fleshless, the iron stuck mid-flight, the shattered skull frozen as it fragments.
Harlow’s sketch is the original and the originator. The first impression of Gage’s immortal soul.
Gage rested as this rough sketch for over 100 years but he would rise again.
In 1994, a team led by neuroscientist Hannah Damasio used measurements of Gage’s skull to trace the path of the tamping iron and reconstruct its probable effect on the brain.
Gage’s disembodied skull appears as a strobe-lit danse macabre, the tamping iron turned into a bolt of pure digital red and Gage’s brain, a deep shadowy grey.
It made Gage a superstar but it sealed his fate.
Every outing needed a more freaky Phineas. Like a low-rent celebrity, every new exposure demanded something more shocking.
A 2004 study by Peter Ratiu and Ion-Florin Talos depicted Gage alongside his actual cranium – his digital skull screaming as a perfect blue iron pushed through his brain and shattered his face – the disfigurement now a gory new twist to the portrait.
In contrast, his human remains are peaceful – unmoved by the horrors inflicted on their virtual twin.
But the most recent Gage is the most otherworldly. A study by John Darrell Van Horn and colleagues examined how the path of the tamping iron would have affected the strands of white matter – the “brain’s wiring” – that connect cortical areas.
Gage himself is equally supernatural.
Blank white eyes float lifelessly in his eye sockets – staring into the digital blackness.
His white matter tracts appear within his cranium but are digitally dyed and seem to resemble multi-coloured hair standing on end like the electrified mop of a fairground ghoul.
But as the immortal Gage has become more horrifying over time, living portraits of the railwayman have been discovered. They show an entirely different side to the shattered skull celebrity.
He has gentle flesh. Rather than staring into blackness, he looks at us.
Like a 19th century auto-whaler holding his self-harpoon, he grips the tamping iron, proud and defiant.
I prefer this living Phineas.
He does not become more alien with every new image.
He is at peace with a brutal, chaotic world.
He knows what he has lived through.
Fuck the freak flag, he says.
I’m a survivor.