Is NLP supported by a body of evidence-based, peer-reviewed psychological research?
No.
What’s wrong with ‘Measuring Sexual Identity: An Evaluation Report’ (2010) from the Office for National Statistics?
Well, for starters, the authors admit that its methodology is fundamentally flawed by relying on a single measure; had they used a more appropriate measure, the figures for gay, lesbian and bisexual people would most likely have been higher! The report misleads over refusal rates and does not adequately address the age, employment status, education and ethnicity biases in its figures. In spite of evidence to the contrary, the authors slap themselves on the back for a ‘success’, whereas they should be slapping each other in the face for this unmitigated disaster.
Having taught statistics in several UK universities at all levels, and with a PhD in gender stereotypes and attitudes to sexuality (from an accredited university), I recognise in this report many of the pitfalls I’ve taught undergraduate psychology students to avoid. If it were an undergraduate report, I’d struggle to give it a ‘pass’. So let’s look at some of the main problems:
Attraction, Behaviour, Orientation, Identity
The report distinguishes between sexual attraction, sexual identity, sexual orientation and sexual behaviour. So the expectation is that the report will take a multidimensional approach. However, it does not. It states that whereas legislation focuses on sexual orientation, the report chooses to look at sexual identity. It also clearly states that behaviour may not form a basis for identity, but goes on to argue:
‘Research during the development of the question also deemed sexual identity the most relevant dimension of sexual orientation to investigate given its relation to experiences of disadvantage and discrimination’ (p4).
Unfortunately, the research on which this conclusion is based is by the Office for National Statistics itself. It’s a classic example of groupthink. Much cutting-edge research would dispute this assumption. I certainly dispute it. It became clear very early on in my own research that a single question to measure sexual identity or sexual orientation would, at best, be misleading. So I didn’t use one. It could be considered irresponsible to publish research based on an inappropriate tool. In fact, the current report would probably not be published in any peer-reviewed journal.
This report warns:
“[N]o single question would capture the full complexity of sexual orientation. A suite of questions would be necessary to collect data on the different dimensions of sexual orientation, and to examine consistency between them at the individual level” (p4).
The report goes ahead and uses a single question anyway! Whether or not a person labels their sexual orientation should not be the issue where ‘experiences of disadvantage and discrimination’ are concerned. Clearly, the reluctance to feel able to, or be comfortable in, declaring one’s sexuality is also a form of disadvantage and an aspect of discrimination. The current approach distorts the issue through oversimplification.
So what’s the difference between attraction, behaviour, orientation and identity? Identity is how we describe ourselves, or how others label us. People who are attracted to the same gender might not act upon it. People who do act upon sexual attraction may do so only in very specific circumstances. Orientation may be indicated by attraction, by behaviour, or by a label with which a person identifies. I would argue that behaviour and attraction are more important than the label a person uses. For instance, in sexual health there is a recognised category of ‘men who have sex with men’ (MSM) who do not identify as gay. They see themselves as straight men who occasionally have sex with other men; the rest of the time they lead ‘straight’ lives. In sexual health services and health promotion, men who have sex with men are at risk from sexually transmitted infections and are targeted as a specific group. However, the assumptions made by this report concluded:
“Testing showed that respondents were not in favour of asking about sexual behaviour in a social survey context, nor would it be appropriate in general purpose government surveys (p4)”.
Again, this conclusion was based on reports from the Office for National Statistics. In the present study, worryingly, the authors state:
‘As in the UK, deriving an individuals sexual orientation from a suite of questions results in higher LGB estimates in the US compared with using a single sexual identity question (p15)’
It is accepted in attitude measurement that single-item responses are unsuitable. A multiple-response measure, properly administered, would produce a more accurate figure. Furthermore, research suggests that this figure would have been higher. The figures cited in the report for more methodologically sound research range from 5% to 9% (Joloza, Evans and O’Brien, 2010, p15). The current report says 1.5%.
In a survey with such political impact, the decision to use a single item is ill-advised and arguably reckless. Research convenience should not compromise validity. In this instance, it does.
Sensitivity of Measurement
Having abandoned the methodologically sound approach of using a ‘suite’ of questions, one might hope that the report would at least use a sensitive measure that goes beyond crude, simplistic nominal categories. Actually, no. In the 1940s, the sex researcher Alfred Kinsey developed a more sensitive ‘sliding scale’ of sexuality. The present researchers ignore this and opt for the bluntest of instruments: straight, gay or bisexual. The report didn’t even bother to include transgender in its analysis.
Firstly, consider the approach of ‘measuring’ ethnicity based on a ‘Black, White or Mixed’ categorisation. How accurately would this produce a representation of ethnicity in the UK? Any reputable survey offers a whole range of options for ethnicity with quite subtle distinctions. Even then, people may declare ‘other’. I would argue that sexuality is more complex than ethnicity. So why is the measurement tool in the present report so much more simplistic? The answer is: because the study has not been properly designed to fit the subject matter. It has little or no ecological validity, that is, it means very little in the real world, except perhaps to fuel prejudice.
The Kinsey scale requires a respondent to use a zero-to-six scale, where zero equals ‘exclusively heterosexual’ and six equals ‘exclusively gay’. This gives varying degrees of bisexuality, from one to five. Now clearly, these sensitive data can be collapsed into cruder categories if need be. The problem with collecting crude data from the outset is that we can do little else with it. It offers nothing very meaningful, just the willingness of people to use a limited set of labels.
Now imagine we take three measures, of attraction, behaviour and identity, all on the sliding ‘zero to six’ scale. Wouldn’t this be a far more accurate reflection of a person’s sexual orientation? It’s just a pity they didn’t do it in their report for the Office for National Statistics. Would it produce a higher percentage of lesbian, gay and bisexual people? Well, Joloza, Evans and O’Brien (2010, p15) would probably say ‘yes’. So why didn’t they do it?
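To make the point concrete, here is a minimal, purely hypothetical sketch of what such a multi-item, Kinsey-style measure might look like. It is not the ONS method, and the scoring rule, thresholds and example scores are my own assumptions for illustration only.

```python
# Hypothetical sketch of a multi-item, Kinsey-style measure (not the ONS method).
# Each respondent rates attraction, behaviour and identity on a 0-6 scale:
# 0 = exclusively heterosexual, 6 = exclusively gay. The detailed scores are
# kept, and only collapsed into crude categories if a headline figure is needed.

from statistics import mean

def collapse(ratings: dict) -> str:
    """Collapse three 0-6 ratings into a crude label (illustrative thresholds)."""
    avg = mean(ratings.values())
    if avg < 1:
        return "exclusively heterosexual"
    if avg > 5:
        return "exclusively gay"
    return "some degree of bisexuality"

# A hypothetical 'man who has sex with men': identifies as straight (0),
# but reports some same-sex attraction (2) and behaviour (3).
respondent = {"attraction": 2, "behaviour": 3, "identity": 0}

print(collapse(respondent))  # -> some degree of bisexuality
# A single identity question alone would have recorded him as heterosexual.
```

The point of the sketch is simply this: fine-grained data can always be collapsed into crude categories later, but crude data collected at the outset can never be un-collapsed.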
So what exactly does this study measure? Well, it doesn’t measure sexual identity in the UK. It measures the percentage of people sampled who are willing to declare a sexual identity label, from a limited choice, in interviews, with or without others present, in a particular time frame. In other words, it’s a study with poor ecological validity.
But the problems don’t end there. The report also reveals questionable interpretation by its authors.
Confidentiality and the Willingness to Respond
One statistic almost jumps out of the page to indicate that there’s something wrong with this study:
[P]eople (aged 16 and over) who identified as LGB had a younger age distribution than heterosexuals – 64.9 per cent were aged under 45 compared with 48.6 per cent of people who identify as heterosexual (p16).
In other words, younger people are more likely to report a ‘non-heterosexual’ identity than are older people. With no evidence to support the notion that older people are less likely to be gay, this has to be an artefact of the research. That is, older people either don’t identify with a ‘gay, lesbian or bisexual’ label so readily, or are not so predisposed to tell a stranger with a clipboard. As such, the one-shot sexual identity question is not fit for purpose. It’s possible that older people are more likely to remember the days before the 1967 Sexual Offences Act, and police entrapment strategies. It’s possible that they don’t use the word ‘gay’ and may prefer ‘homosexual’. It’s possible that they don’t like to divulge personal information to strangers unless absolutely necessary. There are definitely confounding variables at play, and not just age.
According to this study, gay, lesbian and bisexual people have better jobs and are better educated. Again, the myth of the ‘pink pound’ surfaces. Could it not be that young, well-educated, financially secure people are more likely to divulge their LGB sexual identity to a stranger, while less-empowered people, more in need of support and services, are not? Again, the one-shot measure doesn’t appear to do its job.
The report claims, in the face of its own evidence, that confidentiality was not a problem, basing this on the refusal rate:
There is no evidence of an adverse impact on response rates confirming the general acceptance of the question. Our analysis suggests response rates are broadly in line with earlier quantitative testing. Non response to the question was low with less than 4 per cent of eligible respondents refusing to answer, saying they did not know the answer or not providing a response (p26).
Perhaps this should read ‘no adverse impact on response rates, except for older, less financially secure, not-so-well-educated, non-professionals’. For those in routine and manual occupations, the most frequent response to the sexual identity question was ‘other’, at 31.1%, more ‘popular’ than heterosexual at 29.4%. Almost half (49.1%) of those who identified as gay or lesbian had managerial or professional occupations, compared with less than a third (30.6%) of those who identified as heterosexual/straight. Furthermore, 38.1% of gay/lesbian respondents had a degree, compared with only 21.9% of heterosexual/straight respondents. Doesn’t all this seem odd? Yes! It suggests a significant bias in the sampling, the method and the results. In short, the flaws are evident but largely overlooked by the authors. Failure to address them in an undergraduate report would be severely penalised. But far more is at stake here. This report may inform social policy!
Looking at ethnicity, there’s a bias here too. Of Heterosexual/Straight people, 90.7% are White, whereas for Gay/Lesbian/Bisexual people it’s 93.5%. However, for the ‘Other’ category of sexual identity, 14.1% are from ‘other ethnic groups’. For the ‘Don’t knows’, the figure for ‘other ethnic groups’ is 18.2%. People from other ethnic groups were almost twice as likely to say ‘Don’t know’ as to say ‘Heterosexual’ (9.3%), and almost three times as likely to say ‘Don’t know’ as ‘Gay/Lesbian/Bisexual’ (6.5%). This suggests either a reluctance to declare sexuality or that they did not understand what the terms meant. Either way, it’s a shortcoming of this research.
It’s interesting to note that option one on the interviewer’s card (market research style) was ‘Heterosexual/Straight’, option two was the less formal ‘Gay or Lesbian’, and option three was ‘Bisexual’. Notice that whereas the terms in options one and three can be applied to either men or women, option two cannot: you can’t have a lesbian man! The options do not use comparable terminology. If different terminology had been used, would the results have been different? If the options had been re-ordered, would the results have been different? Why is heterosexual the first option? Did this slightly increase the heterosexual figure? Research into research, and into experimenter bias, suggests it might. Had the survey not been carried out in a market research format, would the results have been different? Did the interviewer’s tone of voice affect the way in which the questions were answered? I’ve done endless market research interviews on the street and most of the time I can work out what the researcher ‘wants’ me to say. Are you heterosexual <smiles with rising intonation>, or gay <frowns, with falling intonation>, or bisexual <spits>? It’s a slight exaggeration, but it does happen.
Now let’s turn to the ‘less than four per cent refusal rate’ that caused the authors to discard the other evidence.
The authors state:
‘Prior to developing and testing work on the sexual identity question, the expectation was that the higher the number of adults in the household, the higher the proportion of item non response. This is because some household members might be reluctant to disclose their sexual identity in the presence of others. However, the results from the IHS do not indicate this (p12).’
However, they don’t make the connection between non-response rates and the willingness to declare a true label:
‘Another observation here is that the proportion of people reporting to be LGB in a household decreases as the number of adults in the household increases. There is currently no explanation why this is the case but this is something that could be considered for further investigation in future (p12)’.
One explanation might be that the more people there are in the house, the less likely people are to declare themselves to be ‘Gay/Lesbian/Bisexual’. They didn’t necessarily refuse; they may have felt the need to protect their own privacy and lied, or said ‘don’t know’.
So, as the number of people living in the house increases, the figure for ‘Heterosexual/Straight’ increases slightly. For ‘Gay/Lesbian’ it falls from 1.3% in a single-person household to only 0.3% in a household of four or more people. The figures for ‘Bisexual’ remain roughly the same. For ‘Don’t know/Refusal’, the figures increase slightly as the number of people in the house increases. This suggests that there is an element of self-censorship in responses.
Think about it logically. If you want to keep your sexual identity secret from other members of the household, do you ‘refuse’ and cause the other household members to ask why, or do you just lie? Or if you live a ‘heterosexual’ life 95% of the time and have recreational sex the rest, exactly how do you respond to the stranger on the doorstep with the crude market research question?
What’s clear is that the current report has not adequately addressed the numerous problems it has generated by using an inappropriate methodology for a complex subject. ‘White, Black or Mixed’ would not be good enough for ethnicity, so why is it good enough here, for a subject arguably more sensitive and complex?
Improvements
It’s important to remember that the Kinsey team in the 1940s put the gay and bisexual figure as high as 37%. Of course, the sampling has been criticised over the years, and it probably did lead to an overestimation. Nevertheless, the measure on which the Kinsey team based their research was exemplary. A one-shot question does not work for something as complex as human sexuality. The Kinsey measurement was complex and fit for purpose. It is not good enough to sidestep the issue of instrument accuracy with protests of convenience and acceptability to researchers. Rather than go for the easy, convenient option, get better researchers and design a better study where appropriate measures can be used. Otherwise all you get is conveniently produced, meaningless results. Garbage in, garbage out.
Conclusion
So is this report fundamentally flawed, practically worthless, irresponsible or dangerous? In my professional opinion, considering the political climate, I’d have to say that it’s all of those things. The ONS needs to stop engaging in groupthink and stop treating the complex notion of sexual orientation as some crass market research exercise. Patting themselves on the back, the authors conclude:
‘The introduction of the sexual identity question. . . in January 2009 followed rigorous testing and feasibility testing by ONS. The findings of this report suggest its implementation on the IHS in the first year has been a success (p26)’.
A success by what standards? Certainly not those of academic rigour. We need high-quality research data with which to make sense of our world and inform our social policy decisions. Sadly, this report fails to deliver and cannot be treated as anything other than a pilot study from which serious lessons need to be learned. The simplistic method does not work, as evidenced by the report’s own figures. It fails to meet the standards of an undergraduate report, on which one could only conclude ‘must do better next time’. Sadly, the decision not to consider the ethical ramifications of publishing a flawed report is inexcusable and sheds light on the ability of the ONS to produce high-quality data. It’s arguably negligence. Researchers have a responsibility to consider how their research will be used. The ONS has failed to recognise its responsibility.
Maybe it does not commit the sin of commission, outright homophobia, but it does commit the sin of omission, in that it justifies the heterosexist ideology of rendering sexual diversity invisible.
So if we add in the refusals and the don’t knows, and if we adjust the figures for age, ethnicity, education, profession and number of people in the household, what exactly would the figure be for ‘non-heterosexuals’? Well, your guess is as good as mine. It’s disappointing that this overblown, expensive pilot study has thrown up more questions than it answers, and we are back to simply ‘guessing’.
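For what it’s worth, the sampling side of that adjustment is routine. Below is a minimal sketch of post-stratification reweighting, using entirely hypothetical figures (none of them taken from the report), just to show the mechanics. Crucially, it can only correct for who ends up in the sample; it cannot correct for people who decline, say ‘don’t know’, or give the ‘safe’ answer on the doorstep.

```python
# Minimal post-stratification sketch with hypothetical figures (not from the report).
# If older people are over-represented in the sample AND less willing to declare
# an LGB identity, the raw percentage understates the population-weighted one.

sample = {
    # age group: (share of the sample, observed LGB rate in that group)
    "16-44": (0.40, 0.022),
    "45+":   (0.60, 0.008),
}
population_share = {"16-44": 0.48, "45+": 0.52}  # hypothetical census shares

raw = sum(share * rate for share, rate in sample.values())
weighted = sum(population_share[group] * rate for group, (_, rate) in sample.items())

print(f"raw estimate:      {raw:.2%}")       # -> 1.36%
print(f"weighted estimate: {weighted:.2%}")  # -> 1.47%
```

Even the weighted figure says nothing about respondents who simply did not tell the truth, which is precisely why we are left guessing.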
Links:
Throughout the analysis in the run-up to the 2010 UK General Election, the subject of ‘body language’, or non-verbal communication, has dominated. Faced with the first presidential-style leaders’ debates, it’s often the simplest, most televisual form of analysis. So, why discuss politics or policies when we can be discussing ties, smiles and hand gestures? The whole spectacle seems to have placed hairline recession far higher on our list of priorities than global recession. So as the three leading men took to the stage, sounding like the new cast of ‘Last of the Summer Wine’, it all became rather ‘X-Factored’. When faced with a buffet of mediocrity, the one with the nice smile gets the vote. It doesn’t matter that they have a voice that sounds like the wind whistling through an aardvark’s rectum. Better than the rest is not always much of an endorsement when there’s not much on offer.
Part of the problem with the media’s obsession with body language is that it easily passes for ‘scientific’ analysis. Unfortunately, this is at the expense of more serious, evidence-based analysis. It’s also partly due to fakesperts who have either not read or not understood the research available on non-verbal communication. What happens is that a misunderstanding is so routinely and frequently passed off as ‘fact’ that it becomes accepted. I refer, of course, to the 7% myth. I’ve blogged about this on several occasions, and not a week goes by without some ‘expert’ repeating it on Twitter, with all the originality of a bigot who regurgitates, parrot-fashion, the old unfounded, unsupported myths of prejudice.
So let’s be clear: words do NOT account for just 7% of any communication, with non-verbals carrying the other 93%. Just try watching a foreign-language film without subtitles. Would you really understand 93% of the film? Non-verbals take precedence when we are forming a first impression. So, for instance, in the first leaders’ debate, Nick Clegg’s non-verbal communication was probably more important than Brown’s or Cameron’s. This is mainly because he was the least known of the three, owing to lesser media coverage. It helps to explain why he did so well in the first debate: he made a really good first impression. In the following weeks, we’d already formed a first impression, so his words became more important, and the ‘nice bloke’ style wasn’t as impressive.
Non-verbal communication is also important when trying to decide whether someone is lying. If there’s a mismatch between words and gestures we suspect that someone is lying or trying to hide something. Now the cynical might argue that using body language to try to decide whether a politician is lying is a pretty redundant activity.
Non-verbal communication is also very context-dependent. For instance, we tend to behave quite differently with family and friends than we do with work colleagues or at an interview. Now put on the spotlight, turn on a few cameras, invite an audience, and realise that you won’t be seeing natural non-verbal indicators of private thoughts or personality traits. Instead, you will see different levels of ability in media training. Coping well in front of the camera doesn’t necessarily make a good Prime Minister, but it is a good skill for would-be politicians. Far from helping us to see the truth, good media training can help to control and obscure it.
If you’ve ever seen those confessional chat shows, you’ll notice that the guests are often placed centre stage on a chair without arms, so they are forced to do something with their hands. If they fold their arms to feel more comfortable, it doesn’t mean they are being defensive and lying. It may just mean that they are at a loss what to do with their arms because there are no arms on the chair. The fact that they are caught out lying has little to do with ‘reading the body language’. Of course someone on the stage is lying; that’s the whole point of the show. But let’s not pretend that the ‘expert’ could tell from a producer-contrived defensive gesture. Now consider the leaders’ debates. All three stood at a podium and could grip the sides. This certainly helps control the upper body. So people who want to present themselves as truthful or calmer will make fewer and smaller upper-body gestures. Too little movement of the arms and it comes across as indifference. Too much waving of the arms and it looks like someone who needs to get a grip (on themselves, and on the podium). Analysing the three leaders, David Cameron was more controlled in his upper body, compared with when he is out on the streets in his shirt sleeves. Gordon Brown and Nick Clegg used bigger gestures, so that their hands were visible in close-up shots; Cameron’s were not. Now, how you read this depends on your political beliefs, since you interpret everything through the filters of your attitudes.
Smiling often increases likeability, but only if it’s a genuine smile. Gordon Brown’s smile looked forced or nervous, or else it was an attempt to seem less dour and serious than he has been portrayed in the media. So we saw lots of Gordon Brown’s teeth. By contrast, we barely got to peek inside David Cameron’s mouth; he was quite tight-lipped. Clearly, smiling wasn’t so important in his case. So whereas Brown did more smiling, or shook his head when challenged, Cameron did more brow-furrowing, which could mean he didn’t agree or didn’t understand. Again, the interpretation comes down to your political persuasion.
Nick Clegg perhaps came across as the most ‘human’ and natural of the three. He was less evasive and answered questions the most directly. However, none of that was by chance; there were lots of techniques involved, designed to create that impression. Although, by the third debate, there were shades of ‘game show host’ in his performance. By contrast, Cameron, throughout each debate, avoided answering direct questions put to him. Brown often resorted to repeating facts and figures almost like a mantra. I suspect some people will never want to hear the phrase ‘tax credits’ ever again. A key strength of both Clegg and Cameron was that they used simpler terminology, whereas Brown was more wordy. For instance, Brown referred to ‘remuneration’ where the other two were more likely to refer to ‘pay’. In a fast-paced debate, people often don’t listen; they scan for key words that match or conflict with existing attitudes.
Post-debate analysis showed that those surveyed in the studio responded favourably when key words were mentioned. So, for instance, when Cameron mentioned ‘discipline in the classroom’, there was a peak in audience ratings. In some ways it showed that people were voting with their attitudes. If you ask someone to rate their like or dislike of something, an attitude is formed very quickly on limited information. Key buzzwords and phrases are far easier than statistics to process in the context of existing attitudes, except when the figures were soundbite simplifications such as ‘£700 back in your pocket’.
In the first debate, Nick Clegg was very diligent in remembering names and making visual contact with the studio audience. Having established that contact, he then made contact with the TV audience by looking into the camera. This made his approach appear more personal. Cameron followed this lead and adopted the approach more after the first debate, although his demeanour was more formal than Clegg’s. By contrast, Gordon Brown addressed the studio audience and his opponents on the stage; although this would have been more personal for the studio audience, it was less so for the TV audience. Simply put, both Clegg and Cameron made more ‘eye contact’ with the TV audience.
Another interesting point that I have not seen discussed is the stage positions throughout the debates. Gordon Brown was the only leader not to occupy centre stage; he appeared in the same place throughout the three debates. He also moved his upper body from side to side more than the other two. It’s possible that Brown did not move position from week to week because having his opponents on his right was better for him, on account of his blindness in the left eye. During the first debate, relative newcomer Clegg occupied centre stage, which again may have contributed to his high ratings. Context is everything when interpreting non-verbal communication.
Finally, we need to consider the attitudes we held prior to the debates, which will have coloured our expectations and perceptions. It’s become a common phrase in everyday conversation that ‘we need a change’, and Clegg and Cameron, in their opposition roles, were better placed to work the word ‘change’ into their answers. Brown began from a defensive position, although he did ‘go on the offensive’ throughout the three debates. The problem is that he appealed to ‘finish the job’, and to some this may have been interpreted as ‘more of the same’. It was also notable during the post-debate analysis that those surveyed liked it least when the leaders ‘attacked’ each other. So Brown’s strategy didn’t resonate with the audience, whereas Clegg’s ‘let’s work together’ did. The common perception of the House of Commons is of a bunch of schoolchildren fighting in the playground (and stealing from the tuck shop). Clegg’s appeal to work together to ‘sort out the mess we’re in’ struck a chord that things could really be different. However, ‘working together’ and ‘hung parliament’ have very different connotations following lots of media scaremongering.
So did the ‘Browny, Cammy, Cleggy’ show really enlighten or inform, or did it merely entertain? Was it all about style and soundbite rather than substance? Although there were appeals to values during the debates, nothing was particularly well articulated, instead relying on the old chestnut of ‘family values’. Anyone who actually belongs to a family will know that families aren’t all they are cracked up to be. It’s just a shorthand way of saying ‘wholesome and decent’, and often a back door to sneak in sexism and homophobia.
Values are important. They are certainly far more important than body language debates. Our attitudes support our values, and they in turn should inform our politics. Our opinion that they have the X Factor (or not) shouldn’t be the defining quality. We don’t even have to like them; we just have to choose the candidate that represents the party that most closely matches our vision of the world – our values. And if we happen to face a parliament that’s well hung, let’s not get too excited! And as for your vote, it’s not just having one that matters, it’s what you do with it that counts.
For quizzes to help you decide how to use your vote see:
For more on the 7% myth see:
In my previous post, The Myth-Busting Sexual Anatomy Quiz, one of the answers in particular prompted comments and questions. I stated that the clitoris is not a mini-penis, as it is often described; rather, biologically speaking, the penis is an enlarged clitoris. But how can this be, and does it really matter?
The Psychology of Gender looks at our biology, history and culture to consider the impact of gender roles and stereotypes, and addresses the ‘dilemmas’ we have regarding gender in a post-modern world (see UK / USA).
Of course, the statement was meant to be contentious and spark discussion, and I discuss it fully in my book The Psychology of Gender (see UK / USA). When we talk about sex and gender, we are storytelling, and how we set the scene for our stories is key. By describing the clitoris as a ‘mini-penis’, we set up a chain of assumptions. By describing the clitoris ‘in terms of the penis’, we assume that the penis comes first (pause for sniggering). There’s also the not-so-subtle implication that the clitoris is an underdeveloped penis and therefore an inferior organ. These assumptions are biologically incorrect.
The part of the story often omitted is that male development requires hormones to suppress female development and further hormones to enhance male development. This makes female anatomy the platform for male development and so technically the penis is an enlarged clitoris. It sounds provocative because it goes against the ‘received wisdom’ or ‘gender spin’ – the story that gives primacy to the penis.
If we compare the female and male genitalia we can see how the embryonic tissue developed down the two routes:
ovaries = testes
labia majora (outer lips) = scrotum
labia minora (inner lips) = underside of the penis
glans (head of clitoris) = glans (head of penis)
shaft (erectile tissue) of clitoris = shaft (erectile tissue) of penis
vagina = no comparable structure in the male.
It’s notable that the word ‘vagina’ is used for the female genitals when in fact it only applies to the birth canal. So, in describing the female anatomy in everyday language, we put the emphasis on reproduction. The collective term for the female genitalia is the vulva, which includes the clitoris, the only organ in the human body solely for sexual pleasure. The everyday use of ‘vagina’ for the female genitalia is more gender spin, as it keeps the emphasis on penetration and again ‘sidelines’ the clitoris. Again, it’s how we edit the story.
Then there’s the G-Spot to contend with. That’s it, let’s get the emphasis back up the vagina in a quest for the orgasmic grail. There is certainly no universal agreement that the G-Spot really exists. It is supposedly located on the anterior wall of the vagina, but no distinct structure has been identified and the evidence is largely anecdotal. Academic research suggests that:
the special sensitivity of the lower anterior vaginal wall could be explained by pressure and movement of clitoris’ root during a vaginal penetration and subsequent perineal contraction.
This research counters the story of the ‘clitoris as tiny penis’. In fact, its root extends deep into the body. So what some women experience as the G-Spot may be a by-product of the movement of the clitoris. More evidence, if any were needed, that the clitoris is not an inferior penis, and females are not ‘incomplete’ males.
For a fuller discussion of how to tell better (and more accurate) gender stories, see The Psychology of Gender (For US click, For UK click ).
Post updated: 29 May 2019
If you found this post interesting:
Other popular sex and gender posts by Gary Wood include:
Link:
I was invited to offer my thoughts on the recent news story regarding gender differences and the seven deadly sins for a short piece on BBC Radio Leicester’s morning show.
I hadn’t heard about the story and was surprised to learn that the ‘research’ came from The Vatican. Apparently the ‘researchers’ had collected statistics from the confessional booth.
Now I don’t know much about the ‘ins and outs’ of the confessional booth but a key principle of research is informed consent. I wonder if the people entering the booth were told that their confessions would form part of a headline grabbing bit of research. I assume not.
Aside from this ‘cardinal research sin’, there’s a big question over how the sins were defined. Was ‘pride’ for women comparable to that for men? Who decided which confession should be lumped into which ‘deadly sin’? All in all, it’s not really research at all, but sadly it’s more likely to get into the newspapers than the real stuff.
Hello and welcome to my blog. As you may have gathered, the key theme running through it will be psychology. I’ve recently become a ‘tweeter’ on Twitter.com and to be honest I’m still a bit bemused by it all. However, I noticed that a friend had integrated his blog and his ‘tweets’ and was inspired to do the same. So I’ve set up the blog in the hope it will inspire my tweets. Hey, it’s not the greatest of motivations but it’s got me started.
The goal for the blog is to discuss news stories that have a psychological angle and to ‘critique’ a few of the nonsense bits of research that do psychology a great disservice. Performance artist Laurie Anderson has derided blogging as ‘me-search’ (as opposed to ‘research’), so I will be bearing this in mind and keeping the emphasis on evidence-based research. Although, I’m not ruling out the odd rant or a bit of ‘thinking out loud’.
Bright Moments
Gary Wood