'What’s fascinating about the show is that you get to see something very rare.' The Traitors / BBC
Dishonesty is a notoriously difficult thing to research. That may be why it is also, hilariously, the subject of some of the biggest recent scientific fraud scandals. Francesca Gino, a significant figure in the field, was fired this year for allegedly falsifying data in studies about cheating — the first tenured Harvard professor to be removed in 80 years. I can’t help but admire Gino’s chutzpah in writing a book called Rebel Talent: Why It Pays to Break the Rules at Work and in Life while breaking the rules at work and in life.
It’s not just Gino. Her co-author Dan Ariely, who is even more famous, was also accused of making up data, including in one of the same papers. The study found that people were less likely to lie about their car mileage on insurance forms if they signed an honesty pledge at the top, rather than the bottom, of the form. The mileage data in that study had very obviously been generated with a random number generator, although Ariely denies fabricating it himself.
There is something deeply splendid, as the data sleuths who uncovered the problems noted, about all this: “We were, like, Holy shit, there are two different people independently faking data on the same paper. And it’s a paper about dishonesty.”
If we can’t trust the scientists, then who can we trust? How about reality TV? The Traitors is back again this week, and what’s fascinating about the show is that you get to see something very rare: people lying, in high-stakes situations, where you know that they’re lying. It is, I would argue, an excellent research opportunity.
The hugely successful game show is essentially a parlour-game murder mystery: there are 25 or so people in a Scottish castle. A subset of the 25, perhaps three, are designated “traitors”; the rest are the “faithful”. The traitors must remain undetected, while, every so often, “murdering” — i.e. evicting — one of the faithful. The faithful, meanwhile, must winkle out the traitors; in daily roundtable meetings, they vote on who they think the liars are, and the person with the most votes is also evicted, whether faithful or traitor. At the end, if only faithfuls remain, they share a pot of up to £120,000; but if there are any traitors left, they take it all. Still with me?
It is compelling viewing. We, the audience, know who the traitors are, so we get to see the faithful flailing around, making weird leaps of logic and spreading unfounded suspicion around like a virus. But more interesting is how we get to see the traitors themselves — if they’re good — wearing their two faces: playing the faithful off against each other, being subtle little Machiavels, twisting words and pulling strings. Telling, basically, bare-faced lies. Honestly, sometimes it’s hard to watch.
It’s dishonesty on open display, which is surprisingly rare and surprisingly uncomfortable. It’s shocking. All the more so when you see how the most skilled liars build the trust of the faithful. Perhaps our fascination with the programme stems from the fact that we spend so much of our lives trying to work out whether someone is being straight with us, whether we can trust that salesman or that builder or that would-be lover; it is refreshing to see someone unambiguously lying. And it gives us a chance to learn about how dishonesty manifests.
This isn’t something we can really do under research conditions — partly because it’s so difficult to get ground truth. Here’s what I mean by that. If you’re researching, say, the efficacy of a cancer screening test, you run your test on a bunch of people who either have cancer or don’t, and you see whether the test correctly identifies them. But you can only do that because you know, separately from what the test says, who really has cancer and who doesn’t. That’s ground truth. Without that, your ability to assess the accuracy of your test is severely limited.
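To make the ground-truth point concrete, here is a minimal sketch in Python of how a screening test is actually evaluated. The numbers and the helper function are invented for illustration; the point is simply that the test’s verdicts are scored against a status known independently of the test itself.

```python
# Evaluating a screening test requires ground truth: each subject's
# real status, known independently of what the test says.
# All data below is hypothetical, for illustration only.

def accuracy_stats(test_results, ground_truth):
    """Score a test's verdicts against independently known truth."""
    pairs = list(zip(test_results, ground_truth))
    tp = sum(t and g for t, g in pairs)            # test positive, really positive
    fp = sum(t and not g for t, g in pairs)        # test positive, really negative
    fn = sum((not t) and g for t, g in pairs)      # test missed a real positive
    tn = sum((not t) and (not g) for t, g in pairs)
    sensitivity = tp / (tp + fn)   # fraction of real positives the test catches
    specificity = tn / (tn + fp)   # fraction of real negatives it correctly clears
    return sensitivity, specificity

# Ten hypothetical subjects: True = really has the condition / is lying
truth = [True, True, True, False, False, False, False, False, True, False]
test  = [True, False, True, False, True, False, False, False, True, False]
sens, spec = accuracy_stats(test, truth)  # 0.75 and about 0.83
```

Without the `truth` list — which is exactly what is missing in real-world lie-detection — neither number can be computed at all.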
For example, in the US, law enforcement agencies still regularly use polygraph tests – “lie detectors”, in popular parlance. The FBI and CIA use them to screen employees; local and federal police use them to interrogate suspects, although the evidence is rarely admissible in court. But there’s a problem, which is that we don’t know if they work.
The idea is that they measure stress: heart rate, breathing rate, blood pressure, and “galvanic skin response” (that is, how well your skin conducts electricity, which is a proxy for how much you’re sweating). And, in theory, someone who’s lying is going to be more stressed than someone who is telling the truth.
There are all sorts of problems with this idea, of course, not least the fact that being interrogated by the police is probably pretty stressful, whether or not you’re lying. But a crucial one is that we don’t actually know whether lying is more stressful, because most of the time we don’t know who’s lying, and when we do know, the lies aren’t very stressful ones.
Real-world research into polygraph testing relies on confessions as ground truth. That is, when a suspect fails the polygraph test, they are confronted with the results. If they then confess to the crime, the polygraph is considered to have been vindicated. But there’s a problem: the polygraph’s results are used to pressure the suspect into confessing. So if they pass the test, they won’t be asked to confess; and if they fail the test, they might get pressured into confessing even if innocent. The method is “virtually guarantee[d]” to find that polygraphs work, even if they don’t, a 2008 paper said, because the test result and its verification are not independent. It’s as though the accuracy of breast cancer screening were verified by whether it convinced the patient that she had cancer, rather than whether she actually did.
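That circularity can be demonstrated with a toy simulation (all the probabilities here are invented): even a polygraph that is pure noise looks perfectly accurate when confessions, extracted only from people who fail, are treated as ground truth.

```python
import random

def simulate(n=100_000, seed=0):
    """Toy model of confession-validated polygraphy.
    The 'polygraph' here is a coin flip with zero real accuracy.
    Only subjects who fail are pressured to confess, and some do,
    guilty or innocent alike. A confession is then treated as
    ground truth, so every 'verified' case confirms the test."""
    rng = random.Random(seed)
    apparent_hits = verified = truly_guilty = 0
    for _ in range(n):
        guilty = rng.random() < 0.5      # real status (hidden in practice)
        failed = rng.random() < 0.5      # polygraph verdict: pure noise
        confessed = failed and rng.random() < 0.3  # pressure works on some
        if confessed:                    # only confessed cases get 'verified'
            verified += 1
            apparent_hits += 1           # failed + confessed counts as a hit
            truly_guilty += guilty       # what a real audit would find
    return apparent_hits / verified, truly_guilty / verified

apparent, actual = simulate()
# apparent accuracy is 1.0 by construction; actual accuracy is about 0.5
```

Because passers are never asked to confess and confessions are never checked, the apparent accuracy is 100% by construction, while the coin-flip test is really no better than chance.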
The other way of assessing polygraphs is in the lab. You might get half your subjects to commit a fake crime — stealing money from a closed room, for instance. Then you polygraph the participants, and say that if they can convince the test that they’re innocent (whether they are or not), they will win some money, say £50; the idea is to create genuine motivation to beat the test. You have your ground truth then; you know who’s lying, because you told them to lie. But now it’s only £50. It’s not going to change anyone’s life. The stakes are low, so the stress is low.
So we have two opposite problems, a kind of Heisenberg’s uncertainty principle of lying. Either you can have a realistic, high-stress situation but no ground truth, or you can have ground truth but no real stress.
I’ve used polygraphs as the example here, but the same problems apply to all research into how we lie. Do people behave differently when they are lying? There are lots of claims about eye contact, about voice pitch and tension, about language choice — do liars use hedging language like “to be honest” or “as far as I can remember” more?
There’s an entire field of research into “micro-expressions”, fleeting facial movements we apparently make when we’re concealing an emotion. (There’s another problem, which is that, like a lot of the research in this field, when it’s not fraud, it’s mainly garbage. Paul Ekman, the psychologist behind the “micro-expressions” idea, claimed to have discovered people he called “wizards” who could spot liars just by looking at them, but it was basically rubbish.)
This matters. People’s dishonesty has huge consequences! How many people have, for instance, trusted a builder to renovate their house and then learned that they’re a cowboy? How many people are taken in by Ponzi schemes or crypto scams? More obviously, the entire legal system turns on whether or not people are telling the truth. Some more reliable way of knowing whether someone believes what they say might, for instance, be very relevant in the notoriously he-said-she-said world of sexual assault cases. Politics… well. There’s a reason why there’s a book about politics called Why Is This Lying Bastard Lying To Me?
But again, this is hard to research, because when we do that research, either we don’t know whether people are lying, or we do know they’re lying, but only about some inconsequential thing in the lab which probably isn’t that stressful to lie about. I imagine lying in court, or lying at the dispatch box, is extremely stressful.
What researchers need is some sort of situation where people are known to be lying, on camera, for high stakes: perhaps for tens or hundreds of thousands of pounds. Obviously enough, that’s where The Traitors comes in.
Imagine if the contestants were hooked up to a polygraph and asked to beat it, in the knowledge that their results would be shared with the group. You couldn’t ask for a more direct, real-world test. It would be a small sample size, sure, but you could do several tests throughout the series and over several different series. And at least it would be valid. Better a small, but good, test than a huge one that’s entirely uninformative. You could do all sorts of other tests: linguistic and facial analysis, thermal imaging to detect changes in blood flow, eye-tracking to see where people look when they lie.
Of course, the whole idea only works when the stakes are genuinely high. And that’s why it won’t work at all for the current series — a celebrity series where they’re all playing for charity money and public goodwill, which I’m worried might render the whole thing a bit silly. Does Stephen Fry or Paloma Faith really care enough about the outcome to publicly stab somebody in the back? They’d only be hamming it up in any case. Will the other contestants care enough about the backstabbing for it to really hurt, anyway? Probably not. The whole point is that it needs to be nobodies for whom the money really matters, and so they are willing to debase themselves to win it.
Learning about dishonesty — actually learning about it, properly, not doing the fake crap research that is most of what is out there — could have profound impacts. Obviously not all of them would be good; imagine what governments and corporations would do with a ministry of truth. But it’s a huge part of human life, and we don’t really understand it. So, dishonesty researchers: when you’ve finished faking the data for your current studies, please do go and ask the BBC if you can do some proper research.

I used to play a browser game called Mush. There were 16 named characters on a spaceship fleeing the Mush-overrun Sol system. Hunter spacecraft from Sol came to attack them; the Daedalus, launched too early, disintegrated a little more every day. The crew had to locate planets and send expeditions to find extra oxygen, food and fuel.
Two of the characters were randomly selected as agents of Mush; they had special powers (including with some considerable effort over days to turn humans into Mush) but also some restrictions.
It was played in real-time; with limited moves and actions every day it was mostly debate and discussion, with various secret cabals meeting on multiple conversation channels. The Mush had to pretend to be human, defending against hunter attacks or on-board fires and such-like; but mistakes could happen, right?
There were win conditions: for the humans, eliminating the Mush and returning to Sol with the cure; for the Mush, destroying the mission, or returning incognito amongst the supposedly triumphant humans.
It could end in many ways (a mission typically took a week or two): a rampage by a Mush under suspicion, which would expose them but probably not destroy the mission unless well planned; humans desperately murdering the top suspects one by one (when it got to that, the victims were more often than not human); or the Mush waiting for the chance of a decisive blow, or slowly infiltrating the ship.
Unlike The Traitors, we all just did it for fun!
Ok, now do one about gratuitous defamation.
Poor Claudia.
The key to identifying lies is understanding patterns.
For instance, if you have faced enough delayed deliveries, you know when a driver is lying, or what the most common excuses are (vehicle breakdown, say).
Or if people work for you, you have to learn to know them as individuals and when they are prone to fibbing.
Naturally, detecting lies is the hardest when you are dealing with strangers or are in unfamiliar situations.
When you have travelled a lot in cabs in many cities, you know when a cab driver is literally or metaphorically trying to take you for a ride.
Generally speaking, I am on my guard when people try to be overly friendly or are standoffish.
As for the hedging language the author mentions, I haven’t personally found it to be a great filter. People tend to use such verbiage for emphasis.
Completely agree about not trusting people who try to be over-friendly. Standoffishness can be something else entirely: shyness, natural reticence, or the “British reserve” that is often mistaken for being standoffish.
The default with strangers, surely, is not to trust them to be telling the truth. Trust has to be earned, and that takes time. And even then…
I have no interest in her or her TV productions, other than noting she’s married into the Freud family. Draw what you like from that…
Can’t even look at the woman myself
Beast Games, a show in the USA, might fit the bill. 1000 contestants, all competing for $5 million. Last one standing wins. Contestants have to form teams, act like they care, etc., all to eliminate others. It is fascinating and at times so crass it is nearly unwatchable.
“Please do go and ask the BBC if you can do some proper research.”
Not sure you’ve picked the right organisation.
Thank you, Tom, I laughed out loud (honestly, I did) or did I? . . .
This reminds me of those photographic tests used to evaluate people’s ability to recognize emotions in others’ faces, like the Brief Affect Recognition Test and the Reading the Mind in the Eyes Test, often used in research on autism and on social cognition. The tests involve photos of faces that are supposed to display emotions like sadness and anger, but the people in the photographs are actors, and to me, many of the emotions they show are obviously faked. Someone who fails the test would be rated as worse at detecting others’ emotions but may actually be better at disregarding faked emotions. To create an accurate test, you’d need to use candid photos of people experiencing real emotions, and you’d need confirmation of what the person was really feeling.
Another issue with polygraphs is that some of the people tested would be experienced criminals who are no longer stressed by lying. If someone scams others for a living, for example, lying might be easy or even enjoyable for them. People participating in an experiment, or even in a game show for money, would have different levels of experience and different motivations for lying.
Career politicians are another group who seem fairly unfazed by lying.
She would walk into a top editorial job at the BBC!
I think it is fascinating just how many studies and long-established ‘truths’ in the social sciences have turned out on closer inspection to be complete BS. It’s a long list. Probably half of the studies referenced in Thinking, Fast and Slow by Kahneman have been shown to be non-replicable, and an awful lot of Malcolm Gladwell’s studies and ideas have been found to be flawed, oversimplified and inaccurate. The (in)famous Implicit Association Test developed by Greenwald and Banaji (which spawned a multi-billion-dollar industry in unconscious bias, anti-discrimination and ultimately DEI) turned out to be, well, worthless: it wasn’t clear what it measured, it wasn’t replicable, and it didn’t predict anything. That was debunked in 2006, but by that stage the ball was rolling. Criminal profiling is another area of bunkum.
The other challenge with truth and lies is how does one sift for the sub-category of bullshit? There’s a lot of that around, and whilst lying is a deliberate attempt to mislead or conceal the truth, bullshitters just say any old thing simply to sound or look impressive; they don’t especially care about the truth, per se, but it still suffers. And most of the BS-ers I’ve run into have never appeared remotely stressed about the guff they spout.
The final thought is that lying isn’t actually all bad. The majority of people tell multiple “white lies” a day, mostly to spare others’ feelings and avoid conflict. How many relationships would last if people answered “does my bum look big in this?” truthfully? The survival of the human race might actually be dependent on a bit of daily fibbing!
If I remember this correctly, a study on children who lie and children who are honest indicated that the dishonest group did better socially and were better strategic thinkers generally.
You may know more about this kind of research. But it does look as if ‘fibbing’ has survival value.
When it comes to dishonesty and bullshit (in politics, commonly called spin), I think we’re looking at a spectrum of dishonesty: from ‘white’ lies intended to express empathic care, right through to the blackest of lies, involving murder and so on, with regular bullshitting somewhere in the middle, depending on how lavish it is.
Even expressions like ‘gilding the lily’ are a mainstay of our daily lives. How many CVs paint a rosy picture, I wonder? Indeed, we have lots of expressions for what might be called ‘forms of bullshit.’
In Buddhism, even white lies are a no no. Wise speech is fundamentally and unambiguously honest. Guess what? It’s incredibly hard to do.
Fibbing is just part of the game of living: if you never did it, you are probably fibbing to yourself… or losing.