
AI and the Future of Lie Detection

"We live in a world now where we know how to lie. With advances in AI, it is very likely that we will soon live in a world where we know how to detect truth. The potential scope of this technology is vast — the question is how should we use it?"

Some people are naturally good liars, and others are naturally good lie detectors. Individuals in the latter group can often sense lies intuitively, picking up on fluctuations in pupil dilation, blushing, and a variety of micro-expressions and body movements that reveal what’s going on in someone else’s head. This is because, for the vast majority of us who are not trained deceivers, when we lie, or lie by omission, our bodies tend to give us away.

For most of us, however, second-guessing often overtakes intuition about whether someone is lying. Even if we are aware of the cues that may indicate a lie, we cannot observe and process them all simultaneously in real time, leaving us, ultimately, to guess whether we are hearing the truth.

Now suppose we did not have to be good lie detectors, because data telling us whether someone was lying was readily available. Suppose that, with this data, we could determine with near-certainty the veracity of someone’s claims. We live in a world now where we know how to lie. With advances in AI, it is very likely that we will soon live in a world where we know how to detect truth. The potential scope of this technology is vast; the question is how we should use it.


The Future of AI Lie Detection  

Imagine anyone could collect not only wearable data showing someone’s (or their own) heartbeat, but also continuous data on facial expressions from video footage. Imagine you could use that data, with a bit of training, to analyze conversations and interactions from your daily life, replaying ones you found suspicious with a more watchful gaze. Furthermore, those around you could do the same: imagine a friend, or a company, could use your past data to reliably differentiate between your truths and untruths, between matters of import and things about which you could not care less.
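To make the scenario concrete, here is a deliberately toy sketch in Python. Everything in it is hypothetical: the function, the threshold, and the data layout are invented for illustration, and elevated arousal is not the same thing as lying. Given heart-rate samples from a wearable and timestamps for each utterance in a recorded conversation, it flags the utterances during which heart rate deviates most from the recording’s baseline.

import statistics
from dataclasses import dataclass

@dataclass
class Utterance:
    start: float  # seconds into the recording
    end: float
    text: str

def flag_tense_moments(utterances, hr_samples, threshold=1.0):
    """Flag utterances whose mean heart rate deviates from the whole
    recording's baseline by more than `threshold` standard deviations.
    hr_samples is a list of (timestamp_seconds, beats_per_minute) pairs.
    A real system would need a far more careful baseline than this."""
    rates = [bpm for _, bpm in hr_samples]
    baseline = statistics.mean(rates)
    spread = statistics.stdev(rates)
    flagged = []
    for u in utterances:
        # Collect the heart-rate samples that fall within this utterance.
        window = [bpm for t, bpm in hr_samples if u.start <= t <= u.end]
        if window and abs(statistics.mean(window) - baseline) > threshold * spread:
            flagged.append(u)
    return flagged

# Replaying a conversation you found suspicious:
talk = [Utterance(0, 5, "I was home all evening."),
        Utterance(6, 9, "I never saw the file.")]
pulse = [(0, 72), (1, 74), (2, 71), (3, 73), (4, 72), (5, 73),
         (6, 95), (7, 99), (8, 97)]
for u in flag_tense_moments(talk, pulse):
    print("elevated arousal during:", u.text)

The point is not that this works as a lie detector (it does not), but how little code the basic replay-and-flag pattern requires once the data exists.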

This means a whole new toolkit for investigators, for advertisers, for the cautious, for the paranoid, for vigilantes, for anyone with internet access. Each of us will have to learn to manage and navigate this new data-driven public record of our responses.

The issue for the coming years is not whether lying will be erased (of course it will not), but rather how these new tools should be wielded in the pursuit of the truth. Moreover, with a variety of potential ways of misreading and misusing these technologies, in what contexts should they be made available, or promoted?

The Truth About Knowing the Truth

Movies often quip about the desire to have a window into someone else’s brain: to feel assured that what people say describes what they feel, that what they feel describes what they will do, and that what they do demonstrates what everything means for them. Of course, we all know the world is not so neat, and many of us fall back on searching for advice online. What happens when such advice is further entrenched by a wave of newly available, but poorly understood, data?

What will happen, for example, when this new data is used in the hiring process, with candidates weeded out by software dedicated to assessing whether, and about what, they have lied during an interview? What will happen when the same process is used for school selection, jury selection, and other varieties of interviews, or when the results are passed along to potential employers? As the number of such scenarios grows, the question we have to ask is: when is our heartbeat private information?

Is knowledge of our internal reactions itself private, simply because until now only a small segment of perceptive people could tell what was happening? Communities often organize around the paths of least resistance, creating a new divide between those who understand and can navigate this new digital record, and those who cannot.

Imagine therapists actively recording cognitive dissonance, news shows identifying in real time whether or not a guest believes what they are saying, companies reframing interviews with active facial analysis, or quick border-security questioning. The expanding scope of sensors is pushing us from post-truth toward an age of post-lying, or rather, toward an end to our comfort with the ways in which we currently lie. As with everything, the benefits will not be felt equally.

We might even be able to imagine the evolution of lie detection moving towards brain-computer interfaces — where one’s right to privacy must then be discussed in light of when we can consider our thoughts private.

In courtrooms, if we can reliably tell the difference between reactions during a lie and reactions during the truth, do witnesses have a right to keep that information private? Should all testimony be given in absolute anonymity? Researchers at the University of Maryland developed DARE, the Deception Analysis and Reasoning Engine, which they expect to be only a few years away from near-perfect deception identification.

How, then, should we think about the Fifth Amendment to the US Constitution, and how should we approach the right not to incriminate oneself? With the advent of these technologies, perhaps the very nature of the courtroom should change. Witnesses are not given a polygraph on the stand for good reason: it’s unreliable. But there may be little stopping someone from using a portable analytics system to read a witness’s vitals or analyze a video feed from a distance, and then publishing the results for the court of public opinion. How should our past behavior be recorded and understood?


How we design nudges, how we design public spaces, how we navigate social situations, job offers, and personal relationships all depend on a balance of social convention by which we allow ourselves, and others, to hide information. Yet what should we do with a technology that promises to expose this hidden information? Is a world of full truth preferable to the one we have now? Will we have a chance to decide?

Advances in AI and the democratization of data science are making the hypothetical problem of what kind of world we prefer an all-too-real discussion, one we need to have soon. Otherwise, we’ll have no say in determining what lies ahead.

About the Authors

Josh Entsminger

Virginia Tech

Josh Entsminger is an applied researcher at Nexus Frontier Tech. He also serves as a Senior Fellow at École des Ponts Business School’s Center for Policy and Competitiveness, a Research Associate at IE Business School’s social innovation initiative, and a research contributor to the World Economic Forum’s Future of Production initiative.

Mark Esposito

Harvard

Mark Esposito is a member of the Teaching Faculty at Harvard University’s Division of Continuing Education and a Professor of Business and Economics with an appointment at Hult International Business School. He is an appointed Research Fellow at the Circular Economy Centre at the University of Cambridge’s Judge Business School. He is also a Fellow of the Mohammed Bin Rashid School of Government in Dubai.

Terence Tse

ESCP Europe Business School

Terence is a co-founder & managing director of Nexus Frontier Tech: An AI Studio. He is also an Associate Professor of Finance at the London campus of ESCP Europe Business School. Terence is the co-author of the bestseller Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends. He also wrote Corporate Finance: The Basics.

Danny Goh

Oxford

Danny is a serial entrepreneur and an early-stage investor. He is a partner and the Commercial Director of Nexus Frontier Tech, an AI advisory business with a presence in London, Geneva, Boston, and Tokyo that assists CEOs and board members of different organisations in building innovative businesses that take full advantage of artificial intelligence technology.
 

