Political Persuasion: Rethinking The Rhetoric That Resonates

May 18, 2024

In a previous article for TDL, I explored evidence that social media environments lead people to over-perceive political polarization and the level of hostility held by those “on the other side” of an ideological issue. 

One of the most crucial distinctions between social media and face-to-face interactions is that online discussions typically lack social cues like vocal tone and facial expression, forcing us to rely almost entirely on our choice of words. 

And words, words can be tricky. Especially in political conversations.  

We may think we’re communicating clearly, but the fact that online environments are proven hotbeds for hostility should cause us to pause and reconsider: What if we’re incorrectly detecting disagreement because someone uses words differently than we do? What if our attachment to our rhetoric is reducing our ability to understand, connect with, and persuade others? 

Language resonates differently across demographics

In 2021, a group of researchers conducted the Civic Language Perceptions Project, a survey measuring the reactions of 5,000 Americans to 21 political terms like “liberty,” “justice,” and “patriotism.” 

The survey categorized the respondents into demographics like political affiliation, age, gender, race, and socio-economic status, revealing that many terms elicit disparate reactions. 

For example, people under 35 years old responded more favorably to the term “diversity” than those over 35. Likewise, those who identified as politically liberal showed a stronger preference for the term “activism” than did political conservatives. Additionally, the survey identified terms like “unity” that were widely favored across demographic groups.1

The organization funding the study then supported the development of “Pluralytics,” an artificial intelligence tool that uses the study’s data to evaluate how content will be perceived by specific audiences and to assist in generating more effective messaging. A similar project called “DepolarizeGPT” also leverages AI to identify political language that bridges ideological divides.
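To make the idea concrete, here is a minimal, hypothetical sketch of what audience-aware term checking could look like. It is not the actual Pluralytics or DepolarizeGPT implementation; the favorability scores, audience labels, and the flag_terms function are all invented for illustration.

```python
# Hypothetical sketch of audience-aware term checking, loosely inspired by
# tools like Pluralytics. The favorability scores below are invented for
# illustration and are NOT the Civic Language Perceptions Project's data.

# FAVORABILITY[term][audience] -> score from -1 (repels) to +1 (resonates)
FAVORABILITY = {
    "diversity": {"under_35": 0.6, "over_35": 0.2},
    "activism": {"liberal": 0.5, "conservative": -0.3},
    "unity": {"under_35": 0.7, "over_35": 0.7, "liberal": 0.6, "conservative": 0.6},
}

def flag_terms(message: str, audience: str, threshold: float = 0.0) -> list[str]:
    """Return political terms in the message that score at or below the
    threshold for the given audience, i.e. candidates for paraphrasing."""
    words = message.lower().split()
    return [
        term
        for term, scores in FAVORABILITY.items()
        if term in words and scores.get(audience, 0.0) <= threshold
    ]

print(flag_terms("Our activism builds unity", audience="conservative"))
# ['activism']  -> consider rephrasing this term for this audience
```

In practice, tools like these rely on survey-derived data and far richer language models, but the underlying idea is the same: the same word can land very differently depending on who is reading it.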

Along with these new tools, studies like the Civic Language Perceptions Project offer opportunities to reflect on instances where we are so repelled by someone else’s choice of words that we misjudge their perspective. 

We can also recognize when our chosen terminology may be limiting our ability to successfully communicate—our words may not be resonating with others, even when the heart of our argument has the potential to. Politicians, organizations, and individuals attempting to reach new or wider audiences may be able to use this information to select better words. 

But we can’t always be expected to switch up our political terminology. We won’t always know the best term to use in a given situation, and asking people to use political words that don’t feel authentic during an impassioned discussion may not be realistic or ethical. 

Interestingly, many of the guidelines developed out of the Civic Language Perceptions Project did not require people to adopt different political terminology. In fact, because the researchers found that many political terms lack universal favorability, they recommended making commentary more accessible through linguistic tactics like paraphrasing: explaining a concept in everyday, non-technical words instead of using political terms and assuming those terms will be universally received (see what I did there? That’s paraphrasing!).2

Ordinary words hold more power than we think   

Ordinary words can often communicate our ideas effectively, without the pitfalls of political terminology. Linguistic interventions for successful cross-ideological dialogue needn’t focus solely on neutralizing polarizing terms; they can also harness the unrealized power of the “regular” words all around them. 

Harvard linguistic researchers called these techniques “content- and topic-agnostic” interventions. They conducted a series of social experiments using artificial intelligence to identify rhetorical trends in the most effective cross-ideological conversations. 

Their data revealed a set of four linguistic markers shown to…

A) Boost a speaker’s persuasiveness

B) Reduce the probability of conflict escalation in their discussions

C) Increase the likelihood that a discussion partner with opposing views would like to collaborate with them in the future

They called this four-pronged approach the “receptivity recipe”:3, 4  

  1. Positive statements rather than negations: “It would be so wonderful if…”
  2. Explicit acknowledgement of understanding: “What I think you are saying is…”
  3. Finding points of agreement: “We are both concerned with…”
  4. Hedging to soften claims: “This might happen because…” 

Regardless of discussion partner or ideological topic, researchers could improve a participant’s persuasiveness by teaching them simple syntactic shifts like hedging: making a statement sound less definite with phrases like “it’s possible that…,” which signal openness to alternative viewpoints. 
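As a toy illustration of the idea (not the researchers’ actual method, which used machine-learning models trained on real conversations), a simple rule-based check for the four markers might look like the sketch below; every phrase list and name in it is an invented assumption.

```python
# Toy, rule-based illustration of checking a reply for "receptivity recipe"
# markers. The original studies used trained language models; this sketch
# only counts a few hand-picked phrases, all invented for illustration.

RECEPTIVITY_MARKERS = {
    "positive_framing": ["it would be wonderful if", "it would be great if"],
    "acknowledgement": ["what i think you are saying", "if i understand you"],
    "agreement": ["we are both", "we both care about", "i agree that"],
    "hedging": ["it's possible that", "this might", "perhaps", "i could be wrong"],
}

def receptivity_score(message: str) -> dict[str, int]:
    """Count how many marker phrases of each type appear in a message."""
    text = message.lower()
    return {
        category: sum(phrase in text for phrase in phrases)
        for category, phrases in RECEPTIVITY_MARKERS.items()
    }

reply = (
    "What I think you are saying is that costs matter most. "
    "We are both concerned with fairness, and it's possible that a pilot "
    "program could work; it would be wonderful if we tried one."
)
print(receptivity_score(reply))
# {'positive_framing': 1, 'acknowledgement': 1, 'agreement': 1, 'hedging': 1}
```

A check like this could nudge a writer to add an acknowledgement or a hedge before hitting send, though the studies themselves taught the markers directly rather than scoring drafts.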

It’s important to note that the “receptivity recipe” does not coach participants to actually become more cognitively receptive to their discussion partner’s opinions in order to gain their approval. It solely encourages participants to display dispositional receptivity in order to boost their effectiveness within a conversation.  

Like the studies on political terminology, the research on “content-agnostic” linguistic intervention provides opportunities for self-reflection. Does our discussion partner deeply disagree with us, or are we failing to adequately demonstrate that we’re open to a two-way conversation? Unfortunately, the data suggests that most of us are terrible at assessing how we’re being perceived. Luckily, these skills can be learned quickly, and if we teach just one conversation partner, the other is likely to follow suit. People mimic each other’s phrasing, so altering the behavior of one person can positively impact the behavior of their discussion counterpart.

Do we want to be heard?

Not every cross-ideological discussion is an attempt at civil discourse. We may think we want to be heard, but further research is exploring the numerous, often subconscious, and potentially conflicting motivations we experience when communicating.5 Researchers also acknowledge that while words matter, so does the messenger: what we believe about a person impacts how we receive what they have to say.6,7 

Still, for those who find value in persuasion or building trust, it’s important to thoughtfully consider how both political and everyday language can act as a bridge or a barrier. Behavioral science and the application of artificial intelligence can help us rethink the type of rhetoric that truly resonates. 

References

  1. PACE (Philanthropy for Active Civic Engagement). (2022). America + Civic Language: Civic Language Perceptions Project, national survey collected November 2021. Retrieved via PACEfunders.org/Language. https://app.box.com/s/5blrkyrtf1apmko9dbduwz2mw473js1e
  2. “Civic Language Guidance: Wisdom From the Field.” Philanthropy for Active Civic Engagement (PACE), 1 Feb. 2023, www.pacefunders.org/civic-language-guidance/. 
  3. Yeomans, M., Minson, J., et al. “Conversational receptiveness: Improving engagement with opposing views.” Organizational Behavior and Human Decision Processes, vol. 160, Sept. 2020, pp. 131–148, https://doi.org/10.1016/j.obhdp.2020.03.011.  
  4. Minson, J. “To Be Heard, Listen.” The Society for Personality and Social Psychology, 18 Mar. 2022, https://spsp.org/news-center/character-context-blog/be-heard-listen
  5. Yeomans, M., Schweitzer, M., et al. “The conversational circumplex: Identifying, prioritizing, and pursuing informational and relational motives in conversation.” Current Opinion in Psychology, vol. 44, Apr. 2022, pp. 293–302, https://doi.org/10.1016/j.copsyc.2021.10.001
  6. See 1.
  7. See 3.

About the Author

Kaya Foster

Kaya Foster has over a decade of experience designing and implementing engagement programs and campaigns for nonprofits, community groups, and institutions of higher education. She is interested in how behavioral science can empower everyday people to make a difference, and guide organizations shaping public policy. Kaya is a graduate of the Sustainability & Behavior Change program at UCSD, a robust professional certification grounded in "Community Based Social Marketing", an internationally utilized approach to "selling" altruistic behavior adoption & encouraging community engagement. She also holds a B.A. from UCLA.
