Freeing Cognitive "Bottleneck Congestion" in Autonomous Vehicles

Jun 05, 2018

In March 2018, Tesla’s second fatal crash involving its Autopilot self-steering system happened on Highway 101 in Mountain View, California (The Guardian Staff 2018). Collision reports showed that the driver, Apple software engineer Wei Huang, had received both visual and auditory cues from the self-steering system before his vehicle crashed into a concrete median, tragically killing him. According to the reports, Huang had the median in view for 150 meters, or about five seconds in which he could have reacted and avoided the barrier had he been paying full attention to the situation at hand.

Although autonomous vehicle systems have saved more lives than they have taken (Marshall 2017), should we expect more incidents like these as their production continues? What is more, does the fact that accidents still occur with autonomous self-steering systems (which are designed to improve driver safety) necessitate a deeper investigation into the relationship between hazard perception, automated cues, and multitasking?

Although they represent an important technological advance, autonomous vehicles still introduce disturbances for drivers, who may otherwise view them as an invitation to kick back and direct their attention elsewhere. Placing that much trust in driver-assistance design can expose drivers to a dangerous amount of risk, instead of making driving easier and safer. According to behavioral science, this increased capacity to multitask behind the wheel may bring further problems for other drivers and for road safety in general, as studies show that our cognitive decision-making systems aren’t as sophisticated as we may think.

To mitigate these risks, the autonomous vehicle industry may benefit from these behavioral science insights to uncover more about drivers’ cognitive architecture and decision-making processes. By understanding when, where, and how drivers most optimally multitask, the industry can help design policies and technological interventions that enhance synchrony in the autonomous transportation realm.

The Cognitive Bottleneck

To understand the risks behind autonomous driving systems, we first need to understand the relevant dynamics of attention processing and multitasking. This is where the field of behavioral science comes in. The discipline’s scientific analysis of human decision-making offers the autonomous transportation sector important inferences about how people act when their attention is divided among multiple environmental stimuli, as is often the case in a vehicle that invites multitasking: monitoring the road and safety cues while writing an email or answering a phone call.

Our cognitive system often allows us to process multiple components of a task in such circumstances, but only to a limited extent. When parallel streams of incoming information must narrow and converge at a central “bottleneck,” we become unable to react to one or both sources of information, which in the worst case could include hazards in the road ahead.

Research suggests that we aren’t hardwired to do two things at once in all circumstances. Knowles (1963) proposed that the human mind can only perceive as much as its pool of cognitive resources allows, and that we get much worse at reacting to additional stimuli as we near full cognitive capacity. In the context of driving an autonomous vehicle, fully attending to a secondary task, such as looking at a phone or laptop screen, consumes cognitive resources that should otherwise be allocated to the primary task of driving, particularly in critical situations.

A Question of Time?

So why is the human brain less efficient at making decisions when the cognitive bottleneck is busy processing parallel streams of incoming information? One approach asks whether it has something to do with the timing of the different streams being processed. Welford (1952) pondered the relationship between time and decision-making within the context of a response bottleneck, and sought to understand why people were unable to respond to two discrete stimuli when they were separated by an interval of less than 500 ms.

He attributed this brain lag to the “psychological refractory period”, a theoretical segment of time during which we’re unable to respond to a second task until we’ve finished responding to the first. This model theorizes that our brains use what is known as serial processing (where we process one stimulus at a time), which depletes our cognitive resource pool much faster than parallel processing (where we can process multiple stimuli at a time).
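
To make the bottleneck account concrete, here is a minimal sketch of the serial-processing model in Python. The stage durations are illustrative assumptions (chosen so that the interference disappears at roughly the 500 ms separation Welford reported), not values fitted to any data; only the structure of the model comes from the literature.

```python
# Minimal sketch of a central-bottleneck (PRP) model. Each task passes
# through perceptual (p), central (c), and response (r) stages, and the
# central stage of task 2 cannot begin until task 1 has cleared the
# bottleneck. Stage durations (ms) are illustrative assumptions.

def rt2(soa, p1=150, c1=350, p2=150, c2=350, r2=100):
    """Reaction time to the second stimulus, measured from its onset."""
    c1_end = p1 + c1                   # task 1 occupies the bottleneck until here
    c2_start = max(soa + p2, c1_end)   # task 2 may have to queue
    return c2_start + c2 + r2 - soa

for soa in (50, 150, 300, 500, 700):
    print(f"SOA = {soa:3d} ms -> RT2 = {rt2(soa):.0f} ms")
```

Run it, and the second response gets slower the closer together the two stimuli arrive, with the cost vanishing once they are roughly half a second apart – the signature of the psychological refractory period.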

Here, multitasking in autonomous vehicles raises the possibility of dangerous accidents, as drivers on their phones or looking elsewhere will not be able to respond to hazard warnings until they have finished processing the primary task. In many cases, where dangers arise quickly, it may already be too late by the time the driver’s attention is free to return to the road ahead.

However, this model of serial processing rules out multitasking altogether: in order to pay attention to a second task, we must always finish attending to the first. Serial processing is streamlined, but for that very reason it is inefficient when it comes to multitasking. A model of parallel processing, on the other hand, states that our brains can efficiently process dual tasks – rather than allocating resources in an “all or nothing” way, we can share them across multiple resources with multiple bottlenecks (Fischer and Plessow 2015).

This alternative model accounts for the extent to which we can actually process multiple tasks in a way that we perceive as seamlessly simultaneous. For instance, we can listen to the radio while merging lanes, or hold a conversation with a friend while making a left turn. Either type of processing – serial or parallel – can only take us so far, though. New insights suggest that it becomes more difficult to switch tasks or multitask when mappings between stimuli and responses are mismatched, or “incongruent.” Parallel processing, the type used for multitasking, and serial processing, the type used for task switching, both occur at optimal levels when stimuli and/or responses are congruent with one another.

Solving the Issue of “Crosstalk”

To get an idea of what congruence means in this context, Hommel (1998) demonstrated the consequences of responses being incongruent with one another, referred to as crosstalk effects. His experiment involved a dual-task paradigm designed to investigate the effect of crosstalk on processing efficiency. The stimuli for the first task were the colors of letters, to which people had to respond manually on a keyboard (left arrow for red and right arrow for green). The stimuli for the second task were the identities of letters, to which people had to respond verbally (saying “right” for an S or “left” for an H). When the responses were incongruent (left/“right”), people were much slower because of increased crosstalk, but this effect was reversed when responses shared the same conceptual category (left/“left” or right/“right”).
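
For readers who like things concrete, the congruency logic of this design is simple enough to sketch in a few lines of Python. The stimulus-response mappings below come from the description above; everything else is invented purely for illustration.

```python
# Toy reconstruction of the congruency manipulation in Hommel's (1998)
# dual-task paradigm. Mappings follow the description in the text;
# the code merely classifies trials, it does not model reaction times.

COLOR_TO_KEY = {"red": "left", "green": "right"}   # task 1: manual key press
LETTER_TO_WORD = {"S": "right", "H": "left"}       # task 2: verbal response

def classify_trial(color, letter):
    """Label a trial congruent when both responses share a conceptual category."""
    manual = COLOR_TO_KEY[color]
    verbal = LETTER_TO_WORD[letter]
    return "congruent" if manual == verbal else "incongruent"

for color in COLOR_TO_KEY:
    for letter in LETTER_TO_WORD:
        print(f"{color} + {letter} -> {classify_trial(color, letter)}")
# Hommel found slower responses on the incongruent trials (crosstalk),
# and faster ones when the manual and verbal responses matched.
```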

The explanation for this can be reasoned within the parameters of a multiple-resource bottleneck model. Parallel processing for tasks that require different resources (e.g., manual and verbal response resources) shows the least amount of crosstalk when there is dimensional overlap, that is, when responses share the same conceptual category. What this means is that, when driving autonomous vehicles, multitasking difficulties don’t merely arise from having to process multiple stimuli at once, like changing lanes while having a conversation, but specifically from using the same cognitive representational coding resources (Wickens 2002), such as turning at an intersection while identifying road signs.
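
One way to picture the multiple resource idea is as a simple overlap score: the more dimensions (input modality, processing code, response type) two concurrent tasks share, the more crosstalk we should expect. The sketch below is an illustrative rendering of that intuition – the dimension labels and tasks are assumptions for this article, not Wickens’s (2002) actual computational model.

```python
# Toy rendering of the multiple-resource idea (after Wickens 2002):
# interference between two concurrent tasks grows with the number of
# resource dimensions they share. Labels and scoring are illustrative.

def overlap_score(task_a, task_b):
    """Count the resource dimensions two tasks have in common."""
    return sum(task_a[dim] == task_b[dim] for dim in task_a)

drive = {"modality": "visual",   "code": "spatial", "response": "manual"}
phone = {"modality": "visual",   "code": "verbal",  "response": "manual"}
radio = {"modality": "auditory", "code": "verbal",  "response": "vocal"}

print("driving vs. phone:", overlap_score(drive, phone))  # 2 -> strong crosstalk
print("driving vs. radio:", overlap_score(drive, radio))  # 0 -> weak crosstalk
```

Listening to the radio while merging shares almost no resources with driving, which is why it feels effortless; texting shares both the visual modality and the manual response channel, which is where the trouble starts.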

Say you’re being driven by your autonomous car but you’re on your phone. It becomes much harder to avoid a potential hazard when your responses are not being primed in an appropriate way: the responses you make to your phone are not the responses you need on the road. This incongruency creates cognitive delay in the form of lowered situational awareness, which can be dangerous for drivers of autonomous vehicles.

In 2014, Strand et al. observed that drivers of fully autonomous vehicles experienced twice as many collisions as drivers of semi-autonomous vehicles. Compared to manual drivers, drivers of autonomous vehicles can also take 70% longer to overtake a lead vehicle and 2.5 seconds longer to hit the brakes in reaction to a red traffic light (Radlmayr et al. 2014; Merat and Jamson 2017).

This is especially worrying, as many individuals see these cars as futuristic hubs of enhanced productivity. A 2014 survey asked U.S. respondents what kinds of activities they would likely engage in while riding in an autonomous vehicle (Schoettle and Sivak 2014). Only 36% of respondents said they would not take their eyes off the road. About 10% said they would read or text and talk with friends and family, up to 6% said they would work or watch movies, and 7% trusted the vehicle enough to sleep en route to their destination. It’s true that an autonomous vehicle could free up driving time and unleash the potential for multitasking. But how can we be smart and safe about making this multitasking dream a reality?

The Solution: Cognitive Ergonomics

Understanding the mechanics of central bottleneck processing is crucial in the context of human-machine interface (HMI) design. Careful thought needs to go into the design of complementary HMIs so that drivers can safely focus on secondary tasks. It’s no mystery that activities like reading or talking on a cell phone interfere with a driver’s ability to focus and respond to unexpected situations on the road (Levy, Pashler, and Boer 2006).

Fortunately, research effort has already gone into understanding optimal HMI design. Predictive cues, for instance, can be incorporated into HMIs to direct a driver’s attention to important details. When the car’s sensor system detects and predicts environmental obstacles, explicit cues can help a driver recognize pedestrians, cyclists, automobiles, road signs, and other items.

These cues can be programmed into the car’s software and can span the visual, auditory, or tactile modalities (Broeker et al. 2017). For instance, simulated driving experiments have shown drivers to optimally integrate auditory cues that are presented at the same time and in the same region of space as visual areas of interest, resulting in enhanced attention and driving performance (C. Ho and Spence 2005; Steenken et al. 2014).

Vibrotactile warning cues have also been shown to be particularly effective in spatially orienting a driver to a visually relevant obstacle, such as the sudden deceleration of a lead car (C. Ho, Reed, and Spence 2006). They are especially helpful when paired with a simultaneous auditory cue (C. Ho, Reed, and Spence 2007). Though numerous autonomous car companies already use simple versions of visual, auditory, and tactile warning cues, more can be done to enhance the cognitive ergonomics between driver and car (Beattie, Baillie, and Halvey 2017).

Multisensory integration reduces cognitive workload because rather than taking data from a single cue, the brain uses redundant data from multiple cues and parallel streams of incoming information to provide the most reliable estimate for perceptual discrimination (Ernst 2006). Multisensory integration takes advantage of the brain’s ability to simultaneously process different yet congruently programmed cues, thereby sidestepping the shortcomings of cognitive serial processing.
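
This reliability-weighted combination can be written down directly. The sketch below implements the standard maximum-likelihood cue-integration rule described by Ernst (2006); the visual and auditory numbers are made up solely for illustration.

```python
# Minimal sketch of maximum-likelihood cue integration (Ernst 2006):
# two noisy estimates of the same event (e.g., visual and auditory
# localizations of a hazard, in degrees) are combined with weights
# inversely proportional to their variances.

def integrate(mu_v, var_v, mu_a, var_a):
    """Optimally combine a visual and an auditory estimate."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # reliability-based weight
    w_a = 1 - w_v
    mu = w_v * mu_v + w_a * mu_a                  # combined estimate
    var = (var_v * var_a) / (var_v + var_a)       # always <= min(var_v, var_a)
    return mu, var

mu, var = integrate(mu_v=10.0, var_v=4.0, mu_a=14.0, var_a=16.0)
print(f"combined estimate: {mu:.1f} deg, variance: {var:.1f}")
# -> 10.8 deg, variance 3.2: more reliable than either cue alone
```

The combined variance is always lower than either cue’s alone, which is precisely why a well-designed multisensory warning is more dependable than a single beep or flash.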

Best Paired with Cues

Certain types of cues work best when incorporated into augmented reality technologies or HMIs for assisted driving. Drivers of autonomous vehicles prefer a few spatially predictive natural sounds over the many omnidirectional abstract sounds that are currently standard in the autonomous car industry (Fagerlonn and Alm 2010; C. Ho and Spence 2005). If a car is tailgating you, it’s much easier to respond to this event if your own car alerts you with a localized cue at the rear rather than anywhere else in space; the dimensional congruency offered by the localized cue takes advantage of the efficiency provided by the brain’s parallel processing, as suggested by the multiple resource model (C. Ho and Spence 2005; C. Ho and Spence 2012).
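
As a hypothetical sketch of what this kind of spatial congruency could look like in software, the snippet below routes a detected hazard to the cue channel closest to its bearing, so the warning arrives from the same region of space as the event. The channel layout and every name in it are assumptions for illustration, not any manufacturer’s API.

```python
# Hypothetical sketch of spatially congruent warning cues: a hazard
# detected at some bearing around the car triggers the cue channel
# (speaker or vibration motor) nearest to it.

HAZARD_CHANNELS = {
    "front": 0.0,    # bearing in degrees, clockwise from straight ahead
    "right": 90.0,
    "rear": 180.0,
    "left": 270.0,
}

def pick_channel(hazard_bearing_deg):
    """Return the cue channel whose bearing is closest to the hazard."""
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(HAZARD_CHANNELS,
               key=lambda ch: angular_distance(HAZARD_CHANNELS[ch],
                                               hazard_bearing_deg))

# A tailgater detected directly behind the car triggers the rear cue:
print(pick_channel(175.0))  # -> "rear"
```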

A driver’s situational awareness can be further enhanced by incorporating auditory cues that mimic the natural sounds of external events, such as crashes, as opposed to simple beeps (van der Heiden, Iqbal, and Janssen 2017). Cues can be especially helpful if drivers are asked to voluntarily translate symbols, such as road signs, into real meanings in their minds. This form of endogenous cueing, which involves controlled and voluntary cognitive processes, can help prime a driver’s attention to predict obstacles and focus on important tasks for a longer period of time (C. Ho, Reed, and Spence 2007; Talsma et al. 2010; Lee, Lee, and Boyle 2009).

Many car makers promise to have fully autonomous vehicles on roads by 2020 (Sage and Lienert 2016). In the meantime, a legislative framework is direly needed. Fortunately, user experience designers at autonomous vehicle companies like Tesla are heavily invested in refining HMIs to be as well matched to drivers’ attentional limits as possible (Shepardson 2017).

While automotive companies continue their race to corporate dominance, policymakers will have to take advice from user experience researchers and other scientific bodies to ensure safety through high-quality driver-assistance design. The use of predictive technology to increase the driver’s situational awareness of potential hazards looks to be the best option for transportation synchrony, at least in the currently precarious stage of autonomous car development.

References

Beattie, David, Lynne Baillie, and Martin Halvey. 2017. “Exploring How Drivers Perceive Spatial Earcons in Automated Vehicles.” Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1 (3): 36:1–36:24. https://doi.org/10.1145/3130901.

Broeker, Laura, Andrea Kiesel, Stefanie Aufschnaiter, Harald E. Ewolds, Robert Gaschler, Hilde Haider, Stefan Künzell, et al. 2017. “Why Prediction Matters in Multitasking and How Predictability Can Improve It.” Frontiers in Psychology 8. https://doi.org/10.3389/fpsyg.2017.02021.

Ernst, Marc O. 2006. “A Bayesian View on Multimodal Cue Integration.” In Human Body Perception from the Inside Out, Chapter 6, 105–31. New York: Oxford University Press. https://pub.uni-bielefeld.de/publication/2355548.

Fagerlonn, J., and H. Alm. 2010. “Auditory Signs to Support Traffic Awareness.” IET Intelligent Transport Systems 4 (4): 262–69. https://doi.org/10.1049/iet-its.2009.0144.

Fischer, Rico, and Franziska Plessow. 2015. “Efficient Multitasking: Parallel versus Serial Processing of Multiple Tasks.” Frontiers in Psychology 6. https://doi.org/10.3389/fpsyg.2015.01366.

Heiden, Remo M.A. van der, Shamsi T. Iqbal, and Christian P. Janssen. 2017. “Priming Drivers Before Handover in Semi-Autonomous Cars.” In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 392–404. CHI ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3025453.3025507.

Ho, Cristy, Nick Reed, and Charles Spence. 2006. “Assessing the Effectiveness of ‘Intuitive’ Vibrotactile Warning Signals in Preventing Front-to-Rear-End Collisions in a Driving Simulator.” Accident Analysis & Prevention 38 (5): 988–96. https://doi.org/10.1016/j.aap.2006.04.002.

Ho, Cristy, Nick Reed, and Charles Spence. 2007. “Multisensory In-Car Warning Signals for Collision Avoidance.” Human Factors 49 (6): 1107–14. https://doi.org/10.1518/001872007X249965.

Ho, Cristy, and Charles Spence. 2005. “Assessing the Effectiveness of Various Auditory Cues in Capturing a Driver’s Visual Attention.” Journal of Experimental Psychology. Applied 11 (3): 157–74. https://doi.org/10.1037/1076-898X.11.3.157.

Ho, Cristy, and Charles Spence. 2012. The Multisensory Driver: Implications for Ergonomic Car Interface Design. Ashgate Publishing, Ltd.

Hommel, B. 1998. “Automatic Stimulus-Response Translation in Dual-Task Performance.” Journal of Experimental Psychology. Human Perception and Performance 24 (5): 1368–84.

Knowles, W. B. 1963. “Operator Loading Tasks.” Human Factors 5 (April): 155–61. https://doi.org/10.1177/001872086300500206.

Lee, Yi-Ching, John D. Lee, and Linda Ng Boyle. 2009. “The Interaction of Cognitive Load and Attention-Directing Cues in Driving.” Human Factors 51 (3): 271–80. https://doi.org/10.1177/0018720809337814.

Levy, Jonathan, Harold Pashler, and Erwin Boer. 2006. “Central Interference in Driving: Is There Any Stopping the Psychological Refractory Period?” Psychological Science 17 (3): 228–35. https://doi.org/10.1111/j.1467-9280.2006.01690.x.

Marshall, Aarian. 2017. “Wanna Save Lots of Lives? Put (Imperfect) Self-Driving Cars on the Road, ASAP.” WIRED. November 7, 2017. https://www.wired.com/story/self-driving-cars-rand-report/.

Merat, Natasha, and A. Hamish Jamson. 2017. “How Do Drivers Behave in a Highly Automated Car?” In Proceedings of the Driving Assessment Conference, 514–21. https://doi.org/10.17077/drivingassessment.1365.

Radlmayr, Jonas, Christian Gold, Lutz Lorenz, Mehdi Farid, and Klaus Bengler. 2014. “How Traffic Situations and Non-Driving Related Tasks Affect the Take-Over Quality in Highly Automated Driving.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 58 (1): 2063–67. https://doi.org/10.1177/1541931214581434.

Sage, Alexandria, and Paul Lienert. 2016. “Ford Plans Self-Driving Car for Ride Share Fleets in 2021.” Reuters, August 17, 2016. https://www.reuters.com/article/us-ford-autonomous/ford-baidu-co-invest-in-autonomous-tech-firm-velodyne-idUSKCN10R1G1.

Schoettle, Brandon, and Michael Sivak. 2014. “A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia.” Ann Arbor, Michigan: University of Michigan Transportation Research Institute. https://deepblue.lib.umich.edu/handle/2027.42/108384.

Shepardson, David. 2017. “Tesla, Others Seek Ways to Ensure Drivers Keep Their Hands on the Wheel.” Reuters, June 24, 2017. https://www.reuters.com/article/us-usa-autos-selfdriving-safety/tesla-others-seek-ways-to-ensure-drivers-keep-their-hands-on-the-wheel-idUSKBN19E1ZA.

Steenken, Rike, Lars Weber, Hans Colonius, and Adele Diederich. 2014. “Designing Driver Assistance Systems with Crossmodal Signals: Multisensory Integration Rules for Saccadic Reaction Times Apply.” PLoS ONE 9 (5). https://doi.org/10.1371/journal.pone.0092666.

Strand, Niklas, Josef Nilsson, I. C. MariAnne Karlsson, and Lena Nilsson. 2014. “Semi-Automated versus Highly Automated Driving in Critical Situations Caused by Automation Failures.” Transportation Research Part F: Traffic Psychology and Behaviour, Vehicle Automation and Driver Behaviour, 27 (November): 218–28. https://doi.org/10.1016/j.trf.2014.04.005.

Talsma, Durk, Daniel Senkowski, Salvador Soto-Faraco, and Marty G. Woldorff. 2010. “The Multifaceted Interplay between Attention and Multisensory Integration.” Trends in Cognitive Sciences 14 (9): 400–410. https://doi.org/10.1016/j.tics.2010.06.008.

The Guardian Staff. 2018. “Tesla Car That Crashed and Killed Driver Was Running on Autopilot, Firm Says.” The Guardian, March 31, 2018, International edition. https://www.theguardian.com/technology/2018/mar/31/tesla-car-crash-autopilot-mountain-view.

Welford, A. T. 1952. “The ‘Psychological Refractory Period’ and the Timing of High-Speed Performance—a Review and a Theory.” British Journal of Psychology. General Section 43 (1): 2–19. https://doi.org/10.1111/j.2044-8295.1952.tb00322.x.

Wickens, Christopher D. 2002. “Multiple Resources and Performance Prediction.” Theoretical Issues in Ergonomics Science 3 (2): 159–77. https://doi.org/10.1080/14639220210123806.

About the Author

Hanna Haponenko

McMaster University

Hanna obtained her undergraduate degree in Health Sciences before branching off to focus on cognition and perception, and is currently a PhD candidate in Cognitive Psychology at McMaster University. Her current research endeavours involve coding a driving simulator to test the effectiveness of cues while multitasking.
