How the Pandemic Has Tested Behavioral Science - Facts So Romantic - Nautilus
In March the United Kingdom curiously declined to impose significant social distancing measures in response to the global pandemic. The government was taking advice from the so-called "Nudge Unit," a private company called the Behavioral Insights Team, which uses behavioral science to advise U.K. policymakers, among others, on how to "nudge" people toward certain actions. The company, led by experimental psychologist David Halpern, told policymakers to be wary of "behavioral fatigue," the idea that the public's commitment to the measures would fade over time. The lax measures sparked fierce backlash not just from epidemiologists concerned about the virus' spread, but also from a group of 600 behavioral scientists: psychologists, sociologists, economists, political scientists, and more. They signed an open letter doubting the quality of the evidence that led to the government's decision.
To its credit, the Nudge Unit has had some noteworthy successes, like developing interventions that have increased rates of tax payment and organ donation. But it has also been accused of overreaching; there is some evidence for behavioral fatigue, for example, but probably not enough for it to form the foundation of a country's response to a deadly pandemic. As Anne-Lise Sibony, a researcher who studies the relationship between law and behavioral science, wrote in the European Journal of Risk Regulation, "[I]t is not clear why behavioral fatigue was singled out given that other, better-documented behavioral phenomena might, with equally unknown probability and distribution, be at work and either fuel or counteract it."
The U.K. eventually bowed to the pressure and ramped up its efforts to slow the virus' spread by banning mass gatherings, requiring 14 days of self-isolation for anyone with COVID-19 symptoms, and encouraging people to avoid non-essential travel and contact with others. But the debate about how and when behavioral science should shape public policy rages on.
"The reality is this multimillion, maybe billion, dollar industry has gotten way far ahead of the evidence."
The lack of a vaccine means our best countermeasure against the pandemic is to change our behavior. To that end, a group of behavioral scientists, led by psychologists Jay Van Bavel and Robb Willer, published a paper in Nature Human Behaviour in April on how social and behavioral science could support the response to the pandemic. It highlights research on topics like science communication, moral decision-making, and stress and coping. The goal of the paper, the researchers wrote, is to "help align human behavior with the recommendations of epidemiologists and public health experts." For example, the authors point to studies showing that emphasizing a shared social identity can help groups of people respond to threats and can encourage adherence to social norms. With this in mind, they suggest that it may be helpful for public health officials to spread messages that give people a sense of connection to their local community or their fellow citizens.
If insights like these make people a little more likely to take the recommended precautions, it could mean the difference between life and death. So why shouldn't we listen to behavioral scientists? As the economist John Maurice Clark once remarked, if a policymaker doesn't take psychology into account, "he will not thereby avoid psychology. Rather, he will force himself to make his own, and it will be bad psychology."
The flipside to this, of course, is when bad psychology comes from scientists. "If we're overconfident in studies that don't replicate," psychologist Hans IJzerman told Nautilus in an email, "then we're also establishing our own psychology." Using evidence before it's ready for prime time may not be better than nothing: it could be a waste of resources, or even actively harmful to those it's intended to help. Concerns about behavioral fatigue, for example, were meant to protect the U.K. public, but they ended up indirectly facilitating the virus' spread by delaying social distancing measures.
Behavioral science, and psychology in particular, has had a long and well-publicized struggle with quality control. Many influential experiments have failed to hold up under further scrutiny, often due to small and non-representative samples, sloppy data analysis, and highly context-specific findings. This has exposed systemic flaws in how behavioral science is conducted and interpreted, making it shaky ground for any public policy. "As someone who has been doing research for nearly 20 years," wrote Michael Inzlicht, a social psychologist who studies self-control, "I now can't help but wonder if the topics I chose to study are in fact real and robust. Have I been chasing puffs of smoke for all these years?"
Psychology and other fields are making progress in addressing their flaws, but it remains true that in the interplay between behavioral science and policy, puffs of smoke abound. For example, in the wake of worldwide protests against racist policing, there's renewed interest in using science to change the behavior of police officers. For years, implicit bias training (classes and workshops designed to help participants recognize and counteract their own discriminatory thoughts and feelings) has been touted as the answer, not just for police departments but for white-collar office spaces and many other kinds of professional environments. The problem, though, is that it doesn't seem to work, at least in its current form. A 2019 meta-analysis found that, while certain interventions can reduce measures of implicit bias, they don't do much to change people's behavior. "The reality is this multimillion, maybe billion, dollar industry has gotten way far ahead of the evidence," said Patricia Devine, who runs a lab studying prejudice, on Marketplace Morning Report.
Another example of behavioral science-based policy gone awry is what some education researchers call the "education hype cycle," wherein "promising ideas that produce positive results in experiments get over-simplified and touted as 'the answer'," wrote psychologist David Yeager. "Then educators or policymakers apply them indiscriminately, as if they're Jack's magic beans that boost students up no matter where they're planted." Take the idea of "learning styles": Many educators have been encouraged to identify their students as visual, auditory, or kinesthetic learners and adapt their teaching styles accordingly, but the concept is bunk.
Deciding whether to base policy on behavioral science comes down to a tricky balance between the pros and cons of acting on imperfect evidence. One pro is obvious: the potential for policy that neatly complements the many quirks of human behavior, like the Behavioral Insights Team's success with tax payment and organ donation, or the use of carefully designed posters to improve hand hygiene among healthcare workers. But many researchers still prefer to err on the side of caution.
In a preprint responding to Van Bavel and Willer's paper, IJzerman and his colleagues called for more humility and restraint among behavioral scientists. They proposed a system they call "evidence readiness levels," which they describe as "guidelines for flagging trustworthy and actionable research findings." Modeled on the technology readiness levels NASA uses to gauge the maturity of its hardware, evidence readiness levels range from preliminary observations, at level 1, to field-tested solutions that are ready to deploy in a crisis, at level 9.
One can imagine the evidence-readiness levels framework being really useful for, say, preventing another education hype cycle or another infusion of public funds into ineffective implicit bias trainings. But what about during a pandemic, when public health officials are compelled to try to change people's behavior, with or without input from behavioral science?
"I'm not sure [rocket science] is always a good comparator for behavioral science, even for behavioral science deployed during the pandemic," bioethicist and behavioral scientist Michelle Meyer wrote, in an email, to Nautilus. "It's not clear to me that we need to go to the moon, but we do need to communicate public health messages to people about how to protect themselves during the pandemic. Conditional on that messaging happening anyway, why not draw on insights from behavioral science, develop a few different messages, and test them to see which is most effective?"
Other evidence-evaluation frameworks have been proposed, but no matter which approach behavioral scientists take, it will have to involve an answer to the same difficult question: What level of uncertainty is acceptable? Even the most robust, well-replicated behavioral interventions involve some level of imperfection. So until behavioral scientists come to an agreement about how big the gray areas can be, public health officials, educators, and all others who seek insights from behavioral science may just have to decide for themselves.

Scott Koenig is a doctoral student in neuroscience at CUNY, where he studies morality, emotion, and psychopathy. Follow him on Twitter @scotttkoenig.