Apr 27 2020
Psychological Pitfalls and COVID-19
SARS-CoV-2 is a challenging little bugger, but in my assessment no match for human science and ingenuity. There are already 1,650 listed scientific articles on COVID-19 and 450 ongoing clinical trials. In short, we are sciencing the shit out of this pandemic and we will get through it. But as I have argued previously, perhaps a bigger threat than the virus itself is human psychology. Crises bring out the best and worst in people, and we are seeing both in spades. A crisis also exposes the weaknesses in our institutions, and those are being highlighted as well.
That’s why, in medicine, we have something called M and M – morbidity and mortality rounds. The goal of these rounds is to review all negative clinical outcomes in whatever setting is being covered and try to figure out what went wrong. Importantly, such conferences are not about assigning blame, recrimination, or discipline. They are about improving the system. Was a particular negative outcome unavoidable? Was it precipitated by a personal failure, or by a systemic failure? And if not a failure per se, is there some systematic change we can put in place to minimize these negative outcomes in the future? Should this be handled by education, by some additional checklist or process, or by reconfiguring the workforce?
For some crises, like the pandemic (or a war, for example), we can’t wait until it’s all over to look back and analyze the systemic shortcomings (although we should do this also, to prepare for the next one). We need ongoing analysis and adjustment. That is what a group of psychologists have done, with respect to common psychological pitfalls and how they might affect our individual response to the pandemic. I like this review because it is square in the tradition of skeptical thinking – it identifies psychological pitfalls so that we can better understand ourselves, and proposes specific adjustments we can make to mitigate them. You can read the full article, but I want to highlight a few of particular interest.
The one that most caught my interest was hindsight bias, partly because I have discussed it before. This is a subtle bias in which we look back at an outcome (the results of a sporting competition or election, for example) as if it were inevitable and therefore overinterpret the factors that led to the outcome. My team won this particular game because they had momentum, or because of a historic grudge against the other team, etc. In reality, it was a statistical matchup, and whichever team won that specific game was like rolling the dice. If you barely win a competition, then everything you did “worked”, and if you barely lose, those same factors “failed”. The authors summarize this bias as:
“Summary judgments are weighed by final outcomes.”
That is a good pithy explanation. They point out that whatever the outcome of the pandemic, it is easy in hindsight to castigate those involved in managing it for the decisions they made before the outcome was known (rather than judging them based on the information available at the time the call was made). This does not mean there won’t be legitimate cause for criticism – but the criticism needs to be fair, and based on what was known at the time, not on hindsight informed by what we now know.
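To make the “statistical matchup” point above concrete, here is a minimal simulation sketch (the 55/45 edge is an assumption for illustration, not a figure from the review). Two teams with fixed, unchanging strengths play many single games; the weaker team still wins a large share of them, yet outcome-based judgment would praise or condemn identical preparation depending on which way each game happened to fall:

```python
import random

random.seed(42)

# Hypothetical per-game win probability for Team A (assumed for illustration)
P_TEAM_A_WINS = 0.55
N_GAMES = 10_000

# Play many independent games with the same underlying strengths
a_wins = sum(random.random() < P_TEAM_A_WINS for _ in range(N_GAMES))

print(f"Team A won {a_wins / N_GAMES:.1%} of games")
print(f"Team B still won {1 - a_wins / N_GAMES:.1%} of games "
      "with identical preparation, strategy, and 'momentum'")
```

The underlying strengths never change between games, so any game-by-game explanation of who “deserved” to win is reading signal into dice rolls.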
But also it is easy to criticize the steps taken to mitigate a crisis when those steps actually work to minimize the crisis. The most famous example of this is Y2K. Experts warned of potentially disastrous outcomes from the programming bug that did not account for the year changing from 1999 to 2000 (most programs stored only a 2-digit year). Billions of dollars were spent changing millions of lines of code to fix the problem, and in the end not much of anything happened. This led some cynics to claim that Y2K was all a hoax, and it was never a problem in the first place. Wrong. There was no crisis because years of great effort were spent preventing it.
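For those who never saw the bug itself, here is a toy sketch of the general class of failure (not any real system’s code): a program that stores years as two digits and computes elapsed time by simple subtraction works fine for decades, then breaks the moment the century rolls over.

```python
# Toy illustration of the Y2K class of bug (hypothetical code, for
# illustration only): two-digit year arithmetic breaks at the rollover.

def account_age_years(opened_yy: int, current_yy: int) -> int:
    """Naive two-digit-year subtraction, as many legacy systems did it."""
    return current_yy - opened_yy

# An account opened in 1985 ('85'), checked in 1999 ('99'): correct.
print(account_age_years(85, 99))   # 14

# The same account checked in 2000 ('00'): the math goes badly wrong.
print(account_age_years(85, 0))    # -85, instead of 15
```

Multiply that one wrong subtraction across billing, interest, scheduling, and inventory systems and the scale of the remediation effort becomes clear.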
Hindsight bias may also fail to account for recognized uncertainty. If the experts say outcome A is 90% likely, but outcome B occurs, were they wrong? Not necessarily – a well-calibrated forecaster is correct 90% of the time when they say an outcome is 90% likely. But we tend to judge them on a single statistical call, which may have actually been correct.
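A quick simulation makes the calibration point vivid (the numbers here are assumptions for illustration). A forecaster who says “90% likely” and is perfectly calibrated will still be “wrong” about one call in ten – and each of those misses looks like a failure if judged in isolation:

```python
import random

random.seed(0)

# A hypothetical, perfectly calibrated forecaster: outcome A genuinely
# occurs 90% of the time whenever they assign it 90% probability.
N_FORECASTS = 10_000
STATED_PROBABILITY = 0.90

hits = sum(random.random() < STATED_PROBABILITY for _ in range(N_FORECASTS))

print(f"Stated probability: {STATED_PROBABILITY:.0%}")
print(f"Observed frequency of outcome A: {hits / N_FORECASTS:.1%}")
print(f"Forecasts that 'failed' despite perfect calibration: "
      f"{N_FORECASTS - hits}")
```

The only fair test of a probabilistic forecast is across many calls, not any single one.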
With respect to COVID-19 we are already seeing people minimizing the severity of the pandemic, because the numbers are not that bad. This is premature hindsight bias. First, they are ignoring the effect of all the steps we are taking to mitigate the pandemic (as with Y2K). The numbers would be far worse if we were not practicing physical distancing. Second, the pandemic is not over yet, so it is premature to quote numbers and compare them to previous, completed epidemics.
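A back-of-the-envelope sketch shows why mitigated case counts badly understate what an unmitigated outbreak would produce (the growth rates below are made-up illustrative parameters, not an epidemiological model):

```python
# Simple compounding growth with made-up parameters, to illustrate how
# modest changes in the growth rate compound into very different totals.

def cases_after(initial: int, growth_per_step: float, steps: int) -> int:
    """Multiply case counts by growth_per_step each step."""
    cases = float(initial)
    for _ in range(steps):
        cases *= growth_per_step
    return round(cases)

INITIAL_CASES = 100
STEPS = 20  # e.g., 20 serial intervals

unmitigated = cases_after(INITIAL_CASES, 1.30, STEPS)      # assumed 30% growth per step
with_distancing = cases_after(INITIAL_CASES, 1.10, STEPS)  # assumed 10% growth per step

print(f"Without distancing: ~{unmitigated:,} cases")
print(f"With distancing:    ~{with_distancing:,} cases")
```

Because growth compounds, judging the threat by the numbers we see under mitigation is exactly the Y2K fallacy in epidemic form.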
The authors also bring up status quo bias and inertia against changing ingrained social norms. This is a fascinating question that really can only be answered once we have some distance and can look back. But the point the authors are making is that we cannot assume that after the pandemic we will return to the prior status quo. There may be lasting effects on social norms, such as shaking hands. But further, we need to reconsider everything about the pre-pandemic status quo and be willing to change.
For example, acceptance of telehealth and telemental health may need to increase dramatically. We see this happening now, but it remains to be seen whether it will be sustained. At my own institution, those involved with this effort observed that in rolling out telehealth we accomplished in three weeks what would have taken three years. That’s great – but what will happen when the pandemic is over? Will insurance companies stop paying for telehealth and try to return to the prior status quo? Let’s hope not.
Also, I think this experience tells us something about bureaucracy. It is amazing what we can accomplish quickly when the political will is there. There was never any reason why we couldn’t quickly roll out telehealth – because we did it in three weeks. It was taking years simply because of resistance and fear of the unknown. Sometimes this is genuine caution, and that’s fine. We also can’t fall for hindsight bias here – concluding that because it worked out well, the caution was never justified. But it seems clear in this case, and pretty much everyone involved knew, that there was excess caution, and it was mostly self-interested. The insurance companies were simply worried about cost, so they were getting in the way of an innovation that could improve efficiency and quality of life. Some patients have a very hard time coming into the office, and don’t need a physical exam for every visit. There was also the legitimate concern of HIPAA compliance – patient privacy with online visits. But there was already a system in place for secure patient communication, which is why we were able to make the transition so quickly.
There is an optimal compromise between caution and rapid innovation, and that is the bigger status quo we need to examine. Do we have the balance right? Do we need to move more quickly, so that we aren’t in a rush when a crisis hits?
The bigger lesson of this review, again one central to skeptical philosophy, is that the mind is the most important tool we use to solve our problems, and it is universal. But we have to understand it, especially its weaknesses and limitations. The pandemic is shining a bright light on both personal and institutional shortcomings. Let’s use the opportunity to improve.