Hi,
I'm looking into writing a narrative review on the topic of mental health misinformation on social media and its effect on individuals, but there are barely any studies on this. In this case, should I change my topic, or is it still possible to write a narrative review?
Thanks in advance!
All the more reason for undertaking the research! Earlier research may be slim, but the topic is important.
Scant literature on a given topic means that an author is doing exploratory research, which contributes to building a literature around the topic as contributions accumulate. As Prof. Wright says, this is a good reason to undertake the research and flesh out the topic through individual contributions.
As already pointed out by Laurence Wright and Zouheir Maalej, it is great if you do exploratory research. If you want to look at narratives, it may be worth drawing inspiration from narratological studies in related fields like the medical humanities.
You can find enough. First, you need to appreciate the metaphor 'enough', which means related and relevant sources that provide sufficient information for the reader to understand the body of work on the topic. Hence, to produce an adequate review, you need to provide sources and materials that capture the beginnings of study in a particular area or topic. Then clearly highlight and indicate the major contributions and trajectories. Lastly, indicate the current debates through recent studies and publications in the area. Due diligence, as suggested above, will definitely provide enough insight into the pace-setting studies, the debates, and the possible gaps, which will form your study scope and justification.
This scientific research paper may prove to be helpful to you because the Reference section contains several different sorts of resource materials:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8502082/
"📷 Z Gesundh Wiss. 2021 Oct 9 : 1–10.
doi: 10.1007/s10389-021-01658-z [Epub ahead of print]
PMCID: PMC8502082
PMID: 34660175
The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review
Yasmim Mendes Rocha,1 Gabriel Acácio de Moura,2 Gabriel Alves Desidério,3 Carlos Henrique de Oliveira,3 Francisco Dantas Lourenço,3 and Larissa Deadame de Figueiredo Nicolete📷3
Author information Article notes Copyright and License information Disclaimer
Go to:
Abstract
Purpose
As the new coronavirus disease propagated around the world, the rapid spread of news caused uncertainty in the population. False news has taken over social media, becoming part of life for many people. Thus, this study aimed to evaluate, through a systematic review, the impact of social media on the dissemination of the infodemic and its impact on health.
Methods
A systematic search was performed in the MedLine, Virtual Health Library (VHL), and Scielo databases from January 1, 2020, to May 11, 2021. Studies that addressed the impact of fake news on patients and healthcare professionals around the world were included. The methodological quality of the selected studies was assessed using the Loney and Newcastle–Ottawa scales.
Results
Fourteen studies were eligible for inclusion, consisting of six cross-sectional and eight descriptive observational studies. Through questionnaires, five studies included measures of anxiety or psychological distress caused by misinformation; another seven assessed feelings of fear, uncertainty, and panic, in addition to attacks on health professionals and people of Asian origin.
Conclusion
By analyzing the phenomenon of fake news in health, it was possible to observe that infodemic knowledge can cause psychological disorders and panic, fear, depression, and fatigue.
Keywords: Covid-19, Fake news, Health, Infodemic knowing
Introduction
Coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, led to the emergence of a pandemic, with economic upheaval, disruption to education, and various home-confinement rules (Munster et al. 2020). In this context of uncertainty, there was a need for new information about the virus and the clinical manifestations, transmission, and prevention of the disease (Eysenbach 2020).
The rapid implementation of these measures, together with the significant number of deaths caused by the virus, ended up causing uncertainty in the population (Tangcharoensathien et al. 2020). In association with the generalized panic and the constant concern that COVID-19 caused, this culminated in the appearance of physical and psychological disorders, in addition to reduced immunity in the general population (Lima et al. 2020).
Previous studies indicate that the emergence of the pandemic and measures of social confinement caused the number of patients and health professionals with anxiety, sleep disorders and depression to increase; in addition, suicide rates were also considered high (Choi et al. 2020; Okechukwu et al. 2020). However, the use of social media and search queries to obtain information about the course of the disease is constantly expanding, and includes Twitter, Facebook and Instagram, Google Trends, Bing, Yahoo, and other more popular sources such as blogs, forums, or Wikipedia (Depoux et al. 2020).
Thus, information overload has been accompanied by fabricated and fraudulent news, also called fake news (FN), a term that emerged in the twentieth century to designate false news produced and published by mass communication vehicles such as social media; such content has come to dominate traditional and social platforms, becoming an increasing part of many people’s daily lives. FNs multiply rapidly and act as narratives that omit or add information to facts (Naeem et al. 2020).
The potential effect of FN stems from conspiracy theories, such as claims of a biological weapon produced in China, that water with lemon or coconut oil could kill the virus, or that drugs approved for other indications could be effective in preventing or treating COVID-19. The impact of this massive dissemination of disease-related information is known as “infodemic knowledge” (Hua and Shaw 2020). Other worrisome examples of infodemic knowledge include cases of hydroxychloroquine overdose in Nigeria, drug shortages that altered the treatment of patients with rheumatic and autoimmune diseases, and panic over supplies and fuel (CNN 2020; Tentolouris et al. 2021).
The World Health Organization (WHO 2020) has worked closely to track and respond to the most prevalent myths and rumors that can potentially harm public health. In this context, the objective of the study was to evaluate, through a systematic review, the impact of the media during the pandemic caused by the new coronavirus, and to determine how the spread of the infodemic impacts people’s health.
Methods
This is a systematic literature review that used explicit and systematic methods to minimize the risk of bias (Donato and Donato 2019). The study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2021) guidelines, and the search protocol was registered in PROSPERO: CRD42021256508 (PROSPERO 2021).
Search strategy
Search strategies were developed from the identification of relevant articles using Medical Subject Headings (MeSH) combined with Boolean operators. The search string was constructed as follows: “Covid-19” OR “SARS-CoV-2” AND “fake news” AND “health” OR “Covid-19” AND “fake news” OR “misinformation” AND “health”. The strategy was applied in the MedLine, Virtual Health Library (VHL), and Scielo databases. Search results were reviewed to remove duplicate studies. The articles obtained were analyzed for relevance step by step, as illustrated in Fig. 1, and the results were reported following the PRISMA (PRISMA 2021) process.
Fig. 1 Search strategy flowchart
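For readers who want to reproduce this kind of database query programmatically, the sketch below shows how a comparable Boolean search string could be run against PubMed/MEDLINE using Biopython's Entrez module. This is not part of the review's own methods; the e-mail address, query wording, and retmax value are illustrative placeholders.

```python
# Minimal sketch (assumptions noted above) of running a Boolean search on PubMed
# with Biopython's Entrez utilities.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requests a contact address

query = '("COVID-19" OR "SARS-CoV-2") AND ("fake news" OR "misinformation") AND "health"'

# esearch returns the PubMed IDs of records matching the query
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```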
Inclusion and exclusion criteria
The search terms were oriented according to the Population, Intervention, Comparison, Outcomes, and Study Design (PICOS) approach, the methodology used to select the studies included in the systematic search (Methley et al. 2014), as shown in Table 1. Cross-sectional, cohort, or clinical studies that addressed the impact of fake news on patients and health professionals around the world were included. Studies that did not address the proposed theme, review articles, letters, and opinion pieces were excluded. In addition, only full articles written in English, Portuguese (Brazil), or Spanish, published between January 1, 2020, and May 11, 2021, were reviewed.
Table 1
Approach to study selection (PICO) following systematic search
Description | Abbreviation | Question components
Population | P | Lay population or health professionals, population with different levels of education and in different countries
Intervention | I | Use of an online questionnaire to analyze the impacts of FNs on health
Comparison | C | Not applied
Outcomes | O | Social media platforms contribute to the spread of FN
Type of study | S | Clinical trials; cohort studies; cross-sectional studies
Database searched in May 2021
Assessment of risk of bias in included studies
Internal quality assessment was performed based on the selected study designs, using two scales to independently assess the risk of bias: the Newcastle–Ottawa scale for cohort studies and the Loney scale for cross-sectional studies. In cases of disagreement between two researchers, the assessment was performed by a third, experienced researcher (Santos et al. 2019). The risk of bias across studies was assessed as shown in Tables 2 and 3.
Table 2
Methodological quality of cross-sectional studies (Loney Scale)
References | Criterion 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Final score
Ruíz-Frutos et al. | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 8
Najmul-Islam et al. | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 7
Talwar et al. | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 6
Sallam et al. | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 8
Duplaga | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 8
Secosan et al. | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 6
(In the original table the criteria are grouped under three header questions: Are the study methods valid? What is the interpretation of the results? How likely are the results?)
Questions in header relate to different criteria of quality as measured by the Loney Scale:
1 – Is the study design and sampling appropriate to answer the research question? 2 – Is the sample base adequate? 3 – Is the sample size adequate? 4 – Are adequate and standardized objective criteria used to measure motor development? 5 – Was EDM applied in an unbiased way? 6 – Is the response rate adequate? 7 – Were the EDM results presented in a detailed way? 8 – Are participants and context described in detail and can they be generalized to other situations?
Numbers alongside each reference relate to the quality criteria above: 1 = adequate, 0 = inadequate
Table 3
Methodological quality on the Newcastle–Ottawa Scale (NOS)
Study | Selection 1 | Selection 2 | Selection 3 | Selection 4 | Comparability 1a | Results 1 | Results 2 | Results 3 | Final score
Radwan et al. | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Sun et al. | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Ahmad et al. | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 6
Almomani | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Roozenbeek et al. | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Montesi | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 6
Schmidt et al. | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Fernandéz-Torres et al. | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 7
Questions in header relate to different criteria of quality as measured by the NOS:
Selection 1: representativeness of the exposed cohort; Selection 2: selection of the unexposed cohort; Selection 3: exposure determination; Selection 4: demonstration that the result of interest was not present at baseline; Comparability 1a and 1b: comparability of cohorts based on design or analysis; Results 1: result evaluation; Results 2: follow-up of cohorts; Results 3: adequacy of cohort follow-up
Data extraction
After collecting data from the articles, the data were extracted and tabulated according to the information described in the following sections.
Results
Study selection
The search strategy identified 1644 publications across the MedLine, Virtual Health Library (VHL), and Scielo databases. Of these, 24 were removed as duplicates and 1606 for meeting the exclusion criteria. Based on this, 14 studies met the inclusion criteria and were suitable for inclusion in the present review, as shown in Fig. 1.
Study characteristics
Of all the studies included, six were cross-sectional (Ruiz-Frutos et al. 2020; Islam et al. 2020; Talwar et al. 2020; Sallam et al. 2020; Duplaga 2020; Secosan et al. 2020) and eight were descriptive observational studies (Radwan et al. 2020; Sun et al. 2020; Ahmad and Murad 2020; Almomani and Al-Qur’an 2020; Roozenbeek et al. 2020; Montesi 2020; Schmidt et al. 2020; Fernández-Torres et al. 2021). The fourteen selected articles comprised a total of 571,729 participants, 1467 fake news items, and 2508 reports. Most participants were over 18 years of age. The studies were conducted in 14 different countries, including Palestine (n = 1), Spain (n = 4), India (n = 1), Bangladesh (n = 1), Iraq (n = 1), Mexico (n = 1), United States of America (n = 1), United Kingdom (n = 1), Ireland (n = 1), Jordan (n = 2), China (n = 1), South Africa (n = 1), Poland (n = 1), and Romania (n = 1), with some studies evaluating more than one country. Other characteristics of the studies and the results of the primary studies are summarized in Table 4.
Table 4
Characteristics of study samples and risk factors associated with fake news
Main author | Fake news classification | Methodology applied | Fake news source | Fake news impact | Schooling | Country | Age
Ruíz-Frutos et al. 2020 | Routes of origin and transmission, the magnitude of impact on countries | Online research (Qualtrics) | Social media | Psychic suffering and anxiety | – | Spain | 18 up to 42
Najmul-Islam et al. 2020 | – | Online research (Webropol software) | Facebook and YouTube | Fatigue | – | Bangladesh | 18 up to 35
Talwar et al. 2020 | – | – | Social media | Fear and panic | – | India | 18 up to 23
Sallam et al. 2020 | The origin of the disease is related to biological warfare, global conspiracy, 5G networks in the spread of the disease | Online query | Facebook, WhatsApp, YouTube & Twitter | Anxiety | 73.6% graduated | Jordan | Over 18
Duplaga 2020 | Man-made genetic manipulation | Polish programme of interviewer quality control | – | Panic | 48% high school, 10.7% graduated | Poland | Over 18
Secosan et al. | Food and beverages as natural drugs, hygiene practices, and medicines | Online query | – | Anxiety / stress / depression / insomnia | 100% graduated | Romania | Over 18
Radwan et al. | Fake news about the COVID-19 outbreak | Online query | Facebook & WhatsApp | Panic / depression / stress / anger / anxiety | High school | Palestine | Over 11
Sun et al. 2020 | Rinsing the mouth with brine can prevent COVID-19 | Online query (WeChat software) | Social media | Anxiety | 45.86% had higher education, 20.50% high school/technical education, 7.01% postgraduate education | China* | Over 46
Ahmad and Murad 2020 | Generalized information about COVID-19 | Online query (SPSS) | Facebook | Fear and panic | – | Iraqi Kurdistan | Over 18
Almomani and Al-Qur’an 2020 | Alcohol consumption / using ultraviolet light / using nasal spray / garlic or chlorine on the skin | Online query (SPSS) | Social media | Fear and panic | – | Jordan | 18 up to 60
Roozenbeek et al. 2020 | Wuhan laboratory, synthetic virus | Online research | Social media | Potential risk to public health / hesitation about vaccination | – | Mexico, USA, UK, Spain and Ireland | Over 18
Montesi 2020 | A vaccine that controls people / smokers are less vulnerable to COVID-19 / home remedies bring a cure | Online research (Site Maldita.es) | Social media | Does not pose a danger to people’s health and safety | – | Spain | –
Schmidt et al. 2020 | Wuhan laboratory, synthetic virus, and 5G conspiracy | Telephonic interview | Social media | Fear / confusion / panic | – | Provinces of Gauteng, KwaZulu-Natal and Western Cape of South Africa | Over 18
Fernández-Torres et al. 2021 | Conspiracy theories, supposed homemade methods to find out if the person is infected | Online query (Google Forms) | Traditional media, Facebook, WhatsApp & YouTube | Fear and confusion | 45% graduated, 37% post-graduated, 16% high school, 2% elementary school | Spain | Average 35
*Possible significant effect of the relationship between fake news and people older than 76 years because they are more likely to be influenced by fake news and to spread such information
The potential risks of misinformation
The results included in our review varied. Misinformation could trigger a range of disturbances in an individual’s perception of FNs. In five papers, the population was observed to be more prone to fearful situations (Talwar et al. 2020; Ahmad and Murad 2020; Almomani and Al-Qur’an 2020; Schmidt et al. 2020; Fernández-Torres et al. 2021). Two of these studies found that a proportion of the patients who reported being afraid because of the influence of FNs also reported being confused as to the veracity of the transmitted information (Schmidt et al. 2020; Fernández-Torres et al. 2021). Our review also found that this situation of fear and confusion can lead to the onset of panic (Talwar et al. 2020; Radwan et al. 2020; Duplaga 2020; Ahmad and Murad 2020; Almomani and Al-Qur’an 2020; Schmidt et al. 2020). The combination of these perceptual responses to FNs can also produce milder symptoms such as fatigue (Islam et al. 2020), stress (Secosan et al. 2020; Radwan et al. 2020), insomnia (Secosan et al. 2020), and anger (Radwan et al. 2020). The literature further indicates that, beyond the milder symptoms associated with confusion over perceived misinformation, more complex symptomatologies are likely: five studies reported an increase in the number of patients with anxiety (Ruiz-Frutos et al. 2020; Sallam et al. 2020; Secosan et al. 2020; Radwan et al. 2020; Sun et al. 2020), and patients also reported depression related to these FNs (Secosan et al. 2020; Radwan et al. 2020).
Susceptibility to spreading fake news according to education and age of the population
To understand the behavior of rumor spreading among the population, our findings reveal that the age of the patients who participated in the studies varied mainly between 18 and 60 years, suggesting that a good portion of individuals across different age groups could be susceptible to FN spread through social media. However, a single study found that people over the age of 76 were more susceptible to being influenced by fake news as well as to spreading such information (Sun et al. 2020). Another important finding indicates that susceptibility to interacting with FN is independent of the individual educational level of each study subject: in four studies the patients involved had secondary schooling (Duplaga 2020; Radwan et al. 2020; Sun et al. 2020), five studies addressed the susceptibility of undergraduate patients to FN (Sallam et al. 2020; Duplaga 2020; Secosan et al. 2020; Sun et al. 2020; Fernández-Torres et al. 2021), and two studies observed graduate patients (Sun et al. 2020; Fernández-Torres et al. 2021).
Content and propagation of fake news circulating on social networking platforms
Among the selected articles, the social network Facebook had the greatest participation (Islam et al. 2020; Sallam et al. 2020; Fernández-Torres et al. 2021), followed by YouTube in three studies (Islam et al. 2020; Sallam et al. 2020; Fernández-Torres et al. 2021) and WhatsApp in three more (Sallam et al. 2020; Radwan et al. 2020; Fernández-Torres et al. 2021); Twitter appeared in only one study (Sallam et al. 2020). Among the main FNs were claims that the consumption of food, vitamins, and beverages improved the clinical condition of affected patients and reduced the contamination rate (Islam et al. 2020; Secosan et al. 2020). Other studies reported claims that the infection improved with the use of mouthwashes and substances applied to the skin (Sun et al. 2020; Almomani and Al-Qur’an 2020). News related to viral spread, such as the creation of the virus in a laboratory and its spread by vectors such as mosquitoes, was also addressed (Ahmad and Murad 2020; Roozenbeek et al. 2020; Montesi 2020). Vaccines also became targets of fake news (Montesi 2020).
Discussion
In the context of the pandemic, people turned to the media to seek information about the disease. However, much of what circulated was false news masquerading as reliable disease prevention and control strategies, which created an overload of misinformation. This process interfered with people’s behavior and health, generating social unrest associated with violence, distrust, social disturbances, and attacks on health professionals (Moscadelli et al. 2020; Apuke and Omar 2021).
Overall, our review suggests that people of different nationalities were affected by sharing unverified information. Across all the included studies, totaling 1467 fake news items and 2508 reports, the results show that people trust the information they find on social networks and, through these accounts, ended up believing and being affected by it. Only one author pointed out that the news did not represent a danger to people’s health and safety and could be considered harmless. Aleinikov et al. (2020) explain this by pointing out that, in this delicate process, what matters is relating the perception of risk found on social media to trust in the information provided by institutions (Aleinikov et al. 2020).
These tools, while becoming increasingly popular, are also increasingly exposed to unreliable information. As a result, people feel anxious, depressed, or emotionally exhausted, and these pronounced health effects are directly associated with the spread of this information (Lin et al. 2020). Indeed, when analyzing our data, we found that this interaction can produce both mild effects and more serious psychological problems. This is also consistent with the literature: Jiang (2021), who evaluated the possible psychological impact of social media on students during the pandemic, found an increase in students’ anxiety levels, as well as worsening academic performance and physical exhaustion (Jiang 2021).
The proliferation of false news has consequences for public health because it fuels panic among people and discredits the scientific community in the eyes of public opinion. For example, a popular myth that consumption of pure alcohol (methanol) could eliminate the virus in a contaminated body killed approximately 800 people in Iran, while another 5876 people were hospitalized for methanol poisoning (Hassanian-Moghaddam et al. 2020). As shown in our evaluation, Almomani and Al-Qur’an (2020) and Secosan et al. (2020) also reported that participants believed that alcohol consumption cured COVID-19 (Secosan et al. 2020; Almomani and Al-Qur’an 2020).
Based on the literature, even social media that play a significant role in disseminating true news about COVID-19 have also been linked to illness: as platforms that help spread public health messages to people, they also promote opinionated reporting and concerns about the disease (Galea et al. 2020). In fact, the results of this review show that 36% of the authors found that exposure to infodemic knowledge generated fear, panic, depression, stress, and anxiety in people interviewed through online questionnaires. This is corroborated by a cross-sectional study carried out by Olagoke et al. (2020), which, evaluating 501 participants, found that anxiety and depression scores were related to news exposure in traditional media, showing a prevalence of depressive symptoms and greater perceived vulnerability, causing considerable psychological impact.
Our results indicate that different age groups are susceptible to interacting with FN propagated by social media, especially the elderly population. These results were also verified in a previous study by Guimarães et al. (2021), which aimed to assess the population’s knowledge about COVID-19 and misinformation through an anonymous online survey; parameters such as gender, education, and age were shown to be directly associated with a better perception of health issues in the context of the pandemic (Guimarães et al. 2021). The same was seen by Hayat et al. (2020), who explored the public’s understanding of the COVID-19 situation using online forms and concluded that participants aged 16 to 29 years obtained better scores than older participants (Hayat et al. 2020). This is associated with the digital media literacy of individuals, primarily those over the age of 60, who often cannot reliably determine the trustworthiness of online news and thus need to develop literacy competencies that encompass the skills required to identify questionable content (Guess et al. 2019).
To understand the behavior of rumor spreading among the elderly population, our results show that most respondents (74.82%) evaluated the dissemination of fake news negatively, while 2.52% did not care either way. Among them, the spread of rumors was negatively associated with anxiety, as rumors influence the behavior and perception of the elderly and their ability to distinguish fact from fake news. Research shows that individuals over 65 years share up to seven times more unverified information than other age groups, often in order to feel useful, active, and connected (Guess et al. 2019). Psychological interventions are therefore recommended mainly for vulnerable populations and health professionals (Van Der Linden et al. 2020).
Our results also showed that 36% of the authors reported that, regardless of age, participants could experience fatigue, anguish, and psychological distress, in addition to having a higher probability of developing anxiety-related symptoms. This is contradicted by two previous studies, by Huang and Zhao (2020) and Wang et al. (2020), which, when evaluating the psychological impact of the uncontrolled spread of COVID-19, found that manifestations of anxiety and psychological distress were more common in the younger population, who used social networks for longer periods (Huang and Zhao 2020; Wang et al. 2020). Pandemic uncertainty and confinement also created considerable levels of stress in young people, especially women, in Switzerland (Mohler-Kuo et al. 2021). It was further shown that misinformation fueled by rumors and conspiracy theories led to physical harassment and violent attacks against healthcare professionals and people of Asian origin in 28% of the results shown in this review. This finding is in line with a study showing that conspiracy theories are not a new phenomenon but increase in times of crisis; people who believe in this “conspiracy world” are less likely to comply with social norms (Imhoff and Lamberty 2020).
The impact of denialism and its association with fake news presents itself as a social phenomenon through the production of theses that run counter to the scientific consensus (Duarte and César 2020). Good examples of denialist content include the flat-Earth movement, claims that global warming is a farce, and anti-vaccination discourses (Vasconcelos-Silva and Castiel 2020). With regard to the COVID-19 pandemic, denialism has taken on an expression never seen before, in which the number of people spreading such news grows ever larger, resulting in an increase in the number of deaths among the most vulnerable patients (Morel 2021).
Importantly, false information has been a genuine concern among social-media platforms and governments, which have implemented strategies to contain misinformation and fake news during the pandemic. Of the social-media platforms, in order to contain the advance of FNs, Facebook has implemented a new feature to inform users when they engage with unverified information (BBC 2020). Another way to counteract misinformation is to seek support and discuss actions that authorities or public agencies could take to mitigate the spread of conspiracy theories, and encourage users to flag inappropriate content to social-media companies (González-Padilla and Tortolero-Blanco 2020).
Conclusion
Social-media platforms have contributed to the spread of false news and conspiracy theories during the new coronavirus pandemic. When analyzing the phenomenon of fake news in health, it is possible to observe that infodemic knowledge has become part of people’s lives around the world, causing distrust in governments, researchers, and health professionals, which can directly impact people’s lives and health. Analysis of the potential risks of misinformation shows that panic, depression, fear, fatigue, and the risk of infection contribute to psychological distress and emotional overload. In the COVID-19 pandemic, the disposition to spread incorrect information or rumors is directly related to the development of anxiety in populations of different ages.
Acknowledgments
The authors would like to thank the CAPES and FUNCAP for the fellowships of Yasmim M Rocha and Gabriel A de Moura.
Author contributions
Yasmim Mendes Rocha: bibliographic research, concepts, methodology, writing, and data analysis. Gabriel Acácio de Moura: bibliographic research, methodology, revision, editing, and data analysis. Gabriel Alves Desidério: reading of included articles and review. Carlos Henrique de Oliveira: translation into English, reading of articles, and writing. Francisco Dantas Lourenço: article reading and review. Larissa Deadame de Figueiredo Nicolete: article idea, supervision, methodology, research, formal analysis, and editing.
Funding
This study was supported by the Coordination for the Improvement of Higher Education Personnel (CAPES) and the Cearense Foundation for Scientific and Technological Development Support (FUNCAP).
Declarations
Conflict of interest
The authors declare no conflict of interest.
Footnotes
This is a review article that has not been published before and is not being considered for publication anywhere. Authors confirm that the manuscript has been read and approved by all named authors and that no other person has met the authorship criteria, but it is not listed. We further confirm that none of us have any conflict of interest to declare. We would like to thank you for your attention and the opportunity given to submit our study, and agree, if the manuscript is accepted for publication, to the transfer of all copyright to the Journal of Public Health.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Source: Article The impact of fake news on social media and its influence on...
This scientific research paper may be helpful in enabling you to pursue topical areas of investigation as you plan and organize your research project:
Article Mental Health Issues Mediate Social Media Use in Rumors: Imp...
"Asian J Psychiatr. 2020 Oct; 53: 102132.
Published online 2020 May 7. doi: 10.1016/j.ajp.2020.102132
PMCID: PMC7204703
PMID: 32474344
Mental health issues mediate social media use in rumors: Implication for media based mental health literacy
Manoj Kumar Sharma*
SHUT Clinic (Service for Healthy use of Technology), National Institute of Mental Health & Neurosciences, Bengaluru, Karnataka, India
Nitin Anand
National Institute of Mental Health & Neurosciences, Bengaluru, Karnataka, India
Akash Vishwakarma
SHUT Clinic, Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Maya Sahu
Department of Nursing, NIMHANS, Bangalore, Karnataka, India
Pranjali Chakraborty Thakur
SHUT Clinic, Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Ishita Mondal
Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Priya Singh and Ajith SJ
SHUT Clinic, Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Suma N
Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Ankita Biswas
SHUT Clinic, Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Archana R and Nisha John
Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Ashwini Tapatrikar
SHUT Clinic, Department of Clinical Psychology, NIMHANS, Bangalore, Karnataka, India
Keshava D. Murthy
Department of Psychiatric Social Work, NIMHANS, Bangalore, Karnataka, India
Social media use has recently become immensely popular, not only for leisure activities that connect people around the world, but also for keeping up to date with current trends through news and information sharing. It provides a perfect platform to interact with others by offering opportunities to share a user’s thoughts, emotions, pictures, videos, and creative ideas through posts or blogs (Kuss and Griffiths, 2011b, 2011a). Hence, one important characteristic of social media platforms is the rapid spread of information among users, which is usually impactful.
Another concern relates to health-related information sharing on social media. Since these platforms are open to all, anyone can produce and publish information, share experiences, and form their own perspectives, which remain unverified by any professional news channel, editor, or fact-checker (Sommariva et al., 2018). Thus, social media comes with its own susceptibility to misinformation in the form of rumors or fake news (Zubiaga et al., 2016). Moreover, once rumors begin to spread on social media, they are very difficult to control with updates or corrections (Jones et al., 2017). Among these, health rumors, which are unverified information regarding the practice of medicine and healthcare, often endanger public health (Oh and Lee, 2019). Hence, it is important to understand the role and impact of social media in spreading rumors and to verify information before sharing it with others.
Research literature has found that social media has the power to influence people’s behavior when there is an outbreak of an epidemic or pandemic. Over the decades, social media has been flooded with misinformation on diabetes and anorexia, as well as anti-vaccination content, along with the recent Zika virus and Ebola epidemics (Fernández-Luque and Bau, 2015; Sommariva et al., 2018). The news of the Ebola epidemic created a climate of global nervousness, with rumors and misinformation quickly spreading through social media platforms. A similar trend is being observed with the current occurrence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which has been declared a pandemic.
Studies have also documented that during crisis events, people often seek out event-related information to stay informed of what is happening. If there is a lack of official information, people may be at risk of exposure to rumors that fill the information void (Jones et al., 2017). Additionally, the constant assault of information through social media leads users to consume whatever information is available, irrespective of its authenticity. In this era of “headline stress disorder”, negative feelings such as anxiety, hopelessness, despair, and sadness are fueled by the 24-hour news cycle. Individuals’ anxiety levels or anxious personality traits make them spend more time online looking for information. Many reports show an increase in online social media activity because of constant connectivity with national and world news, as well as the presence of anxiety traits; that is, people with a high level of health anxiety spend more time searching online for health information than those with a lower level of health anxiety (Muse et al., 2012). Unverified information or rumors on social media and anxiety go hand in hand: anxious individuals, people at risk of developing anxiety, and those for whom sharing serves as a medium to ventilate their affective state are more likely to share this kind of information without verifying the source (Anthony, 1973; Pezzo & Beckstead, 2006). These behaviors lead to the exponential spread of fake news, which may result in more time being spent online on non-productive social interactions and on transitory alleviation of anxiety. The research literature suggests that around 20% of older adolescents and young adults engage in excessive use of technology, such as problematic use of social media and online gaming. In addition, they experience symptoms of depression, anxiety, and stress, which also play a role in making them spend increased time online to alleviate these psychological symptoms momentarily. Spending increased time online leads to excessive use of technology, and it increases the risk of exposure to and spread of unverified information or rumors on digital platforms (Sharma and Seshadri, 2020).
Similarly, suicide is another public mental health problem where media and social media play a significant role in either increasing or curtailing the problem within society. The available literature in Bangladesh and India suggests that media reporting about suicide includes details such as the name of the victim, their occupation, the method of suicide, images of suicide victims, suicide notes, and citations from suicide notes. This is the kind of information that makes the news attractive, shares details that increase access to information about self-harm, and may also create misinformation or rumors (Arafat et al., 2020; Armstrong et al., 2018; Jain and Kumar, 2016). However, the media does not highlight information to educate the general population about the early signs of suicidal behavior, prevention plans, expert opinions from mental health professionals, helpline numbers for support, or the availability of emergency services in hospitals. These findings further suggest that media reports on suicide do not follow the guidelines issued by the World Health Organization (WHO) and other health regulatory bodies on the reporting of suicide in the media (Arafat et al., 2020; Armstrong et al., 2018; Cherian et al., 2020; Jain and Kumar, 2016). Similar irregularities, indicating the detrimental reporting of sensitive information about suicide, have been observed in the media in China as well (Chu et al., 2018).
Thus, in light of the existing information, it becomes clear that media in all their formats have a huge impact and, more significantly, a responsibility to report health-related information in an educative format. In addition, the media need to be more sensitive and responsible in reporting on public health problems such as SARS-CoV-2 and suicide, focusing on information that is helpful for prevention, detailing the steps to take in a health emergency, and offering expert opinions from mental health professionals, helpline numbers for support, and details of emergency services in hospitals. This role of the media will surely work to minimize digital content that leads to the creation of misinformation or rumors.
To summarize, in addition to the responsible role of the media in reporting on public health problems, individual members of the population, the government, policy makers, health regulatory bodies, and health professionals need to collaborate and develop guidelines for the responsible dissemination of information across all kinds of media formats with respect to public health problems. Such guidelines will also improve media-based literacy about health and mental health problems among the population and will be extremely helpful in times of public health emergencies such as the SARS-CoV-2 pandemic. The development of such guidelines is crucial because, although the pattern of epidemics and pandemics changes over time, the cycle of rumors, fake news, and inaccurate media reports continues to revolve around media formats, especially social media, likely driven by stress, anxiety, and other psychological factors of individuals, which require study in greater detail.
Declaration of patient consent
The authors certify that informed consent has been taken from the patient for the present communication.
Compliance with ethical standard
There was no conflict of interest in relation to the present work, and informed consent of the human subjects was obtained prior to inclusion in the study.
Statement of human right
The studies have been approved by the Institutional and/or national research ethics committee
Research involving human participants and/or animals
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Declaration of Competing Interest
Authors of the paper did not have any conflict of interest.
Acknowledgement
ICMR, DHR Delhi, India, awarded the grant to Dr Manoj Kumar Sharma
Source: Article Mental Health Issues Mediate Social Media Use in Rumors: Imp...
This scientific research paper is from the Proceedings of the National Academy of Sciences of the United States of America and will provide you with abundant information in your academic area of interest!
https://www.pnas.org/content/118/15/e1912437117
"COLLOQUIUM PAPER
"Misinformation and public opinion of science and health: Approaches, findings, and future directions"
Michael A. Cacciatore
PNAS April 13, 2021 118 (15) e1912437117; https://doi.org/10.1073/pnas.1912437117
Abstract
A summary of the public opinion research on misinformation in the realm of science/health reveals inconsistencies in how the term has been defined and operationalized. A diverse set of methodologies have been employed to study the phenomenon, with virtually all such work identifying misinformation as a cause for concern. While studies completely eliminating misinformation impacts on public opinion are rare, choices around the packaging and delivery of correcting information have shown promise for lessening misinformation effects. Despite a growing number of studies on the topic, there remain many gaps in the literature and opportunities for future studies.
The popularity of “misinformation” in the American public consciousness arguably peaked in 2018 during the lead-up to the US midterm elections (1). Shortly after the midterms, “misinformation” was Dictionary.com’s “word of the year” (2), just 1 y after Collins English Dictionary had granted “fake news” the same title (3). Interest was driven largely by a focus on politics and the role that misinformation might have played in influencing candidate preferences and voting behaviors. However, certainly more can be said about a topic that has captured the attention of governments and citizens across the globe. What does “misinformation” (and the terms that are oftentimes treated synonymously) mean? How big of a problem is it in areas outside of politics, including science and health? What do we know about the ways in which it impacts citizens? What can be done to minimize the damage it is doing to public understanding of the key issues of the day?
In this paper I summarize the literature on misinformation with a specific focus on academic studies in areas of science and health. I review the methodological approaches and operationalizations employed in these works, explore the theoretical frameworks that inform much of the misinformation research, and break down the proposed solutions for combatting the problem, including the scholarly research aimed at stopping the spread of such content and lessening its impacts on public opinion. Finally, I discuss avenues for future research. I begin, however, with a discussion of some of the most common definitions of misinformation (and related terms) in the communication literature.
Defining “Misinformation”
An exploration of the literature suggests that “misinformation” is the most commonly employed label for studies focused on the proliferation and impacts of false information. Part of this has to do with the fact that misinformation has become something of a catch-all term for related concepts like disinformation, ignorance, rumor, conspiracy theories, and the like.* Its status as a catch-all term has sometimes resulted in broad use of the concept and imprecise definitions. Much of the earliest work on the topic would employ the label “misinformation” while failing to formally define the concept at all (e.g., refs. 4 and 5), treating misinformation as a known concept.
As misinformation work grew, scholars brought greater structure to the term. Arguably the most commonly applied definition of misinformation is the one offered by Lewandowsky et al. (6), who refer to misinformation as “any piece of information that is initially processed as valid but is subsequently retracted or corrected” (6). Others have removed the “processing” element from this definition, describing misinformation as information that is initially presented as true but later shown to be false (e.g., refs. 7 and 8).
Lewandowsky et al. (9) also draw a line between misinformation and disinformation. While not the first scholars to do so (e.g., ref. 10), their distinction hinges on intentionality, with misinformation operating in the unintentional space and disinformation in the intentional [e.g., “outright false information that is disseminated for propagandistic purposes” (9)]. Nevertheless, studies have continued to lean on the term “misinformation” even when referring to groups who are actively spreading false content for advocacy purposes (e.g., refs. 11 and 12), illustrative of a lingering conceptual fuzziness in the literature.
Ignorance differs from mis/disinformation in terms of both how much an individual knows and the degree of confidence they have in that knowledge. An ignorant person is not only ill-informed but realizes they are, while those who are misinformed are usually confident in their understanding even though it is inaccurate (13, 14). Terms like “myth,” “falsehoods,” and “conspiracy” are less commonly employed and typically serve as synonyms for the more general misinformation.
Methodological Approaches and Operationalizations
This next section will focus on how mis/disinformation has been studied. I will outline five major groupings to this scholarship (content analyses, computational text analysis, network analyses/algorithmic work, public opinion surveys/focus groups/interviews, and experiments) and discuss the common ways mis/disinformation is operationalized and manipulated. I will save the discussion of key conclusions for Trends in the Findings.
Content Analysis.
The goal of virtually all of the content analysis work on mis/disinformation is to diagnose the scope of the problem. Content analyses with some emphasis on mis/disinformation—even if the terms are not specifically acknowledged—have been conducted on a variety of topics, including many health issues.
There is much variation in the mis/disinformation content analysis work, although nearly all this work focuses on online sources. Some have focused on returned internet search results. For example, Hu et al. (15) explored the returned results for skin condition searches on top internet search engines, searching for the relative prevalence of product-focused versus educational websites and the quality of information across those categories of content. Kalk and Pothier (16) took a unique look at information searches online, examining returned Google search results for “schizophrenia” in terms of their readability using the standardized Flesch Reading Ease classification. Rowe et al. (17) focused more narrowly on the open question portal on the BBC website in the immediate aftermath of avian flu’s arriving in the United Kingdom. Their analysis focused not on the potential for online content to misinform the public but on the open question portal as a means of identifying whether and in what areas the public lacked an adequate understanding of avian flu. Still other work has taken a slightly different approach by analyzing the rhetoric and persuasive communication strategies of a specific population to understand what makes their use of disinformation effective (e.g., refs. 18 and 19).
Particularly in the 2010s, content analyses of social media platforms became popular. A collaboration between researchers in Nigeria and Norway looked at the prevalence of medical mis/disinformation in Ebola content shared on Twitter, including a comparison of the potential reach of such content relative to facts (20). Jin et al. (21) also explored Ebola content on Twitter, although with a more narrow focus on rumor spread in the immediate aftermath of news of the first case of Ebola in the United States. Other work has focused on Facebook. Bessi et al. (22, 23) relied on a sample of 1.2 million individuals on the platform to better understand how mainstream scientific and conspiracy news are consumed and shape communities, including correlating user engagement with metrics like numbers of Facebook friends. Content analyses of vaccination-related issues have been conducted on YouTube videos (24, 25), with such work focusing on the stance of the video (positive, negative, or neutral toward vaccines) and false links between vaccines and cases of autism.
Perhaps owing to their oftentimes broader focus on issues outside of mis/disinformation (e.g., the tone of content, the frame being emphasized, etc.), much content analysis work lacks clear operationalizations of mis/disinformation and related measures. The most common operationalization is a determination of whether the content contains evidence of factually inaccurate information, innuendo, or conspiracy theories (e.g., refs. 11, 19–21, and 26–29). Unfortunately, it is not always clear how the authors are differentiating rumor from fact. Some categorize content by relying solely on assessments by groups of coders who are considered experts in the field (e.g., ref. 30), while others, particularly those in the health communication space, compare content to guidelines put forth by major health organizations like the Centers for Disease Control and Prevention (CDC) or the World Health Organization (e.g., refs. 31 and 32). Still other work is decidedly more subjective in nature, requiring coders to search for any evidence that audiences are struggling with or otherwise made confused or anxious by the content they encounter (e.g., refs. 17 and 33). These more subjective operationalizations reflect the fuzziness around our understanding of mis/disinformation and may serve to overstate the scope of the problem.
Additional work has taken a broader approach to the classification of content, focusing less on specific pieces of communication and more on the source of the information. One approach involves identifying “fake news” pages online and treating all content from those pages as disinformation (e.g., refs. 22, 23, 34, and 35). For example, the Bessi et al.’s (22, 23) studies relied on Facebook pages dedicated to debunking conspiracy theories to identify “conspiracy news pages,” while other work has relied on existing databases and projects that track fake news sources (e.g., refs. 34 and 35). This approach can, of course, be supplemented by then examining the content on these “conspiracy” or “fake news” pages for specific instances of disinformation. A second approach speculates that a misinformed public is likely to follow given the focus of content found online (e.g., ref. 15) or the accessibility/readability of that content (e.g., ref. 16). Rather than explicitly measure mis/disinformation, these works warn that the oftentimes product-focused (rather than health- or education-focused) nature of health websites coupled with the use of jargon and sophisticated language on those pages may breed a misinformed public. Assessments of the conclusions provided by content analyses should therefore be made with these operational decisions in mind since some works may not actually be classifying individual news items, instead deeming all content from a given source as mis/disinformation.
Computational Text Analysis, Natural Language Processing, and Topic Modeling.
A cousin of the content analysis work noted above is the work being done through computational text analysis, natural language processing, and related approaches. While a complete overview of these methodologies is not feasible, one can generally think of these as computer-assisted approaches to the thematic clustering of large-scale textual data. These generally take an inductive approach to data, with computer algorithms identifying topics or themes based on hidden language patterns in texts (36). There are various approaches to the clustering of data and a variety of algorithms used for the task (37), but these computational approaches generally carry with them two key advantages. First, they conduct reliable content analyses on collections of data that are too big to code by hand, and thus are an extension of the content analysis approach noted above. Second, they rely on machine learning, which allows for the discovery of patterns in texts that may not be recognized by individual coders (36).
As one example of this work, Boussalis and Coan (38) retrieved all climate change-focused documents produced by 19 well-known conservative think tanks and classified them by type and theme using a clustering algorithm. This approach allowed the authors to identify, among other things, a misinformation campaign that escalated over a 15-y period between 2008 and 2013. Other work in this space uses these methodologies in concert with various forms of metadata or existing datasets. For instance, Farrell (39) collected philanthropic data, including lists of conference attendees and speakers, and combined this information with existing datasets of all persons known to be connected to organizations linked to the promulgation of climate change misinformation between 1993 and 2017. Using natural language processing, he was able to identify the degree to which persons and organizations linked to climate mis/disinformation were also integrated into mainstream philanthropic networks. He also took a similar approach to the question of corporate funding, combining Internal Revenue Service data of Exxon Mobil and Koch Industries funding donations with collections of government documents and written and verbal texts from both mainstream news media and groups opposing the science of climate change (40). Relying on a combination of network science and machine-learning text analysis, this work was able to not only explain the corporate structure of the climate change countermovement but also pinpoint its influence on mainstream news media and politics.
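As a hedged illustration of the computer-assisted thematic clustering described in this subsection, the sketch below fits a small latent Dirichlet allocation model with scikit-learn. The toy documents, the two-topic choice, and the parameter values are invented for the example and are not drawn from the studies cited above.

```python
# Minimal topic-modeling sketch: cluster a few invented documents into latent topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "climate change is a hoax pushed by scientists",
    "vaccines cause more harm than the diseases they prevent",
    "global warming data has been manipulated for funding",
    "natural remedies cure viral infections better than vaccines",
]

# Convert the documents into a bag-of-words matrix
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a two-topic LDA model and print the top words per topic
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```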
Network Analysis, Algorithms, and Online Tracking.
At the same time that the work identified above was documenting the scope of the mis/disinformation problem, efforts were being made, largely through computer technology, to help solve it. A first step in this process—distinguishing factually inaccurate information from legitimate news—has become attractive to scholars working in artificial intelligence and natural language processing. Vosoughi et al. (41) focused on identifying the salient features of rumors by examining their linguistic style, the characteristics of the people who share them, and network propagation dynamics. Other work has focused on specific features of content, like hashtags, links, and mentions (42).
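As a hedged illustration of that content-feature approach, the toy sketch below counts hashtags, links, and mentions in tweets and fits a simple classifier. The example tweets, labels, and choice of logistic regression are hypothetical and are not drawn from the cited systems.

```python
# Toy sketch of surface-feature rumor classification. The counted features
# (hashtags, links, mentions) echo those described above, but the tweets,
# labels, and choice of logistic regression are hypothetical assumptions.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def tweet_features(text):
    """Count hashtags, links, and @-mentions in a tweet."""
    return [
        len(re.findall(r"#\w+", text)),
        len(re.findall(r"https?://\S+", text)),
        len(re.findall(r"@\w+", text)),
    ]

# Hypothetical labeled tweets: 1 = previously identified rumor, 0 = not.
tweets = [
    ("Secret Ebola cure revealed!!! #miracle http://spam.example @everyone", 1),
    ("Health agency posts updated Ebola prevention guidance http://example.gov", 0),
    ("They are hiding the truth #coverup #wakeup @user1 @user2", 1),
    ("Local hospital confirms that the patient has recovered", 0),
]

X = np.array([tweet_features(text) for text, _ in tweets])
y = np.array([label for _, label in tweets])

model = LogisticRegression().fit(X, y)
print(model.predict([tweet_features("Share this hidden cure #truth @all")]))
```

Real systems combine many more features, far larger labeled datasets, and stronger models, but the basic logic of turning surface cues into predictive features is the same.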
Once rumors and false content are identified, the next step is controlling or stifling their spread. Rumor control studies can be grouped into two major categories. First, scholars have focused on garnering a general understanding of how information—factual or otherwise—is shared and spread online (e.g., refs. 23, 32, 35, 43, and 44). This work looks at the structure of online communities, including the strength of ties between community members and key features of information sources, like whether a source of content is likely a bot. Patterns in shared content are examined as well, including the features of messages that garner engagement, and time series models are used to better understand the speed at which information is shared. Bessi et al. (23), for example, focused on homophily and polarization as key triggers in the spread of conspiracies, while Jang et al. (44) focused on the differences in authorship between fake and real news and the alterations each goes through as it is shared online.
The second major approach to rumor control focuses on identifying critical nodes in social networks and either removing them from the network or combatting their effects via information cascades. These works focus heavily on building and testing algorithms that can be automatically applied to large-scale data so as to identify and deal with critical nodes both quickly and at low cost. As one example, a group of researchers looked at information cascades as a method for limiting the spread of mis/disinformation (45). Their approach focuses on stifling the spread of false information by identifying it early, seeding key nodes in a social network with accurate information, and allowing those users to spread the accurate information to others before they are exposed to the false content.
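The sketch below is a deliberately simplified toy model of that seeding idea, not the cited algorithm: two competing cascades (accurate information versus a rumor) spread over a random graph, with the accurate information pre-seeded at high-degree nodes. The graph, spread probability, and seeding rule are all assumptions made for illustration.

```python
# Deliberately simplified toy model of the seeding idea, not the cited
# algorithm: two competing cascades (accurate information vs. a rumor) spread
# over a random graph, with accurate information pre-seeded at high-degree nodes.
import random
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=1)

def competing_cascade(graph, fact_seeds, rumor_seeds, p_spread=0.3):
    """Spread two labels through the graph; the first label to reach a node sticks."""
    state = {node: None for node in graph}
    frontier = [(n, "fact") for n in fact_seeds] + [(n, "rumor") for n in rumor_seeds]
    for node, label in frontier:
        state[node] = label
    while frontier:
        next_frontier = []
        for node, label in frontier:
            for neighbor in graph.neighbors(node):
                if state[neighbor] is None and random.random() < p_spread:
                    state[neighbor] = label
                    next_frontier.append((neighbor, label))
        frontier = next_frontier
    return state

# Pre-seed the 5 highest-degree nodes with accurate information, then start
# the rumor at 3 randomly chosen other nodes.
fact_seeds = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]]
rumor_seeds = random.sample([n for n in G.nodes if n not in fact_seeds], 3)

state = competing_cascade(G, fact_seeds, rumor_seeds)
print("nodes reached by the rumor:", sum(v == "rumor" for v in state.values()))
print("nodes reached by accurate info:", sum(v == "fact" for v in state.values()))
```

The published algorithms optimize which nodes to seed under budget and timing constraints; this toy version simply shows why seeding well-connected nodes early can crowd out the rumor.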
The operationalization of mis/disinformation in these works generally follows that noted for content analyses. In fact, work in this area often includes a content analysis component for identifying mis/disinformation and the major sources of such content. Once mis/disinformation has been identified, the authors model its spread and run simulations on the data.
Public Opinion Surveys/Focus Groups/Interviews.
Surveys and focus groups are popular for understanding how different population groups perceive or are vulnerable to the problems of mis/disinformation. Studies of expert populations are common in the space of healthcare and disease. For example, Ahmad et al. (46) conducted focus groups with physicians to learn more about the benefits and risks of incorporating internet-based health information into routine medical consultations. A similar approach was taken by Dilley et al. (47), who employed surveys and structured interviews with physicians and clinical staff to learn more about the barriers to human papillomavirus vaccination.
The bulk of survey, focus group, and interview work in this area, however, has focused on lay audiences. Nyhan (13) focused on public misperceptions in the context of healthcare reform, relying on secondary survey data to show how false statements about reform by politicians and media members were linked to misperceptions among American audiences. Silver and Matthews (48) relied on semistructured interviews with survivors of a tornado to learn more about the spread of (mis)information in the aftermath of a disaster, while Kalichman et al. (49) surveyed over 300 people living with HIV/AIDS to assess their vulnerability to medical mis/disinformation.
Mis/disinformation is generally operationalized in similar ways in surveys, focus groups, and interviews. The work with expert populations will often employ attitudinal measures to understand how experts view the size of the problem within a given topic area (e.g., refs. 46, 47, 50, and 51). The work with lay audiences will more often employ measures of factual knowledge—for example, true/false items about the causes, symptoms, and possible cures for a given disease or virus (12, 52–54)—or perceived knowledge or concerns (e.g., refs. 53 and 55), which might ask respondents to report how much they believe they know about a topic, or how big a problem they believe inaccurate information to be. Other work has utilized quasi-experimental stimuli to assess a respondent’s susceptibility to false content by exposing participants to different-quality webpages before asking them to rate the pages in terms of believability and trust (49). Finally, attempts have been made to distinguish mere ignorance from actual mis/disinformation by analyzing not only whether an individual holds a misperception but how strongly that misperception is tied to their self-assessed knowledge of the topic (13).
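As a rough illustration of how such measures can separate ignorance from misinformation, the sketch below scores true/false items together with self-assessed confidence. The items, coding rules, and confidence cutoff are illustrative assumptions rather than the instrument used in ref. 13.

```python
# Hedged sketch of separating ignorance from misinformation in survey data:
# a wrong answer held with high confidence is coded as a misperception, while a
# wrong or "don't know" answer held with low confidence is coded as ignorance.
# The items, coding, and confidence cutoff are illustrative, not taken from ref. 13.

ANSWER_KEY = {
    "vaccines_cause_autism": False,
    "antibiotics_kill_viruses": False,
}

def classify_response(item, answer, confidence):
    """Return 'informed', 'ignorant', or 'misinformed' for one true/false item."""
    if answer is None:  # respondent selected "don't know"
        return "ignorant"
    if answer == ANSWER_KEY[item]:
        return "informed"
    # Wrong answer: high self-assessed confidence (1-5 scale) marks a misperception.
    return "misinformed" if confidence >= 4 else "ignorant"

respondent = {
    "vaccines_cause_autism": (True, 5),     # wrong and very confident
    "antibiotics_kill_viruses": (None, 1),  # does not know
}

for item, (answer, confidence) in respondent.items():
    print(item, "->", classify_response(item, answer, confidence))
```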
Experiments.
With the possible exception of content analysis work, experiments have been the most popular methodological approach to the issue of mis/disinformation. It is worth noting that most experiments have tended to focus on misinformation in the form of honest mistakes by journalists or witnesses, rather than more flagrant attempts to deceive (i.e., disinformation). Much of this work has explored the role of retractions or corrections in lessening the continued influence of misinformation in the minds of the public, but other approaches have been employed, including inoculating people against misinformation prior to exposure (e.g., refs. 56 and 57), providing participants with myth–fact sheets or event statements that correct the misinformation (58–60), and using the “related links” feature on Facebook, or subsequent posts in a social media newsfeed, to provide alternative viewpoints on the topic (61–63).
Some work has avoided the use of retractions or inoculating information altogether by looking at intervention materials for areas where misperceptions are already common, such as vaccines (64). Still other work falls outside this general framework. Rather than attempting to reverse misperceptions in people’s minds, Nyhan and Reifler (65) used a mailed reminder of fact-checking services to see if the reminder would deter politicians from making false statements on the campaign trail.
Experimental work generally operationalizes mis/disinformation in one of several ways. First, it is oftentimes a manipulated variable, with the most common manipulations taking the form of providing a false piece of information to experimental participants. These are typically real or constructed news articles or “dispatches” (e.g., refs. 66 and 67) but might also be brief posts or headlines shared on social media (e.g., ref. 68), generic statements or statistics (e.g., refs. 60 and 69), quotes from a politician (e.g., ref. 70), or recordings of news reports (e.g., ref. 71). After exposure, participants will receive some form of retraction notice, thus turning the original information into misinformation.
Misperceptions are typically assessed after exposure to an experimental stimulus through some form of factual knowledge questions, attitudinal items, or inference queries. Factual knowledge questions might take the form of basic fact-recall items based on information in the communication to which the participant was exposed (e.g., “On which day did the accident occur?”; ref. 58). These are similar to the measures employed in survey work, and might take the form of true–false items. Some work assesses fact-recall with response booklets, in which participants are asked to supply as many event-related details as possible so as to give a complete account of the event (e.g., ref. 58).
Attitudinal items are usually posed around the key components of the shared mis/disinformation. For instance, a study about the false link between vaccines and autism first presented participants with misinformation then corrected that information in one of several ways before measuring attitudes related to the misinformation through a series of agree–disagree items (e.g., “Some vaccines cause autism in healthy children” or “If I have a child, I will vaccinate him or her”; ref. 61).
Inference questions are generally open-ended and allow a respondent to either reference the inaccurate content they were originally given, reference the correction to that information, or avoid the context altogether. In their study of a fictitious minibus accident, Ecker et al. (58) asked participants the following inference question: “Why do you think it was difficult getting both the injured and uninjured passengers out of the minibus?” Because participants were first misinformed that the passengers were elderly, and that detail was later corrected, a reference to the advanced age of the passengers would be evidence of continued reliance on the misinformation.
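In practice such open-ended responses are scored by trained human coders, but a crude automated pass, sketched below with made-up answers and a made-up keyword list, conveys the scoring logic: answers that still invoke the retracted detail are flagged as continued reliance on the misinformation.

```python
# Illustrative only: the cited experiments rely on trained human coders, but a
# crude automated pass can convey the scoring logic. Answers that still invoke
# the retracted detail (the passengers' age) are flagged as continued reliance
# on the misinformation. The responses and keyword list are made up.
RETRACTED_DETAIL_TERMS = {"elderly", "old", "aged", "frail"}

def references_misinformation(response):
    """Flag an open-ended answer that mentions the retracted 'elderly' detail."""
    words = set(response.lower().replace(".", "").replace(",", "").split())
    return bool(words & RETRACTED_DETAIL_TERMS)

answers = [
    "Because the passengers were elderly and moved slowly.",
    "Because the minibus doors were jammed after the crash.",
]
for answer in answers:
    print(references_misinformation(answer), "-", answer)
```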
Finally, unique approaches to studying mis/disinformation require unique approaches to measuring outcomes. The Nyhan and Reifler (65) study, which used a reminder of fact-checking to see whether it deterred politicians from making false statements, measured how dishonest the politicians were in their later statements by turning to PolitiFact ratings and by searching LexisNexis for any media articles that challenged a statement by any of the legislators in the study.
Theoretical Underpinnings
The Continued Influence Effect.
The backbone of a significant number of studies of mis/disinformation, particularly many of the experimental approaches built around correcting the effects of misinformation on the public, is the so-called continued influence effect (CIE). The CIE refers to the tendency for information that is initially presented as true, but later revealed to be false, to continue to affect memory and reasoning (59). A relatively small group of researchers have made the most headway in this space, primarily exploring the CIE in news retraction and correction studies (e.g., refs. 6, 58–60, 66, 67, and 71–73).
There are multiple proposed explanations for the CIE. The first concerns “mental event models” (74, 75). People are said to build mental models of events as they unfold. However, in doing so, they are reluctant to dismiss key information, such as the cause of an event, unless a plausible alternative exists to replace the dismissed information. If no plausible alternative is available, people prefer an inconsistent model over an incomplete one, resulting in a continued reliance on the outdated information.
The second explanation for the CIE is focused on retrieval failure in controlled memory processes (6). This process can be relatively simple, such as misattributing a specific piece of information to the wrong source (e.g., recalling the subsequently retracted cause of a fire but thinking that information came from the credible police report), or it might be rather complex, having to do with dual-process theory and the automatic versus strategic retrieval of information from memory (76). While a complete overview of dual-process theory is beyond the scope of this paper, this explanation largely focuses on a breakdown in the encoding and retrieval process in memory due to things like time pressure or cognitive overload (73). In short, how we encode information impacts how quickly and with what accuracy we will retrieve information at a later time.
A third explanation for the CIE concerns processing fluency and familiarity. Oftentimes, in producing a retraction we repeat the initial false information, which may inadvertently increase the strength of that information in the receiver’s memory and their belief in it by making it more familiar (73). When the receiver is later called upon to recall the event, the mis/disinformation is more easily recalled, thereby giving it greater credence. Finally, there is some evidence that the CIE might be based on reactance effects, whereby people do not like being told what to think and push back when they are told to disregard an earlier piece of information by a retraction. This explanation has been largely tested in courtroom settings where jurors are asked to disregard a piece of evidence after being told it is inadmissible (6).
Motivated Reasoning.
Since at least the mid-20th century, scholars have noted that partisans are selective in both their choice and processing of information. The biased processing of content has come to be known as “motivated reasoning.” Motivated reasoning has become a popular concept in mis/disinformation research, particularly for issues with a strong partisan divide (e.g., refs. 13, 61, 66, 72, and 77).
Several mechanisms have been proposed to explain motivated reasoning, including the prior attitude effect, disconfirmation bias, and confirmation bias (78). The prior attitude effect occurs when “people who feel strongly about an issue … evaluate supportive arguments as stronger and more compelling than opposing arguments” (78). Disconfirmation bias refers to the idea that “people will spend more time and cognitive resources denigrating and counterarguing attitudinally incongruent than congruent arguments” (78). Individuals engage in confirmation bias when they choose to expose themselves to “confirming over disconfirming arguments” when given freedom in their information choice (78). Additional work has expanded upon the mechanisms noted here. For instance, Jacobson’s (79) notion of selective perception holds that “people are more likely to get the message right when it is consistent with prior beliefs and more likely to miss it when it is not” (79), while his notion of selective memory suggests that “people are more likely to remember things that are consistent with current attitudes and to forget or misremember things that are inconsistent with them” (79). In the context of mis/disinformation, motivated reasoning can help explain why some people may be resistant to new information that, for example, contradicts a believed link between vaccinations and autism (64).
Other Concepts Common to the Literature.
Factors related to the CIE and motivated reasoning that are also common in the mis/disinformation literature include echo chambers (“polarized groups of like-minded people who keep framing and reinforcing a shared narrative”; ref. 80), filter bubbles (“where online content is controlled by algorithms reflecting user’s prior choices”; ref. 44), worldviews (audience values and orientation toward the world, including their political ideology; ref. 6), and skepticism (the degree to which people question or distrust new information or information sources; ref. 6). These concepts generally help explain the resistance to correcting information that forms the foundation of the CIE.
Trends in the Findings
How Big Is the Problem?
As noted, content analysis work and computational text analyses have helped scholars better understand the scope of the mis/disinformation problem. A complete summary of the studies in this space is not feasible; however, some patterns are worth noting. First, there is often convergence in results even with vastly different approaches to studying the problem. The work on vaccine mis/disinformation represents one area where scholars have generally coalesced in their research findings. For example, Basch et al. (24) explored videos about vaccines on YouTube and found that a substantial percentage asserted a link between vaccines and autism, a finding echoed by Donzelli et al. (25) in their exploration of the same topic and platform. Those findings have been complemented by Moran et al. (19) and Panatto et al. (11), who identified similar false claims about links between vaccination and autism and Gulf War syndrome, respectively, in their samples of web pages. Computational analyses focused on climate change communication have also generally identified problems with mis/disinformation. For example, Boussalis and Coan (38) found increases in climate change mis/disinformation over time, arguing that the “era of science denial” is alive and well, while Farrell (36) found evidence that organizations that produce climate contrarian texts exert strong influence within networks and therefore wield great power in the spread of information.
At the same time, results have not always been consistent, even when exploring the same issue within the same medium. For instance, researchers searched Twitter for “Ebola” combined with “prevention” or “cure”; of the large set of tweets returned, 55% were judged to contain medical mis/disinformation, with a potential audience of more than 15 million (compared with about 5.5 million for the medically accurate tweets) (20). Also on Twitter, Jin et al. (21) looked at rumor spread in the immediate aftermath of news of the first case of Ebola in the United States. They found rumors to represent a relatively small fraction of the overall Ebola-related content on the platform. They also found evidence that rumors typically remain more localized and are less believed than legitimate news stories on the topic. All told, the work focused on identifying the scope of the mis/disinformation problem, while oftentimes varying in approach, has consistently found evidence for at least some degree of concern, although pinpointing the exact nature of the problem has proven difficult.
Combatting the Spread of Misinformation.
Computational analyses, including algorithm creation, have allowed for a better understanding of how mis/disinformation spreads, particularly in the online environment. This work is promising for alerting people to likely pieces of false content and has potential for limiting its spread. Vosoughi et al. (41) focused on mis/disinformation identification. They explored the linguistic style of rumors, the characteristics of the people who share them, and network propagation dynamics to develop a model for the automated verification of rumors. They tested their system on 209 rumors spread across nearly 1 million tweets and found it correctly predicted the veracity of 75% of the rumors, and did so faster than any other public source. Similarly, Ratkiewicz et al. (42) created the “Truthy” system, which identified misleading political memes on Twitter through tweet features like hashtags, links, and mentions.
The work on rumor control has also yielded important findings. Pham et al. (81) developed an algorithm for identifying a set of nodes in a social network that, if removed, will severely limit the spread of mis/disinformation. The authors claim that their approach is not only efficient but a cost-effective tool for combatting mis/disinformation spread. Similar algorithms have been developed by Saxena et al. (82) and Zhang et al. (83). In each case, the authors argue that their algorithmic approach can dramatically disrupt information spread, preventing exposure to a large number of nodes. Of course, the question with these works, and others not outlined here, is how nodes will ultimately be removed from a network, and under what circumstances it is ethically and legally feasible to remove or silence a social media user.
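The sketch below illustrates the critical-node intuition with a naive centrality heuristic rather than any of the cited algorithms: it removes the highest-betweenness accounts from a simulated sharing network and compares how far a rumor could reach before and after. The network, seed account, and number of removed nodes are assumptions made for illustration.

```python
# Naive illustration of the critical-node idea, not the cited algorithms:
# remove the highest-betweenness accounts from a simulated sharing network and
# compare how many accounts a rumor could ever reach before and after removal.
import networkx as nx

G = nx.barabasi_albert_graph(n=300, m=2, seed=42)
seed_account = 0  # hypothetical account where the rumor originates

def potential_reach(graph, source):
    """Number of accounts reachable from the rumor's source, including itself."""
    return len(nx.descendants(graph, source)) + 1

print("reach before removal:", potential_reach(G, seed_account))

# Rank accounts by betweenness centrality and remove the top 5 (sparing the source).
centrality = nx.betweenness_centrality(G)
critical = sorted((n for n in G if n != seed_account),
                  key=centrality.get, reverse=True)[:5]
G.remove_nodes_from(critical)

print("reach after removal:", potential_reach(G, seed_account))
```

Even this toy version makes clear that any real gains depend on actually removing or silencing accounts, which is precisely where the ethical and legal questions above arise.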
Perhaps because of these questions, Tong et al. (45) focused on stifling the spread of mis/disinformation by identifying it early, seeding key nodes in a social network with accurate information, and allowing those users to spread the accurate information to others before exposure to the false content. Their approach was found effective for rumor blocking, suggesting there are multiple promising avenues for identifying and controlling the spread of mis/disinformation online.
Combatting Misinformation within Members of the Public.
Arguably the most extensive work aimed at combatting misinformation is the experimental work on retractions and corrections, usually in the context of the CIE. Once again, this work has generally focused on honest mistakes in reporting rather than more deliberate attempts to deceive, which is likely to impact how receptive audiences are to correcting information. Work in this space has focused on altering the impact of a retraction by being more clear and direct with its wording (74), repeating it multiple times (84), altering the timeline for the presentation of the retraction (74, 85), and providing supplemental information alongside it (i.e., giving reasons why the misinformation was first assumed to be factual; ref. 86). Other work has focused on the emotionality of the misinformation (87) or has manipulated how carefully a respondent is asked to attend to the presented information (88).
Virtually no work has been successful at completely eliminating the effects of misinformation; however, some studies have shown promise for reducing misperceptions. Among the most promising approaches is delivering warnings at the time of initial exposure to the misinformation (6). Ecker et al. (58) found that a highly specific warning (a detailed description of the CIE) reduced but failed to fully eliminate the CIE. A more general warning having to do with the limitations of fact checking in media did very little to reduce reliance on misinformation. Cook et al. (56), as well as van der Linden et al. (57), have found promising evidence that audiences can be inoculated against the effects of false content by providing very specific warnings about issues like false-balance reporting and the use of “fake experts.” It is worth noting that warnings are most effective when they are administered prior to mis/disinformation exposure (89).
The repetition or strengthening of retractions has been found to reduce, but again not eliminate, the CIE (6). The best evidence of this is from a study by Ecker et al. (73), who varied both the strength of the misinformation (one or three repetitions) and the strength of the retractions (zero, one, or three repetitions). Their experiments revealed that after three presentations of misinformation a single retraction served to lessen reliance on misinformation, with three retractions reducing it even further. However, the repetition of misinformation also had a stronger effect on thinking than the repetition of the retraction (73). Therefore, efforts to correct a misperception through repetition of a retraction might actually result in boomerang effects as retractions oftentimes involve repeating the original misinformation (90). Further, there is at least some evidence that the repetition of a retraction produces a “protest-too-much” effect, causing message recipients to lose confidence in the retraction (86).
The provision of alternative narratives has also shown promise for reducing the CIE. An alternative narrative fills the gap in a recipient’s mind when a key piece of evidence is retracted (e.g., “It wasn’t the oil and gas [that caused the fire], but what else could it be?”; ref. 6). Some fMRI evidence corroborates this account, suggesting that the continued influence of retracted information may be due to a breakdown of narrative-level integration and coherence-building mechanisms implemented by the brain (71). To maximize effectiveness, the alternative narrative should be plausible, should account for the information that was removed by the retraction, and should explain why the misinformation was believed to be correct (6).
Other factors that have been tested include recency and primacy effects, with recency emerging as the more important contributor to the persistence of misinformation, since people generally rely more on recent information in their evaluations of retractions (59). Familiarity and levels of explanatory detail have also been tested (60). The authors found that providing greater levels of detail when correcting a myth produced a more sustained change in belief. They also found that the affirmation of facts worked better than the retraction of myths over the short term (1 wk), but not over a longer term (3 wk), and that this effect was most pronounced among older rather than younger adults. It is also worth noting that combining approaches can enhance their effects. For example, merging a specific warning with a plausible alternative explanation can further reduce the CIE compared with administering either of those approaches separately (58).
Source work has also been popular for combatting the effects of false information. For instance, having the refutation of a rumor come from an unlikely source, such as someone for whom the refutation runs counter to their personal or political interests, can increase the willingness of even partisan individuals to reject the rumor (66). It is worth noting, however, that the author conducted a content analysis of rumor refutation by unlikely sources in the context of healthcare reform and found it to be an exceedingly rare event.
Driven by social media's role in the proliferation of mis/disinformation, Bode and Vraga (61) have focused their correction studies on those platforms. One study did so using the “related stories” function on Facebook. This work presented participants a Facebook post that contained inaccurate content and then manipulated the related stories around it to either 1) confirm, 2) correct, or 3) both confirm and correct that information. The analysis revealed a significant reduction in misperceptions among those participants who received content designed to correct it. They later looked at source credibility in the context of information shared on Twitter and found that while a single correction from another social media user failed to significantly reduce misperceptions, a single correction from the CDC did reduce them (62). In fact, corrections from the CDC worked best among those with the highest levels of initial misperception. They further investigated whether providing a source was necessary to curb misperceptions by having two individual commenters discredit the information in a Facebook and Twitter conversation (68). In one condition those users provided a link to debunking news stories from the CDC or Snopes.com, while in the other they did so without reference to any outside sources. Their results suggest a source is needed to correct misperceptions.
Finally, outside of factors related to the misinformation itself or the retraction, individual-level differences have also been tested in the context of the CIE and misinformation correction studies, including racial prejudice (72), worldview and partisanship (70, 91), and skepticism (92), with mixed results scattered across studies.
Gaps in the Literature and Moving Forward
While misinformation remains a relatively new topic of public concern, scholars have been addressing issues in this space for quite some time. The result is a large body of literature, but one with significant gaps. Perhaps most worrisome is that much of the work has focused on combatting misinformation, and, importantly, not disinformation. This distinction is subtle, but important. The bulk of the studies focused on the CIE, for instance, have focused on small journalistic errors in reporting (e.g., misrepresenting the cause of a fire) and have largely avoided issues characterized by more deliberate attempts to deceive and persuade. Of course, the major controversy surrounding false information has less to do with honest errors in writing and much more to do with deliberate attempts to deceive. The early retraction studies (e.g., refs. 58 and 87) have provided a strong foundation of initial findings, but we must push these further with highly partisan issues and audiences.
Related to the point above, relatively few studies have explored methods for inoculating individuals against mis/disinformation. As noted, progress has been made in this space with regard to issuing warnings about things like the CIE prior to misinformation exposure (58). Other work has explored factors like false-balance reporting and the use of “fake experts,” also with promising results (56, 57). While such work does not always completely prevent mis/disinformation from taking hold, it does present a promising avenue for better understanding the causes of mis/disinformation and ways to prevent its spread.
A third gap in the literature, one articulated by Lewandowsky et al. (6), has to do with the relative dearth of studies focused on individual-level differences that exacerbate or attenuate things like the CIE. The authors specifically reference intelligence, memory capacity and updating abilities, and tolerance for ambiguity as factors worthy of research attention. However, other factors, including elaborative processing, social monitoring, and a host of variables related to media use and literacy, also remain untested. Greater attention should also be paid to the role of emotion in both the processing of mis/disinformation and its spread (6).
A fourth gap in the literature has to do with better understanding the mechanisms that explain the persistence of mis/disinformation in our minds. Different pathways have been suggested for explaining why mis/disinformation is so difficult to combat. However, relatively few studies have attempted to test competing theories, instead choosing to speculate on explanations post hoc. The functional MRI work of Gordon et al. (71) is both an interesting approach and a promising step in furthering our understanding of information persistence. Without more definitive attempts to explain the process through which mis/disinformation seemingly infects our brains, we are doomed to continue the uphill battle against this content.
A common thread in much of the literature cited in this paper is a focus on individuals—typically everyday citizens—and their perceptions. Of course, mis/disinformation can also influence other populations, including political elites, the media, and funding organizations. Indeed, it is arguably most impactful when these audiences are reached, as they represent potentially powerful pathways to political influence. Unfortunately, there is a relative dearth of work in this space, at least as compared to studies focused on individual perceptions. Notable exceptions can be found in some of the computational work focused on climate change countermovements (e.g., refs. 36 and 38–40). For example, Brulle (93) recently examined the network of political coalitions, including those in the coal and oil and gas sectors, to better understand the organization and structure of a movement opposed to mandatory limits on carbon emissions. Further work focused on the nature and makeup of networks involved in the spread of false content is an especially fruitful path for future research.
Finally, it is worth noting that addressing any of the above gaps in the literature will be very difficult without paying greater attention to issues of conceptualization and operationalization that plague many of the key concepts in the space. Far too many studies have defined or measured misinformation in ways that are actually reflective of different concepts, including disinformation, ignorance, or misunderstandings. A necessary first step in improving our understanding of mis/disinformation impacts and combatting their negative effects, therefore, is to clearly and appropriately define what we mean by key terms and how we should be measuring them in empirical studies of the topic.
Data Availability Statement.
There are no data associated with the paper.
Other factors that have been tested include recency and primacy effects, with recency emerging as a more important contributor to the persistence of misinformation as people generally rely more on recent information in their evaluations of retractions (59). Familiarity and levels of explanatory detail have also been tested (60). The authors found that providing greater levels of detail when correcting a myth produced a more sustained change in belief. They also found that the affirmation of facts worked better than the retraction of myths over the short term (1 wk), but not over a longer term (3 wk), and that this effect was most pronounced among older rather than younger adults. It is also worth noting that combining approaches can enhance their effects. For example, merging a specific warning with a plausible alternative explanation can further reduce the CIE compared with administering either of those approaches separately (58).
Source work has also been popular for combatting the effects of false information. For instance, having the refutation of a rumor come from an unlikely source, such as someone for whom the refutation runs counter to their personal or political interests, can increase the willingness of even partisan individuals to reject the rumor (66). It is worth noting, however, that the author conducted a content analysis of rumor refutation by unlikely sources in the context of healthcare reform and found it to be an exceedingly rare event.
Driven by its role in the proliferation of mis/disinformation, Bode and Vraga (61) have focused their correction studies on social media. One study did so using the “related stories” function on Facebook. This work presented participants a Facebook post that contained inaccurate content and then manipulated the related stories around it to either 1) confirm, 2) correct, or 3) both confirm and correct that information. The analysis revealed a significant reduction in misperceptions among those participants who received content designed to correct it. They later looked at source credibility in the context of information shared on Twitter and found that while a single correction from another social media user failed to significantly reduce misperceptions, a single correction from the CDC could impact misperceptions (62). In fact, corrections from the CDC worked best among those with the highest levels of initial misperception. They further investigated whether providing a source was necessary to curb misperceptions by having two individual commenters discredit the information in a Facebook and Twitter conversation (68). In one condition those users provided a link to debunking news stories from the CDC or Snopes.com, while in the other they did so without reference to any outside sources. Their results suggest a source is needed to correct misperceptions.
Finally, outside of factors related to the misinformation itself or the retraction, individual-level differences have also been tested in the context of the CIE and misinformation correction studies, including racial prejudice (72), worldview and partisanship (70, 91), and skepticism (92), with mixed results scattered across studies.
Gaps in the Literature and Moving Forward
While misinformation remains a relatively new topic of public concern, scholars have been addressing issues in this space for quite some time. The result is a large body of literature, but one with significant gaps. Perhaps most worrisome is that much of the work has focused on combatting misinformation, and, importantly, not disinformation. This distinction is subtle, but important. The bulk of the studies focused on the CIE, for instance, have focused on small journalistic errors in reporting (e.g., misrepresenting the cause of a fire) and have largely avoided issues characterized by more deliberate attempts to deceive and persuade. Of course, the major controversy surrounding false information has less to do with honest errors in writing and much more to do with deliberate attempts to deceive. The early retraction studies (e.g., refs. 58 and 87) have provided a strong foundation of initial findings, but we must push these further with highly partisan issues and audiences.
Related to the point above, relatively few studies have explored methods for inoculating individuals from mis/disinformation. As noted, progress has been made in this space with regard to issuing warnings about things like the CIE prior to misinformation exposure (58). Other work has explored factors like false-balance reporting and the use of “fake experts,” also with promising results (56, 57). While such work does not always completely prevent mis/disinformation from taking hold, it does present a promising avenue for better understanding the causes of mis/disinformation and ways to prevent its spread.
A third gap in the literature, one articulated by Lewandowsky et al. (6), has to do with the relative dearth of studies focused on individual-level differences that exacerbate or attenuate things like the CIE. The authors specifically reference intelligence, memory capacity and updating abilities, and tolerance for ambiguity as factors worthy of research attention. However, other factors, including elaborative processing, social monitoring, and a host of variables related to media use and literacy, also remain untested. Greater attention should also be paid to the role of emotion in both the processing of mis/disinformation and its spread (6).
A fourth gap in the literature has to do with better understanding the mechanisms that explain the persistence of mis/disinformation in our minds. Different pathways have been suggested for explaining why mis/disinformation is so difficult to combat. However, relatively few studies have attempted to test competing theories, instead choosing to speculate on explanations post hoc. The functional MRI work of Gordon et al. (71) is both an interesting approach and promising step in furthering our understanding of information persistence. Without more definitive attempts to explain the process through which mis/disinformation seemingly infects our brains, we are doomed to continue the uphill battle against this content.
A common thread in much of the literature cited in this paper is a focus on individuals—typically everyday citizens—and their perceptions. Of course, mis/disinformation can also influence other populations, including political elites, the media, and funding organizations. Indeed, it is arguably most impactful when these audiences are reached as they represent potentially powerful pathways to political influence. Unfortunately, there is a relative dearth of work in this space, at least as compared to studies focused on individual perceptions. Notable exceptions can be found in some of the computational work focused on climate change countermovements (e.g., refs. 36 and 38⇓–40). For example, Brulle (93) recently examined the network of political coalitions, including those in coal and oil and gas sectors, to better understand the organization and structure of a movement opposed to mandatory limits on carbon emissions. Further work focused on the nature and makeup of networks involved in the spread of false content is an especially fruitful path for future research.
Finally, it is worth noting that addressing any of the above gaps in the literature will be very difficult without paying greater attention to issues of conceptualization and operationalization that plague many of the key concepts in the space. Far too many studies have defined or measured misinformation in ways that are actually reflective of different concepts, including disinformation, ignorance, or misunderstandings. A necessary first step in improving our understanding of mis/disinformation impacts and combatting their negative effects, therefore, is to clearly and appropriately define what we mean by key terms and how we should be measuring them in empirical studies of the topic."
The novel COVID-19 (SARS-CoV-2) global pandemic is generating a great deal of misinformation. The commentary below, "Mental health consequences of COVID-19 media coverage: the need for effective crisis communication practices," speaks directly to this:
"Commentary Open Access Published: 05 January 2021 Mental health consequences of COVID-19 media coverage: the need for effective crisis communication practices Zhaohui Su, Dean McDonnell, Jun Wen, Metin Kozak, Jaffar Abbas, Sabina Šegalo, Xiaoshan Li, Junaid Ahmad, Ali Cheshmehzangi, Yuyang Cai, Ling Yang & Yu-Tao Xiang Globalization and Health volume 17, Article number: 4 (2021) Cite this article28k Accesses 61 Citations 27 Altmetric Metricsdetails Abstract During global pandemics, such as coronavirus disease 2019 (COVID-19), crisis communication is indispensable in dispelling fears, uncertainty, and unifying individuals worldwide in a collective fight against health threats. Inadequate crisis communication can bring dire personal and economic consequences. Mounting research shows that seemingly endless newsfeeds related to COVID-19 infection and death rates could considerably increase the risk of mental health problems. Unfortunately, media reports that include infodemics regarding the influence of COVID-19 on mental health may be a source of the adverse psychological effects on individuals. Owing partially to insufficient crisis communication practices, media and news organizations across the globe have played minimal roles in battling COVID-19 infodemics. Common refrains include raging QAnon conspiracies, a false and misleading “Chinese virus” narrative, and the use of disinfectants to “cure” COVID-19. With the potential to deteriorate mental health, infodemics fueled by a kaleidoscopic range of misinformation can be dangerous. Unfortunately, there is a shortage of research on how to improve crisis communication across media and news organization channels. This paper identifies ways that legacy media reports on COVID-19 and how social media-based infodemics can result in mental health concerns. This paper discusses possible crisis communication solutions that media and news organizations can adopt to mitigate the negative influences of COVID-19 related news on mental health. Emphasizing the need for global media entities to forge a fact-based, person-centered, and collaborative response to COVID-19 reporting, this paper encourages media resources to focus on the core issue of how to slow or stop COVID-19 transmission effectively. Background Similar to pandemics like the 1918–1919 influenza outbreak, the Coronavirus Disease 2019 (COVID-19) is a once-in-a-century event [1]. Different from previous global health crises, the impact of COVID-19 is not distant, rather, it is close to home, catastrophic, and ongoing—as of December 1st, approximately 63.3 million confirmed cases and 1.47 million deaths were known to be caused by COVID-19 [2]. The scope and severity of the pandemic have further fueled a global mental health crisis, especially among underserved populations like older adults, healthcare professionals, and women [3]. It is estimated that in October 2020, more people in Japan have died of suicide (2153) than COVID-19 (2087) [4]. Compared to numbers in 2019, there was a 82.6% rise among Japanese women who died of suicide in October, 2020 [4].Though almost a year has passed since the first COVID-19 outbreak, epidemiologists are still working on understanding COVID-19’s clinical features [5]. In addition to its unknown viral characteristics, a key contributor fueling the destructive power of COVID-19 is its unprecedented transmissibility [6,7,8]. COVID-19’s ability to spread fast and far in a short period is rare, even among other pandemics [6,7,8]. 
This rapid pace of transmission, coupled with consequent spikes in infection and death, has caused a range of physical and psychological issues in individuals across the globe [9]. Challenging to identify or fully “cure”, mental health services were facing numerous, but resource-constraining pandemics like COVID-19 have exacerbated these issues [9,10,11,12].Mental health is “a state of well-being in which the individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to his or her community” [13]. Amid a global crisis, mental health issues can have severe health consequences on personal and population health, ranging from anxiety, distress or depression, to suicidal ideation or suicide [3, 14, 15]. COVID-19 has been a source of complex, multifaceted stress for many [16,17,18,19,20,21,22]. The fears and uncertainty associated with the virus, together with the anxiety and stress following from lockdowns and social distancing mandates, have exacerbated mental health issues to varying degrees throughout society [23,24,25]. Not only diminishing the mental health and well-being of individuals, COVID-19 has also limited the services people can access; the rationing of medical resources during the COVID-19 pandemic has instigated a restructuring and repurposing across mental health institutions to deal with the pandemic [26,27,28]. Well-intentioned measures, such as lockdowns and social distancing, have further diminished access to mental health services [10], with many providers forced to close; leaving people little to no access to on-site assistance [26,27,28].In addition to (1) the fear and uncertainty associated with COVID-19, (2) the anxiety and distress caused by lockdowns and social distancing mandates, and (3) limited access to mental health services [23,24,25], the unending barrage of news from legacy media outlets and social media platforms has further complicated the situation [18, 29, 30]. Media attention has disproportionately directed toward the COVID-19 infodemic, with little consideration for how pandemic-related media coverage might influence people’s mental health. Moreover, the misinformation and disinformation surrounding COVID-19 - ranging from a false and misleading “Chinese virus” narrative to using disinfectants to “cure” COVID-19 - has affected individuals’ mental and physical health and well-being [18, 19, 29, 31, 32]. Although some useful insight is available, scarce research has explored ways to mitigate the mental health consequences of COVID-19 media coverage.Evidence shows that in times of global crisis such as COVID-19, crisis communication can, cost-effectively, address multifaceted issues. Crisis communication refers to “the collection, processing, and dissemination of information required to address a crisis situation” [33]. Though many developments of the field of crisis communication occurred in the past decades (e.g., the situational crisis communication theory developed by Timothy Coombs in 1995), crisis communication has a long history and is often contributed to eminent public figures such as Caesar and Confucius [34,35,36,37]. 
With the help of exemplar (e.g., Johnson & Johnson’s effective management of the Cyanide-Laced Tylenol Capsules crisis), as well as inadequate crisis communication practices (e.g., the United States government’s mismanagement of Hurricane Katrina), a growing body of work has acknowledged crisis communication’s role in mitigate negative impacts of adverse events [38,39,40]. Therefore, to address this research gap, this paper aims to identify areas where legacy media reports on COVID-19 and social media-fueled infodemics can harm people’s mental health. This paper outlines potential crisis communication solutions that media and news organizations can adopt to alleviate the mental health consequences of COVID-19 coverage.Coverage of COVID-19 by legacy media Legacy media encompasses “media originally distributed using a pre-internet medium (print, radio, television), and media companies whose original business was in pre-Internet media, regardless of how much of their content is now available online” [41]. Three forms of coverage can broadly classify the impact of legacy media coverage of COVID-19 on people’s mental health issues: (1) balanced, fact-based, and truth-oriented; (2) biased and misleading; and (3) false and dishonest.Balanced, fact-based, and truth-oriented COVID-19 media coverage COVID-19 media coverage is inherently harmful; the disease represents an ongoing, deadly pandemic [2]. This intrinsic negativity, which naturally transfers to media coverage of the virus, could cause mental health issues [42]. Research on media effects has long documented that negative news can lead to mild to severe mental health issues among consumers [42]. Importantly, due to the scale and severity of COVID-19, media attention has been disproportionately focused on pandemic-related news, which could further affect individuals already facing more significant mental health challenges [42]. It is important to note that while balanced, fact-based, and truth-oriented COVID-19 media coverage might be difficult to achieve, it is important that media organizations, as pillars of the Fourth Estate [43], strive to meet these standards to their best abilities.Biased and misleading COVID-19 media coverage When news is biased and misleading, the adverse effects of COVID-19 media coverage on personal and population health and well-being could be more pronounced [44,45,46]. Previous studies found that right-leaning media outlets often issue biased and misleading reports on COVID-19 [46], which could, in turn, facilitate the spread of misinformation on the virus. Analysis of a sample of 38 million media reports from January 1 to May 25, 2020 shows that a staggering of 84% of misinformation distributed by legacy media was neither challenged or fact-checked before they reached the public, effectively exposing countless number of people to misinformation, such as “miracle cures” or the “Democratic Party hoax,” that could result in substantial human and economic consequences [47]. It is also important to note that fear and panic generated by COVID-19 related misinformation could have a long-lasting effect on people’s mental health that outlives COVID-19 media cycles [48].False and dishonest COVID-19 media coverage Perhaps the most problematic type of media coverage on COVID-19 involves content that is false and dishonest [18,19,20,21]. 
While legacy media practitioners uphold the founding pillars of the industry, journalistic values and ethical standards, the prevalence of narratives referring to the “Wuhan virus,” “Chinese virus,” and “China virus” in legacy media reports on COVID-19 suggests that some outlets are fully capable of producing baseless, and sensational news [18,19,20,21]. Directly associating a group of people, nation, and entire race to a virus will inevitably evoke substantial mental health concerns among those targeted [18,19,20,21].Another irreversible negative effect of legacy media’s instigation of “fake news” is the deterioration of public trust around COVID-19 [49]. It is challenging to predict what might happen if people decide to ignore COVID-19 information disseminated through legacy media outlets, where health experts and government officials share the latest developments related to the virus. What is not difficult to imagine is the human and economic consequences tied to a deliberately “ignorant” public; the results could be catastrophic [50].COVID-19 infodemics and social media COVID-19 infodemics are growing at a pandemic rate [51]. Infodemics involve the purposeful spread of misinformation and disinformation via the media, particularly on social media platforms. COVID-19 infodemics can detract from health experts’ efforts, fuelling public fear, uncertainty, and mistrust, which could have grave personal and economic consequences [51,52,53,54,55,56]. Infodemics involve an array of topics on which misinformation and disinformation are publicized through tweets and Facebook posts, oftentimes powered by interested individuals or groups with ulterior political and economic interests [55, 57]. Typical slants include QAnon conspiracies, the aforementioned “Chinese virus” narrative, and promoting the use of disinfectants to “cure” COVID-19 [51,52,53,54,55,56].Not all COVID-19 infodemics are created equal [58]. For example, the infodemic that promoted the ingestion of disinfectant to utilize its “health benefits” had direct physical and mental health implications to a number of individuals [31, 32, 58, 59]. Between May 1st and June 30th, 2020, there were 15 reported cases of methanol poisoning due to drinking disinfectant; of these cases, four individuals died, and three were discharged with visual impairment [59]. Still, others may mistakenly trust U.S. leaders’ “sarcastic” remarks on COVID-19, which are repeatedly aired on legacy media and various other social media outlets [60, 61].Resource constraints are a hallmark of COVID-19, and media resources are no exception. COVID-19 infodemics, along with smear campaigns endorsed by traditional media outlets, are an outrageous waste of public resources—global media attention should be focused on the health and well-being of the public, mainly because the pandemic is ongoing. In times of global crisis, media resources require investment in the issue of the day: how to slow or stop the spread of COVID-19 [62]. Considering the prevalence of misinformation and disinformation on legacy media and social media platforms, interventions are urgently needed to dispel COVID-19 infodemics and ensure related media coverage does not lead to unintended consequences; effective crisis communication practices are one such approach [62,63,64].Crisis communication amid COVID-19 In times of global pandemics such as COVID-19, crisis communication is indispensable in dispelling fear and uncertainty and unifying citizens in a collective fight against disease [62,63,64]. 
A fundamental attribution of crisis communication is that it is usually adopted as an emergency communication strategy when at least three crises are at play: (1) a crisis or unprecedented event with widespread personal and economic consequences (e.g., the COVID-19 pandemic); (2) a communication crisis that could prevent key stakeholders from working towards a solution (e.g., COVID-19 infodemics); and (3) a potential trust crisis either already present or in development, partially due to the first two crises (e.g., public trust crises).To address these triple crises, society at large must take several steps: (1) rapidly develop an evidence-based, tailored disaster preparedness plan with the potential to curb the pandemic; (2) carefully execute this plan with speed and precision; and (3) communicate this plan and corresponding procedures effectively to the public in a timely, transparent, and truth-oriented fashion (i.e., effective crisis communication). Overall, effectively sharing public health updates with society in a reasonable and honest manner is paramount.In addition to providing the public with trustworthy information, proactive decisions are needed from media professionals, health experts, and government officials to ensure effective delivery of COVID-19 updates to the public (i.e., so as not to cause unintended consequences involving mental health). In other words, crisis communication during COVID-19, especially in light of the mental health consequences associated with relevant media coverage, should have three objectives: (1) to communicate credible and reliable COVID-19 information with the public in a timely, transparent, and truth-oriented manner; (2) to eliminate misinformation and disinformation and halt connected infodemics; and (3) to ensure that the delivery of COVID-19 information to the public leads to no unintended consequences (i.e., mental health problems) (see Fig. 1).Fig. 1📷Antecedents to crisis communication and possible solutionsFull size imageCommunicate credible and reliable COVID-19 related information During the pandemic, many governments, such as the Chinese [65], Irish [66], Finnish [67], and Norwegien government [68], have managed to communicate COVID-19 strategies effectively with the public. Take the Chinese government for instance. Starting from the first outbreak, the Chinese government has been delivering timely COVID-19 updates that are (1) tailored to the general public’s needs and wants to enhance relevancy; (2) disseminated via traditional and social media outlets to increase reach and impact; and (3) presented by key health and government officials to boost message credibility are available to the public daily [69,70,71]. Along with avoiding potential mental health issues, these crisis communication efforts also have the potential to dispel people’s fear and uncertainty about COVID-19 and improve their compliance with pandemic-related health and safety procedures such as lockdowns and face mask mandates [69,70,71].Unprecedented times call for unprecedented measures [30]. Technology companies, including Google, Twitter, Facebook, and TikTok, can disseminate credible and reliable COVID-19 information by developing tailored algorithms to promote search results, tweets, or posts written by vetted epidemiologists or other health experts. Doing so could initiate a movement to communicate credible, reliable COVID-19 information with the public in a timely, transparent, and truth-focused fashion. 
Notably, the way public-facing messages are designed, developed, and delivered (i.e., in a persuasive manner that is relatable to the public) also influences communication outcomes [72].Eliminating COVID-19 infodemics Relying on Health organizations and government agencies alone is not enough; all key stakeholders must be involved [69,70,71, 73]. Public health campaigns that target the dangers of COVID-19 infodemics require development, and information that educates individuals on how to avoid being a conduit of misinformation or disinformation is needed. Given that a considerable proportion of the public lack the health literacy needed to distinguish credible information from misinformation or disinformation [50], educational programs should be established to ensure that infodemics will become less prevalent both during COVID-19 and in the future.Despite promising initiatives [74], media companies should assume a more significant role in controlling the spread of COVID-19 infodemics. Research shows that merely adding an accuracy reminder while people are perusing information online can substantially enhance their ability to identify fake news [75]. This finding is encouraging, as it suggests that effective measures to curb the spread of COVID-19 infodemics can be highly cost-effective. In addition to making individual decisions, perhaps social media companies should organize a collaborative response, such as through a crowdsourced and widely shared “Infodemic Response Checklist” [53]. This effort would help the social media environment at large establish a better system to protect the public from the harm of COVID-19 infodemics.Overall, health experts should lead in quelling COVID-19 infodemics. As top epidemiologists like Dr. Anthony Fauci have demonstrated, health experts need to be closely connected with their main “customers” or the general public to facilitate effective communication [76,77,78]. Health experts also need to be more participatory in the public health decision-making process; in so doing, less disinformation will be disseminated by government officials while more decisions will be grounded in scientific evidence.Fact-based and people-centered COVID-19 crisis communication strategy COVID-19 affects people of all demographics [79]. It is difficult not to form an opinion about an enduring pandemic that continues to threaten lives, livelihoods, and gross domestic product (GDP) [2]. However, given the personal and economic consequences tied to biased and misleading [44,45,46] or blatantly false and malicious [59,60,61] information, it is imperative for media professionals, health experts, and government officials to develop a fact-based, people-centered [17] COVID-19 crisis communication strategy. In the context of our study, fact-based and people-centered crisis communication strategy is defined as communication endeavors deliver facts that matter to the people without framing the numbers or statistics based on personal views or ulterior motives (e.g., political gains or economic interests).This way, well-intentioned information can be effectively delivered to the public without unintended consequences. It is important to note that educational interventions might be also needed for healthcare professionals, as a growing body of research shows that healthcare professionals often lack necessary levels of knowledge or risk perception needed to be vigilant about COVID-19 misinformation or disinformation [80,81,82]. 
Considering the important role healthcare professionals serve in patient education and the fact that many healthcare professionals also face substantial mental health challenges [83], educational interventions may be incremental in addressing infodemic-induced challenges these frontline workers face.Concluding remarks Overall, in times of global pandemics like COVID-19, crisis communication can play a key part in reducing fear and uncertainty while inspiring a unified fight against health threats [62,63,64, 84]. There has yet to be a national solution or unilateral communication during a pandemic, but considering the pronounced need for valuable media resources during COVID-19 for the greater good [50], health experts and media professionals have a responsibility to step up and put a stop to infodemics and smear campaigns. Stakeholders can battle inaccurate reporting with credible, reliable, and trustworthy information alongside well-developed tools and techniques in crisis communication. Transparency and legitimacy will ultimately help preserve people’s health and well-being while bringing global media attention back to a genuine public health concern: how to prevent COVID-19 from spreading.For future research directions, we believe there is a pronounced need to capitalize on media or communication resources to develop timely health solutions that have the potential to avoid immediate human consequences caused by COVID-19. Since the onset of the pandemic, in Turkey alone, approximately 100 musicians have committed suicide due to financial problems caused by COVID-19 [85]. We believe regional, national, and international health organizations and government agencies should invest more media resources into informing and emphasizing help and resources available to people amid the pandemic, compared with updates on COVID-19 infection and death tallies. In other words, it is important for media organizations to honor their roles as pillars of the Fourth Estate amid COVID-19 [43], starting by pouring media resources into issues that matter to individuals’ lives and livelihoods, rather than sensational reports that might boost Nielsen ratings, increase sales numbers, fuel infodemics, yet add limited benefits to public health and welfare [47]. Availability of data and materials No. Abbreviations COVID-19:Coronavirus disease 2019GDP:Gross domestic productReferences 1.Gates B. Responding to Covid-19 — a once-in-a-century pandemic? N Engl J Med. 2020;382(18):1677–9.CAS PubMed Article PubMed Central Google Scholar 2.John Hopkins University. The COVID-19 global map. 2020 [cited 2020 October 25]; Available from: https://coronavirus.jhu.edu/map.html.Google Scholar 3.Wang Y, et al. Health care and mental health challenges for transgender individuals during the COVID-19 pandemic. Lancet Diabetes Endocrinol. 2020;8(7):564–5.CAS PubMed PubMed Central Article Google Scholar 4.The Japan Times. Japan suicides rise as economic impact of coronavirus hits home. 2020 [cited 2020 December 1st]; Available from: https://www.japantimes.co.jp/news/2020/11/11/national/japan-suicide-rise-coronavirus/.Google Scholar 5.Bussani R, et al. Persistence of viral RNA, pneumocyte syncytia and thrombosis are hallmarks of advanced COVID-19 pathology. EBioMedicine. 2020;61:103104.PubMed PubMed Central Article Google Scholar 6.Fani M, Teimoori A, Ghafari S. Comparison of the COVID-2019 (SARS-CoV-2) pathogenesis with SARS-CoV and MERS-CoV infections. Future Virol. 2020. https://doi.org/10.2217/fvl-2020-0050. 7.Mahase E. 
Coronavirus: Covid-19 has killed more people than SARS and MERS combined, despite lower case fatality rate. BMJ. 2020;368:m641.PubMed Article PubMed Central Google Scholar 8.Wilder-Smith A, Chiew CJ, Lee VJ. Can we contain the COVID-19 outbreak with the same measures as for SARS? Lancet Infect Dis. 2020;20(5):e102–7.CAS PubMed PubMed Central Article Google Scholar 9.Xiang Y-T, et al. Timely mental health care for the 2019 novel coronavirus outbreak is urgently needed. Lancet Psychiatry. 2020;7(3):228–9.PubMed PubMed Central Article Google Scholar 10.Mamun MA, Griffiths MD. First COVID-19 suicide case in Bangladesh due to fear of COVID-19 and xenophobia: possible suicide prevention strategies. Asian J Psychiatr. 2020;51:102073.PubMed PubMed Central Article Google Scholar 11.Sher L. The impact of the COVID-19 pandemic on suicide rates. QJM An Int J Med. 2020;113(10):707–712. https://doi.org/10.1093/qjmed/hcaa202. 12.Hughes H, et al. Uncomfortably numb: Suicide and the psychological undercurrent of COVID-19. Ir J Psychol Med. 2020;37(3):159–160. https://doi.org/10.1017/ipm.2020.49. 13.World Health Organization. Promoting mental health: Concepts, emerging evidence, practice (Summary Report). Geneva: World Health Organization; 2004.Google Scholar 14.Goldmann E, Galea S. Mental health consequences of disasters. Annu Rev Public Health. 2014;35(1):169–83.PubMed Article Google Scholar 15.Gunnell D, et al. Suicide risk and prevention during the COVID-19 pandemic. Lancet Psychiatry. 2020;7(6):468–71.PubMed PubMed Central Article Google Scholar 16.Chung RY-N, Li MM. Anti-Chinese sentiment during the 2019-nCoV outbreak. Lancet. 2020;395(10225):686–7.CAS PubMed PubMed Central Article Google Scholar 17.Chung RY-N, et al. Using a public health ethics framework to unpick discrimination in COVID-19 responses. Am J Bioeth. 2020;20(7):114–6.PubMed Article Google Scholar 18.Zheng Y, Goh E, Wen J. The effects of misleading media reports about COVID-19 on Chinese tourists’ mental health: a perspective article. Anatolia. 2020;31(2):337–40.Article Google Scholar 19.Wen J, et al. Effects of misleading media coverage on public health crisis: a case of the 2019 novel coronavirus outbreak in China. Anatolia. 2020a;31(2):331–6.Article Google Scholar 20.Rovetta A, Bhagavathula AS. COVID-19-related web search behaviors and infodemic attitudes in Italy: Infodemiological study. JMIR Public Health Surveill. 2020;6(2):e19374.PubMed PubMed Central Article Google Scholar 21.Su Z, et al. Time to stop the use of ‘Wuhan virus’, ‘China virus’ or ‘Chinese virus’ across the scientific community. BMJ Glob Health. 2020;5(9):e003746.PubMed PubMed Central Article Google Scholar 22.Budhwani H, Sun R. Creating COVID-19 stigma by referencing the novel coronavirus as the "Chinese virus" on twitter: quantitative analysis of social media data. J Med Internet Res. 2020;22(5):e19301.PubMed PubMed Central Article Google Scholar 23.Mertens G, et al. Fear of the coronavirus (COVID-19): predictors in an online study conducted in march 2020. J Anxiety Disorders. 2020;74:102258.Article Google Scholar 24.Tull MT, et al. Psychological outcomes associated with stay-at-home orders and the perceived impact of COVID-19 on daily life. Psychiatry Res. 2020;289:113098.CAS PubMed PubMed Central Article Google Scholar 25.Rossi R, et al. COVID-19 pandemic and lockdown measures impact on mental health among the general population in Italy. Front Psychiatry. 2020;11:790.PubMed PubMed Central Article Google Scholar 26.Brown E, et al. 
The potential impact of COVID-19 on psychosis: a rapid review of contemporary epidemic and pandemic research. Schizophr Res. 2020;222:79–87. https://doi.org/10.1016/j.schres.2020.05.005. 27.Fiorillo A, Gorwood P. The consequences of the COVID-19 pandemic on mental health and implications for clinical practice. European Psychiatry. 2020;63(1):e32.PubMed PubMed Central Article CAS Google Scholar 28.Rajkumar RP. COVID-19 and mental health: a review of the existing literature. Asian J Psychiatr. 2020;52:102066.PubMed PubMed Central Article Google Scholar 29.Vazquez, M. Calling COVID-19 the “Wuhan Virus” or “China Virus” is inaccurate and xenophobic. 2020 [cited 2020 September 25]; Available from: https://medicine.yale.edu/news-article/23074/.Google Scholar 30.Tangcharoensathien V, et al. Framework for managing the COVID-19 infodemic: methods and results of an online, crowdsourced WHO technical consultation. J Med Internet Res. 2020;22(6):e19659.PubMed PubMed Central Article Google Scholar 31.Yamey G, Gonsalves G. Donald Trump: a political determinant of covid-19. BMJ. 2020;369:m1643.PubMed Article PubMed Central Google Scholar 32.Reihani, H., et al., Non-evidenced based treatment: An unintended cause of morbidity and mortality related to COVID-19. Am J Emerge Med, 2020: p. S0735–6757(20)30317-X. 33.Coombs WT. In: Coombs WT, Holladay SJ, editors. Parameters for crisis communication, in The handbook of crisis communication. Malden: Wiley-Blackwell; 2012. p. 17–53.Google Scholar 34.Heath RL, O'Hair HD. Handbook of risk and crisis communication. New York: Routledge; 2020.Book Google Scholar 35.Heath RL. Best practices in crisis communication: evolution of practice through research. J Appl Commun Res. 2006;34(3):245–8.Article Google Scholar 36.Yu Th, Wen WC. Crisis communication in Chinese culture: A case study in Taiwan. Asian J Commun. 2003;13(2):50–64.CAS Article Google Scholar 37.Huang Y-HC, Wu F, Cheng Y. Crisis communication in context: cultural and political influences underpinning Chinese public relations practice. Public Relat Rev. 2016;42(1):201–13.PubMed Article Google Scholar 38.Ulmer RR, Sellnow TL, Seeger MW. Effective crisis communication: Moving from crisis to opportunity. Thousand Oaks: Sage Publications; 2017.Google Scholar 39.Benoit WL. Image repair discourse and crisis communication. Public Relat Rev. 1997;23(2):177–86.Article Google Scholar 40.Roshan M, Warren M, Carr R. Understanding the use of social media by organisations for crisis communication. Comput Hum Behav. 2016;63:350–61.Article Google Scholar 41.Miel P, Faris R. News and information as digital media come of age, Berkman Center for Internet and Society. Cambridge: Harvard University; 2008.Google Scholar 42.Olagoke AA, Olagoke OO, Hughes AM, et al. Br J Health Psychol. 2020;25(4):e12427. https://doi.org/10.1111/bjhp.12427. 43.Schultz J. Reviving the fourth estate: Democracy, accountability and the media. U.K: Cambridge University Press; 1998. 44.Tasnim S, Hossain MM, Mazumder H. Impact of rumors and misinformation on covid-19 in social media. J Prev Med Public Health. 2020;53(3):171–4.PubMed PubMed Central Article Google Scholar 45.Simonov, A., et al., The persuasive effect of fox news: Non-compliance with social distancing during the covid-19 pandemic. 2020, National Bureau of Economic Research.Book Google Scholar 46.Motta M, Stecula D, Farhart C. How right-leaning media coverage of COVID-19 facilitated the spread of misinformation in the early stages of the pandemic in the U.S. 
Canadian Journal of Political Science. Revue Canadienne De Science Politique; 2020. p. 1–8.Google Scholar 47.Evanega S, et al. Coronavirus misinformation: Quantifying sources and themes in the COVID-19 ‘infodemic’. Ithaca: Cornell University; 2020.Google Scholar 48.Ahmad AR, Murad HR. The impact of social media on panic during the covid-19 pandemic in Iraqi Kurdistan: online questionnaire study. J Med Internet Res. 2020;22(5):e19556.PubMed PubMed Central Article Google Scholar 49.Bunker D. Who do you trust? The digital destruction of shared situational awareness and the COVID-19 infodemic. Int J Inf Manag. 2020;55:102201–1. https://doi.org/10.1016/j.ijinfomgt.2020.102201. 50.Okan O, et al. Coronavirus-Related Health Literacy: A Cross-Sectional Study in Adults during the COVID-19 Infodemic in Germany. Int J Environ Res Public Health. 2020;17(15):5503. https://doi.org/10.3390/ijerph17155503. 51.Kouzy R, et al. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on twitter. Cureus. 2020;12(3):e7255.PubMed PubMed Central Google Scholar 52.Orso D, et al. Infodemic and the spread of fake news in the COVID-19-era. Eur J Emerge Med. 2020. https://doi.org/10.1097/MEJ.0000000000000713. 53.Mheidly N, Fares J. Leveraging media and health communication strategies to overcome the COVID-19 infodemic. J Public Health Policy. 2020;41(4):410–420. https://doi.org/10.1057/s41271-020-00247-w. 54.Zandifar A, Badrfam R. Iranian mental health during the COVID-19 epidemic. Asian J Psychiatr. 2020;51:101990.PubMed PubMed Central Article Google Scholar 55.Brennen JS, et al. Types, sources, and claims of Covid-19 misinformation. Reuters Institute. 2020;7:3.1.Google Scholar 56.Ferrara E. What types of COVID-19 conspiracies are populated by twitter bots? First Monday; 2020.Google Scholar 57.Dyer O. Trump claims public health warnings on covid-19 are a conspiracy against him. BMJ. 2020;368:m941.PubMed Article PubMed Central Google Scholar 58.Liu M, et al. Internet searches for unproven COVID-19 therapies in the United States. JAMA Intern Med. 2020;180(8):1116–8.CAS PubMed PubMed Central Article Google Scholar 59.Yip L, et al. Serious adverse health events, including death, associated with ingesting alcohol-based hand sanitizers containing methanol—Arizona and New Mexico, may–June 2020. Morb Mortal Wkly Rep. 2020;69(32):1070.CAS Article Google Scholar 60.Finset A, et al. Effective health communication - a key factor in fighting the COVID-19 pandemic. Patient Educ Couns. 2020;103(5):873–6. https://doi.org/10.1016/j.pec.2020.03.027. 61.Chary, M., et al., Geospatial correlation between COVID-19 health misinformation on social media and poisoning with household cleaners. medRxiv, 2020: p. 2020.04.30.20079657. 62.Fauci AS, Lane HC, Redfield RR. Covid-19 - navigating the uncharted. N Engl J Med. 2020;382(13):1268–9.CAS PubMed PubMed Central Article Google Scholar 63.The Lancet. COVID-19: fighting panic with information. Lancet. 2020;395(10224):537.CAS PubMed PubMed Central Article Google Scholar 64.Wu AW, Connors C, Everly GS. COVID-19: peer support and crisis communication strategies to promote institutional resilience. Ann Intern Med. 2020;172(12):822–3.PubMed Article PubMed Central Google Scholar 65.Li Y, Chandra Y, Kapucu N. Crisis coordination and the role of social media in response to COVID-19 in Wuhan, China. Am Rev Public Adm. 2020;50(6–7):698–705. https://doi.org/10.1177/0275074020942105. 66.Colfer B. 
Herd-immunity across intangible borders: Public policy responses to COVID-19 in Ireland and the UK. Europ Policy Analysis. 2020;6(2):203–225. https://doi.org/10.1002/epa2.1096. 67.Kabiraj S, Lestan F. COVID-19 outbreak in Finland: Case study on the management of pandemics. In: Babu G, Qamaruddin M, editors. International Case Studies in the Management of Disasters: Emerald Publishing Limited; 2020. p. 213–29. 68.Christensen T, Lægreid P. Balancing governance capacity and legitimacy: how the Norwegian government handled the covid-19 crisis as a high performer. Public Adm Rev. 2020;80(5):774–9. https://doi.org/10.1111/puar.13241. 69.Liu W, Yue X-G, Tchounwou PB. Response to the COVID-19 epidemic: The Chinese experience and implications for other countries. Int J Environ Res Public Health. 2020;17(7). 70.Trevisan M, Le LC, Le AV. The COVID-19 pandemic: a view from Vietnam. Am J Public Health. 2020;110(8):1152–3.PubMed PubMed Central Article Google Scholar 71.Hale T, et al. Variation in government responses to COVID-19. Blavatnik School of Government Working Paper; 2020. p. 31.Google Scholar 72.Chen Q, et al. Unpacking the black box: how to promote citizen engagement through government social media during the COVID-19 crisis. Comput Hum Behav. 2020;110:106380–106380. https://doi.org/10.1016/j.chb.2020.106380. 73.Zhang L, Li H, Chen K. Effective risk communication for public health emergency: Reflection on the COVID-19 (2019-nCoV) outbreak in Wuhan, China. Healthcare (Basel, Switzerland). 2020;8(1):64. https://doi.org/10.3390/healthcare8010064. 74.Roth Y, Pickles N. Updating our approach to misleading information: Twitter, Inc; 2020. 75.Pennycook G, et al. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020;31(7):770–80.PubMed PubMed Central Article Google Scholar 76.Abbasi J. Anthony Fauci, MD, on COVID-19 vaccines, schools, and Larry Kramer. JAMA. 2020;324(3):220–2.PubMed Article PubMed Central Google Scholar 77.Folkenflik, D. Dr. Anthony Fauci is talking to just about anyone about the coronavirus. 2020 [cited 2020 September 17]; Available from: https://www.npr.org/2020/04/01/825499536/dr-anthony-fauci-is-talking-to-just-about-anyone-about-the-coronavirus.Google Scholar 78.Cohen, J. ‘I’m going to keep pushing.’ Anthony Fauci tries to make the White House listen to facts of the pandemic. 2020 [cited 2020 September 17]; Available from: https://www.sciencemag.org/news/2020/03/i-m-going-keep-pushing-anthony-fauci-tries-make-white-house-listen-facts-pandemic.Book Google Scholar 79.Chen H, et al. Clinical characteristics and intrauterine vertical transmission potential of COVID-19 infection in nine pregnant women: a retrospective review of medical records. Lancet. 2020;395(10226):809–15.CAS PubMed PubMed Central Article Google Scholar 80.Bhagavathula AS, et al. Knowledge and perceptions of COVID-19 among health care workers: cross-sectional study. JMIR Public Health Surveill. 2020;6(2):e19160.PubMed PubMed Central Article Google Scholar 81.Taghrir MH, Borazjani R, Shiraly R. COVID-19 and Iranian medical students; a survey on their related-knowledge, preventive behaviors and risk perception. Arch Iran Med. 2020;23(4):249–54.Article Google Scholar 82.Olum R, et al. Coronavirus disease-2019: knowledge, attitude, and practices of health care workers at makerere university teaching hospitals, Uganda. Front Public Health. 2020;8:181.PubMed PubMed Central Article Google Scholar 83.Spoorthy MS, Pratapa SK, Mahant S. 
Mental health problems faced by healthcare workers due to the COVID-19 pandemic-a review. Asian J Psychiatr. 2020;51:102119.PubMed PubMed Central Article Google Scholar 84.Su Z, et al. A race for a better understanding of COVID-19 vaccine non-adopters. Brain Behav Immun Health. 2020;9:100159.PubMed PubMed Central Article Google Scholar 85.Tokyay, M. Pandemic threatens livelihood of Turkish musicians, driving many to suicide. 2020 [cited 2020 October 3]; Available from: https://arab.news/5vxz5.
Cite this article: Su, Z., McDonnell, D., Wen, J. et al. Mental health consequences of COVID-19 media coverage: the need for effective crisis communication practices. Globalization and Health 17, 4 (2021). https://doi.org/10.1186/s12992-020-00654-4"
[end of Quoted Material]
SOURCE: Mental health consequences of COVID-19 media coverage: the need for effective crisis communication practices (Globalization and Health, 2021)
The publications of the American Psychological Association are a good source of scientific research on the spread of misinformation:
The coronavirus pandemic has evidently given rise to an INFODEMIC!
https://www.apa.org/monitor/2021/03/controlling-misinformation
"Controlling the spread of misinformation" "Psychologists’ research on misinformation may help in the fight to debunk myths surrounding COVID-19" / By Zara Abrams
Date created: March 1, 2021 / Vol. 52, No. 2 / Print version: page 44
37📷 "Misinformation on COVID-19 is so pervasive that even some patients dying from the disease still say it’s a hoax. In March 2020, nearly 30% of U.S. adults believed the Chinese government created the coronavirus as a bioweapon (Social Science & Medicine, Vol. 263, 2020) and in June, a quarter believed the outbreak was intentionally planned by people in power (Pew Research Center, 2020).Such falsehoods, which research shows have influenced attitudes and behaviors around protective measures such as mask-wearing, are an ongoing hurdle as countries around the world struggle to get the virus under control.
Psychological studies of both misinformation (also called fake news), which refers to any claims or depictions that are inaccurate, and disinformation, a subset of misinformation intended to mislead, are helping expose the harmful impact of fake news—and offering potential remedies. But psychologists who study fake news warn that it’s an uphill battle, one that will ultimately require a global cooperative effort among researchers, governments, and social media platforms.
“The fundamental problem with misinformation is that once people have heard it, they tend to believe and act on it, even after it’s been corrected,” says Stephan Lewandowsky, PhD, a professor of psychology at the University of Bristol in the United Kingdom. “Even in the best of all possible worlds, correcting misinformation is not an easy task.”
When are we susceptible to misinformation?
Starting in the 1970s, psychologists showed that even after misinformation is corrected, false beliefs can still persist (Anderson, C. A., et al., Journal of Personality and Social Psychology, Vol. 39, No. 6, 1980).
“When we hear new information, we often think about what it may mean,” says Norbert Schwarz, PhD, a professor of psychology and marketing at the University of Southern California. “If we later hear a correction, it doesn’t invalidate our thoughts—and it’s our own thoughts that can maintain a bias, even when we accept that the original information was false.”
Schwarz identified five criteria that people use to decide whether information is true: compatibility with other known information, credibility of the source, whether others believe it, whether the information is internally consistent, and whether there is supporting evidence (“Metacognition,” in APA Handbook of Personality and Social Psychology, 2015). His studies also show that people are more likely to accept misinformation as fact if it’s easy to hear or read (Consciousness and Cognition, Vol. 8, No. 3, 1999).
Since the 2016 U.S. presidential election, when misinformation spread widely on Facebook and other social media platforms, psychological research on the topic has accelerated. Studies of motivated reasoning by psychologist Peter Ditto, PhD, of the University of California, Irvine, show that people deploy skepticism selectively—for instance, when they’re less critical of ideas that align with their political beliefs (Gampa, A., et al., Social Psychological and Personality Science, Vol. 10, No. 8, 2019). Others have built on Schwarz’s early findings, showing that people are more likely to fall for misinformation when they fail to carefully deliberate the material, whether or not it’s aligned with their political views (Bago, B., et al., Journal of Experimental Psychology: General, Vol. 149, No. 8, 2020). The lead author of one such analysis, Gordon Pennycook, PhD, an assistant professor of psychology at the University of Regina in Saskatchewan, Canada, says this suggests that passive sharers, rather than malicious actors, may be the bigger problem in the fake news phenomenon (Cognition, Vol. 188, 2019).
Six “degrees of manipulation”—impersonation, conspiracy, emotion, polarization, discrediting, and trolling—are used to spread misinformation and disinformation, according to Sander van der Linden, PhD, a professor of social psychology in society at the University of Cambridge in the United Kingdom and director of the Cambridge Social Decision-Making Lab, and his colleagues. For instance, a false news story may quote a fake expert, use emotional language, or propose a conspiracy theory in order to manipulate readers.
Research also reveals individual differences in susceptibility to misinformation. For one, people who use an intuitive reasoning style tend to believe fake news more often than those who rely primarily on analytical reasoning (Journal of Personality, Vol. 88, No. 2, 2020). Political ideology also appears to play a role, with those holding extreme beliefs—particularly on the far right—being most susceptible to misinformation (Baptista, J. P., & Gradim, A., Social Sciences, Vol. 9, No. 10, 2020). Further research is needed to understand the complex interactions between demographic factors such as age and misinformation. Early data indicate that older adults—who are more affected by COVID-19—are sharing more news in general about the virus, including fake news (The State of the Nation: A 50-State COVID-19 Survey, Report #18, October 2020), but they may be less likely to believe it (Royal Society Open Science, Vol. 7, No. 10, 2020). In fact, research has shown that younger people, regardless of political group, are more likely to believe COVID-19 misinformation than older people (The State of the Nation, 2020).
COVID-19 and the infodemic
Regardless of why it’s shared, misinformation surrounding COVID-19 has been so rampant that the World Health Organization (WHO) declared a parallel “infodemic” to describe the scale of fake news and its potential impact on efforts to limit the virus’s spread.
“There’s often a lot of uncertainty in crisis situations, so people come together and start sharing information in a sort of collective sense-making process,” says Kate Starbird, PhD, an associate professor of human-centered design and engineering at the University of Washington, who studies how information travels during crises. “That process can get things right, but it can also get things wrong, producing rumors that turn out to be false.”
For example, when stay-at-home orders first went into effect in March 2020, Starbird and her colleagues tracked how one Medium article, which misrepresented the scientific evidence on social distancing, went viral after several Fox News personalities shared it (Washington Post, May 8, 2020).
Researchers have also started to document the scope of the infodemic. A study that surveyed more than 1,000 U.S. adults in March and July 2020, led by psychologist Daniel Romer, PhD, research director of the University of Pennsylvania’s Annenberg Public Policy Center, found that about 15% believed the pharmaceutical industry created the coronavirus and more than 28% thought it was a bioweapon made by the Chinese government. Those beliefs predicted a subsequent decrease in willingness to wear a mask or take a vaccine (Social Science & Medicine, Vol. 263, 2020).
That pattern also holds in other countries. An analysis of misinformation from five samples across the United States, Europe, and Mexico showed that substantial portions of each population—anywhere from 15% to 37%—believed misinformation about COVID-19 in April and May 2020, representing what the authors call a “major threat to public health.” People who were more susceptible to misinformation were less likely to report complying with public health recommendations and less likely to say they’d get vaccinated (Royal Society Open Science, Vol. 7, No. 10, 2020).
Though research directly tying misinformation to behavior is still limited, exposure to fake news does have real-world consequences. In the political domain, it is correlated with declining trust in mainstream media organizations (Ognyanova, K., et al., The Harvard Kennedy School Misinformation Review, 2020) and likely impacts voting behavior, though more research is needed on the nuances of that relationship (Lazer, D. M. J., et al., Science, Vol. 359, No. 6380, 2018). Misinformation has even spurred violence, for instance when a conspiracy theorist fired a gun inside Washington, D.C.-based pizzeria Comet Ping Pong in 2016. And on the coronavirus front, “the causal link between misinformation and behavior is actually quite direct and visible,” van der Linden says.
He points to attacks on 5G cellular towers in the United Kingdom after an online conspiracy theory linked 5G technology to the virus’s spread, and methanol poisonings in Iran following false claims that alcohol cures COVID-19 (Shokoohi, M., Alcohol, Vol. 87, 2020). One study documents hundreds of deaths and thousands of hospitalizations around the world associated with COVID-19 misinformation, including rumors, conspiracy theories, and stigmas (Islam, M. S., et al., The American Journal of Tropical Medicine and Hygiene, Vol. 103, No. 4, 2020).
Cognitive psychologist Briony Swire-Thompson, PhD, a senior research scientist at the Network Science Institute at Northeastern University, cautions that data collected early in the pandemic may not reflect current beliefs. For example, some people who indicated in the spring or summer of 2020 that they were not willing to take a vaccine may have adjusted their stance as the pandemic has progressed. And misinformation isn’t the only factor in hesitancy toward COVID-19 vaccines. Their speedy development, in addition to well-grounded skepticism of the medical establishment among minority groups, also contributes to public uncertainty.
Still, 21% of U.S. adults said in November 2020 that they don’t plan to get vaccinated, even if more information becomes available (Pew Research Center, 2020)—and psychologists say that countering coronavirus misinformation is necessary for breaking the virus’s grip on society.
Efforts to stop the spread
Psychological research backs several methods of countering misinformation. One is to debunk incorrect information after it has spread. Much more effective, though, is inoculating people against fake news before they’re exposed—a strategy known as “prebunking.”
“Like a vaccine, we expose people to a small dose of misinformation and explain to them how they might be misled,” says Lewandowsky. “If they then encounter that misinformation later, it no longer sticks.”
That’s best achieved by warning people that a specific piece of information is false and explaining why a source might lie or be misinformed about it before they encounter the information organically, says Schwarz. Lewandowsky, Schwarz, van der Linden, and others have shown that prebunking can neutralize misinformation on climate change, vaccines, and other issues (Global Challenges, Vol. 1, No. 2, 2017; Jolley, D., & Douglas, K. M., Journal of Applied Social Psychology, Vol. 47, No. 8, 2017).
Van der Linden and Jon Roozenbeek, PhD, a postdoctoral researcher at the University of Cambridge, developed and tested this technique using “Bad News,” a gamified intervention that simulates a social media feed to teach participants how to distinguish between real and fake news headlines on politicized topics such as climate change or the European refugee crisis. Tests of the game—which more than a million people have played—show that playing it once can boost participants’ ability to identify misinformation, but that the inoculation effect decays after about two months (Maertens, R., et al., Journal of Experimental Psychology: Applied, 2020). “We also found that if we reengage people following the initial intervention, we can boost their response so that the inoculation lasts longer,” van der Linden says.
When the infodemic struck, van der Linden and Roozenbeek built a new online game, “Go Viral!,” which aims to prebunk common misinformation surrounding COVID-19. Players assume the role of a manipulator and practice interacting with others in a social media simulation. The game draws on van der Linden’s six degrees of manipulation (describing the six common ways misinformation is produced), teaching players how emotional language, fake experts, and conspiracy theories can be used to mislead. Through partnerships with the U.K. Cabinet Office, the WHO, and the United Nations, the game has already reached thousands of people. For example, the WHO lists “Go Viral!” as a resource for tackling online misinformation and has featured the game in its newsletters.
Initial results may be promising, but van der Linden says his team hasn’t yet tested their interventions on more skeptical groups, such as people who intentionally spread disinformation. He says his team hopes to reach those groups through its partnerships with organizations like the WHO, which can market the game on Facebook, Twitter, and other social media platforms.
Another way to address misinformation is to encourage people to reflect on the veracity of claims they encounter. A test of COVID-19 misinformation led by Pennycook and his colleagues found that a simple accuracy nudge increased participants’ ability to discern between real and fake news. Participants saw a series of headlines—some true, some false—and rated whether they would share each item. Those in the experimental condition, who were also asked to rate the accuracy of each headline, shared more accurate news content compared with participants in the control group (Psychological Science, Vol. 31, No. 7, 2020). “We tripled the difference in the probability of sharing true versus false information when we drew people’s attention toward accuracy,” Pennycook says.
Media literacy organizations such as the News Literacy Project (NLP) and First Draft are applying such strategies in an effort to dispel misinformation and disinformation on COVID-19 and other issues. NLP’s virtual classroom offers 14 lessons on topics such as conspiracy theories and misinformation, drawing on psychological insights on motivated reasoning, confirmation bias, and cognitive dissonance. Nearly 200,000 middle- and high-school students have completed those courses and the organization’s newsletters reach about 40,000 people each week.
Other groups have created media literacy resources geared toward older adults, who are just as capable of spotting hoaxes but have been disproportionally targeted by disinformation sources (Brashier, N. M., & Schacter, D. L., Current Directions in Psychological Science, Vol. 29, No. 3, 2020). These resources include the Poynter Institute’s MediaWise for Seniors program and AARP’s Fact Tracker interactive videos.
“We want people to understand that disinformation is fundamentally exploitative—that it tries to use our religion, our patriotism, and our desire for justice to outrage us and to dupe us into faulty reasoning,” says Peter Adams, NLP’s senior vice president of education. “Much of that is a psychological phenomenon.”
What’s next in misinformation research
One key to stanching the deluge of misinformation is to halt its spread on social media platforms, but that requires industry buy-in, which has been slow. During the 2020 presidential election, Twitter flagged tweets that contained misleading information about election results—a form of prebunking—and in December, Facebook announced that it would begin removing posts with false claims about COVID-19 vaccines. In a reversal from previous stances, multiple social media companies suspended or banned President Trump from their platforms for inciting violence at the U.S. Capitol in January, while Congress was certifying the electoral vote of the 2020 presidential election.
“This has made an impact, but the problem has certainly grown faster than the solutions,” Starbird says.
Psychologists say that countering misinformation will ultimately require stronger partnerships with social media platforms, which can help disseminate tools such as “Go Viral!” and provide internal data to researchers studying fake news.
“We need to figure out what’s actually happening on these platforms—how often people see false content, for instance—and that’s very hard to do without buy-in,” says Pennycook.
Meanwhile, research is underway to further characterize the spread of misinformation and its effects on behavior. For example, Chrysalis Wright, PhD, an associate lecturer and director of the Media and Migration Lab at the University of Central Florida, is studying how misinformation on COVID-19 affects anti-Asian sentiment. And Starbird is analyzing discourse on mask-wearing on Twitter to understand how people invoke science to prove a point.
What’s most needed, though, is research that shows whether media literacy efforts are effective outside of the context in which they’re taught, says Schwarz. Just because people know how to fact-check doesn’t guarantee they’ll do it in the right context.
“So far, the studies are basically like school tests,” he says. “Developing that skill is a start—but do I recognize when I need to use it?”
Misinformation & psychology’s response
Early efforts to misinform (44 B.C.–A.D. 1439): Coordinated misinformation efforts have been documented throughout recorded history, starting with a political smear campaign against Roman general Mark Antony regarding his relationship with Cleopatra, which used slogans carved on coins. In 1439, the invention of the printing press enabled deceivers to spread falsehoods farther and faster. [1]
Insights on persuasion and belief (1960s–1980s): Psychological research enhanced our understanding of belief—for example, how people evaluate a source’s credibility—and what types of messages tend to be persuasive. Researchers also observed that beliefs persist even after misinformation is corrected and began to test interventions for resisting persuasion. [2]
Refining the psychological literature (1990s–2000s): The field pursued research on dual process theory, which distinguishes between implicit and explicit cognitive processing, and perceptual fluency, which shows that people are more likely to accept false statements as true if they are easy to hear or read. These findings set the stage for later work that tied belief in misinformation to a failure to reflect carefully on material. [3]
Birth of online social media (2004–2006): Facebook and Twitter, launched respectively in 2004 and 2006, facilitated even faster and more efficient dissemination of material. Online social networks meet several of the criteria known by psychologists to make statements persuasive. For example, posts promoting unvetted claims can be endorsed and shared by friends and family. “Social media are practically built for spreading fake news,” says Norbert Schwarz, PhD, a psychologist who studies misinformation.
Research accelerates following the U.S. presidential election (2016): Fake news on social media reached a crescendo surrounding the 2016 U.S. presidential election. Facebook officials testified that up to 60 million bots spread misinformation on its platform, while a study found that a quarter of pre-election tweets linking to news articles shared false or extremely biased information. In response, psychologists accelerated their research on the spread of online misinformation and how to address it. [4]
Psychological interventions tackle misinformation (2018–Present): Psychologists have ramped up efforts to address misinformation, building on years of laboratory and field tests on combating rumors. Key strategies include debunking, preemptive inoculation, and nudges to assess the accuracy of material. [5]
Sources
[1] Posetti, J., & Matthews, A. (2018). A short guide to the history of “fake news” and disinformation. International Center for Journalists.
[2] Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion. Springer; Anderson, C. A., et al. (1980). Perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology, 39(6), 1037–1049; McGuire, W. J. (1964). Some contemporary approaches. Advances in Experimental Social Psychology, 1, 191–229.
[3] Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux; Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of truth. Consciousness and Cognition, 8(3), 338–342; Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50.
[4] Lazer, D. M. J., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096; Bovet, A., & Makse, H. A. (2019). Nature Communications, 10, Article 7.
[5] Lewandowsky, S., et al. (2012). Psychological Science in the Public Interest, 13(3).
Who believes misinformation?
Psychological research looks at individual differences in demographic, personality, and other traits of those who are more likely to believe misinformation and conspiracy theories, with the ultimate goal of characterizing the underlying processes that lead people to accept such claims.
The following findings outline some individual differences psychologists have identified, but they should not be used to generalize across groups regarding belief in misinformation.
Broadly, political conservativism and lower levels of educational attainment are correlated with an increase in susceptibility to fake news (Roozenbeek, J., & van der Linden, S., Humanities & Social Sciences Communications, Vol. 5, 2019).
Conspiracy theories, including around COVID-19, receive more support from men than women (Cassese, E. C., et al., Politics & Gender, Vol. 16, No. 4, 2020).
Thought processes more common among those who hold far-right political beliefs, such as paranoid ideation and distrust of authority, also correlate with an increased endorsement of conspiratorial narratives (van Prooijen, J.-W., et al., Social Psychology and Personality Science, Vol. 6, No. 5, 2015; van der Linden, S., Political Psychology, online first publication, 2020).
In addition, personality traits such as lower levels of agreeableness, conscientiousness, and humility are associated with conspiracy theory belief (Bowes, S. M., et al., Journal of Personality, online first publication, 2020).
One study found that more than half of the variance in endorsement of 9/11 conspiracy theories is explained by personality and individual traits such as political cynicism, agreeableness, and attitudes toward authority (Swami, V., et al., Applied Cognitive Psychology, Vol. 24, No. 6, 2010).
A tendency to see the world as a threatening, nonrandom place without fixed definitions of morality—or to use intuition over analytical thinking when processing information—further predicts conspiratorial belief (Moulding, R., et al., Personality and Individual Differences, Vol. 98, 2016; Swami, V., et al., Cognition, Vol. 133, No. 3, 2014).
When it comes to COVID-19, better performance on numeracy tasks and higher reported trust in scientists correlate with lower susceptibility to misinformation. In several samples, older adults were also less likely to believe coronavirus fake news (Roozenbeek, J., et al., Royal Society Open Science, Vol. 7, No. 10, 2020). Psychologists say more research is needed to understand whether susceptibility to misinformation is a general or “context-dependent” trait—for example, whether people who believe political fake news are the same people who believe COVID-19 fake news (Scherer, L. D., & Pennycook, G., American Journal of Public Health, Vol. 110, No. S3, 2020).
Further reading
- Tackling misinformation: What researchers could do with social media data. Pasquetto, I. V., et al., The Harvard Kennedy School Misinformation Review, 2020.
- The Debunking Handbook 2020. Lewandowsky, S., et al., 2020.
- Coronavirus misinformation: Quantifying sources and themes in the COVID-19 ‘infodemic’. Evanega, S., et al., Cornell Alliance for Science, 2020.
- The psychology of fake news: Accepting, sharing, and correcting misinformation. Greifeneder, R., et al. (Eds.), Routledge, 2020.
[End of Quoted Matter]
SOURCE: https://www.apa.org/monitor/2021/03/controlling-misinformation
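As an aside on the accuracy-nudge study quoted above: the "tripling" Pennycook mentions refers to sharing discernment, the gap between how often people share true versus false headlines. Below is a minimal Python sketch of that arithmetic; the function name and the sharing rates are invented for illustration and are not the study's actual numbers.

def sharing_discernment(share_rate_true, share_rate_false):
    # Discernment = willingness to share true items minus willingness to share false items.
    return share_rate_true - share_rate_false

# Hypothetical sharing rates (fractions of headlines participants said they would share);
# made up for demonstration, not the published results.
control_gap = sharing_discernment(share_rate_true=0.40, share_rate_false=0.33)
nudged_gap = sharing_discernment(share_rate_true=0.42, share_rate_false=0.21)

print(f"Control discernment: {control_gap:.2f}")  # 0.07
print(f"Nudged discernment:  {nudged_gap:.2f}")   # 0.21, roughly three times the control gap

The point of the contrast is only that the nudge widens the true-minus-false gap; the absolute sharing rates themselves matter less.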
This is a transcript of an interview sponsored by the American Psychological Association. If you click on the link, you should be able to access the video presentation:
https://www.apa.org/research/action/speaking-of-psychology/fake-news
"MEMBERS
Home//Psychological Science//Research in Action//Speaking of Psychology//Speaking of Psychology: Fake news and...
Speaking of Psychology: Fake News and Why It Matters
Bonus Episode – Fake News and Why It Matters
In a special bonus episode filmed at APA 2019, the annual meeting of the association, APA director of research and special projects Vaile Wright, PhD, talks with Chrysalis Wright, PhD, associate lecturer at the University of Central Florida, about fake news, how it spreads and why we should care about it.
Transcript
Vaile Wright: Hello and welcome to Speaking of Psychology, a podcast by the American Psychological Association. I'm Dr. Vaile Wright, the director of research and special projects at APA, and I'm guest hosting this podcast today coming to you from APA 2019 in Chicago.
Joining us today is Dr. Chrysalis Wright. She's the director of the Media and Migration Lab at the University of Central Florida and we're gonna be talking about fake news. I mean, my goodness, fake news. It's 2017's word of the year, it just made it into the Oxford English Dictionary and it's everywhere. So, I'm hoping we can really talk about and have a dialogue about what it means when intentionally fabricated content is spread as fact and why we should care about it. So, that's what we're going to talk about today. So, thank you for joining us.
Chrysalis Wright: Thank you for having me.
Vaile Wright: Absolutely. I thought maybe one of the ways to start would be if you could tell a little bit about your background and how you kind of came to studying this type of thing, because I don't think a lot of psychologists necessarily go into this.
Chrysalis Wright: Well, my doctorate degree is in developmental psychology, and I'm in the psychology department at the University of Central Florida, and the majority of my research up until, I don't know, a couple years ago focused on the influence of music. So, most of my research focuses on the impact of, like, entertainment media and other types of media on consumers, and I started to become interested in fake news because of things I started seeing in 2015, 2016. It seemed like that term started to be thrown around a lot, everyone was accusing everyone else of presenting information and everything was called fake news, but alongside that I started to, I think we all started to see an increase in very overt prejudicial behavior that was presented to consumers through, you know, various media avenues and I wanted to know why that was happening. So, that's why I started looking at fake news.
Vaile Wright: And we're all kind of consumers, right? Yeah when you're talking, but we're all consumers of sort of what's going on. So, when we're thinking about the fact that we're all consumers, our family are consumers, you know, how do we explain what is fake news? Why is it a problem?
Chrysalis Wright: Well, it's kind of hard to define because everyone is using the term, but the way I use it or define it from a research perspective might be a little bit different. So, when I'm looking at fake news in terms of research, I'm looking at false, completely false information that kind of perpetuates rumors, spreads misinformation with the intent of providing false information to consumers, which is different than, say, a news report that's presented from a biased perspective.
Vaile Wright: And so, where does satire come into this? Because I came across this website the other day called Taters Gonna Tate.
Chrysalis Wright: Oh, I actually was going to talk about that today.
Vaile Wright: Excellent, so, I mean where do you fit that in? I mean it's a satirical website with very provocative headlines that are meant to be provocative, and it says all over there's these disclaimers for satire, for satire.
Chrysalis Wright: So, that's the problem, not everyone reads the disclaimer. I mean, you know this, not everyone reads the fine print, right? Like usually we just, for instance, if you go to download an app on your phone and you have to agree to the terms and conditions, we don't actually read that, we just hit the box and say whatever and go straight to it. So, we don't read the fine print, but consumers aren't provided with the tools necessary to be able to decipher satire versus actual fake news. And we saw a recent example of how misinterpreting information can actually influence attitudes and even behaviors.
So recently there was an instance that involved two police officers in Louisiana, which might be how you came across the website Taters Gonna Tate in the first place. So, we had a situation where there was a police officer who was on social media, I believe he was on Facebook, and he saw a post that I believe he believed was true. And the post talked about how one of our congresswomen, it said something to the effect that she believed our military were overpaid for their services. And so, he read this article or whatever the information was on Facebook, felt really strongly about it and then posted a comment that was interpreted, rightly, as a death threat against the congresswoman. And then another police officer liked his post. Both of the police officers were fired, especially because the death threat in the post was a bit much, and that's why, you know, they were fired from the force. But the post that they were responding to was from Taters Gonna Tate, like, that's the name of a website, right? But they have a social media account, they posted this post, it was intended to be satire, but obviously when they read it, they didn't understand that it was satire, believed it was factual information and it led to, you know, strong emotions, feeling upset, which led to the posting. And, you know, we know that those types of things can lead to changing our attitudes, we're not certain to the extent at which it can impact our actual behaviors. We know attitudes can influence behaviors, but we have to do more research to connect those dots and make sure that there's a clear path so that we better understand how fake news and misinformation, false information can impact consumers.
Vaile Wright: And I sort of feel like part of it is that there's just so much information out there that it can be hard to decipher which is which. And what seems different to me about today is the way in which political figures are participating in this. People that we are supposed to trust in leadership roles just don't seem to be discerning the information either.
Chrysalis Wright: Well, you can't call everything fake news. Just because information doesn't necessarily align with your personal views that you have already doesn't mean that it's fake. We had a session this morning about fake news and one thing that was brought up is just because something is your truth does not necessarily mean it's the truth. There's a difference there. So, just because people disagree with you doesn't mean that the information that they're presenting is fake or false. It doesn't help that we are bombarded with all of this information all the time. A lot of the information that's you know kind of determined to be fake news is spread through social media. Almost everyone has a social media account that includes Facebook, it includes Twitter, it includes YouTube. So, YouTube is still considered a social media platform.
The information, you know, we see something on there, and most of the time the information that we view already agrees with views we have because the information that is presented to us is based on our previous likes and clicks and all of that anyways. That in itself is problematic because it does not present us with the whole picture on any topic. And if we only see one side of the story or whatever it is all of the time, we begin to believe that that is all of the information that we have. But we come across information, it makes us feel emotional, we get upset about it, we think other people should have this information, so we share it, we like it, we send it to all of our friends and family and then the cycle just continues.
We don't have the tools right now to decipher all of this information that's coming at us and we live in, like, a digital world where we want to know information right now. So, if you have a question, you can pull out your phone, you can google it, it gives you an answer and you want the information right then. We don't really pause to look at the source of the information, where it came from, the credentials for the source, you know, we want our information immediately and that kind of feeds into the issue also.
Vaile Wright: Right, and I've heard people refer to it sort of as like it's not that people are bad or that they're even partisan, it's more kind of like lazy thinking. Like we just don't go to the effort to actually figure out whether what we're looking at is true or not. And I thought what was interesting about this morning that was mentioned is that it's often individuals in the baby boomer generation that are retweeting or reading these sorts of fake news. It's not necessarily younger generations, and I come across that in my own family. I have older cousins where I see that they have liked something or shared it on their Facebook and I can tell it's not true. I think you were telling a story earlier about something you saw on Facebook.
Chrysalis Wright: Right, I was giving an example of how social media has been used to share fake news or false information and misinformation. So, there was a post I came across a couple years ago, it was a picture of a wall, and it was maybe 2016-2017. It was a picture of a wall and it said something to the effect that, you know, Mexico has the ability to protect its borders, why can't we? So, the intent was to compare this wall that is supposedly in Mexico separating it from, I believe, Guatemala to the proposed border wall that's been, you know, proposed to be built here between the United States and Mexico. So, a couple of things, yes, it's a picture of a wall but it is not a picture of a wall in Mexico or Guatemala. It's not a wall that separates those two areas, instead it's a picture of a wall that is in Israel, it was built in 2013, has nothing to do with our political issue whatsoever. But people saw it, they shared it, retweeted it and it just spread like wildfire. And you can see consumers, they believe it, and they don't verify the source of the information and they want everyone to, you know, they see this and they're like, see, if Mexico can protect their borders why can't we do the same, but that's not real. That's fake news. That's false information, it's misinformation and it's misleading consumers. So, that's another thing in terms of how we define fake news. What's the intent of the author that put it together? So, if you tell me something, well I heard bla bla bla, if that's really what you heard and you really believe it, you telling me does not necessarily mean you're sharing fake news, but if the original author of that information knew it was false information and intended to deceive, then that's different. So, it's the intent behind it. We can't really blame everyone for sharing information that they come across on social media. We really need to look at where the information is originating from and the author's intent in creating it in the first place.
Vaile Wright: Right, because it's the intent of the author that matters, not the resharer or the retweeter who might just be doing it because like you said before it evokes some sort of emotional reaction in them and they felt compelled to share it with others or because you know it reaffirmed something they already believed in. I think the other thing that we were talking about this morning was this idea of wanting to be right.
Chrysalis Wright: Right, we all want to be right about everything.
Vaile Wright: So, it ends up becoming what we call in psychology a confirmation bias, so that we seek out information that confirms the beliefs we already have as opposed to seeking out information that might counter it. Right, and the algorithms on social media do not help that whatsoever, it makes it worse. So, I mean, we've hit on this a little bit, but like, what are the consequences of, you know, all this sort of misinformation and the negative intent around it, why do we care, should we care, why does it matter?
Chrysalis Wright: If people start to believe false information then they start to doubt information that is accurate. So, they start to just dispute scientific evidence. You know, they start to fully believe in this misinformation, they start to kind of fall into numerous conspiracy theories and those types of things. And depending on the type of content that's in the fake news or misinformation that they're consuming, it can impact their attitudes.
So the research I've conducted has specifically looked at outcomes related to immigration attitudes, immigration policy attitudes and Islamophobia, and what I found, you know, multiple times is that people who are exposed to negative images related to these particular, you know, hot topics, controversial topics through fake news tend to develop more negative attitudes. So, it amplifies negative attitudes that they might already have. What we don't know is how the amplification of those negative attitudes impacts behaviors.
As a researcher, I'm interested in for instance the recent mass shootings that took place. I'm interested in better understanding the social media profiles of those perpetrators. I want to know exactly what they were looking at in terms of social media. What were they looking at on Facebook? What were they looking at on YouTube? Obviously they had negative attitudes, something is off, something is there. So, we all have biases and we all have you know our personal opinions but there's something different when people kind of take it to the next level and cause terror.
Vaile Wright: Right. Or you know, I'm thinking sometimes it can be politically slanted but there are other ways that fake news has had consequences like around vaccines, right? And so, you know you have these false stories around vaccines based off of false research in certain ways, but when you look at the real science, of course, it tells a different story. But at this point you almost can't rewind the clock. Like once people start to believe something and they have lost trust in institutions, then what do we do?
Chrysalis Wright: We need to better arm our consumers. We need to make sure that they are better educated in terms of media literacy. We need to make sure that they understand that sometimes, not all the time, but sometimes authors who put out information are trying to trick them. We need to arm them so that they're aware of these facts and can better protect themselves. We need to make sure that consumers know how, because identifying information that's false is very hard, it's difficult. So, we need to make sure that consumers know how to pause, check the source of the information, where is it coming from. If they can't find the source just leave it, because if you're not able to verify the source of the information I would just consider it false information and not even go there. But consumers also need to kind of, especially in terms of social media, pause when they see something that makes them emotional. Whether it's negative or positive emotion, pause for a minute, think about what it is about this piece of information that's got me upset, why do I want to share it with other people, and make sure that they understand that, you know, if you share it then to a certain extent you're somewhat, you know, responsible for spreading false information. So, if we can better arm our consumers and help them to be better able to identify false information, I think it will help a little bit. But once that trust is gone, we have to work to get it back.
Vaile Wright: And I think like you said it can be really challenging because the producers of the false information are using visuals and these provocative headlines and trying to go for shock, right, so that it gets you worked up and it gets you reactionary instead of pausing.
Chrysalis Wright: And that's exactly why false news or fake news is so memorable, why it's persistent. I mean, there's been research that has shown that even if you tell people okay, the information that you just shared is false, here is proof that it's false, here's a scientific study that proves the information you just told me is false, they still believe it. Because of how shocking it is, because of how sensational it is, it has more of an effect than real information. So, that in itself is part of the issue. People need to understand if you see a headline that just seems completely out of this world, completely unrealistic, it probably is, you know. Headlines from hard news or reputable news sources tend to just relay the facts. If you see something that is shocking, whether it's images, the headline itself, the way that the information is shared, then there's a reason for that, you know, it makes it more memorable, but it doesn't make it true, right, you know.
Vaile Wright: And I think you know in the wake of sort of this phenomenon some social media companies have tried to take counter measures right, sort of labeling things as false or putting tags on them. Seems like the research is pretty mixed on whether or not that's effective. Is that an accurate read?
Chrysalis Wright: Well for one, when YouTube did that it totally messed up one of my research studies, so I'm not.
Vaile Wright: So, you're not a fan?
Chrysalis Wright: Well I think we have to be aware of who's responsible for the information, right? So, you have social media platforms that don't want to be responsible for the sharing of misinformation that could potentially lead to whatever negative consequences. And I think that's why they have started to step in and start to block content and remove stuff like the Alex Jones YouTube channel, which I was using as a fake news example in a research study, so I had to stop data collection altogether and kind of use what I already had, so we'll see how that goes. But they did it because they don't want to be responsible for the information.
But what we're looking at is not a technological problem, it is not a social media problem, it is a people problem. You know, looking for information that already confirms what we believe, being drawn to information that is shocking, sensational, that causes us to be emotional, wanting to tell everyone information that we think we know, those are areas that are problematic, that are kind of increasing the spread of, you know, fake news through social media. So, we have to be very cautious about taking stuff down. Who makes the decision to take it down, you know, are they biased in some way? So, is it their bias that is leading them to want to remove content, or do they know for sure, have they done their fact-checking and they know for sure that it's false information? So, it becomes kind of sticky because, you know, we have the right to free speech, but that doesn't mean we have the right to hate speech, for instance, or to free speech without consequences. So, there's a line and we have to as a society kind of figure out exactly where that is and what's protected and what isn't.
Vaile Wright: Do we have an administration that can do that?
Chrysalis Wright: I think something like that has to be a bipartisan decision. So, both sides of the aisle have to come together to kind of solve this, help solve this issue. And one thing we have to do is quit calling everything fake news. Just because I tell you something is fake news doesn't mean that it's fake. So, we have to stop throwing that term around as if, I mean we hear it every day. We have to stop that and pay attention and make sure that what we're sharing is factual information you know regardless of who it is, and just stay off Twitter.
Vaile Wright: She says after I've already tweeted out that I am here right now, doing this podcast. You know, one of the things you brought up earlier that made me think was around Alex Jones, right, and his YouTube channel. And I think one of the other motives around spreading misinformation is for profit. So, a lot of his, if you've ever watched his work, is really around selling a lot of products that he has. And he gets people to click on his stuff and come to his page because he's being sensationalist and loud and at the end of the day he is really trying to sell you something. What role do you think that plays, that sort of selling, the profit component behind the intent of spreading misinformation?
Chrysalis Wright: Well, we see the for-profit issue in hard news or mainstream news also. Everything is for profit, right. So, even in mainstream, traditional news outlets they have commercials that they're trying to sell you in between news breaks, and the way they present information is done in such a way where they want you as a consumer to watch their show rather than the other one so that they can make more money. So, it kind of boils down to profit regardless of if it's hard news or fake news. Fake news gets a little bit more tricky because, for instance, yes, Alex Jones was selling products and that type of stuff, but not everyone who creates fake news is getting a profit in that way, it doesn't always have to be a monetary profit. It could be, well, I want a lot of people to vote this particular way so I'm going to create some sensational information and put it out there to try to convince people to do what I want them to do. To influence them in some way, it doesn't necessarily have to be a monetary issue.
Vaile Wright: So, you know for our listeners, other than being more media literate or media savvy, what can people do to help figure out what news is legitimate and what news might not be?
Chrysalis Wright: Well, you can fact check the news that would be helpful.
Vaile Wright: But let's say I don't have time to fact check the news?
Chrysalis Wright: Then you shouldn't be reading it, then don't read it. Because we have to be wise consumers, you know, we live in an age where we want everything right now, we want everything instant. We don't really want to, we don't have time to do that. We're all working, everyone is busy. But we need to make the time. If we want accurate information, we have to work to get it. And one of the best things about social media and the internet is that we have all of the information we could possibly think of at our fingertips, but it's how we're using those tools that starts to become problematic, because people are creating information that's not real, it's fake, it's false, and the more you spread it the more people start to believe it. So, we need to make sure that we're checking the source and we have to engage in some type of self-reflection. Why is this information important to me? How does it make me feel? Why do I want to share it? And we need to pause and really kind of consider those things before we share information we come across on social media, before we tell our friends and family. Why is it important to us?
Vaile Wright: I also thought it was interesting this morning talking about how we only maybe receive half of the story. And so, what that made me think about is, can we be more inquisitive, right, as individuals? Can we ask ourselves, okay, I've heard this part of the story, is there more, what's the 'and'?
Chrysalis Wright: There's always more. There's always more and we do get half the story. So, even from, like, mainstream news sources, CNN and Fox. Those are, both of those avenues are very biased. So, on the way here I was kind of flipping back and forth between how Fox News and how CNN were discussing the mass shootings that recently took place and the president's tweets and rhetoric and those types of things that have happened prior to and shortly after, and both of those sources are presenting the same story. Right, so both of them have facts in there, but the way they're describing it, the slant they're putting on it and all of that is very biased. So, if you're only watching, say, CNN, you only get half the story, or if you're only watching Fox, you're only getting half the story. We need to make sure that we're getting our information from multiple sources, engaging in fact-checking, doing our own research. No one wants to do it, but you have to do it if you're going to be a responsible consumer. You know, at some point it's our responsibility to make sure that what we believe as truth for ourselves is actually the truth.
Vaile Wright: Is there a reason to feel hopeful?
Chrysalis Wright: There's always reasons to be hopeful. Because we can always make a change, that's one of the things that's great about being people: if you see something you don't like about the way things are going, then you can put in effort and energy to make a change. And we have, as a society, we have to all come together and decide that that's something that we want to do. If we don't, then we're just going to see this type of thing progress and probably snowball and just continuously get worse. But if as a society we come together and say okay, we want to make sure that the information that we are getting is accurate, true information, then we can make sure that that happens.
Vaile Wright: So, it's on us.
Chrysalis Wright: On consumers, yes.
Vaile Wright: And probably some people in charge.
Chrysalis Wright: Hopefully. If we're passing laws and that type of thing it really has to be something that's bipartisan. They have to be able to put their political differences aside and figure out a way to work together. Like you can be Democrat, I could be Republican, that's fine. You have to be able to put your differences aside and work together to benefit society as a whole.
Vaile Wright: Right, and I think you know as we talked, society benefits from facts.
Chrysalis Wright: Yes, factual information is helpful.
Vaile Wright: So that then you can make informed decisions about what you do. Well, thank you so much for being here today.
Chrysalis Wright: Thank you for having me.
Vaile Wright: I really want to thank Dr. Wright for sharing your expertise on this really timely and important topic. If you liked what you heard today you can always email us with your ideas and thoughts at [email protected]. If you're interested in hearing more of our podcasts you can get them on iTunes, on Stitcher, wherever you get your podcasts. They're also available on our website SpeakingofPsychology.org. I'm Dr. Vaile Wright, it's really been a pleasure to guest host this podcast. I want to thank all of our listeners and everybody. Take care.
Date created: August 2019
Speaking of Psychology is an audio podcast series highlighting some of the latest, most important, and relevant psychological research being conducted today. Produced by the American Psychological Association, these podcasts will help listeners apply the science of psychology to their everyday lives." [End of Quoted Matter]
SOURCE: https://www.apa.org/research/action/speaking-of-psychology/fake-news
This brief general interest article provides pointers about how to detect false or misleading information:
https://silverliningsclinic.com/blog/fake-news-mental-health-what-is-reliable
""FAKE NEWS" & MENTAL HEALTH: WHAT IS RELIABLE?? With all the available information at our fingertips now due to the internet, it can be difficult to discern what is accurate, reliable information and what is just “fake news”. This is particularly true regarding our health. In this post, we're going to dive into a few tips that you can use to make sure that what you are reading has hard science behind it.📷Step 1: Consider the Source
One of the first things you need to look at on a website is where the article/blog is located. There are trusted websites to find information, where the information they provide can be backed up using research and previous studies.
Any reliable website should have data and sources to support any statements they make. A "source" can be any research article, institution, or clinical professional, but even this should be considered carefully. A general magazine article doesn't count! You can usually find these at the bottom of the page, or embedded in the article.
Step 2: Go Directly to the Source
It’s one thing to read articles summarizing research, but you can also actually read the research yourself. The article you read should give you a basic research question that the authors intend to answer, their predicted outcome (hypothesis), an operational definition of all study variables (i.e., they give you a measurable, observable definition of what they are assessing) as well as information on how they assessed it (e.g., tests, questionnaires, demographic of participants, etc.). In other words, they clearly follow the Scientific Method.
Research articles can be accessed using websites like Google Scholar. Google Scholar will help you make sure that the research you are reading is empirical and peer reviewed. Empirical research simply means that the research is based on something that was measured and observed. Peer reviewed means that the article, before being published in a journal, was reviewed by other experts in the field and determined to be valid. Note, however, that only some articles are available to the general public and for free. Some articles do require that you either purchase the article or are affiliated with a research institution (i.e., university). Still, there are many free sources available.
Before a scientist can publish an article, they must submit that article to a journal (e.g., Journal of the American Medical Association). Then, lead experts in that field are appointed by the journal to thoroughly review the article. Their objective is to dissect it to make sure that the article meets standards and contains valid research, without significant gaps. It is very difficult to get articles published through these means, as journals can easily and often do reject publications and/or suggest edits or revisions based on either the protocol or explanation of findings. This means that articles that do get published meet a high standard of writing, content, and research quality and have sometimes been reviewed and edited many times prior to publication (https://library.sdsu.edu/reference/news/what-does-peer-review-mean ).
Even when a set of authors have jumped through all of these hoops and made it to publication, you the reader should still look at these studies with a critical eye. One good question to ask yourself is “Was I given enough information to recreate this study and test these results, or are there significant gaps in the study?” (https://guides.libraries.psu.edu/emp).
Step 3: Know What You're Reading
Now, as research articles are typically meant for fellow researchers and colleagues, they frequently contain jargon, or words that relate to a specific field or profession and are hard to understand for readers who aren't familiar with that field. To break it down for you, research articles are typically divided into distinct sections.
You may see journal articles that are a “meta analysis”. This means that the researchers went through and ran analyses on several similar research studies in order to globally state the current understanding of a topic. In other words, these analyses are not producing new results, but they are collecting all of the results into one study (https://www.meta-analysis.com/pages/why_do.php?cart=). This is used to help summarize and make determinations from data on a large scale, which provides helpful direction for future research.
***Red Flags That Should Make You Suspicious***
-ANY ARTICLE THAT CLAIMS THAT A CAUSES B.
Causation is a strong term and it is very difficult to say that one thing causes another. For example, there is a common misconception that vaccines cause autism, but there is no empirical, peer reviewed research showing this (https://www.ncbi.nlm.nih.gov/pubmed/24814559 ). If this statement alarms you, we refer you back to the review of what empirical and peer reviewed actually means - this has been heavily studied.
Another example: I could say that my headache was caused by temperature change outside, but it could also be because I’m dehydrated, my neck is tight, etc. That’s why it is more accurate to say that my headache is correlated with (or related to) the temperature change.
-ANY ARTICLE THAT IS PUBLISHED IN A MAGAZINE OR A BLOG THAT IS MAKING STRONG CLAIMS.
Again, this is why research can be difficult to get published because it has to go through rigorous study by professionals. Everyone is entitled to their own opinion, but remember that just because something works for one person doesn’t mean that it will work for everyone. This is another reason why having many people in a study is important to make sure that the results can be generalized to the entire population.
With this information, I hope that you will see medical research in a new light and really check to make sure that you are reading and learning from established sources!
Stay tuned for a future blog post in the coming months that will dive into some common mental health myths that are still around, and are largely based on misguided interpretation of research.
Happy Reading!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Kelsey McElroy, MA, CSP
Certified Specialist of Psychometry
SHARE THIS POST
[END OF QUOTED MATERIAL]
Source: https://silverliningsclinic.com/blog/fake-news-mental-health-what-is-reliable
This is an interesting and relevant scientific research article in The Journal of Medical Internet Research:
https://www.jmir.org/
" Journal of Medical Internet Research Journal Information Browse Journal Select... 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 Submit Article
Published on 20.1.2021 in Vol 23, No 1 (2021): January
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/17187, first published November 24, 2019.
Prevalence of Health Misinformation on Social Media: Systematic Review
Victor Suarez-Lledo 1, 2; Javier Alvarez-Galvez 1, 2
Abstract
Background:
Although at present there is broad agreement among researchers, health professionals, and policy makers on the need to control and combat health misinformation, the magnitude of this problem is still unknown. Consequently, it is fundamental to discover both the most prevalent health topics and the social media platforms from which these topics are initially framed and subsequently disseminated.
Objective: This systematic review aimed to identify the main health misinformation topics and their prevalence on different social media platforms, focusing on methodological quality and the diverse solutions that are being implemented to address this public health concern.
Methods: We searched PubMed, MEDLINE, Scopus, and Web of Science for articles published in English before March 2019, with a focus on the study of health misinformation in social media. We defined health misinformation as a health-related claim that is based on anecdotal evidence, false, or misleading owing to the lack of existing scientific knowledge. We included (1) articles that focused on health misinformation in social media, including those in which the authors discussed the consequences or purposes of health misinformation and (2) studies that described empirical findings regarding the measurement of health misinformation on these platforms.
Results: A total of 69 studies were identified as eligible, and they covered a wide range of health topics and social media platforms. The topics were articulated around the following six principal categories: vaccines (32%), drugs or smoking (22%), noncommunicable diseases (19%), pandemics (10%), eating disorders (9%), and medical treatments (7%). Studies were mainly based on the following five methodological approaches: social network analysis (28%), evaluating content (26%), evaluating quality (24%), content/text analysis (16%), and sentiment analysis (6%). Health misinformation was most prevalent in studies related to smoking products and drugs such as opioids and marijuana. Posts with misinformation reached 87% in some studies. Health misinformation about vaccines was also very common (43%), with the human papilloma virus vaccine being the most affected. Health misinformation related to diets or pro–eating disorder arguments were moderate in comparison to the aforementioned topics (36%). Studies focused on diseases (ie, noncommunicable diseases and pandemics) also reported moderate misinformation rates (40%), especially in the case of cancer. Finally, the lowest levels of health misinformation were related to medical treatments (30%).
Conclusions: The prevalence of health misinformation was the highest on Twitter and on issues related to smoking products and drugs. However, misinformation on major public health issues, such as vaccines and diseases, was also high. Our study offers a comprehensive characterization of the dominant health misinformation topics and a comprehensive description of their prevalence on different social media platforms, which can guide future studies and help in the development of evidence-based digital policy action plans.
J Med Internet Res 2021;23(1):e17187 doi:10.2196/17187
Keywords: social media; health misinformation; infodemiology; infodemics; social networks; poor quality information; social contagion
Introduction
Over the last two decades, internet users have been increasingly using social media to seek and share health information [1]. These social platforms have gained wider participation among health information consumers from all social groups regardless of gender or age [2]. Health professionals and organizations are also using this medium to disseminate health-related knowledge on healthy habits and medical information for disease prevention, as it represents an unprecedented opportunity to increase health literacy, self-efficacy, and treatment adherence among populations [3-9]. However, these public tools have also opened the door to unprecedented social and health risks [10,11]. Although these platforms have demonstrated usefulness for health promotion [7,12], recent studies have suggested that false or misleading health information may spread more easily than scientific knowledge through social media [13,14]. Therefore, it is necessary to understand how health misinformation spreads and how it can affect decision-making and health behaviors [15].
Although the term “health misinformation” is increasingly present in our societies, its definition is becoming increasingly elusive owing to the inherent dynamism of the social media ecosystem and the broad range of health topics [16]. Using a broad term that can include the wide variety of definitions in scientific literature, we here define health misinformation as a health-related claim that is based on anecdotal evidence, false, or misleading owing to the lack of existing scientific knowledge [1]. This general definition would consider, on the one hand, information that is false but not created with the intention of causing harm (ie, misinformation) and, on the other, information that is false or based on reality but deliberately created to harm a particular person, social group, institution, or country (ie, disinformation and malinformation).
The fundamental role of health misinformation on social media has been recently highlighted by the COVID-19 pandemic, as well as the need for quality and veracity of health messages in order to manage the present public health crisis and the subsequent infodemic. In fact, at present, the propagation of health misinformation through social media has become a major public health concern [17]. The lack of control over health information on social media is used as evidence for the current demand to regulate the quality and public availability of online information [18]. In fact, although today there is broad agreement among health professionals and policy makers on the need to control health misinformation, there is still little evidence about the effects that the dissemination of false or misleading health messages through social media could have on public health in the near future. Although recent studies are exploring innovative ways to effectively combat health misinformation online [19-22], additional research is needed to characterize and capture this complex social phenomenon [23].
More specifically, four knowledge gaps have been detected from the field of public health [1]. First, we have to identify the dominant health misinformation trends and specifically assess their prevalence on different social platforms.
Second, we need to understand the interactive mechanisms and factors that make it possible to progressively spread health misinformation through social media (eg, vaccination myths, miracle diets, alternative treatments based on anecdotal evidence, and misleading advertisements on health products). Factors, such as the sources of misinformation, structure and dynamics of online communities, idiosyncrasies of social media channels, motivation and profile of people seeking health information, content and framing of health messages, and context in which misinformation is shared, are critical to understanding the dynamics of health misinformation through these platforms. For instance, although the role of social bots in spreading misinformation through social media platforms during political campaigns and election periods is widely recognized, health debates on social media are also affected by social bots [24]. At present, social bots are used to promote certain products in order to increase company profits, as well as to benefit certain ideological positions or contradict health evidence (eg, in the case of vaccines) [25]. Third, a key challenge in epidemiology and public health research is to determine not only the effective impact of these tools in the dissemination of health misinformation but also their impact on the development and reproduction of unhealthy or dangerous behaviors. Finally, regarding health interventions, we need to know which strategies are the best in fighting and reducing the negative impact of health misinformation without reducing the inherent communicative potential to propagate health information with these same tools.
In line with the abovementioned gaps, a recent report represents one of the first steps forward in the comparative study of health misinformation on social media [16]. Through a systematic review of the literature, this study offers a general characterization of the main topics, areas of research, methods, and techniques used for the study of health misinformation. However, despite the commendable effort made to compose a comprehensible image of this highly complex phenomenon, the lack of objective indicators that make it possible to measure the problem of health misinformation is still evident today.
Taking into account this wide set of considerations, this systematic review aimed to specifically address the knowledge gap. In order to guide future studies in this field of knowledge, our objective was to identify and compare the prevalence of health misinformation topics on social media platforms, with specific attention paid to the methodological quality of the studies and the diverse analytical techniques that are being implemented to address this public health concern.
Methods
Guideline
This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [26].
Inclusion Criteria
Studies were included if (1) the objectives were to address the study of health misinformation on social media, search systematically for health misinformation, and explicitly discuss the impact, consequences, or purposes of misinformation; (2) the results were based on empirical results and the study used quantitative, qualitative, and computational methods; and (3) the research was specifically focused on social media platforms (eg, Twitter, Facebook, Instagram, Flickr, Sina Weibo, VK, YouTube, Reddit, Myspace, Pinterest, and WhatsApp).
For comparability, we included studies written in English that were published after 2000 until March 2019.
Exclusion Criteria
Articles were excluded if they addressed health information quality in general or if they partially mentioned the existence of health misinformation without providing empirical findings. We did not include studies that dealt with content posted on other social media platforms. During the screening process, papers with a lack of methodological quality were also excluded.
Search Strategy
We searched MEDLINE and PREMEDLINE in March 2019 using the PubMed search engine. Based on previous findings [16], the query searched for MeSH terms and keywords (in the entire body of the manuscript) related to the following three basic analytical dimensions that articulated our research objective: (1) social media, (2) health, and (3) misinformation. The MeSH terms were social media AND health (ie, this term included health behaviors) AND (misinformation OR information seeking behavior OR communication OR health knowledge, attitudes, practice). Based on the results obtained through this initial search, we added some keywords that (having been extracted from the articles that met the inclusion criteria) were specifically focused on the issue of health misinformation on social media. The search using MeSH terms was supplemented with the following keywords: social media (eg, “Twitter” OR “Facebook” OR “Instagram” OR “Flickr” OR “Sina Weibo” OR “YouTube” OR “Pinterest”) AND health AND misinformation (eg, “inaccurate information” OR “poor quality information” OR “misleading information” OR “seeking information” OR “rumor” OR “gossip” OR “hoax” OR “urban legend” OR “myth” OR “fallacy” OR “conspiracy theory”). This initial search retrieved 1693 records. Additionally, this search strategy was adapted for its use in Scopus (3969 records) and Web of Science (1541 records). A full description of the search terms can be found in Multimedia Appendix 1.
Study Selection
In total, we collected 5018 research articles. After removing duplicates, we screened 3563 articles and retrieved 226 potentially eligible articles. In the next stage, we independently carried out a full-text selection process for inclusion (k=0.89). Discrepancies were shared and resolved by mutual agreement. Finally, a total of 69 articles were included in this systematic review (Figure 1).
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow chart.
Data Extraction
In the first phase, the data were extracted by VSL and then checked by VSL and JAG. In order to evaluate the quality of the selected studies and given the wide variety of methodologies and approaches found in the articles, we composed an extraction form based on previous work [27-29]. Each extraction form contained 62 items, most of which were closed questions that could be answered using predefined forms (yes/good, no/poor, partially/fair, etc). Following this coding scheme, we extracted the following four different fields of information: (1) descriptive information (27 items), (2) search strategy evaluation (eight items), (3) information evaluation (six items), and (4) the quality and rigor of methodology and reporting (15 items) for either quantitative or qualitative studies (Multimedia Appendix 1). Questions in field 2, which have been used in previous studies [27], assessed the quality of information provided to demonstrate how well reported, systematic, and comprehensive the search strategy was (S score). The items in field 3 measured how rigorous the evaluation was (E score) for health-related misinformation [27]. Field 4 contained items designed for the general evaluation of quality in the research process, whether quantitative [28] or qualitative [29]. This Q-score approach takes into account general aspects of the research and reporting, such as the study, methodology, and quality of the discussion. For each of the information fields, we calculated the raw score as the sum of each of the items by equating “yes” or “good” as 1 point, “fair” as 0.5 points, and “no” or “poor” as 0 points (Multimedia Appendix 2). The purpose of these questions is to guarantee the quality of the selected studies.
Furthermore, in order to be able to compare the methods used in the selected studies, the studies were classified into several categories. The studies classified as “content/text analysis” used methods related to textual and content analysis, emphasizing the word/topic frequency, linguistic inquiry word count, n-grams, etc. The second category “evaluating content” grouped together studies whose methods were focused on the evaluation of content and information. In general, these studies analyzed different dimensions of the information published on social media. The third category “evaluating quality” included studies that analyzed the quality of the information offered in a global way. This category considered other dimensions in addition to content, such as readability, accuracy, usefulness, and sources of information. The fourth category “sentiment analysis” included studies whose methods were focused on sentiment analysis techniques (ie, methods measuring the reactions and the general tone of the conversation on social media). Finally, the “social network analysis” category included those studies whose methods were based on social network analysis techniques. These studies focused on measuring how misinformation spreads on social media, the relationship between the quality of information and its popularity on these social platforms, the relationship between users and opinions, echo chamber effects, and opinion formation.
Of the 226 studies available for full-text review, 157 were excluded for various reasons, including research topics that were not focused on health misinformation (n=133).
We also excluded articles whose research was based on websites rather than social media platforms (n=16), studies that did not assess the quality of health information (n=6) or evaluated institutional communication (n=5), nonempirical studies (n=2), and research protocols (n=1). In addition, two papers were excluded because of a lack of quality requirements (Q score
Maggie Aradat I think Prof. Nancy Ann Watanabe has solved your problem of "scarcity" of research on your topic of interest :) Best of luck.
If there is a lack/scarcity/paucity of information on the topic you choose, I encourage you to make the topic broader. I recently read a thesis in which the authors were quite right that there was not a lot in the literature on their topic, and they then proceeded to do a review based on about 3 pieces of relevant literature. While it was true there wasn't a lot on that specific topic, if they had made it a bit wider, it would have been a lot less painful to read!
It's still possible to research this topic. There's no minimum number of articles required for a narrative review. If you can't find articles on the exact topic, you can do a narrative review on mental health misinformation more generally and it will still be acceptable.
I agree with the opinions that have been written above. This is a very important and poorly researched topic. If you work on it, it will be a great contribution to science.
A few years ago, I worked on a similar topic concerning the stigma of mentally ill people and the impact of movies on social beliefs. I didn't publish a paper on it; I only presented the data at conferences. If you are interested, I am open to collaboration and would be happy to help you with any materials I have.
Here on page 46/49 is one of the abstracts I published: https://zdrowiepubliczne.org/files/ksiazka_abstraktow_kzp_2019.pdf
Thank you all so much for your contributions! Much appreciated, thank you.
Dear Maggie Aradat,
I would suggest you start your search by defining clear search terms following the ("X" OR "Y") AND "A" NOT "B" approach, based on population, intervention, comparison, and outcome (PICO) terms from your aim or hypothesis. (Remember to save your search terms as a text file for the "methods" section of the review!)
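For example, a purely hypothetical starting string for this topic (the exact terms should come from your own aim, and the syntax would need adjusting for each database) might look like:
("mental health" OR "mental illness" OR depression OR anxiety) AND (misinformation OR "fake news" OR "false information" OR myth) AND ("social media" OR Twitter OR Facebook OR Instagram OR TikTok)
A NOT clause can then be added to drop clearly off-topic records once you see what the string returns.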
Then look at a curated and reliable database like Scopus to search for papers (and export the .CSV of the search results!) to do a citation network analysis using CitNetExplorer or VOSviewer.
CitNet: https://www.citnetexplorer.nl/
VOS: https://www.vosviewer.com/
You can use a package called Scopus2CitNet to convert the .CSV table of citations downloaded from Scopus to a format that is readable by CitNetExplorer. The tutorial video and the link to download the package are here: https://www.youtube.com/watch?v=g_1ClVf77AE (Tutorial) https://github.com/MichaelBoireau/Scopus2CitNet/ (R package)
For VOSviewer, I would suggest you also try getting data as a .CSV from other databases like "Dimensions", available here, using the same search string: https://app.dimensions.ai/discover/publication
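If you end up combining exports from Scopus and Dimensions, a minimal Python/pandas sketch like the one below can merge them and drop duplicate records before screening. (The file names are hypothetical and the "DOI"/"Title" column names are assumptions based on typical exports; check them against your actual file headers first.)

import pandas as pd

# Hypothetical file names for the two database exports
scopus = pd.read_csv("scopus_export.csv")
dimensions = pd.read_csv("dimensions_export.csv")

# Stack the two exports; columns present in only one file are filled with NaN
combined = pd.concat([scopus, dimensions], ignore_index=True)

# Normalise DOIs and de-duplicate; records without a DOI are kept and de-duplicated by title instead
combined["DOI"] = combined["DOI"].str.lower().str.strip()
with_doi = combined.dropna(subset=["DOI"]).drop_duplicates(subset="DOI")
without_doi = combined[combined["DOI"].isna()].drop_duplicates(subset="Title")

screening_set = pd.concat([with_doi, without_doi], ignore_index=True)
screening_set.to_csv("records_for_screening.csv", index=False)
print(len(combined), "records combined,", len(screening_set), "after de-duplication")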
This can then be supplemented with a search of Google Scholar or maybe "Lens" (https://www.lens.org/) to find the most recent literature. Don't forget to also look at "grey literature", which should include dissertation/thesis databases (like https://oatd.org/ or https://www.ebsco.com/), pre-print servers (like https://www.biorxiv.org/ or https://www.researchsquare.com/), and books (I'd check out WorldCat for those - https://www.worldcat.org/ - in addition to my own/local library).
I would recommend you start "abstracting" the papers you want to include early on in the review/search process, recording Author, Publication, Method, Sample Size, and Statistics for each. (If you are unsure of the statistical power behind a given sample size, you can use something like G*Power - https://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower - to estimate the minimum numbers required for generally reliable results; a quick code-based alternative is sketched below.)
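If you'd rather do that calculation in code than in G*Power's GUI, here is a minimal sketch using Python's statsmodels (the effect size, alpha, and power values are placeholders, not recommendations for your review):

from statsmodels.stats.power import TTestIndPower

# How many participants per group would a two-sample t-test need to detect
# a medium effect (Cohen's d = 0.5) at alpha = 0.05 with 80% power?
# Placeholder numbers; substitute the values relevant to the study you are appraising.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Approximately {n_per_group:.0f} participants per group")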
Hope some of this helps!