Should researchers and educators do more to encourage policy-makers, media influencers, other opinion leaders, friends, family, neighbors, etc. to productively address propaganda?
The "folletín mediático" (media serial) is a methodology for reading the digital press that proposes moving from the micro-narrative (the news item) to a higher-order unit, the macro-narrative; the aim is to arrive at a global reading of an event of public interest. It is built from a serialized, chronological selection of news items, taking into account the structure of polemical or agonistic discourse (discourse and counter-discourse), and semiotic discourse analysis is then applied to that macro-narrative.
Exactly. Thank you for noting the perspective of media as suspended narrative (as serial or series). As you may know, Françoise Revaz provides interesting analyses of a diversity of both "factual" and fictional narratives in "The suspended narrative, a transmedial narrative genre" (https://books.openedition.org/pul/3069 - pp. 117-134). She suggests "The narrator-journalist of a media soap opera offers as the days go by the snippets of an emerging story..." Her contrast of corpora and reflections on postulates about the genre are interesting. These can be directly useful to illustrate ways that the media sustains reader/listener curiosity and (however minimal) suspense. Thank you again.
Sir, I would say yes to all 6 suggestions. However, in countries like India, media literacy is still at a nascent stage. Once people are familiar with how and what media does, and more specifically why, then I guess it will be easier for academics and others alike to try to bring about a change in people's mind-set. The only way to counter 'Bad Media' is to strengthen and celebrate 'Good Media'. In fact, we should all come together for exactly such a move: produce a book that showcases 6 case studies each of Good and Bad Media practices across the globe. I would love to work on India and Pakistan, this side of the world. We can bring in others to represent other parts of the world as well.
So we're supposed to brainstorm how to solve the problem that a significant share of the population has started to trust some bizarre accounts on social media more than legacy mass media or government officials, right?
Maybe those who suffer from a credibility problem should carefully analyze which of their decisions ruined their own reputation and implement some corrective action?
To Drs. Suparna Naresh & Marcin Piotr Walkowiak: Thank you for your responses, which may be somewhat complementary. Yet, how might criteria for "good & bad" practices for media or more broadly be agreed and stimulated? It remains a substantial challenge to incentivize civic leaders, media practitioners, or others to take corrective actions that (ironically) may often enhance their own credibility or effectiveness. By way of example, an interesting exploration of civic literacy as an approach to counter disinformation in Sweden is cited on ResearchGate at: Article Civic Literacy and Disinformation in Democracies
"Yet, how might criteria for "good & bad" practices for media or more broadly be agreed and stimulated? It remains a substantial challenge to incentivize civic leaders, media practitioners, or others to take corrective actions that (ironically) may often enhance their own credibility or effectiveness."
Well, apparently they would not like to hear that the answer may be for them to wean themselves off activism and be much more objective in presentation and news selection over the next decade, slowly regaining trust and thus starving the competitors (including the crazy ones) emerging online.
Additionally, the incentive structure is not that favorable for policymakers either. It can be quite blatant, as in my country (state-owned TV is political loot after every election), or a bit more subtle, where supposedly independent stations are oddly cozy with certain politicians. What exactly is the incentive for a politician to fix it when, from his perspective, it works perfectly fine? The problem is that such bias drives a significant share of the audience to search for independent media (often even more biased, just in the other direction).
Moreover, there seems to be a strong incentive for politicians to be seen busily "fixing" the issue of fake news. Even if one does not trust the Twitterfiles, Facebook recently corroborated their claims by admitting it had experienced similar pressure. Among the more interesting findings, it turned out they were asked to censor not only information that later proved correct but even... satire. Yes, in the name of fighting fake news, one could suppress jokes that put the establishment in a bad light.
Thanks for your reply. The informed voices of educators and researchers are very much needed to encourage policymakers, media influencers, and other opinion leaders to advance change.
For example, some university conferences around this subject include sessions to probe how current global legislators "identify which interventions to propose as laws, how they weigh costs and benefits, how they garner public trust in proposed interventions, and how they address concerns about potential legislative overreach." This may help with executing some specific, practical initiatives. Arguably, most urgent are regulation of social media recommendation algorithms and criminalization of lies intended to cause serious harm, if that harm results.
Some Western democracies are also massively expanding education in media literacy, critical analysis, rhetoric, and civics to enhance the ability of adults, youth, and children to discern and assess reality. This may help people better identify and counter dishonest politicians, or interrogate what is useful in social media or legacy media.
Even educators or researchers providing public comment to challenge inappropriate discourse can make a difference. For example, Randal Marlin (author of "Propaganda and the Ethics of Persuasion") recently offered comment on an article published in the online "Consortium News"* following the so-called debate between Vice Presidential aspirants in the United States. He raised serious concerns about the co-moderator, Margaret Brennan, the CBS network's chief foreign affairs correspondent, opening the debate by stating that Iran could develop a nuclear weapon within a week or two, then asking whether the two vice presidential candidates would support a preemptive strike by Israel on Iran. In my opinion, the question was questionable in many ways (beyond its dubious accuracy and its recklessness from a foreign policy perspective) and could be viewed as a sorry attempt to manufacture news from fairly trite "infotainment," given that Iran's missiles were heading toward Israel at about that time. This was in addition to CBS abdicating responsibility for fact-checking.
Yes, some educators and researchers are stepping up to critique current actions and to encourage better practices. Yet, substantially more depth of insights and understandings could be advanced with good effect.
What are other approaches to consider for action?
[*The "Consortium News" article after which this comment appeared is entitled "VP Debate Question on Mideast 'Could Have Been Written by AIPAC'." The date of the article was October 2, 2024. Just scroll down and click "show comments" at: https://consortiumnews.com/2024/10/02/vp-debate-question-on-mideast-could-have-been-written-by-aipac/ ]
Aren't we already observing a severe backlash to attempts to manipulate the algorithms to fight fake news (or inconvenient news as well)?
Have you (Rodney G. Miller) tried in recent years to watch any independent YouTube news or commentary channel that discusses hot-button issues? Unless they are properly part of the establishment, like big-business or government-sponsored outlets, they are heavily penalized by the algorithm for mentioning the wrong keywords.
As a result, one recently quite popular Polish channel run by Dr Wojciech Szewko (the host is a scholar specializing in jihadist groups and a former deputy minister) has to be quite creative with wording. In rough translation, he discusses the conflict between "Hummus" and the "State North of Sinai." He describes, for example, heavy "aerial landscaping" of the "Band-Aid Strip" (in Polish, "gauze" is spelled "gaza").
The thing is that it would be really hard to accuse him of spreading fake news (quite often he explicitly points to sources or to the competing claims made by the propaganda of both sides), and it is not even that easy to accuse him of a clear political bias, since for whichever conflict he describes he is generally able to make highly critical points about both sides. For extra irony, it is also hard to accuse him of especially radical views: judging by his political career, he is apparently moderately left-leaning.
So, because his coverage is rather moderate and palatable to a wider audience, it leads the masses to notice that big corporations and their politicians are already creating a dystopian civilization with a huge amount of censorship. I am therefore not convinced that the general population (as opposed to, say, the entrenched establishment) is really asking for more censorship via algorithms, or, after seeing so many arbitrary and bizarre bans, has any enthusiasm for ramping up repression toward criminal charges as well. Most likely the feeling is that the censorship itself is the primary problem that needs solving first. Moreover, seeing this much heavy-handed censorship is rather unlikely to build any trust in the official message.
(In case you dislike calling it "censorship," I'd like to point out that being a censor used to be a highly respectable office in ancient Rome; whichever term you use quickly acquires a negative connotation, so instead of repeating the process with a new term, we may as well stick to the old one.)
Among the techniques that appear at least not to cause an awful backlash, community notes (due to the lack of direct suppression and the more even playing field for holding people accountable) seem to be the best I've seen so far.
Thanks for sharing the Dr Wojciech Szewko example. Informed, public-service commentary will always be challenging - yet, so vital - whatever a commentator's implied ideology. Also challenging will be interventions to regulate harmful algorithms or lies. Spontaneous or manufactured backlashes can be virulent against legislative, organizational, or personal interventions. For example, well organized disinformation deniers have mounted successful efforts to suppress or harm academics and institutions just for studying the subject.
A potentially helpful perspective in these times may be Bertrand Russell's observations that "Government is a necessary but not sufficient condition for the greatest realizable degree of individual liberty... But if government is not to be tyrannical, it must be democratic, and the democracy must feel that the common interests of [hu]mankind are more important than the conflicting interests of separate groups." (Russell: p. 449)
Democracy will always remain a contested concept (Hanson: pp. 23-24) and genuine debate on public concerns, likewise. With bravado or defiance, autocrat-propagandists in the United States often falsely claim their speech is protected under the First Amendment, which does not apply to speech advancing particular illegal activity. This includes lies that "...unambiguously have no or little social value...and also cause cognizable harms (as well as sometimes yielding undeserved benefits for the liar)... [which includes] ...fraud, perjury, ...and making false statements to public officials." (Chen: p. 703) For sure, it's well past time for more action than scholars or pundits rightly pointing this out.
For an interesting, practical overview of one nation's more comprehensive initiatives to address disinformation (beyond the better known efforts in Finland, Sweden, or Estonia), see on ResearchGate: Article How is Portugal addressing disinformation? Results of a mapping of initiatives (2010-2023)
References
Chen, Alan K. and Justin Marceau (2018), “Developing a Taxonomy of Lies under The First Amendment,” University of Colorado Law Review, 89, p. 703, for U.S. Federal law governing limits to free speech, as interpreted in the ruling from United States v. Alvarez, 567 U.S. 709 (2012), with discussion of the limited ability to restrict lies, see: https://supreme.justia.com/cases/federal/us/567/709/
Hanson, Russell (1985), The Democratic Imagination in America: Conversations with Our Past, Princeton: Princeton University Press
Oliveira, Ana Filipa, Margarida Maneta, Maria José Brites (2024), "How is Portugal addressing disinformation? Results of a mapping of initiatives (2010-2023)," Observatorio (OBS*), 18(5), pp. 158-174, https://obs.obercom.pt/index.php/obs/article/view/2444
Russell, Bertrand (1940), "Freedom and Government," pp. 438-449, [first published in Freedom: Its Meaning, edited by Ruth Nanda Anshen, published by Harcourt, Brace and Company, New York in 1940], https://russell.humanities.mcmaster.ca/wp-content/uploads/2019/05/10-58.pdf
"Thanks for sharing the Dr Wojciech Szewko example." I pointed him to demonstrate that policy which was supposed to fight fake news has reached this spectacular level of collateral damage, that not only some fringe channels with fringe opinions are incorrectly being targeted, but even those who appear as innocuous as possible (professional concerning checking sources, academic degree, still establishment adjacent, and what is hard to find by in my country ;) even nuanced opinion and moderate views) are nevertheless hit by ricochet. Another case: we have popular OsInt specialist (Jarosław Wolski), who usually covers Russo-Ukraine war. When he covered with the same approach the initial Hamas attack (with primarily drawing arrows on map and analyzing tactics employed by both sides) - this video was even banned by YT next day. As I was lucky enough to watch it, I still have no idea what was supposedly wrong with it, it looked reasonable.
"For example, well organized disinformation deniers have successfully mounted successful efforts to suppress or harm academics and institutions studying disinformation."
Maybe they perceive people like you as the bad guys, responsible for lending academic credence to, and inciting, such heavy-handed and bizarre censorship (like the examples I gave above) under the guise of fighting fake news?
Incidentally, you made a quite odd claim. Would you be able to demonstrate that they are "disinformation deniers"? Could you show that they outright deny the existence of disinformation, or do they merely consider the supposed fight against disinformation to be even more destructive? (Would you call people who are against burning heretical books "heresy deniers"? ;) )
By backlash I mean something much more serious than some of the people involved being seen as highly contentious. People see censorship overreaching and removing material that they know is correct, or that is opinion. As they do not give the benefit of the doubt concerning intent, they conclude that their establishment must have numerous awful things to hide. They lose trust in the establishment and start searching even more unhinged sources, which begin to be perceived as relatively more trustworthy. It is a deep irony that both Russians and Ukrainians, for the coverage of the ongoing war closest to the truth, turn not to major Western platforms or media but to Telegram, which is treated as an outlaw.
I'm not sure how the texts that you copy and paste are in any way related to the underlying problem of heavy-handed censorship hitting unrelated targets on a mass scale.
The cited comments refer to individual and collective efforts to sustain free thought and free speech in society. Bryan Druzin points out that censorship or other suppression of people’s exercise of these freedoms can result in self-censorship. And autocrats or aspiring autocrats rely greatly on a population’s self-censorship to manage dissent. [see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3387445 ]
There will always be tension between censorship and free speech principles in practice. But inaction or inappropriate reactions to the impact of propaganda processes renders society less free. In many places, it’s long past time to shake free from propaganda. This requires large-scale, sustained, cooperative efforts.
I've got one question: have you seen the masses complaining that there is not enough heavy-handed censorship on internet platforms? (Not the masses admitting that there is a problem with fake news, but the masses cheering for the solution being implemented, like the mass removal of content they watched.) Or did the complaints come from the establishment, such as politicians who were not especially popular to begin with, became outraged as they started losing elections, and wanted this problem fixed?
I've just learnt that the proper English term for the new language people use to avoid algorithmic censorship is apparently algospeak:
https://en.wikipedia.org/wiki/Algospeak
In Poland, I've only heard the term "koalang," a term coined in one of our dystopian SF novels:
https://en.wikipedia.org/wiki/Koalang
But to be fair, in that dystopian novel the language was much more poetic and creative than what we use right now on major platforms. Do you think that improved AI censorship (content moderation, whatever) would make such a far more poetic language, with regularly changing replacement terms to prevent the AI from catching up, a reality?