Information disorder – mapping the landscape

Photo by Zainul Yasni on Unsplash

Information disorder is everywhere according to journalist Claire Wardle. Here she sets out the categories that reporters need to be aware of and research.

As our understanding of information disorder becomes more sophisticated, it’s time to recognise the many smaller sub-categories so that journalists can better understand the issue and undertake more targeted research.

Here, I suggest thirteen sub-categories where I’m seeing specific initiatives, research or natural alliances.

It’s important to note that all of these sub-categories should be seen through an international lens; it is the one overarching theme that connects them all.

The thirteen spaces are:

  1. AI & manipulation:
    • Researching the ways that AI-generated synthetic media (otherwise known as ‘deepfakes’) will impact society, and developing tools and techniques for identifying and verifying these types of sophisticated manipulated visual imagery.
  2. Closed online spaces & messaging apps:
    • Researching the patterns of disinformation on private and semi-private spaces online, as well as messaging apps.
  3. Data harvesting, ad tech & micro-targeting:
    • Researching the connections between data collection and targeted disinformation campaigns.
  4. Fact-checking & verification:
    • Investigating claims made by official sources (politicians, think tanks, journalists), and investigating information, images and videos from unofficial sources on the social web.
  5. Identification of disinformation content & tactics:
    • Monitoring, verifying and providing contextual information around specific types of disinformation and the campaigns used to amplify them.
  6. Manufactured amplification:
    • Understanding techniques for artificially inflating disinformation campaigns, as well as attempts to distort ‘public opinion’, as when manipulating trending topics or purchasing signatures on online petitions.
  7. Media ecosystems:
    • Understanding how information disorder spreads across platforms, between traditional media (TV, radio), and through interpersonal communication.
  8. Media literacy:
    • Researching and evaluating best practices for teaching digital literacy in an age of information disorder.
  9. News credibility:
    • Developing machine-readable indicators that ensure quality information sources are given priority in social streams and search results.
  10. Polarisation:
    • Understanding the impact of polarisation on the ways in which information is used, understood and shared.
  11. Policy & regulation:
    • Investigating the question of ‘regulation’, and ensuring it is based on clear definitions and evidence.
  12. Reporting best practices:
    • Researching and experimenting with best practices for publishing fact-checks or debunks, particularly investigating the concepts of the ‘tipping point’ and ‘strategic silence’ to prevent providing additional oxygen to rumours, false content and amplification tactics.
  13. Trust in media:
    • Research and initiatives designed to improve trust in the professional media.

Note: This material first appeared on First Draft and has been reproduced here with the author’s permission.


 

Forms of information disorder

Image courtesy of Randy Colas on Unsplash

With the spread of fake news, journalists need to recognise and understand the different categories, types, elements, and phases of information disorder.

Claire Wardle sets out the seven common forms of information disorder.

Categories of information disorder

  1. Satire or parody: No intention to cause harm but has potential to fool.
  2. Misleading content: Misleading use of information to frame an issue or individual.
  3. Imposter content: When genuine sources are impersonated.
  4. Fabricated content: New content is 100% false, designed to deceive and do harm.
  5. False connection: When headlines, visuals, or captions don’t support the content.
  6. False context: When genuine content is shared with false contextual information.
  7. Manipulated content: When genuine information or imagery is manipulated to deceive.
Information graphic by Claire Wardle, courtesy of First Draft News

Types of information disorder

  1. Misinformation: Unintentional mistakes such as inaccurate photo captions, dates, statistics, translations, or when satire is taken seriously.
  2. Disinformation: Fabricated or deliberately manipulated audio/visual content. Intentionally created conspiracy theories or rumours.
  3. Malinformation: Deliberate publication of private information for personal or corporate rather than public interest. Deliberate change of context, date or time of genuine content.
Types of information disorder. Graphic by Claire Wardle & Hossein Derakhshan, courtesy of First Draft News

Elements of information disorder

  1. Agent
  2. Message
  3. Interpreter
The three elements of information disorder. Graphic by Claire Wardle & Hossein Derakhshan, courtesy of First Draft News

Phases of information disorder

  1. Creation: When the message is created.
  2. (Re) Production: When the message is turned into a media product.
  3. Distribution: When the product is distributed or made public.
The three phases of information disorder. Graphic by Claire Wardle & Hossein Derakhshan (2017), courtesy of First Draft News

Note: The material above first appeared on First Draft and has been reproduced here with the author’s consent.



Expanding the context: the erosion of trust

The prevalence of fake news isn’t merely a nuisance; it’s a symptom of a broader societal shift. We’re living in an information ecosystem where trust is increasingly fragile. The internet, while democratising access to information, has also lowered the barriers to its manipulation. This creates a fertile ground for information disorder to flourish.

Categories of information disorder: Beyond simple definitions

  • Satire or parody: While often harmless in intent, the line between satire and misleading content can blur, especially when taken out of context or shared with audiences unfamiliar with the original source. The danger lies in the potential for misinterpretation and the subsequent spread of misinformation.
  • Misleading content: This category highlights the power of framing. By selectively presenting information, even if factually accurate, manipulators can create a distorted narrative. Understanding how framing works—through choice of language, imagery, and emphasis—is crucial for journalists.
  • Imposter content: This form preys on established trust. It underscores the importance of verifying sources and understanding how easily digital identities can be faked.
  • Fabricated content: This is the most insidious form, as it involves the deliberate creation of falsehoods. Recognising the telltale signs of fabricated content, such as lack of sourcing, emotional manipulation, and inconsistencies, is essential.
  • False connection, false context, and manipulated content: These categories emphasise the importance of context. Even genuine content can be weaponised when its context is altered. Journalists must be meticulous in tracing the origins of information and ensuring its accurate presentation.

Types of information disorder: Intent matters

  • Misinformation (unintentional): While unintentional, misinformation can still have significant consequences. This highlights the need for rigorous fact-checking and accountability, even for unintentional errors.
  • Disinformation (intentional): This type is driven by malice and a desire to deceive. It often involves sophisticated tactics, such as coordinated campaigns and the use of bots and trolls. Understanding the motivations behind disinformation is crucial for countering its spread.
  • Malinformation (deliberate harm): This category underscores the ethical dimensions of information disorder. The deliberate exposure of private information or the manipulation of genuine content for harmful purposes represents a serious breach of trust.

Elements of information disorder: A communication model

  • Agent: This encompasses not only human actors but also automated systems such as bots and algorithms. Understanding the motivations and capabilities of different agents is crucial for analysing information disorder.
  • Message: The message itself is the vehicle for information disorder. Analysing its content, format, and style can reveal clues about its origins and intent.
  • Interpreter: The audience plays a critical role in the spread of information disorder. Factors like media literacy, cognitive biases, and social networks influence how individuals interpret and share information. Recognising these factors is essential for developing effective countermeasures.

Phases of information disorder: A lifecycle perspective

  • Creation: Understanding the techniques used to create false or misleading content is crucial for early detection. This includes analysing the use of deepfakes, manipulated images, and fabricated narratives.
  • (Re)production: The transformation of raw information into media products can involve further manipulation and distortion. Understanding the role of algorithms and social media platforms in amplifying certain types of content is essential.
  • Distribution: The speed and scale of digital distribution make it challenging to control the spread of information disorder. Understanding the dynamics of social networks and the role of influencers is crucial for mitigating its impact.

Adding value and depth: The journalist’s role

  • Media literacy: Journalists must be champions of media literacy, educating the public about the dangers of information disorder and equipping them with the tools to discern credible sources.
  • Verification and fact-checking: Rigorous fact-checking is more critical than ever. Journalists must be meticulous in verifying information from all sources.
  • Transparency and accountability: Journalists must be transparent about their sources and methods, and they must hold themselves accountable for their reporting.
  • Ethical considerations: Journalists must be mindful of the ethical implications of their reporting, particularly when dealing with sensitive or potentially harmful information.
  • Understanding algorithms: Journalists need to have a basic understanding of how algorithms work, and how they can be manipulated to spread disinformation.
  • Building trust: In a world of eroding trust, journalists must strive to build and maintain credibility by adhering to the highest ethical standards.

By understanding the complexities of information disorder, journalists can play a vital role in safeguarding the integrity of the information ecosystem and protecting the public from its harmful effects.


 

The glossary of information disorder

Image of computer screen by Markus Spiske on Unsplash

The following information disorder glossary is designed to help journalists understand the most common terms used.

It is intended for journalists, policy-makers, technology companies, politicians, librarians, educators, researchers, academics, and civil society organisations, all of whom are wrestling with the challenges posed by information disorder.

An algorithm is a fixed series of steps that a computer performs in order to solve a problem or complete a task. Social media platforms use algorithms to filter and prioritise content for each individual user based on various indicators, such as their viewing behaviour and content preferences. Disinformation that is designed to provoke an emotional reaction can flourish in these spaces when algorithms detect that a user is more likely to engage with or react to similar content.¹
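The sketch below illustrates the basic idea of engagement-driven filtering in a few lines of Python. The posts, weights and interest scores are invented for the example; real ranking systems are far more complex and proprietary.

```python
# Illustrative only: a toy ranking algorithm that prioritises posts a user is
# predicted to engage with. The fields, weights and data are all invented.

posts = [
    {"id": 1, "topic": "sport",    "likes": 120, "shares": 10},
    {"id": 2, "topic": "politics", "likes": 480, "shares": 95},
    {"id": 3, "topic": "health",   "likes": 60,  "shares": 4},
]

# A crude profile built from the user's past viewing behaviour.
user_interest = {"politics": 0.9, "sport": 0.4, "health": 0.1}

def score(post):
    # Content the user already engages with, and content that provokes
    # sharing, rises to the top -- which is how emotive disinformation
    # can end up being amplified.
    engagement = post["likes"] + 5 * post["shares"]
    return engagement * user_interest.get(post["topic"], 0.05)

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # [2, 1, 3]
```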

An API, or application programming interface, is a means by which data from one web tool or application can be exchanged with, or received by another. Many working to examine the source and spread of polluted information depend upon access to social platform APIs, but not all are created equal and the extent of publicly available data varies from platform to platform. Twitter’s open and easy-to-use API has enabled thorough research and investigation of its network, plus the development of mitigation tools such as bot detection systems. However, restrictions on other platforms and a lack of API standardisation means it is not yet possible to extend and replicate this work across the social web.
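As a rough illustration of what working with a platform API involves, the sketch below requests recent posts matching a hashtag from a hypothetical endpoint. The URL, parameters and access token are placeholders rather than any real platform’s API.

```python
# Hypothetical example of querying a social platform API for recent posts.
# The endpoint, parameters and token are placeholders, not a real API.
import requests

API_URL = "https://api.example-platform.com/v1/search"  # placeholder endpoint
TOKEN = "YOUR_ACCESS_TOKEN"  # normally issued by the platform to researchers

response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"query": "#examplehashtag", "limit": 100},
    timeout=10,
)
response.raise_for_status()

for post in response.json().get("posts", []):
    print(post.get("author"), post.get("text"))
```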

Artificial intelligence (AI) describes computer programs that are “trained” to solve problems that would normally be difficult for a computer to solve. These programs “learn” from data parsed through them, adapting methods and responses in a way that will maximise accuracy. As disinformation grows in its scope and sophistication, some look to AI as a way to effectively detect and moderate concerning content. AI also contributes to the problem, automating the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalised much more efficiently.²

Automation is the process of designing a ‘machine’ to complete a task with little or no human direction. It takes tasks that would be time-consuming for humans to complete and turns them into tasks that are completed quickly and almost effortlessly. For example, it is possible to automate the process of sending a tweet, so a human doesn’t have to actively click ‘publish’. Automation processes are the backbone of techniques used to effectively ‘manufacture’ the amplification of disinformation.
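A minimal sketch of what this kind of automation can look like is shown below: a queue of prepared messages is published at fixed intervals with no human clicking ‘publish’. The `PlatformClient` class is a stand-in for whichever posting library or API an operator would actually use.

```python
# Illustrative sketch of automated posting with no human in the loop.
# PlatformClient is a stand-in for a real platform API wrapper.
import time

class PlatformClient:
    def post(self, text: str) -> None:
        # A real client would call the platform's publishing API here.
        print(f"Posted: {text}")

queued_messages = ["Message one", "Message two", "Message three"]

client = PlatformClient()
for message in queued_messages:
    client.post(message)
    time.sleep(15 * 60)  # wait 15 minutes between posts to mimic a schedule
```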

Black hat SEO (search engine optimisation) describes aggressive and illicit strategies used to artificially increase a website’s position within a search engine’s results, for example changing the content of a website after it has been ranked. These practices generally violate the given search engine’s terms of service as they drive traffic to a website at the expense of the user’s experience.³

Bots are social media accounts that are operated entirely by computer programs and are designed to generate posts and/or engage with content on a particular platform. In disinformation campaigns, bots can be used to draw attention to misleading narratives, to hijack platforms’ trending lists and to create the illusion of public discussion and support.⁴ Researchers and technologists take different approaches to identifying bots, using algorithms or simpler rules based on number of posts per day.⁵
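One of the ‘simpler rules’ mentioned above can be sketched in a few lines: flag any account whose posting rate looks implausibly high for a human. The 50-posts-per-day threshold below echoes the rough cut-off used in the computational propaganda research cited here, but it is only a heuristic and will produce false positives.

```python
# Illustrative heuristic: flag accounts whose posting rate suggests automation.
# The threshold is a rough rule of thumb, not a definitive test for bots.

POSTS_PER_DAY_THRESHOLD = 50

accounts = [
    {"handle": "@everyday_user", "posts_last_7_days": 40},
    {"handle": "@suspected_bot", "posts_last_7_days": 1200},
]

def looks_automated(account: dict) -> bool:
    posts_per_day = account["posts_last_7_days"] / 7
    return posts_per_day >= POSTS_PER_DAY_THRESHOLD

for account in accounts:
    if looks_automated(account):
        print(f"{account['handle']} averages more than "
              f"{POSTS_PER_DAY_THRESHOLD} posts a day and may warrant closer inspection")
```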

A botnet is a collection or network of bots that act in coordination and are typically operated by one person or group. Commercial botnets can include as many as tens of thousands of bots.⁶

Data mining is the process of monitoring large volumes of data by combining tools from statistics and artificial intelligence to recognise useful patterns. Through collecting information about an individual’s activity, disinformation agents have a mechanism by which they can target users on the basis of their posts, likes and browsing history. A common fear among researchers is that, as psychological profiles fed by data mining become more sophisticated, users could be targeted based on how susceptible they are to believing certain false narratives.⁷
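At its simplest, the pattern-recognition involved can be as basic as counting which topics a user engages with most, producing a crude interest profile that could then be used for targeting. The sketch below does exactly that with invented data; real profiling systems combine far richer signals and statistical models.

```python
# Illustrative only: building a crude interest profile from a user's activity.
# Real data-mining pipelines use far richer signals and statistical models.
from collections import Counter

liked_posts = [
    {"topic": "immigration"},
    {"topic": "immigration"},
    {"topic": "vaccines"},
    {"topic": "football"},
    {"topic": "immigration"},
]

profile = Counter(post["topic"] for post in liked_posts)
print(profile.most_common(2))  # [('immigration', 3), ('vaccines', 1)]
```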

Dark ads are advertisements that are only visible to the publisher and their target audience. For example, Facebook allows advertisers to create posts that reach specific users based on their demographic profile, page ‘likes’, and their listed interests, but that are not publicly visible. These types of targeted posts cost money and are therefore considered a form of advertising. Because these posts are only seen by a segment of the audience, they are difficult to monitor or track.⁸

Deepfakes is the term currently being used to describe fabricated media produced using artificial intelligence. By synthesising different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content in which individuals appear to speak words and perform actions that are not based on reality. Although these techniques are still in their infancy, it is likely we will see this type of synthetic media used more frequently in disinformation campaigns as they become more sophisticated.⁹

A dormant account is a social media account that has not posted or engaged with other accounts for an extended period of time. In the context of disinformation, this description is used for accounts that may be human- or bot-operated, which remain inactive until they are ‘programmed’ or instructed to perform another task.¹⁰

Doxing or doxxing is the act of publishing private or identifying information about an individual online, without his or her permission. This information can include full names, addresses, phone numbers, photos and more.¹¹ Doxing is an example of malinformation, which is accurate information shared publicly to cause harm.

Disinformation is false information that is deliberately created or disseminated with the express purpose of causing harm. Producers of disinformation typically have political, financial, psychological or social motivations.¹²

Encryption is the process of encoding data so that it can be interpreted only by intended recipients. Many popular messaging services such as WhatsApp encrypt the texts, photos and videos sent between users. This prevents governments from reading the content of intercepted WhatsApp messages.

Fact-checking (in the context of information disorder) is the process of determining the truthfulness and accuracy of official, published information such as politicians’ statements and news reports.¹³ Fact-checking emerged in the U.S. in the 1990s, as a way of authenticating claims made in political ads airing on television. There are now around 150 fact-checking organisations in the world,¹⁴ and many now also debunk mis- and disinformation from unofficial sources circulating online.

Fake followers are anonymous or imposter social media accounts created to portray false impressions of popularity about another account. Social media users can pay for fake followers as well as fake likes, views and shares to give the appearance of a larger audience. For example, one English-based service offers YouTube users a million “high-quality” views and 50,000 likes for $3,150.¹⁵

Malinformation is genuine information that is shared to cause harm.¹⁶ This includes private or revealing information that is spread to harm a person or reputation.

Manufactured amplification occurs when the reach or spread of information is boosted through artificial means. This includes human and automated manipulation of search engine results and trending lists, and the promotion of certain links or hashtags on social media.¹⁷ There are online price lists for different types of amplification, including prices for generating fake votes and signatures in online polls and petitions, and the cost of down-ranking specific content from search engine results.¹⁸

The formal definition of the term meme, coined by biologist Richard Dawkins in 1976, is an idea or behaviour that spreads from person to person throughout a culture, propagating rapidly and changing over time.¹⁹ The term is now used most frequently to describe captioned photos or GIFs that spread online, and the most effective are humorous or critical of society. They are increasingly being used as powerful vehicles of disinformation.

Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.²⁰

Propaganda is true or false information spread to persuade an audience; it often has a political connotation and is frequently connected to information produced by governments. It is worth noting that the lines between advertising, publicity and propaganda are often unclear.²¹

Satire is writing that uses literary devices such as ridicule and irony to criticise elements of society. Satire can become misinformation if audiences misinterpret it as fact.²² There is a known trend of disinformation agents labelling content as satire to prevent it from being flagged by fact-checkers.

Scraping is the process of extracting data from a website without the use of an API. It is often used by researchers and computational journalists to monitor mis- and disinformation on different social platforms and forums. Typically, scraping violates a website’s terms of service (i.e., the rules that users agree to in order to use a platform). However, researchers and journalists often justify scraping because of the lack of any other option when trying to investigate and study the impact of algorithms.
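As a simple illustration of the technique, and not a recommendation to ignore any site’s terms of service, the sketch below downloads a page and extracts its headline text using the widely used `requests` and `BeautifulSoup` libraries. The URL and the CSS selector are placeholders.

```python
# Illustrative sketch of scraping: download a page and pull out headline text.
# The URL and selector are placeholders; check a site's terms of service and
# robots.txt before scraping it.
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/news"  # placeholder

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for headline in soup.select("h2.headline"):  # placeholder CSS selector
    print(headline.get_text(strip=True))
```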

A sock puppet is an online account that uses a false identity designed specifically to deceive. Sock puppets are used on social platforms to inflate another account’s follower numbers and to spread or amplify false information to a mass audience.²³ The term is considered by some to be synonymous with the term “bot”.

Spam is unsolicited, impersonal online communication, generally used to promote, advertise or scam the audience. Today, it is mostly distributed via email, and algorithms detect, filter and block spam from users’ inboxes. Similar technologies to those implemented in the fight against spam could potentially be used in the context of information disorder, once accepted criteria and indicators have been agreed.

Trolling is the act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting conversation. Today, the term “troll” is most often used to refer to any person harassing or insulting others online. However, it has also been used to describe human-controlled accounts performing bot-like activities.

A troll farm is a group of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion. One prominent troll farm was the Russia-based Internet Research Agency that spread inflammatory content online in an attempt to interfere in the U.S. presidential election.²⁴

Verification is the process of determining the authenticity of information posted by unofficial sources online, particularly visual media.²⁵ It emerged as a new skill set for journalists and human rights activists in the late 2000s, most notably in response to the need to verify visual imagery during the ‘Arab Spring’.

A VPN, or virtual private network, is used to encrypt a user’s data and conceal his or her identity and location. This makes it difficult for platforms to know where someone pushing disinformation or purchasing ads is located. It is also sensible to use a VPN when investigating online spaces where disinformation campaigns are being produced.

1 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
2 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
3 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
4 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
5 Howard, P. N. & B. Kollanyi (2016) Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum, COMPROP Research Note, 2016.1, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/06/COMPROP-2016-1.pdf
6 Ignatova, T.V., V.A. Ivichev & F.F. Khusnoiarov (December 2, 2015) Analysis of Blogs, Forums, and Social Networks, Problems of Economic Transition
7 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
8 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
9 Li, Y., M.C. Chang & S. Lyu (June 11, 2018) In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking, Computer Science Department, University at Albany, SUNY
10 Ince, D. (2013) A Dictionary of the Internet (3rd ed.), Oxford University Press
11 MacAllister, J. (2017) The Doxing Dilemma: Seeking a Remedy for the Malicious Publication of Personal Information, Fordham Law Review, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5370&context=fl
12 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
13 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking
14 Funke, D. (2018) Report: There are 149 fact-checking projects in 53 countries. That’s a new high, The Poynter Institute, https://www.poynter.org/news/report-there-are-149-fact-checking-projects-53-countries-thats-new-high
15 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public, Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
16 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
17 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
18 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public, Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
19 Dawkins, R. (1976) The Selfish Gene, Oxford University Press
20 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
21 Jack, C. (2017) Lexicon of Lies, Data & Society, https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf
22 Wardle, C. & H. Derakhshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
23 Hofileña, C. F. (October 9, 2016) Fake accounts, manufactured reality on social media, Rappler, https://www.rappler.com/newsbreak/investigative/148347-fake-accounts-manufactured-reality-social-media
24 Office of the Director of National Intelligence (2017) Assessing Russian Activities and Intentions in Recent US Elections, Washington, D.C.: National Intelligence Council, https://www.dni.gov/files/documents/ICA_2017_01.pdf
25 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking

By Claire Wardle, with research support from Grace Greason, Joe Kerwin & Nic Dias.

This material first appeared on First Draft and has been reproduced here with the author’s consent.


 
