The post Lesson: Accuracy In Journalism first appeared on Media Helping Media.
Students will evaluate news reports for accuracy by identifying factual errors and verifying sources. They will also apply techniques to ensure transparency and reliability in their own reporting.
Notice and wonder: Display a short, factual news headline and a brief social media post about the same event. Ask students, “What do you notice? What do you wonder?” Give them a few minutes to discuss with a partner. Then have several students share their observations and questions. Record these for all to see. Guide the conversation towards noticing differences in detail, tone, and source reliability, setting the stage for evaluating news accuracy.
Use a hypothetical news story, then walk through the fact-checking process, pausing to allow students to suggest verification methods.
Application: Present a brief, fictional news report containing deliberate inaccuracies. Divide students into small groups and task them with identifying errors and suggesting corrections. Encourage them to apply the fact-checking techniques discussed. Afterward, facilitate a class discussion to review findings and reinforce the importance of accuracy and transparency in journalism.
Think, Pair, Share: Distribute a short news article with potential inaccuracies.
Ask students to answer these questions:
Here are some suggested answers:
The free teaching tools at the Khan Academy were used as a basis for converting the original article into a lesson plan.
The post Fact-checking and adding context first appeared on Media Helping Media.
Journalism is about far more than simply gathering information and passing it on. An essential part of the editorial process is to examine everything we are told to make sure it is factual.
We then add context so that any facts that are uncovered are considered alongside existing knowledge.
This is the first of two articles on this site about fact-checking. The other is ‘Beyond basic fact-checking’, which we recommend you read after finishing this piece.
Journalists have a responsibility to apply editorial values to every piece of information that comes their way before passing it on to others (see the material in our ethics section).
Once a piece of journalism is in the public domain it will be referenced, quoted, and possibly plagiarised as it becomes part of the global conversation. If that piece of journalism is untrue or flawed in any way, then lasting damage will have been done.
But let’s first agree what is meant by the word ‘fact’.
According to the Oxford English Dictionary, a fact is something that is “known or proved to be true”. It is also “information used as evidence or as part of a report or news article”. In legal terms, a fact is “the truth about events as opposed to interpretation”.
And that last definition is interesting, because journalists ‘interpret’ events by adding context – but more on that later. For now, let’s refer to facts that have not yet been fully tested as ‘claims’.
Here are a few tests that should be applied to information that a journalist receives from someone who ‘claims’ that what they are passing on is factual.
The first three tests are about source verification and fact-checking, the fourth is about adding context.
Recommended: Research the background of the source, their connections, any previous record of sharing information.
Recommended: Research the chronology of events. Check your own news organisation’s archive. Search the web.
Recommended: Seek out a second, independent and trusted source.
Recommended: Paint the bigger picture, understand the importance of the event in relation to other news stories.
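The four tests above can also be sketched as a simple data structure, of the kind a newsroom tool might use to track verification for each incoming claim. This is a hypothetical illustration only: the field names and functions are invented, not part of any real newsroom system.

```python
# Hypothetical sketch: tracking the four fact-checking tests for each
# incoming claim. Field names and functions are invented for illustration.

def new_claim(summary):
    """Create a claim record with all four tests unchecked."""
    return {
        "summary": summary,
        "checks": {
            "source_background": False,  # who is the source, and what is their record?
            "chronology": False,         # does the timeline of events hold up?
            "second_source": False,      # is there independent confirmation?
            "context": False,            # has the bigger picture been established?
        },
    }

def ready_to_publish(claim):
    """A claim should be treated as fact only once every test has passed."""
    return all(claim["checks"].values())

claim = new_claim("Bus overturned on a London street")
claim["checks"]["source_background"] = True
print(ready_to_publish(claim))  # False: three tests are still outstanding
```

The point of the sketch is the final function: a claim only becomes publishable fact when every test, not just one, has been passed.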
Those of you who are new to journalism might want to print out the following checklist and put it on the wall in your newsroom as a reminder.
If the results of your research make you feel uneasy you might want to drop the story. However, even a claim presented as fact but found on investigation to be untrue could still be a story. It could point to a political, commercial, or social conflict that might require investigation.
Never rule out a possible news story because the initial evidence presented proves to be shaky.
Now let’s look at point four ‘the context’ more closely.
One dictionary definition of ‘context’ is: “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.”
That word, ‘understood’, is important.
The role of a journalist is to enhance understanding. We do that by surrounding proven facts with data, statistics, history, and circumstances that, together, help paint a fuller picture of what has happened.
Think of it this way.
Imagine you are at home watching a series on TV. It’s the final episode of six. Just as the programme is reaching the conclusion there is a knock at the door. It’s a friend you haven’t seen for some time. You welcome them in.
As they walk through the door there is a scream from the lounge. One of the characters in the TV series has discovered the gruesome remains of a body. Your guest is shocked, but fascinated.
You offer to turn the TV off so you can chat, but they are so intrigued by what they saw on the screen that they ask you whether they could watch the programme with you, particularly as it’s reaching its conclusion. They want to know what happens next.
So you pause the programme, put the kettle on, make a cup of tea, and tell your guest about what has happened so far.
You explain who the characters are, what has taken place in previous episodes, how the situation has developed, the relationships between the characters, what clues you have picked up along the way, and how the plot has thickened to reach the point where your guest heard the scream.
And explaining the background proves to be important because your friend thought you must be watching a murder mystery, when, in fact, the series you were watching was a documentary about archaeology. The scream was from an archaeologist who had unexpectedly found mummified remains. It was not a modern-day crime thriller.
Now your guest has the context, so you can watch the end of the final episode together, with your guest informed about the background to the story and better able to understand events.
The same is true with journalism.
A colleague who was working as an intake editor on a news desk remembers receiving a call from an off-duty reporter who had just passed an overturned red double decker bus on a London street. People were wandering around with blood pouring from wounds. Two camera crews were mobilised, but before they’d even left the building the reporter discovered that it was a film crew making a movie. The story had changed once the reporter had checked his facts and explored the context.
I made a similar mistake when reporting on a fire at an inner city block of flats in Liverpool. I reported live into the 4pm news bulletin saying that residents were trying to salvage what they could from their burning homes. I was wrong. Had I checked my facts, not made assumptions, and taken time to establish the context of events I would have discovered that I was witnessing rioting and looting. You can read about that experience and the lessons learnt here.
The challenge all journalists face is not just to report the news but also to set out the background to an event, and to related events, in order to help the audience understand the elements of a story they might otherwise find hard to comprehend, or about which they might reach the wrong conclusion.
Perhaps it involves researching and setting out the chronology of events that have led to the current breaking news story. These can be presented as related stories.
You might need to research the backgrounds of the characters involved as you look for any social connections to anyone else involved. These can be presented as profiles.
Essentially, what you are doing is gathering as much information as possible in order to put together the most detailed, in-depth, and informative account of what has happened.
All this illustrates that journalism helps people make sense of the world – not just what’s happening, but why it’s happening. Stories that raise questions without even attempting to address those questions are weak stories.
A news story without context can never be completely understood. A news source that is not verified can never be completely trusted. A claim, left unchecked, might not necessarily be a fact. And a news story without fact-checking and context could add more to the cacophony of confusion than to the enhancement of understanding.
Beyond revisiting and revising, it’s important to acknowledge that “truth” itself can be complex and contested, particularly in stories involving social issues or conflicting narratives.
Fact-checking isn’t just about verifying isolated facts; it’s about understanding the different interpretations of those facts and how they contribute to larger narratives.
Journalists should strive for accuracy and fairness, acknowledging where interpretations diverge and avoiding presenting a single, definitive “truth” when it doesn’t exist.
Examples
Lateral reading is a technique for evaluating online information by opening multiple tabs in your browser to investigate the credibility of the source, rather than just reading the information on the page itself (which is called ‘vertical reading’). Lateral reading is crucial, but so is understanding the motivations and potential biases of sources, especially online.
These questions should be part of the fact-checking process.
We should also consider the rise of synthetic media (deepfakes) and the challenges they pose to verifying information.
Examples
Fact-checking shouldn’t be solely the responsibility of individual journalists. Newsrooms should foster a culture of fact-checking, where everyone is encouraged to question and verify information.
This can involve dedicated fact-checking teams, collaborative editing processes, and clear guidelines for source evaluation.
The limits of fact-checking: Fact-checking can verify specific claims, but it can’t always address broader issues of interpretation or framing.
A story can be factually accurate but still misleading if it’s presented in a way that distorts the overall picture. This highlights the importance of context.
Examples
Context and power: Context is not neutral. Those in positions of power often have greater control over the narrative and can shape the context in ways that benefit them.
Journalists should be aware of these power dynamics and strive to provide context that challenges dominant narratives and gives voice to marginalised perspectives.
The “how” question: In addition to “why,” exploring the “how” is crucial. How did this event happen? What were the processes and mechanisms involved?
Understanding the “how” can reveal systemic issues and prevent similar events from occurring in the future.
Context and time: Context is not static; it evolves over time. A story that is accurate and contextualised today might be incomplete or misleading tomorrow as new information emerges.
Journalists need to be prepared to update their reporting and provide ongoing context as the story unfolds.
The ethics of context: Providing context can sometimes involve revealing sensitive information or information that could be harmful to individuals or groups.
Journalists must carefully weigh the public’s right to know against the potential harm and make ethical decisions about what context to include.
The business model of misinformation: The spread of misinformation is often driven by economic incentives. Clickbait headlines, sensationalised stories, and emotionally charged content can generate more clicks and revenue, even if they are not accurate.
Understanding the business model of misinformation is crucial for combating it.
The role of technology platforms: Social media platforms and search engines play a significant role in the dissemination of information, both accurate and inaccurate.
Journalists should be aware of how these platforms work and how they can be used to spread misinformation.
They should also advocate for platform accountability and transparency.
The importance of media literacy education: Empowering the public with media literacy skills is essential for creating a more informed and engaged citizenry.
Media literacy education should be taught in schools and made available to people of all ages.
Journalism as a public service: At its best, journalism serves the public interest by providing accurate information, holding power accountable, and fostering informed debate.
By prioritising fact-checking and context, journalists can uphold these values and contribute to a more just and democratic society. We need to reinforce the idea that journalism is a vital public service, not just a business.
If you are a trainer of journalists we have a free lesson plan: Fact-checking and adding context which you are welcome to download and adapt for your own purposes.
If you found this helpful you might want to check our related training module ‘Beyond basic fact-checking’.
The post Information disorder – mapping the landscape first appeared on Media Helping Media.
Information disorder is everywhere according to journalist Claire Wardle. Here she sets out the categories that reporters need to be aware of and research.
As our understanding of information disorder becomes more sophisticated, it’s time to recognise the many smaller sub-categories so that journalists can better understand the issue and undertake more targeted research.
Here, I suggest thirteen sub-categories where I’m seeing specific initiatives, research or natural alliances.
It’s important to note that all these sub-categories should also be seen through an international lens. It is the one overarching theme that connects all of the following.
The thirteen spaces are:
Note: This material first appeared on First Draft and has been reproduced here with the author’s permission.
The post The glossary of Information disorder first appeared on Media Helping Media.
The following information disorder glossary is designed to help journalists understand the most common terms used.
The following is for journalists, policy-makers, technology companies, politicians, librarians, educators, researchers, academics, and civil society organisations who are all wrestling with the challenges posed by information disorder.
An algorithm is a fixed series of steps that a computer performs in order to solve a problem or complete a task. Social media platforms use algorithms to filter and prioritise content for each individual user based on various indicators, such as their viewing behaviour and content preferences. Disinformation that is designed to provoke an emotional reaction can flourish in these spaces when algorithms detect that a user is more likely to engage with or react to similar content.¹
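The filtering and prioritising the entry describes can be sketched in a few lines: a fixed series of steps that scores each post by engagement signals and orders the feed accordingly. The weights and field names below are invented for illustration and bear no relation to any real platform's formula.

```python
# Hypothetical sketch of a feed-ranking algorithm: score each post by
# engagement signals, then order the feed by score. The weights are
# invented for illustration; real platform formulas are proprietary.

def score_post(post):
    # Emotionally engaging content tends to attract more interactions,
    # which in turn raises its ranking - the loop the glossary describes.
    return (post["likes"] * 1.0
            + post["shares"] * 3.0
            + post["comments"] * 2.0)

def rank_feed(posts):
    """Return posts ordered from highest to lowest engagement score."""
    return sorted(posts, key=score_post, reverse=True)

posts = [
    {"id": "a", "likes": 10, "shares": 0, "comments": 1},
    {"id": "b", "likes": 2,  "shares": 5, "comments": 4},
    {"id": "c", "likes": 50, "shares": 1, "comments": 0},
]
print([p["id"] for p in rank_feed(posts)])  # highest-scoring post first
```

Note how nothing in the scoring rewards accuracy: a false but provocative post that draws shares and comments outranks a sober, factual one, which is why disinformation designed to provoke a reaction can flourish.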
An API, or application programming interface, is a means by which data from one web tool or application can be exchanged with, or received by, another. Many working to examine the source and spread of polluted information depend upon access to social platform APIs, but not all are created equal and the extent of publicly available data varies from platform to platform. Twitter’s open and easy-to-use API has enabled thorough research and investigation of its network, plus the development of mitigation tools such as bot detection systems. However, restrictions on other platforms and a lack of API standardisation means it is not yet possible to extend and replicate this work across the social web.
Artificial intelligence (AI) describes computer programs that are “trained” to solve problems that would normally be difficult for a computer to solve. These programs “learn” from data parsed through them, adapting methods and responses in a way that will maximise accuracy. As disinformation grows in its scope and sophistication, some look to AI as a way to effectively detect and moderate concerning content. AI also contributes to the problem, automating the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalised much more efficiently.²
Automation is the process of designing a ‘machine’ to complete a task with little or no human direction. It takes tasks that would be time-consuming for humans to complete and turns them into tasks that are completed quickly and almost effortlessly. For example, it is possible to automate the process of sending a tweet, so a human doesn’t have to actively click ‘publish’. Automation processes are the backbone of techniques used to effectively ‘manufacture’ the amplification of disinformation.
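The tweet example can be sketched as follows: queued messages are published in a loop with no human clicking ‘publish’. The `publish` function here is a hypothetical stand-in for a real platform API call, included only to show the shape of the process.

```python
# Sketch of automated posting: queued messages are published without
# human intervention. publish() is a hypothetical stand-in for a real
# platform API call such as "create post".

published = []

def publish(message):
    # In a real automation pipeline this would call a platform's API.
    published.append(message)

queue = ["message 1", "message 2", "message 3"]
for message in queue:
    publish(message)  # no human clicks 'publish' - this is the automation

print(len(published))  # 3
```

Run at scale across many accounts, exactly this pattern is what ‘manufactures’ the amplification of disinformation.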
Black hat SEO (search engine optimisation) describes aggressive and illicit strategies used to artificially increase a website’s position within a search engine’s results, for example changing the content of a website after it has been ranked. These practices generally violate the given search engine’s terms of service as they drive traffic to a website at the expense of the user’s experience.³
Bots are social media accounts that are operated entirely by computer programs and are designed to generate posts and/or engage with content on a particular platform. In disinformation campaigns, bots can be used to draw attention to misleading narratives, to hijack platforms’ trending lists and to create the illusion of public discussion and support.⁴ Researchers and technologists take different approaches to identifying bots, using algorithms or simpler rules based on number of posts per day.⁵
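The ‘simpler rules based on number of posts per day’ mentioned above can be sketched in a few lines. The threshold of 50 posts per day is an arbitrary figure chosen for illustration, not a researched cut-off.

```python
# Minimal sketch of rule-based bot flagging: an account whose average
# daily posting rate exceeds a chosen threshold is flagged as possibly
# automated. The threshold (50/day) is arbitrary, for illustration only.

def likely_bot(total_posts, days_active, threshold=50):
    """Flag accounts whose average posts per day exceed the threshold."""
    if days_active <= 0:
        return False  # not enough history to judge
    return total_posts / days_active > threshold

print(likely_bot(total_posts=12_000, days_active=30))  # 400/day -> True
print(likely_bot(total_posts=900, days_active=365))    # ~2.5/day -> False
```

Real detection systems combine many more signals (posting times, network structure, content similarity), which is why researchers also use algorithmic approaches rather than a single rate rule.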
A botnet is a collection or network of bots that act in coordination and are typically operated by one person or group. Commercial botnets can include as many as tens of thousands of bots.⁶
Data mining is the process of monitoring large volumes of data by combining tools from statistics and artificial intelligence to recognise useful patterns. Through collecting information about an individual’s activity, disinformation agents have a mechanism by which they can target users on the basis of their posts, likes and browsing history. A common fear among researchers is that, as psychological profiles fed by data mining become more sophisticated, users could be targeted based on how susceptible they are to believing certain false narratives.⁷
Dark ads are advertisements that are only visible to the publisher and their target audience. For example, Facebook allows advertisers to create posts that reach specific users based on their demographic profile, page ‘likes’, and their listed interests, but that are not publicly visible. These types of targeted posts cost money and are therefore considered a form of advertising. Because these posts are only seen by a segment of the audience, they are difficult to monitor or track.⁸
Deepfakes is the term currently being used to describe fabricated media produced using artificial intelligence. By synthesising different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content, in which individuals appear to speak words and perform actions, which are not based on reality. Although still in their infancy, it is likely we will see examples of this type of synthetic media used more frequently in disinformation campaigns, as these techniques become more sophisticated.⁹
A dormant account is a social media account that has not posted or engaged with other accounts for an extended period of time. In the context of disinformation, this description is used for accounts that may be human- or bot-operated, which remain inactive until they are ‘programmed’ or instructed to perform another task.¹⁰
Doxing or doxxing is the act of publishing private or identifying information about an individual online, without his or her permission. This information can include full names, addresses, phone numbers, photos and more.¹¹ Doxing is an example of malinformation, which is accurate information shared publicly to cause harm.
Disinformation is false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological or social motivations.¹²
Encryption is the process of encoding data so that it can be interpreted only by intended recipients. Many popular messaging services such as WhatsApp encrypt the texts, photos and videos sent between users. This prevents governments from reading the content of intercepted WhatsApp messages.
Fact-checking (in the context of information disorder) is the process of determining the truthfulness and accuracy of official, published information such as politicians’ statements and news reports.¹³ Fact-checking emerged in the U.S. in the 1990s, as a way of authenticating claims made in political ads airing on television. There are now around 150 fact-checking organisations in the world,¹⁴ and many now also debunk mis- and disinformation from unofficial sources circulating online.
Fake followers are anonymous or imposter social media accounts created to portray false impressions of popularity about another account. Social media users can pay for fake followers as well as fake likes, views and shares to give the appearance of a larger audience. For example, one English-based service offers YouTube users a million “high-quality” views and 50,000 likes for $3,150.¹⁵
Malinformation is genuine information that is shared to cause harm.¹⁶ This includes private or revealing information that is spread to harm a person or reputation.
Manufactured amplification occurs when the reach or spread of information is boosted through artificial means. This includes human and automated manipulation of search engine results and trending lists, and the promotion of certain links or hashtags on social media.¹⁷ There are online price lists for different types of amplification, including prices for generating fake votes and signatures in online polls and petitions, and the cost of down-ranking specific content from search engine results.¹⁸
The formal definition of the term meme, coined by biologist Richard Dawkins in 1976, is an idea or behaviour that spreads person to person throughout a culture by propagating rapidly, and changing over time.¹⁹ The term is now used most frequently to describe captioned photos or GIFs that spread online, and the most effective are humorous or critical of society. They are increasingly being used as powerful vehicles of disinformation.
Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.²⁰
Propaganda is true or false information spread to persuade an audience, but often has a political connotation and is often connected to information produced by governments. It is worth noting that the lines between advertising, publicity and propaganda are often unclear.²¹
Satire is writing that uses literary devices such as ridicule and irony to criticise elements of society. Satire can become misinformation if audiences misinterpret it as fact.²² There is a known trend of disinformation agents labelling content as satire to prevent it from being flagged by fact-checkers.
Scraping is the process of extracting data from a website without the use of an API. It is often used by researchers and computational journalists to monitor mis- and disinformation on different social platforms and forums. Typically, scraping violates a website’s terms of service (i.e., the rules that users agree to in order to use a platform). However, researchers and journalists often justify scraping because of the lack of any other option when trying to investigate and study the impact of algorithms.
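A minimal scraping sketch, using only Python's standard library, shows the basic idea: pulling structured data (here, headlines in `<h2>` tags) out of raw HTML. The HTML snippet is invented for illustration, and, as the entry notes, scraping a live site may violate its terms of service.

```python
# Minimal scraping sketch: extract the text inside <h2> tags from raw
# HTML using only the standard library. The snippet is invented for
# illustration; scraping a live site may violate its terms of service.
from html.parser import HTMLParser

class HeadlineScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.headlines.append(data.strip())

page = "<html><body><h2>Claim A debunked</h2><p>...</p><h2>Claim B verified</h2></body></html>"
scraper = HeadlineScraper()
scraper.feed(page)
print(scraper.headlines)  # ['Claim A debunked', 'Claim B verified']
```

Researchers monitoring mis- and disinformation use the same pattern at larger scale, typically against platforms that offer no usable API.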
A sock puppet is an online account that uses a false identity designed specifically to deceive. Sock puppets are used on social platforms to inflate another account’s follower numbers and to spread or amplify false information to a mass audience.²³ The term is considered by some to be synonymous with the term “bot”.
Spam is unsolicited, impersonal online communication, generally used to promote, advertise or scam the audience. Today, it is mostly distributed via email, and algorithms detect, filter and block spam from users’ inboxes. Similar technologies to those implemented in the fight against spam could potentially be used in the context of information disorder, once accepted criteria and indicators have been agreed.
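A toy version of the detect-and-filter step can be sketched with a keyword rule. Real spam filters rely on statistical and machine-learning signals; the marker phrases below are invented purely for illustration.

```python
# Toy sketch of keyword-based spam filtering. Real filters combine many
# statistical signals; this marker list is invented for illustration.

SPAM_MARKERS = {"free money", "act now", "guaranteed winner"}

def is_spam(message):
    """Flag a message if it contains any known spam marker phrase."""
    text = message.lower()
    return any(marker in text for marker in SPAM_MARKERS)

inbox = [
    "Meeting moved to 3pm",
    "FREE MONEY - act now, guaranteed winner!",
]
kept = [m for m in inbox if not is_spam(m)]
print(kept)  # ['Meeting moved to 3pm']
```

The glossary's point is that the same detect-and-filter architecture could, in principle, be applied to information disorder, once agreed criteria exist for what to flag.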
Trolling is the act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting conversation. Today, the term “troll” is most often used to refer to any person harassing or insulting others online. However, it has also been used to describe human-controlled accounts performing bot-like activities.
A troll farm is a group of individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion. One prominent troll farm was the Russia-based Internet Research Agency that spread inflammatory content online in an attempt to interfere in the U.S. presidential election.²⁴
Verification is the process of determining the authenticity of information posted by unofficial sources online, particularly visual media.²⁵ It emerged as a new skill set for journalists and human rights activists in the late 2000s, most notably in response to the need to verify visual imagery during the ‘Arab Spring’.
A VPN, or virtual private network, is used to encrypt a user’s data and conceal his or her identity and location. This makes it difficult for platforms to know where someone pushing disinformation or purchasing ads is located. It is also sensible to use a VPN when investigating online spaces where disinformation campaigns are being produced.
Download a PDF of this glossary.
1 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
2 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
3 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
4 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
5 Howard, P. N. & K. Bence (2016) Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum, COMPROP Research note, 2016.1, http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/06/COMPROP-2016-1.pdf
6 Ignatova, T.V., V.A. Ivichev & F.F. Khusnoiarov (December 2, 2015) Analysis of Blogs, Forums, and Social Networks, Problems of Economic Transition
7 Ghosh, D. & B. Scott (January 2018) #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America
8 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
9 Li, Y. Chang, M.C. Lyu, S. (June 11, 2018) In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking, Computer Science Department, University at Albany, SUNY
10 Ince, D. (2013) A Dictionary of the Internet (3 ed.), Oxford University Press
11 MacAllister, J. (2017) The Doxing Dilemma: Seeking a Remedy for the Malicious Publication of Personal Information, Fordham Law Review, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5370&context=fl
12 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
13 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking
14 Funke, D. (2018) Report: There are 149 fact-checking projects in 53 countries. That’s a new high, The Poynter Institute, https://www.poynter.org/news/report-there-are-149-fact-checking-projects-53-countries-thats-new-high
15 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
16 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
17 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
18 Gu, L., V. Kropotov & F. Yarochkin (2017) The Fake News Machine: How Propagandists Abuse the Internet and Manipulate the Public. Trend Micro, https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-howpropagandists-abuse-the-internet.pdf
19 Dawkins, R. (1976) The Selfish Gene. Oxford University Press.
20 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
21 Jack, C. (2017) Lexicon of Lies, Data & Society, https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf
22 Wardle, C. & H. Derakshan (September 27, 2017) Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
23 Hofileña, C. F. (Oct. 9, 2016) Fake accounts, manufactured reality on social media, Rappler, https://www.rappler.com/newsbreak/investigative/148347-fake-accounts-manufactured-reality-social-media
24 Office of the Director of National Intelligence. (2017). Assessing Russian activities and intentions in recent US elections. Washington, D.C.: National Intelligence Council, https://www.dni.gov/files/documents/ICA_2017_01.pdf.
25 Mantzarlis, A. (2015) Will Verification Kill Fact-Checking?, The Poynter Institute, https://www.poynter.org/news/will-verification-kill-fact-checking
By Claire Wardle, with research support from Grace Greason, Joe Kerwin & Nic Dias.
This material first appeared on First Draft and has been reproduced here with the author’s consent.