Guardrails for Describing Fake News, Misinformation, and Disinformation

Perhaps the most distasteful national omelet we’ve been served during the past four years has been the one that has mixed together an unsavory combination of three ingredients: fake news, misinformation, and disinformation.

While many express growing concern and look for ways to deal with these phenomena, that may be difficult – if not impossible – as long as we use the terms without any agreed-upon definitions that set useful boundaries and are easy for the public at large to understand.  The alternative is to continue repeating the mantra “fake news-misinformation-disinformation” so often that it loses meaning, or to use the terms interchangeably until they become permanently blurred in our minds.

Against that background, here’s my approach to developing a necessary separation of these three distinct concepts – one that can be useful in sorting out the root problem whenever any of these terms, all too often used pejoratively, is employed to describe a particular type of communication.

Fake News

Fake news should refer to a communication in any format – print, video, or online – but only if generated by the news media itself, which comprises professional journalists who have chosen the career path of reporting.  This starting point would exclude vast amounts of communication from meeting the definition, so the term could not simply be applied to anything said by anyone.

Doctors are part of the medical community, lawyers are part of the legal community, and journalists are part of the news community.  How many times have you heard someone say fake medicine when they disagreed with a diagnosis or fake law when they disagreed with a legal argument?  Fake news should be equally rare.

Granted, those professions have licensing requirements, but they usually are enforced by groups within them that establish ethical guidelines to be followed as a condition of being licensed.  Journalists are not licensed, of course, so it is a bit more difficult to use that boundary as a basis for their professional distinction.  But where fake news is concerned, it often is attributed to the news organization collectively – CNN, NBC, The New York Times, and the like.  In turn, this organizational focus makes it easier to ascertain a defined journalistic community at the outset, and many have transparent ethical guidelines, too.

The Associated Press is an excellent baseline.  Founded in 1846, it is an independent global news organization dedicated to factual reporting.  More than half of the world’s population sees AP journalism every day, with reporting from 250 locations worldwide.  About 15,000 news outlets are part of this AP community, and each of them should be considered a bona fide news medium.  Conversely, if a source of communication is not in the AP universe, it would be inaccurate and unfair to refer to it as fake news.

Other objective criteria also can be applied to determine the news part of fake news.  For example, any media organization that has been issued a hard press pass by virtue of membership in the White House Correspondents’ Association or a comparable group at the state or local level also would qualify under my litmus test.

My car dealer or grocery or bank certainly would not fit within this framework, and it would be a bit ridiculous to yell “fake news” if my repair bill, sales receipt, or account statement included erroneous information.  Yet without any boundaries, it’s easy to shout “fake news” without even thinking whether what is being complained about is news in any normative sense.  And since these three terms should have some distinctive meaning, fake news also should not be used when misinformation or disinformation is the more applicable concept, as explained below.


Misinformation

Misinformation is perhaps the largest category at issue.  I think any communication on social media – from anyone to anyone – may wind up being called misinformation if it is inaccurate in any way.  Yet that would be too broad-brushed an approach.

We tend to thrive on sending and receiving gossip, rumors, even biting satire that surely is not intended to be judged for its accuracy.  So misinformation should be limited to a smaller subset that is based on information, particularly information that relies on verifiable data rather than opinions.

Misinformation is really mistaken information, and it’s not essential to characterize the source as benign or malignant in order to have that information corrected.  The problem in social media is that a cry of misinformation too often turns into a finger-pointing exercise aimed at denigrating the motives of the person who communicated it.

This accelerates as more people are brought into the circle: the original misinformation becomes further distorted as each new person picks up on the mistake, adds another layer of information, and passes it on.  When the battle cry goes out, anyone who characterizes a post as misinformation should be prepared to point out the nature of the mistake and provide a correction.  This would help minimize the weaponization of the label as a way to demonize or demean the conveyor of that information.


Disinformation

This descriptive category, like misinformation, also should be applied specifically to social media.  In contrast, though, it should be focused on foreign governments and groups working on their behalf, which have the intent of providing misleading information that is designed to create confusion or dissension in our electoral process or other aspects of national security.

The behavior of these bad actors is really the central issue here, so that concrete responses by the United States are the best way to combat this threat to democratic norms.  This will require robust governmental offensive and defensive measures through diplomatic channels, including targeted counter-disinformation campaigns and sanctions when specific cases of disinformation arise.  The purveyors of disinformation are largely known, including Russia, Iran, North Korea, and China, along with terrorist organizations.  Our intelligence agencies are well equipped to identify sources and methods of disinformation, and they can, with global allies, confront current threats and deter future ones.

Looking Ahead

My proposal of a new classification system for these three concepts is illustrative rather than comprehensive, and I hope that it will be refined through a broader discussion within our communities of interest.  The best first step will be to recognize that the labels being applied to these widespread communication phenomena need corresponding definitional guardrails if we are to develop the types of effective tailored responses that each area requires.

Stuart N. Brotman is a Distinguished Fellow at The Media Institute and is a member of the Institute’s First Amendment Advisory Council.  He is the author of Privacy’s Perfect Storm: Digital Policy for Post-Pandemic Times.  This article appeared in AEJMC Law & Policy Division Media Law Notes, Spring 2021, Vol. 49, Issue 2.