Real Versus Fake Fake News

It's not so easy to tell which is which

By William M. Briggs. Published on January 3, 2018

On the last day of bad old last year, Lake Superior State University released its 43rd annual List of Words Banished from the Queen’s English for Misuse, Overuse and General Uselessness.

The ripest for excision was “fake news.” “Let that sink in.”

I mean, “let that sink in” was another “impactful” phrase needing banning. As was (can we get an Amen?) “impactful.” (Most banned words are those that fill the air in corporate meetings.)

“Fake news” has to go not because there is no such thing, but because (a) people have the habit of calling real but unwanted or undesirable news “fake,” and (b) defining it isn’t so easy.

“Fake news” is thus like bad yet award-winning art. You know it when you see it, but good luck developing an unambiguous definition.

Facebook Head Fake

Facebook discovered the second point the hard and expensive way. They had been testing “fake news” detection algorithms by putting “disputed” flags on certain stories. But a funny thing happened. The red flags had “the reverse effect of making people want to click [stories] even more.”

An employee wrote, “Putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”

This doesn’t mean the complete disappearance of “fake news” from Facebook. On stories meeting their selection criteria for fakiness, they will link counter stories called “related articles.” Call this the he-said-she-said approach.

Facebook’s wanting to weaken “deeply held beliefs” is curious. It implies that Facebook has stored in its massive computer banks a list of correct beliefs against which it can compare its customers’ false but “deeply held beliefs.” When a customer displays wrong-think, Facebook can counter with a “related article” of right-think. And thus “raise awareness.”

They will soon learn that this is no easy task either. Discovering the truth in news reports is not only not simple. In many cases, it’s not even possible.

Computer Alert!

Not that people aren’t trying.

Some college kids think they have developed a browser plug-in that can alert users to “fake news.” And a group of folks at the Fake News Challenge believe they can harness “artificial intelligence” (statistical models that have undergone dull-knifed plastic surgeries and name changes) to identify made-up stories with “an intention to deceive.” Which is to say, propaganda.

It is charming, the simple faith many have in the ability of computers to mimic human thinking. Yet all these algorithms can do is note patterns in data and, when fed a new observation, classify it into one of those patterns. That means somebody has to create a list of news stories that have been placed, without error or controversy, into “truth” and “propaganda” bins.
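For the curious, here is a minimal sketch of the sort of classifier being proposed (Python with scikit-learn; the stories, labels, and bins are invented for illustration, not anybody’s actual system):

```python
# A minimal sketch of a pattern-matching "fake news" classifier.
# Assumes scikit-learn; the training stories and labels below are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Somebody must first sort stories into bins by hand. Every label
# here is a human judgment call, which is where the bias enters.
stories = [
    "Senator unveils new infrastructure bill",
    "Miracle pill cures all known diseases overnight",
    "Central bank raises interest rates by a quarter point",
    "Secret world government confirmed by anonymous insider",
]
labels = ["truth", "propaganda", "truth", "propaganda"]

# The model memorizes word patterns from the labeled bins; it has
# no notion of truth beyond those patterns.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(stories, labels)

# A genuinely novel story matches neither pattern well, yet the
# classifier is forced to shove it into one of the two bins.
print(model.predict(["Revolution erupts while cameras film a white van"]))
```

The machine never judges truth; it only measures how closely a new story’s word patterns resemble whichever bin a human chose beforehand.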


Place a reactionary and a progressive in a room and see if they can, without disagreement, classify any list of prominent news stories from, say, CNN. Since that can’t be done, bias will and must be introduced into the algorithms.

This bias is fine if your goal, like Facebook’s, is to disentrench “deeply held beliefs.” But even then, no algorithm will classify perfectly; no, not even a deep-learning artificial intelligence neural quantum computer.

For one, there are sins of omission, which no algorithm can detect. How do you know, in advance, that a news agency should be covering a revolution when instead it is showing shots of a white van? You can’t classify a non-existent story as “fake news,” or as any kind of news.

Computers Can’t Think

Most limiting is that the algorithms do not work the way human thinking does. People grasp stories. We make mistakes, but we comprehend the words and meaning behind the words in a context beyond the ken of any computer. Computers are only mindless adding machines. Novelty will thus always stump any fake-news algorithm.

As he was heading out the door, Google’s Eric Schmidt spoke on this. He understood that when Google’s search algorithm is presented with “diametrically opposed viewpoints … [it] can not identify which is misinformation and which is truth.”

In the United States’ current polarized political environment, the constant publishing of articles with vehemently opposing arguments has made it almost impossible for Google to rank information properly.

Let’s say that this group believes Fact A and this group believes Fact B and you passionately disagree with each other and you are all publishing and writing about it and so forth and so on. It is very difficult for us to understand truth.

It isn’t so easy for us humans, either. Which is why there are still disputes in, say, philosophy, even after thousands of years of trying to sort them out. And which is why there will always be disagreements over what is “fake news” and what isn’t.
