Brian Stelter Warns About Fake News: ‘Terrible New Age of Information Warfare’ (11/20/16)
Brian Stelter at CNN is doing an excellent nightly newsletter on media; it's free and a great read.
You can sign up here: http://cnn.us11.list-manage1.com/subscr ... 98033e792f

Empowering users to notice "fake news"
By Brian Stelter & the CNNMoney Media team
"Fake news" stories sow confusion. People in power, all around the world, benefit from confusion. So users should outsmart them. Refuse to be confused.
That was my message on Sunday's "Reliable Sources." Now it looks like #RefuseToBeConfused has become a hashtag. There has been a ton of conversation about "fake news" all weekend long, partly spurred by a pair of must-read stories in the NYT and the WashPost.
NYT's Sapna Maheshwari showed how made-up claims go viral by tracing one Texas man's false tweet about anti-Trump protesters. That's the bottom-up approach -- one fib on Twitter becomes a story on dozens of web sites. WashPost's Terrence McCoy documented the top-down approach -- how two unemployed restaurant workers started a web site that lies to Trump fans every single day. Seriously, please read McCoy's story, all the way to the end: http://www.nytimes.com/2016/11/20/busin ... 98033e792f
My take: "Fake news" is a symptom of a disease

"We're the new yellow journalists"
"We're the new yellow journalists," Paris Wade, 26, told McCoy. His experience running "Liberty Writers News" has taught him that "violence and chaos and aggressive wording is what people are attracted to." Business partner Ben Goldman says it's about "shock value." Goldman wrote up a story saying "it was a literal Hell Storm at DNC headquarters today" and laughed at it. "God, I just know everything about this statement is so wrong. What is a hell storm?"
"Fake news" sources like "Liberty Writers News" are just a symptom of a disease. The disease is distrust. The folks who click on these links and share these stories don't trust real sources, and I don't know if/when that will change. On Sunday's "Reliable Sources," I asserted that we are entering a new age of information warfare, and it's being fought right on your phone. Here are the main points from my essay:
— We're beyond just "red news/blue news" now. We are in an environment where some people are choosing to be colorblind...
— We need a lot more research to understand why online lies are so appealing to some voters...
— Trump himself has been fooled by fake news. Remember when he said, "All I know is what's on the Internet"?
— Media literacy is part of the solution. The more media-literate you are, the less likely you will be tricked by propaganda...
— Journalism is also part of the solution. As an industry, we have to redouble our efforts to restore our credibility...
— But these are not full or satisfying solutions. I wish I knew 'em, but I don't. How does this end? Is the U.S. moving into an authoritarian media climate, more like Russia or China, where no one really trusts anything?
Mark Zuckerberg, in a Facebook post over the weekend, says "we take misinformation seriously..." Jim Rutenberg responds to Zuckerberg and says "Truth doesn’t need arbiters. It needs defenders. And it needs them now more than ever..." Margaret Sullivan says Facebook should hire an "executive editor..." Jack Shafer says "fake news, which the supermarket tabloids once excelled at, fills a market need for frivolous hyper-excitement. This need will never vanish..." And Stephen Colbert says "the fact that they call this 'fake news' upsets me because this is just lying..."
There have been a lot of discussions and meetings, and finally Twitter is taking some steps. Del Harvey (@delbius) is head of Trust & Safety at Twitter. It's a start.
https://blog.twitter.com/2016/progress- ... line-abuse

Progress on addressing online abuse
Tuesday, November 15, 2016 | By Twitter (@twitter) [14:00 UTC]
Twitter is the fastest way to see what’s happening and what everyone is talking about. What makes Twitter great is that it’s open to everyone and every opinion. We’ve seen a growing trend of people taking advantage of that openness and using Twitter to be abusive to others.
The amount of abuse, bullying, and harassment we’ve seen across the Internet has risen sharply over the past few years. These behaviors inhibit people from participating on Twitter, or anywhere. Abusive conduct removes the chance to see and share all perspectives around an issue, which we believe is critical to moving us all forward. In the worst cases, this type of conduct threatens human dignity, which we should all stand together to protect.
Because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct. We took a step back to reset and take a new approach, find and focus on the most critical needs, and rapidly improve. There are three areas we’re focused on, and happy to announce progress around today: controls, reporting, and enforcement.
Twitter has long had a feature called “mute” which enables you to mute accounts you don’t want to see Tweets from. Now we’re expanding mute to where people need it the most: in notifications. We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days. This is a feature we’ve heard many of you ask for, and we’re going to keep listening to make it better and more comprehensive over time.
Our hateful conduct policy prohibits specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. Today we’re giving you a more direct way to report this type of conduct for yourself, or for others, whenever you see it happening. This will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.
And finally, on enforcement, we’ve retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program. We’ve also improved our internal tools and systems in order to deal more effectively with this conduct when it’s reported to us. Our goal is a faster and more transparent process.
We don’t expect these announcements to suddenly remove abusive conduct from Twitter. No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn.
Thank you for choosing Twitter to amplify your voice to the world. We honor our role in protecting your right to speak freely, and our collective responsibility to human dignity.
NYTimes story on it: http://www.nytimes.com/2016/11/16/techn ... v=top-news
Racists, bigots and haters should be contained in Sewer Twitter.
From Facebook and Google:
More: http://www.reuters.com/article/us-alpha ... SKBN1392MM

Google, Facebook move to restrict ads on fake news sites
By Julia Love and Kristina Cooke | SAN FRANCISCO Tue Nov 15, 2016 | 9:53am EST
Alphabet Inc's Google (GOOGL.O) and Facebook Inc (FB.O) on Monday announced measures aimed at halting the spread of "fake news" on the internet by targeting how some purveyors of phony content make money: advertising.
Google said it is working on a policy change to prevent websites that misrepresent content from using its AdSense advertising network, while Facebook updated its advertising policies to spell out that its ban on deceptive and misleading content applies to fake news.
The shifts come as Google, Facebook and Twitter Inc (TWTR.N) face a backlash over the role they played in the U.S. presidential election by allowing the spread of false and often malicious information that might have swayed voters toward Republican candidate Donald Trump.
The issue has provoked a fierce debate within Facebook especially, with Chief Executive Mark Zuckerberg insisting twice in recent days that the site had no role in influencing the election.
Facebook's steps are limited to its ad policies, and do not target fake news sites shared by users on their news feeds.
In other threads, we've been talking about fake news sites and their role in the election of the Orange Menace, along with the willful confusion of Americans. There are discussions underway to create countersites (not with fake news, but with a more aggressive tone and facts). This may be worth its own topic as things shake out.
Several groups I'm on have been working with Twitter and Facebook, and Zuckerberg just announced some of what's getting worked on. It's important to report fake news; anyone on FB can do that. Teams are being retrained, indexing is being worked on, and reporting is going to a new group now. It's been busy but going well.
Fight back against information warfare and propaganda.