Bots rampant on Twitter, study says, as network tries to thwart devious tweets
There are a lot of bots out there on Twitter.
That's the message from a new Pew Research Center study, out Monday, which found that two-thirds of tweets that link to digital content are generated by bots — accounts powered by automated software, not real tweeters.
Researchers analyzed 1.2 million tweets posted last summer (July 27-Sept. 11), most of which linked to one of more than 2,300 popular websites devoted to sports, celebrities, news, business and organizations.
Two-thirds (66%) of those tweets were posted or shared by bots, and an even larger share — 89% of links leading to aggregation sites that compile stories posted online — were posted by bots, the study says.
The findings suggest that bots "play a prominent and pervasive role in the social media environment,” said Aaron Smith, associate research director at Pew, which used a "Botometer" developed at the University of Southern California and Indiana University to analyze links and determine whether each was posted by an automated account.
New York Mag
The Internet Apologizes …
Even those who designed our digital world are aghast at what they created. A breakdown of what went wrong — from the architects who built it. ...
Why, over the past year, has Silicon Valley begun to regret the foundational elements of its own success? The obvious answer is November 8, 2016. For all that he represented a contravention of its lofty ideals, Donald Trump was elected, in no small part, by the internet itself. Twitter served as his unprecedented direct-mail-style megaphone, Google helped pro-Trump forces target users most susceptible to crass Islamophobia, the digital clubhouses of Reddit and 4chan served as breeding grounds for the alt-right, and Facebook became the weapon of choice for Russian trolls and data-scrapers like Cambridge Analytica. Instead of producing a techno-utopia, the internet suddenly seemed as much a threat to its creator class as it had previously been their herald.
What we’re left with are increasingly divided populations of resentful users, now joined in their collective outrage by Silicon Valley visionaries no longer in control of the platforms they built. The unregulated, quasi-autonomous, imperial scale of the big tech companies multiplies any rational fears about them — and also makes it harder to figure out an effective remedy. Could a subscription model reorient the internet’s incentives, valuing user experience over ad-driven outrage? Could smart regulations provide greater data security? Or should we break up these new monopolies entirely in the hope that fostering more competition would give consumers more options?
Silicon Valley, it turns out, won’t save the world. But those who built the internet have provided us with a clear and disturbing account of why everything went so wrong — how the technology they created has been used to undermine the very aspects of a free society that made that technology possible in the first place. ...
How It Went Wrong, in 15 Steps
Start With Hippie Good Intentions …
“Did We Create This Monster?” How Twitter Turned Toxic ...
For all the ways in which the Imposter Buster saga is unique, it’s also symptomatic of larger issues that have long bedeviled Twitter: abuse, the weaponizing of anonymity, bot wars, and slow-motion decision making by the people running a real-time platform. These problems have only intensified since Donald Trump became president and chose Twitter as his primary mouthpiece. The platform is now the world’s principal venue for politics and outrage, culture and conversation — the home for both #MAGA and #MeToo.
This status has helped improve the company’s fortunes. Daily usage is up a healthy 12% year over year, and Twitter reported its first-ever quarterly profit in February, capping a 12-month period during which its stock doubled. Although the company still seems unlikely ever to match Facebook’s scale and profitability, it’s not in danger of failing. The occasional cries from financial analysts for CEO Jack Dorsey to sell Twitter or from critics for him to shut it down look more and more out of step. ...
Twitter is not alone in wrestling with the fact that its product is being corrupted for malevolence: Facebook and Google have come under heightened scrutiny since the presidential election, as more information comes to light revealing how their platforms manipulate citizens, from Cambridge Analytica to conspiracy videos. The companies’ responses have been timid, reactive, or worse. “All of them are guilty of waiting too long to address the current problem, and all of them have a long way to go,” says Jonathon Morgan, founder of Data for Democracy, a team of technologists and data experts who tackle governmental social-impact projects.
The stakes are particularly high for Twitter, given that enabling breaking news and global discourse is key to both its user appeal and business model. Its challenges, increasingly, are the world’s.
How did Twitter get into this mess? Why is it only now addressing the malfeasance that has dogged the platform for years? “Safety got away from Twitter,” says a former VP at the company. “It was Pandora’s box. Once it’s opened, how do you put it all back in again?”
Cambridge Analytica: how did it turn clicks into votes?
How do 87m records scraped from Facebook become an advertising campaign that could help swing an election? What does gathering that much data actually involve? And what does that data tell us about ourselves?
The Cambridge Analytica scandal has raised question after question, but for many, the technological USP (unique selling point) of the company, which announced last week that it was closing its operations, remains a mystery.
For those 87 million people probably wondering what was actually done with their data, I went back to Christopher Wylie, the ex-Cambridge Analytica employee who blew the whistle on the company’s problematic operations in the Observer. According to Wylie, all you need to know is a little bit about data science, a little bit about bored rich women, and a little bit about human psychology...
Step one, he says, over the phone as he scrambles to catch a train: “When you’re building an algorithm, you first need to create a training set.” That is: no matter what you want to use fancy data science to discover, you first need to gather data the old-fashioned way. Before you can use Facebook likes to predict a person’s psychological profile, you need to get a few hundred thousand people to do a 120-question personality quiz.
The “training set” refers, then, to that data in its entirety: the Facebook likes, the personality tests, and everything else you want to learn from. Most important, it needs to contain your “feature set”: “The underlying data that you want to make predictions on,” Wylie says. “In this case, it’s Facebook data, but it could be, for example, text, like natural language, or it could be clickstream data” — the complete record of your browsing activity on the web. “Those are all the features that you want to [use to] predict.”
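Wylie's distinction between a training set and a feature set can be illustrated with a toy sketch. Everything here is hypothetical — the page names, the labels, and the simple nearest-centroid classifier are illustrative stand-ins, not Cambridge Analytica's actual data or model. The idea is just: quiz-derived trait labels plus likes form the training set, the likes alone are the feature set, and a fitted model then predicts the trait for a new user from likes alone.

```python
# Hypothetical pages a user can "like" — these form the feature set.
PAGES = ["page_hiking", "page_poker", "page_news", "page_astrology"]

# Training set: (likes, trait label from a personality quiz) pairs.
training_set = [
    ({"page_hiking", "page_news"}, "open"),
    ({"page_hiking", "page_astrology"}, "open"),
    ({"page_poker"}, "closed"),
    ({"page_poker", "page_news"}, "closed"),
]

def to_vector(likes):
    """Encode a user's likes as a 0/1 feature vector over known pages."""
    return [1.0 if p in likes else 0.0 for p in PAGES]

def centroids(data):
    """Average feature vector per trait label (the 'model' we fit)."""
    sums, counts = {}, {}
    for likes, label in data:
        acc = sums.setdefault(label, [0.0] * len(PAGES))
        for i, x in enumerate(to_vector(likes)):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: [x / counts[lab] for x in acc] for lab, acc in sums.items()}

def predict(likes, cents):
    """Assign the trait whose centroid is nearest (squared distance)."""
    v = to_vector(likes)
    return min(cents, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(v, cents[lab])))

cents = centroids(training_set)
print(predict({"page_hiking"}, cents))  # → "open": resembles the hiking/news group
```

A real pipeline would replace the centroid step with a proper statistical model trained on hundreds of thousands of quiz-takers, but the structure is the same: labeled examples in, a likes-to-trait predictor out.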
I would add "truth." I see so many conspiracy-theory posts, doctored videos, etc., even from reasonably well-educated people, that it's cringeworthy. No one vets something before liking or sharing. And then there are small things, like authoritative comments, even on innocuous subjects, that are just wrong, but because they seem (and may be) well intended, everyone accepts them. This has always happened (see "old wives' tales") but not on a massive scale.