Social media, society and technology 2019 –

May 29, 2020  //  Media

Social media, society and technology 2017 – March 2019

28-29 May
Trump’s Audience of One
The new executive order targeting social-media companies isn’t really about Twitter. It’s about Mark Zuckerberg.
(The Atlantic) It’s important to pay attention to what the president is doing, but not because the legal details of this order matter at all. Trump is unlikely to repeal Section 230 or take any real action to curb the power of the major social-media companies. Instead, he wants to keep things just the way they are and make sure that the red-carpet treatment he has received so far, especially at Facebook, continues without impediment. He definitely does not want substantial changes going into the 2020 election. The secondary aim is to rile up his base against yet another alleged enemy: this time Silicon Valley, because there needs to be an endless list of targets in the midst of multiple failures.
Twitter Adds Warnings to Trump and White House Tweets, Fueling Tensions
Twitter said the tweets, which implied that protesters in Minneapolis could be shot, glorified violence — the first time it had applied such warnings to any public figure’s posts.
Defying Trump, Twitter Doubles Down on Labeling Tweets
Twitter continued fact-checking posts even as President Trump threatened to limit protections for social media companies.
Trump’s assault on truth takes an ugly new turn
The executive order attacks Section 230 of the Communications Decency Act of 1996, which protects Internet companies from being “treated as a publisher” of information on their platforms that is provided by someone else. This shields them from liability for much of the content posted there.
In attacking this provision, Trump is advancing an argument from conservatives — that by fact-checking Trump, Twitter is censoring him and has veered into the role of publisher (by adding its own fact-checking content), and therefore no longer deserves that special liability protection.
But this argument is absurd. As Sen. Ron Wyden (D-Ore.), the author of this provision, has pointed out, it allows social media companies to police certain truly egregious content for socially beneficial reasons without being held liable for, say, defamatory content that was left on the platform, allowing them to feature more voices with less oversight.
Trump is doubly wrong about Twitter
On Tuesday, President Trump claimed — on Twitter, no less — that Twitter is “stifling FREE SPEECH,” thus suggesting that Twitter is violating the First Amendment. As usual, Trump is wrong on the law, but this time he’s even more wrong than usual. There is someone violating the First Amendment on Twitter, but it’s not Twitter — it’s Trump. What’s more, his threat on Wednesday to shut down Twitter altogether would mean violating the First Amendment in new ways.
Trump is utterly mistaken in claiming that Twitter is violating the First Amendment — or even that Twitter can violate the First Amendment. Prompting Trump’s outburst was the platform’s first-ever attachment of warnings to two of Trump’s tweets encouraging users to “get the facts about mail-in ballots.” Clicking the warning leads to a news story indicating that “Trump makes unsubstantiated claim that mail-in ballots will lead to voter fraud.” Attaching these warnings, Trump claimed, was Twitter’s First Amendment sin.
Here’s the irony: While Twitter isn’t using its platform to violate the First Amendment, Trump is. That’s not just our view; it’s what a federal appeals court held in a landmark decision last year. The court ruled that Trump was violating the First Amendment by blocking on Twitter those whose views he disliked.

9 May
Virus Conspiracists Elevate a New Champion
A video showcasing baseless arguments by Dr. Judy Mikovits, including attacks on Dr. Anthony Fauci, has been viewed more than eight million times in the past week.
(NYT) In the 26-minute video, the woman asserted that Dr. Fauci, the director of the National Institute of Allergy and Infectious Diseases and a leading voice on the coronavirus, had buried her research about how vaccines can damage people’s immune systems. It is those weakened immune systems, she declared, that have made people susceptible to illnesses like Covid-19.
The video, a scene from a longer dubious documentary called “Plandemic,” was quickly seized upon by anti-vaccinators, the conspiracy group QAnon and activists from the Reopen America movement, generating more than eight million views. And it has turned the woman — Dr. Judy Mikovits, 62, a discredited scientist — into a new star of virus disinformation.

6 May
Feeling Burnt Out? Researchers Explain Why Zoom Meetings Can Be So Draining
Libby Sander & Oliver Bauman
For many of us, working from home during COVID-19 has meant we are spending a lot of time on video meeting applications like Zoom. The effects of this have taken us by surprise.
(The Conversation) Having giant heads staring at us up close for long periods can be off-putting for a lot of us. Never mind that we feel we should fix our iso-hair (COVID mullet anyone?), put on makeup, or get out of our pyjamas.
So why are online meetings more tiring than face-to-face ones?
People feel like they have to make more emotional effort to appear interested, and in the absence of many non-verbal cues, the intense focus on words and sustained eye contact is exhausting.
Meetings in person are not only about the exchange of knowledge, they are also important rituals in the office. Rituals provide comfort, put us at ease, and are essential in building and maintaining rapport.
Face-to-face meetings are also important mechanisms for the communication of attitudes and feelings among business partners and colleagues.

29 April
The dangerous global flood of misinformation surrounding COVID-19
(PBS Newshour) U.S. intelligence agencies now also believe that false text messages sent last month to many Americans about a nationwide lockdown were pushed by Chinese operatives aiming to sow discord.
And there are the recent nationwide protests of stay-at-home orders that President Trump has at times encouraged. The seemingly organic movement was, in fact, organized and driven by far-right Facebook groups that have become a hotbed for conspiracy theories.
Social media giants, including Facebook, Twitter and YouTube, have all faced growing criticism about their role in the spread of misinformation.
Facebook, which is a funder of the “NewsHour,” now alerts users when they interact with false coronavirus content. On another popular platform, Reddit, users have long policed each other, to varying degrees of success.

29 April
Why Zoom Is Terrible
There’s a reason video apps make you feel awkward and unfulfilled.
(NYT) Last month, global downloads of the apps Zoom, Houseparty and Skype increased more than 100 percent as video conferencing and chats replaced the face-to-face encounters we are all so sorely missing. Their faces arranged in a grid reminiscent of the game show “Hollywood Squares,” people are attending virtual happy hours and birthday parties, holding virtual business meetings, learning in virtual classrooms and having virtual psychotherapy.
But there are reasons to be wary of the technology, beyond the widely reported security and privacy concerns. Psychologists, computer scientists and neuroscientists say the distortions and delays inherent in video communication can end up making you feel isolated, anxious and disconnected (or more than you were already). You might be better off just talking on the phone.
The problem is that the way the video images are digitally encoded and decoded, altered and adjusted, patched and synthesized introduces all kinds of artifacts: blocking, freezing, blurring, jerkiness and out-of-sync audio. These disruptions, some below our conscious awareness, confound perception and scramble subtle social cues. Our brains strain to fill in the gaps and make sense of the disorder, which makes us feel vaguely disturbed, uneasy and tired without quite knowing why.

31 March
The video apps we’re downloading amid the coronavirus pandemic
(WEF) As a significant part of the world population is currently on lockdown in an attempt to contain the coronavirus pandemic, people are turning to technology to work, communicate and stay in touch with their loved ones.
Unsurprisingly, workplace communication tools such as Slack and Teams have seen a jump in usage as working from home has become the new norm in recent weeks. People are also making use of similar tools in their personal lives, however, leading to a spike in downloads of video chat apps.
While Zoom is definitely the rising star among video chat apps, Skype remains far ahead in terms of active users. According to Priori Data, the Microsoft-owned service had 59 million daily active users on its iOS and Android apps in March, compared to just 4.3 million for Zoom. It should be noted, though, that many people also use Skype for other forms of communication, while Zoom specializes in video conferencing, so the comparison may not be entirely fair.

19 March
After Truth: how ordinary people are ‘radicalized’ by fake news
An eye-opening documentary traces the terrifying trajectory of disinformation, from Jade Helm to Pizzagate to Russian interference in the 2016 election
(The Guardian) After Truth tracks the influence of disinformation – a deliberately disseminated falsehood, as opposed to “misinformation”, which is an unintentional factual error – from a niche topic in 2015 through Russian weaponization in the 2016 election to ubiquity in the Trump presidency. “We’re looking at some of the biggest lies that continue to manipulate people’s imaginations, even after they’ve been thoroughly debunked and clarified,” Andrew Rossi, the director, told the Guardian.
Though the term “fake news” dates back to 2014 – when BuzzFeed News’s Craig Silverman popularized it to describe false stories about the Ebola crisis – After Truth begins in 2015, with online conspiracies around a military drill in Bastrop county, Texas. Known as Jade Helm, the theories, harnessed by YouTube personalities, spilled into the town’s real life – now-eerie footage depicts an army spokesperson shouted down by people who dismiss his reassurances as propaganda; instead, they believed an internet personality-driven theory that underground tunnels connected the town’s Walmarts to the military base. The hysteria caught the attention of Texas governor Greg Abbott, who treated it seriously. Russia took notice, and started replicating the pattern heading into America’s election year.

16 March
So We’re Working From Home. Can the Internet Handle It?
With millions of people working and learning from home during the pandemic, internet networks are set to be strained to the hilt.
(NYT) As millions of people across the United States shift to working and learning from home this week to limit the spread of the coronavirus, they will test internet networks with one of the biggest mass behavior changes that the nation has experienced.
That is set to strain the internet’s underlying infrastructure, with the burden likely to be particularly felt in two areas: the home networks that people have set up in their residences, and the home internet services from Comcast, Charter and Verizon that those home networks rely on.
That infrastructure is generally accustomed to certain peaks of activity at specific times of the day, such as in the evening when people return from work and get online at home. But the vast transfer of work and learning to people’s homes will show new heights of internet use, with many users sharing the same internet connections throughout the day and using data-hungry apps that are usually reserved for offices and schools.
That may challenge what are known as last-mile services, which are the cable broadband and fiber-based broadband services that pipe the internet into homes. These tend to provide a very different internet service from what’s available in offices and schools, which typically have “enterprise grade” internet broadband service. In broad terms, many offices and schools essentially have the equivalent of a big pipe to carry internet traffic, compared with a garden hose for most homes.
On top of that, home networks — such as the Wi-Fi routers that residents set up — can be finicky. Many consumers have broadband plans with much lower capacity than in the workplace. And when many people are loaded onto a single Wi-Fi network at the same time to stream movies or to do video conferencing, that can cause congestion and slowness.

28 – 29 February
Tech firms take a hard line against coronavirus myths. But what about other types of misinformation?
(WaPo) As misinformation about the coronavirus has spread online, YouTube has steered its viewers to credible news reports. Facebook has swept away some posts about phony cures. And Amazon has removed 1 million products related to dubious health claims.
These efforts have drawn praise from misinformation experts, who long have complained that tech companies should do more to confront misleading claims about other subjects, such as the Holocaust and fake cancer cures.
But this praise has come with a caveat: If tech companies can move to promote truth on a fast-moving public-health crisis, why do they struggle to do the same on other important issues?
Millions of tweets peddled conspiracy theories about coronavirus in other countries, an unpublished U.S. report says
The study, which said it excluded the United States, found early signs that some of the activity may have been coordinated and inauthentic

20 February
One of the Largest Disinformation Campaigns Ever Conducted
(PBS) When McKay Coppins, a journalist for The Atlantic, created a Facebook page so he could follow pro-Trump social media accounts and communicate with online Trump supporters, he uncovered something remarkable: a campaign-coordinated effort to undermine journalists and the mainstream press on a mass scale. He told Hari Sreenivasan what he learned about the campaign’s aim to spread disinformation, discredit journalists, and even dismantle the mainstream media.
HS: “On Election Day, anonymous text messages direct voters to the wrong locations or maybe even circulate rumors of security threats. Deepfakes of the Democratic nominee using racial slurs crop up faster than social media platforms can remove them. As news outlets scramble to correct the inaccuracies, hordes of Twitter bots respond by smearing and threatening reporters. “Meanwhile, the Trump campaign has spent the final days of the race pumping out Facebook ads at such a high rate that no one can keep track of what they’re injecting into the bloodstream. “After the first round of exit polls is released, a mysteriously sourced video surfaces purporting to show undocumented immigrants at the ballot box. Trump begins retweeting rumors of voter fraud and suggests that Immigration and Customs Enforcement officers should be dispatched to polling stations. “‘Are illegals stealing the election?’ reads the Fox News chyron. ‘Are Russians behind the false videos?’ demands MSNBC.”
If it was 10 years ago, I’d say that sounds like science fiction. Why is this so plausible to you after what you have been looking into?

3 February
Conservatives push false claims of voter fraud on Twitter as Iowans prepare to caucus
The episode showcases social media’s hands-off approach and the possible perils ahead for a divisive election season
Early Monday, Iowa’s secretary of state, Republican Paul Pate, weighed in to debunk the allegation.
“False claim,” he wrote. “Here is a link to the actual county-by-county voter registration totals. They are updated monthly and available online for everyone to see.”
He included a link to his office’s website, as well as the hashtag #FakeNews.
Pate’s post gained virtually no amplification.
“The truth actually gets retweeted almost never, and the things that are the most inflammatory get the most play,” said Ann Ravel, the director of the Digital Deception project at MapLight, which tracks money in politics. She previously served on the Federal Election Commission.
Ravel accused tech companies of failing to grapple with what she says is a form of voter suppression. She said such tweets have the effect of casting doubt on the legitimacy of the political process.
Stephen King quits Facebook over ‘flood of false information’ and privacy concerns
(WaPo) The prolific horror writer joined a chorus of criticism of the social media giant as it resists pressure to remove false claims from politicians. Facebook also opted last month to keep the tools that help politicians and other groups target its users, adding to fears it will mislead voters in the 2020 presidential race.
…  Facebook declined to remove a Trump campaign ad containing falsehoods despite a request from Democratic presidential contender Joseph Biden, saying political ads are not subject to its usual fact-checking process created after the 2016 election.
Sen. Elizabeth Warren (D-Mass.) used a pointed ad of her own to critique the policy, opening with: “Breaking news: Mark Zuckerberg and Facebook just endorsed Donald Trump for reelection.”
“You’re probably shocked, and you might be thinking, ‘how could this possibly be true?’ ” the ad read. “Well, it’s not. (Sorry.) But what Zuckerberg *has* done is given Donald Trump free rein to lie on his platform — and then to pay Facebook gobs of money to push out their lies to American voters.”
Facebook has defended its actions as upholding freedom of expression, with chief executive Mark Zuckerberg arguing that his company should not try to referee politicians’ statements even as he frets about “an erosion of truth.”

31 January
George Soros: Mark Zuckerberg Should Not Be in Control of Facebook
The social media company is going to get Trump re-elected — because it’s good for business
(NYT Opinion) “Facebook helped Trump to get elected and I am afraid that it will do the same in 2020.” I explained that there is a longstanding law — Section 230 of the Communications Decency Act — that protects social media platforms from legal liability for defamation and similar claims. Facebook can post deliberately misleading or false statements by candidates for public office and others, and take no responsibility for them.
I went on to say that there appears to be “an informal mutual assistance operation or agreement developing between Trump and Facebook” in which Facebook will help President Trump to get re-elected and Mr. Trump will, in turn, defend Facebook against attacks from regulators and the media.
… In 2016, Facebook provided the Trump campaign with embedded staff who helped to optimize its advertising program. (Hillary Clinton’s campaign was also approached, but it declined to embed a Facebook team in her campaign’s operations.) Brad Parscale, the digital director of Mr. Trump’s 2016 campaign and now his campaign manager for 2020, said that Facebook helped Mr. Trump and gave him the edge. This seems to have marked the beginning of a special relationship.

2019

28 December
Is Social Media The New Tobacco?
If we set out to design a highly addictive platform that optimized the most toxic, destructive aspects of human nature, we’d eventually come up with social media.
by Charles Hugh Smith via OfTwoMinds blog,
(ZeroHedge.com) What are the full costs of the current addiction to social media? These costs are even more difficult to measure than the consequences of widespread addiction to nicotine, but they exist regardless of our unwillingness or inability to measure the costs.
Consider the devastating consequences of social media on teen suicides.
Then there’s all the lost productivity as social media addicts check their phones 150+ times a day, interrupting not just work or school but intimacy, up to and including sex.
… Compare the ease of social media to the traditional paths to social prominence: building a business or career to attain wealth, community service via leadership roles in organizations, gaining high visibility in conventional media via extraordinary good looks, athletic or artistic talent, etc. Each of these is an extremely demanding path, one that few people attain, and hence the relatively few at the top of the heap.
…social media holds out the promise that an average person can become larger than they are in real life with relatively modest tools (an Internet connection and a camera). … This promise of attaining higher social status without having to work incredibly hard at difficult accomplishments is very compelling.

26 August
Silicon Valley’s Crisis of Conscience
Where Big Tech goes to ask deep questions
By Andrew Marantz
(The New Yorker Magazine) Esalen is just outside Silicon Valley, so the executives who visit it have come from the likes of Intel and Xerox PARC—and, more recently, from Apple and Google and Twitter. Esalen’s board of trustees has included an early Facebook employee, a Google alumnus, and a former Airbnb executive. Presumably, had there been such conspicuous overlap between a countercultural think tank and captains of any other industry…there would have been an outcry, or at least some pointed questions. But Big Tech was supposed to be different. It was supposed to make the world a better place.
Then came Brexit, the 2016 election, and the Great Tech Backlash. “Donald Trump Won Because of Facebook,” a headline in New York declared. A law professor at Stanford published a paper that asked, “Can Democracy Survive the Internet?” Suddenly, a board with several Silicon Valley executives didn’t seem entirely unlike a board with several Atlantic City casino bosses. Even after it became apparent that Facebook posts were fuelling the Rohingya genocide in Myanmar, the company dithered for months before taking decisive action. Clearly, all was not in alignment.
For a long time, the prevailing posture of the Silicon Valley élite was smugness bordering on hubris. Now the emotional repertoire is expanding to include shame—or, at least, the appearance of shame. “They can’t decide whether they ought to feel like pariahs or victims, and they’re looking for places where they can work this stuff out,” a well-connected Silicon Valley organizer told me. “Not their boardrooms, where everyone tells them what they want to hear, and not in public, where everyone yells at them. A third place.”

1 August
Twitter Needs a Pause Button
Instantaneous communication can be destructive. We need to tweak our digital platforms to make time for extra eyes, cooler heads, and second thoughts.
(The Atlantic Magazine) Not long before the [Christchurch] attack, Justin Kosslyn, who was then an executive at Jigsaw, a technology incubator created by Google, had published an article on Vice.com called “The Internet Needs More Friction.” The internet, he argued, was built for instantaneous communication, but the absence of even brief delays in transmission had proved a boon to disinformation, malware, phishing, and other security threats. “It’s time to bring friction back,” he wrote. “Friction buys time, and time reduces systemic risk.”
For a long time, through the internet’s first and second generations, people naturally assumed that faster must be better; slowness was a vestige of a bygone age, a technological hurdle to be overcome. What they missed is that human institutions and intermediaries often impose slowness on purpose. Slowness is a social technology in its own right, one that protects humans from themselves.

28 May
Facebook’s False Standards for Not Removing a Fake Nancy Pelosi Video
(The New Yorker) Last week, when a doctored video of the Speaker of the House, Nancy Pelosi, began circulating on Facebook, it seemed like it would only be a matter of time before it was removed. After all, just one day before, Facebook proudly announced that it had recently removed 2.2 billion fake accounts between January and March as part of its expanded efforts to curb the platform’s circulation of misinformation. The video, which was manipulated to make Pelosi seem drunk and confused, is not a particularly sophisticated fake. But it was convincing enough that countless commenters believed it to be true and sent it spinning through cyberspace. At one point, there were seventeen versions of the video online; various iterations had jumped to Twitter and YouTube, and one was picked up by Fox News. The Fox News clip was then posted on Twitter by President Trump. Within hours, a version of the doctored video had been viewed more than two million times.  … Facebook refused to remove the Pelosi video because, according to Monika Bickert, the company’s head of global policy management, it does not violate the company’s community standards, even though it is demonstrably false.

14 May
Deepfakes are coming. We’re not ready.
By Brian Klaas
(WaPo) If 2016 was the election of “fake news,” 2020 has the potential to be the election of “deepfakes,” the new phenomenon of bogus videos created with the help of artificial intelligence. It’s becoming easier and cheaper to create such videos. Soon, those with even rudimentary technical knowledge will be able to fabricate videos that are so true to life that it becomes difficult, if not impossible, to determine whether they are real.
In the era of conspiracy theories, disinformation and absurd denials by politicians staring down seemingly indisputable facts, it is only a matter of time before deepfakes are weaponized in ways that poison the foundational principle of democracy: informed consent of the governed. After all, how can voters make appropriate decisions if they aren’t sure what is fact and what is fiction? Unfortunately, we are careening toward that moment faster than we think.
Deepfakes are created by something called a “generative adversarial network,” or GAN. GANs are technically complex, but operate on a simple principle. There are two automated rivals in the system: a forger and a detective. The forger tries to create fake content while the detective tries to figure out what is authentic and what is forged. Over each iteration, the forger learns from its mistakes. Eventually, the forger gets so good that it is difficult to tell the difference between fake and real content. And when that happens with deepfakes, those are the videos that are likely to fool humans, too.
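To make the forger-and-detective analogy concrete, here is a minimal sketch of a toy GAN, assuming PyTorch is available. It learns to imitate a one-dimensional Gaussian distribution rather than video frames; the network sizes, learning rate and target distribution are arbitrary choices for illustration, not anything drawn from real deepfake tools.

```python
# Toy GAN: a "forger" (generator) learns to imitate samples from a 1-D
# Gaussian while a "detective" (discriminator) learns to tell real from
# forged. Illustrative only; deepfake systems use far larger networks
# operating on video frames.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(                       # forger: noise -> fake sample
    nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(                   # detective: sample -> P(real)
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # "real" data: mean 4.0
    fake = generator(torch.randn(64, 8))

    # Detective update: label real samples 1, forged samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Forger update: try to make the detective call the fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The forger's samples should drift toward the real mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

Even in this toy setting the dynamic the article describes is visible: the detective improves first, and the forger’s output drifts toward the real distribution until the two become hard to tell apart.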

4 May
David Frum: Trump Attacks Facebook on Behalf of Racists and Grifters
But unlike in previous eras, the social giant knows it can just ignore the president.
After Facebook on Friday banned far-right figures and organizations from its platform, including the site Infowars, the president threatened to “monitor” social-media sites in retaliation. Through much of the late evening of May 3 and early morning of May 4, the president used his Twitter feed to champion the people who earn a large living spreading false reports. He hailed them as conservative thinkers whose free-speech rights have been abridged by social-media platforms.
… even as the president engrafts conspiracists and racists onto mainstream conservatism, it’s worth wondering: Why is he starting this fight? What does he hope to accomplish? In the past, when presidents publicly criticized major corporations by name, they got results.
One observer of social media speculates that Trump hopes to deter Facebook from enforcing its rules against him and his 2020 campaign. In that case, wouldn’t Trump fight on the strongest ground, not the weakest? Identifying “my team” with some of the worst characters on the internet seems a prelude not to a hard fight, but to an embarrassing retreat.
Instead of preparing for a trial of strength against a corporation that a president should easily win, he has joined his personal brand to a gaggle of shady characters in an outburst likely to be forgotten in a day or two. Or, at least, forgotten by him.

2 May

Facebook and Instagram cracked down on extremist figures. Accounts from Alex Jones, Milo Yiannopoulos, and Louis Farrakhan—among several others—were all kicked off both platforms. The move comes as Facebook, which owns Instagram, continues to grapple with its inability to root out extremist views and misinformation, and its complicity in election interference. Instagram might be best known for seemingly flawless influencers, but the platform has become fertile ground for conspiracists who collectively have garnered millions of followers. “Banning these extremist figures is a step toward stricter moderation of extremist views,” writes Taylor Lorenz, “but time and again, we’ve seen that the internet’s worst actors always find new ways to exploit platforms.”

26 April
The Terrifying Potential of the 5G Network
Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.
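The parenthetical download claim is easy to sanity-check with back-of-the-envelope arithmetic; the sketch below assumes an illustrative 4 GB file and a hypothetical 100 Mbit/s baseline connection scaled up a hundredfold, not measured 5G throughput.

```python
# Back-of-the-envelope check of "a two-hour movie in under four seconds".
# File size and link speeds are illustrative assumptions, not measurements.
movie_bytes = 4 * 10**9              # roughly 4 GB for an HD feature film
current_bps = 100 * 10**6            # a 100 Mbit/s home broadband line
five_g_bps = 100 * current_bps       # "up to a hundred times faster"

for label, bps in [("today", current_bps), ("5G, as claimed", five_g_bps)]:
    seconds = movie_bytes * 8 / bps  # bytes -> bits, divided by line rate
    print(f"{label}: {seconds:.1f} seconds")
# today: 320.0 seconds
# 5G, as claimed: 3.2 seconds
```

At those assumed numbers the under-four-seconds figure holds, though real-world throughput depends on spectrum, congestion and the rest of the network path.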

27 April
The age of the Influencer has peaked. It’s time for the slacker to rise again
(Quartz) It’s hard to remember a time when scrolling through Instagram was anything but a thoroughly exhausting experience.
Where once the social network was basically lunch and sunsets, it’s now a parade of strategically crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.

The Privacy Project
(NYT) Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The [New York] Times is embarking on this months-long project to explore the technology and where it’s taking us, and to convene debate about how it can best help realize human potential.
Does Privacy Matter?
What Do They Know, and How Do They Know It?
What Should Be Done About This?
What Can I Do?
View all Privacy articles (April 2019)

21 April
Where Countries Are Tinderboxes and Facebook Is a Match
False rumors set Buddhist against Muslim in Sri Lanka, the most recent in a global spate of violence fanned by social media.
(NYT) For months, we had been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest — a potentially damaging practice in countries with weak institutions.
Time and again, communal hatreds overrun the newsfeed — the primary portal for news and information for many users — unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company. Some users, energized by hate speech and misinformation, plot real-world attacks.
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
… where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximize user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.
In the Western countries for which Facebook was designed, this leads to online arguments, angry identity politics and polarization. But in developing countries, Facebook is often perceived as synonymous with the internet and reputable sources are scarce, allowing emotionally charged rumors to run rampant. Shared among trusted friends and family members, they can become conventional wisdom.

2 April
Canada must legislate Facebook to tackle online-networked hate
By Kyle Matthews & Duncan Cooper
There are no quick fixes to dealing with online hate and the violence that it often fuels. This complex reality does not absolve governments from taking action, however. Former U.K. deputy prime minister and current head of Facebook’s public policy team, Nick Clegg, once said, “The best way to ensure that any regulation is smart and works for people is by governments, regulators and businesses working together to learn from each other.”
(Toronto Star) Canada ought to keep an eye on other countries’ legislation. Perhaps the most advanced framework for tackling online extremism has come from Germany, whose Network Enforcement Act imposes heavy fines — up to €50 million — if extremist content is not taken down within 24 hours of its detection.
Another model for digital governance comes from France, where President Emmanuel Macron has stewarded a “smart regulation” program of embedding government officials within Facebook’s offices for a six-month period. This effort will determine strategies to counter extremism amongst France’s Facebook users.
Australia is set to vote on a law this week that could see social media companies fined and their executives jailed for up to three years for not taking down such content faster. In a statement, Australian Prime Minister Scott Morrison said, “Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists.” Australia also wants this issue to be part of the agenda for the upcoming G20 meeting in Japan in June.
In the past, Facebook has repeatedly pushed back against such ideas, promising to use artificial intelligence and hire more human content moderators as its preferred strategy. However, Facebook’s ability to counteract threats through either workers or AI is deeply questionable.

30 March
Australia, Singapore Crack Down on Online Media With New Laws
(Bloomberg) Governments in the Asia Pacific region are accelerating efforts to fight malicious use of online media, unveiling laws that make it easier to target websites which enable distribution of criminal or fraudulent content.
Australia said it will legislate “tough” new laws to prevent social-media platforms from being “weaponized” by terrorists and extremists who may use them to live-stream violent crimes, such as this month’s terror attack in New Zealand. Singapore said it will introduce a law to halt the spread of “fake news.”
Facebook Inc. came under sharp criticism for not taking down quickly enough a video in which the alleged gunman killed 50 people in two mosques in Christchurch, and for allowing it to be circulated across the internet and uploaded to platforms like YouTube. The social-media company was considering placing restrictions on who could post live videos in the wake of the shooting, which was filmed and disseminated in real time.
… Singapore, meanwhile, said its new law will give more power to the government to hold online outlets accountable if they’re deemed to have deliberately delivered false news.
The measures will include requiring them to show corrections or display warnings about online falsehoods, and even removing articles in extreme cases, Prime Minister Lee Hsien Loong said in a speech.
