Social media, society and technology 2021-

March 22, 2021  //  Justice & Law, Media, Science & Technology

News Use Across Social Media Platforms in 2020
Facebook stands out as a regular source of news for about a third of Americans
(Pew Research Center) As social media companies struggle to deal with misleading information on their platforms about the election, the COVID-19 pandemic and more, a large portion of Americans continue to rely on these sites for news. About half of U.S. adults (53%) say they get news from social media “often” or “sometimes,” and this use is spread out across a number of different sites, according to a Pew Research Center survey conducted Aug. 31-Sept. 7, 2020.
There are, in some cases, drastic demographic differences between the people who turn to each social media site for news. For example, White adults make up a majority of the regular news users of Facebook and Reddit but fewer than half of those who turn to Instagram for news. Black and Hispanic adults each make up about a quarter of Instagram’s regular news users (22% and 27%, respectively). People who regularly get news on Facebook are more likely to be women than men (63% vs. 35%), while two-thirds of Reddit’s regular news users are men.
The majority of regular news users of many sites – YouTube, Twitter, Instagram, Reddit and LinkedIn – are Democrats or lean Democratic. (12 January 2021)

Reset: Reclaiming the Internet for Civil Society
We need to reclaim our lives from our phones and ‘reset,’ says CBC Massey lecturer Ron Deibert
(Massey Lectures 2020 Part 1) ‘Look at that device in your hand,’ says Ron Deibert in the first instalment of his 2020 CBC Massey Lectures. ‘You sleep with it, eat with it … depend on it.’ The renowned tech expert exposes deep systemic problems in our communication ecosystem and shares what we need to do about it.
“Information and communications technologies are, in theory, supposed to help us reason more effectively, facilitate productive dialogue and share ideas for a better future,” says renowned technology and security expert Ron Deibert. “They’re not supposed to contribute to our collective demise.” (originally aired on November 9, 2020)

20 February
Apps Recreate the Soundtrack of Pre-Pandemic Life
(Bloomberg City Lab) Ice cubes clink. A blender whirs. The hum of gossip carries. People shout to be heard over the din.
Can you hear it? Do you miss it? You’re not alone. There’s a whole genre of auditory environments like this that have all but disappeared over the past year: other people making little noises around you. In bars, coffee shops, and even open offices. Ears yearning, people in lonely apartments all over the world have tuned into new sites that turn that low background hum of life-in-public into a soundtrack.
The internet has long churned out “coffee shop” playlists, which channel the lo-fi instrumentals or soft folk you might hear at a Starbucks. These new mixes go further to include sounds you may not have appreciated but were always there, curated not via algorithm but by internet Foley artists. There’s Spotify’s “The Sound of Colleagues,” where remote workers can crank the volume to return to the dulcet, focusing tones of “printer,” “coffee machine,” and “keyboards.” Kids Creative Agency is behind I Miss The Office, where telephones ring and coworkers sneeze and “mhm.”

17 February
Facebook restricts the sharing of news in Australia as Google says it will pay some publishers.
(NYT) Facebook said on Wednesday that it would restrict people and publishers from sharing links to news articles in Australia, in response to a proposed law in the country that requires tech companies to pay publishers for linking to articles across their platforms.
The decision came hours after Google announced it had reached an agreement to pay Rupert Murdoch’s News Corp to publish its news content in a three-year global deal, part of a string of deals it had struck with media companies in recent days to ensure that news would remain on its services.

6 February
Lawsuits Take the Lead in Fight Against Disinformation
Defamation cases have made waves across an uneasy right-wing media landscape, from Fox to Newsmax.
Lou Dobbs, whose show on Fox Business was canceled on Friday, was one of several Fox anchors named in a defamation suit filed by the election technology company Smartmatic.

(NYT) In just a few weeks, lawsuits and legal threats from a pair of obscure election technology companies have achieved what years of advertising boycotts, public pressure campaigns and liberal outrage could not: curbing the flow of misinformation in right-wing media.
Dominion Voting Systems, another company that Mr. Trump has accused of rigging votes, filed defamation suits last month against two of the former president’s lawyers, Rudolph W. Giuliani and Sidney Powell, on similar grounds. Both firms have signaled that more lawsuits may be imminent.

25 January
A double-edged sword
How social media went from toppling dictators to platforming hate.
(Open Canada) Ever since the Arab Spring revealed the fragility of certain Middle Eastern dictatorships and highlighted how quickly online discontent can transform into national resistance, authoritarian regimes have used social media to help predict dissent and gauge public sentiment. Governments can now actively monitor protest plans, identify key figures and persecute people who support popular protests (as is currently the case in Belarus). Social media platforms also provide governments with new methods of communicating with their population, which they can use to counter dissenting opinions or to spread propaganda and disinformation that creates confusion and muddies the waters of legitimate news sources.
Countries that have effectively used social media to monitor and control public opinion include China, Russia and Saudi Arabia. China encourages limited expression online in order to better understand weaknesses within its own government. This gives the Chinese government a better understanding of the dynamics of public discontent, while also allowing it to present the façade of benevolence and democratic oversight. Saudi Arabia passed counterterrorism legislation in 2014 that criminalized defamation of the state — a purposely vague cybercrime law that arbitrarily limits free speech and allows the government to arrest online bloggers and activists with little explanation. Saudi Arabia, along with regional neighbours like the United Arab Emirates, also uses automated bots and pro-government social media influencers to promote state propaganda and to drown out dissenting voices. Bahrain, an island neighbour of Saudi Arabia, has arrested several prominent opposition figures who criticized the Bahraini government online.

12 – 15 January
White supremacist terrorism: Key trends to watch in 2021
(Brookings) …the movement as a whole is heavily dependent on social media. Part of this is a generational shift, as youth around the world embrace Facebook, YouTube, Instagram, and other media. But social media is also cheap and easily accessible, making it ideal for propaganda and networking. This technological shift, however, has made the movement more diffuse, weakening what little hierarchy existed while connecting previously isolated individuals. Fortunately, social media and financial services companies are more willing to deplatform white supremacists, but many experts contend more could be done.

The Guardian view of Trump’s populism: weaponised and silenced by social media
(Editorial) Donald Trump’s incitement of a mob attack on the US Capitol was a watershed moment for free speech and the internet. Bans against both the US president and his prominent supporters have spread across social media as well as email and e-commerce services. Parler, a social network popular with neo-Nazis, was ditched from mobile phone app stores and then forced offline entirely. These events suggest that the most momentous year of modern democracy was not 1989 – when the Berlin wall fell – but 1991, when web servers first became publicly available.
There are two related issues at stake here: the chilling power afforded to huge US corporations to limit free speech; and the vast sums they make from algorithmically privileging and amplifying deliberate disinformation. The doctrines, regulations and laws that govern the web were constructed to foster growth in an immature sector. But the industry has grown into a monster – one which threatens democracy by commercialising the swift spread of controversy and lies for political advantage.

The Importance, and Incoherence, of Twitter’s Trump Ban
By Andrew Marantz
(The New Yorker) “I doubt I would be here if it weren’t for social media, to be honest with you,” Donald Trump said in 2017. He may have been wrong; after all, he uttered those words on Fox Business, a TV network that will surely continue to have him on as a guest long after he leaves the White House, and even if he loses every one of his social-media accounts. Perhaps Trump could have become President without social media. There were plenty of other factors militating in his favor—a racist backlash to the first Black president, the abandonment of the working class by both parties, and on and on. Still: Trump wanted to be President in 1988, and in 2000, and he couldn’t get close. In 2012, just as social media was starting to eclipse traditional media, Trump was a big enough factor in the Republican race that Mitt Romney went to the Trump Hotel in Las Vegas to publicly accept his endorsement. Only in 2016, when the ascent of social media was all but complete, did Trump’s dream become a reality. Maybe this was just a coincidence. There is, tragically, no way to run the experiment in reverse.

Trump’s Been Unplugged. Now What?
The platforms have acted, raising hard questions about technology and democracy.
(The New Yorker) … The President’s tweeting was “highly likely to encourage and inspire people to replicate the criminal acts at the U.S. Capitol,” the company stated, in a blog post. It noted that plans for additional violence—including a “proposed secondary attack” on the Capitol and various state capitols—were already in circulation on the platform.
… Although Twitter has been an undeniable force throughout the Trump Presidency—a vehicle for policy announcements, personal fury, targeted harassment, and clumsy winks to an eager base—most Americans don’t use it. According to Pew Research, only around twenty per cent of American adults have accounts, and just ten per cent of Twitter users are responsible for eighty per cent of its content.
By Saturday, most major tech companies had announced some form of action in regard to Trump. The President’s accounts were suspended on the streaming platform Twitch, and on Snapchat, a photo-sharing app. Shopify, an e-commerce platform, terminated two online stores selling Trump merchandise, citing the President’s endorsement of last Wednesday’s violence as a violation of its terms of service. PayPal shut down an account that was fund-raising for participants of the Capitol riot. Google and Apple removed Parler, a Twitter alternative used by many right-wing extremists, from their respective app stores, making new sign-ups nearly impossible. Then Amazon Web Services—a cloud-infrastructure system that provides essential scaffolding for companies and organizations such as Netflix, Slack, NASA, and the C.I.A.—suspended Parler’s account, rendering the service inoperable.

In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content.

Social media companies need better emergency protocols
Daniel L. Byman and Aditi Joshi
How can and should social media companies treat politicians and governments fomenting hate online?
(Brookings) Online vitriol, especially in the hands of widely followed, influential, and well-resourced politicians and governments, can have serious — and even deadly — consequences. On January 6, 2021, President Trump tweeted false claims of election fraud and seemingly justified the use of violence as his supporters stormed the U.S. Capitol. Although an in-person speech appeared to most directly trigger the violence, Trump’s social media presence played a large role in the mob’s actions. For weeks after losing the 2020 election, President Trump tweeted false claims of election fraud and encouraged supporters to descend on Washington, D.C. on January 6, refuse to “take it anymore,” and “be strong.” On the day of the assault, a tweet saying that Vice President Mike Pence “didn’t have the courage to do what should have been done” was followed by messages from Trump’s supporters on the social networking platform Gab calling for those in the Capitol to find the vice president, as well as in-person chanting of “Where is Pence?” Leading up to and during the outbreak of violence, various social media platforms helped the mob assemble at the right place and time, coordinate their actions, and receive directions from the president and one another.
Although states’ exploitation of communications technology is not new, social media presents new dangers and risks. Given these platforms’ reach, states can have a huge impact on their populations if they dominate the narrative on popular services like Facebook and Twitter. Additionally, social media facilitates “echo chambers,” where feeds are personalized based on user data and users’ pre-existing views are reinforced (possibly to the point of inciting action) rather than challenged. Lastly, most social media platforms have no gatekeepers and lack the editorial role of newspapers or television broadcasts, though they do usually have minimum community standards.
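The personalization loop behind those “echo chambers” can be made concrete with a toy ranking function. The sketch below is purely illustrative and assumes nothing about any real platform’s systems: the names (Post, affinity, personalized_feed) and the topic-overlap scoring are hypothetical stand-ins for far more complex engagement models.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set  # e.g. {"election", "fraud"}; hypothetical tags, not real platform data

def affinity(user_history: list, post: Post) -> float:
    """Score a candidate post by its overlap with topics the user already engaged with."""
    seen = set().union(*(p.topics for p in user_history)) if user_history else set()
    if not post.topics:
        return 0.0
    return len(post.topics & seen) / len(post.topics)

def personalized_feed(user_history: list, candidates: list, k: int = 10) -> list:
    """Rank candidates purely by affinity: content echoing prior views floats to the top."""
    return sorted(candidates, key=lambda p: affinity(user_history, p), reverse=True)[:k]

# Tiny demonstration of the reinforcement effect
history = [Post("earlier post the user liked", {"election", "fraud"})]
candidates = [
    Post("new claim repeating the same narrative", {"fraud"}),
    Post("fact-check challenging the narrative", {"fact-check"}),
]
for p in personalized_feed(history, candidates, k=2):
    print(p.text)  # the echoing post prints first; the challenging one trails
```

Because the score rewards overlap with what the user has already engaged with, and nothing else, novel or dissenting material is pushed down the feed; that, in miniature, is the reinforcement dynamic the article describes.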
Although Facebook and other companies have devoted significant resources to the problem of bad content, technical tools and available human moderators often fall short of solving the problem. Humans are necessary to train and refine technological tools, handle appeals, and treat nuanced content requiring social, cultural, and political context to be understood.
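One way to picture the split between technical tools and human moderators is a confidence-threshold pipeline: an automated score handles the clear-cut cases and routes anything ambiguous (or appealed) to human reviewers. The sketch below is a hypothetical illustration, not any company’s actual system; the phrase list, thresholds, and function names are invented.

```python
# Toy stand-in for a trained moderation model; the phrase list is a placeholder, not real policy.
FLAGGED_PHRASES = {"placeholder threat", "placeholder incitement"}

def violation_score(text: str) -> float:
    """Return the fraction of flagged phrases found in the text (purely illustrative)."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in FLAGGED_PHRASES) / len(FLAGGED_PHRASES)

# Hypothetical thresholds: auto-remove only when the score is very high,
# and send everything ambiguous to the human review queue.
REMOVE_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.4

def route(text: str) -> str:
    """Decide whether content is removed automatically, queued for humans, or left up."""
    score = violation_score(text)
    if score >= REMOVE_THRESHOLD:
        return "auto_remove"
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # nuanced cases needing social, cultural, or political context
    return "allow"

print(route("an ordinary post"))                      # allow
print(route("a post with a placeholder incitement"))  # human_review (score 0.5)
```

The design choice that matters is where the two thresholds sit: tighten them and more content needs the human judgment the article says is in short supply; loosen them and more nuanced cases are decided by a model that lacks context.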
… over-restriction can have equally devastating consequences. Repressive regimes often shut down the internet in the name of security while using the silence to harm dissenters or minority communities. Furthermore, limiting any content, especially government content, may be at odds with U.S.-based technology companies’ supposed principles. Many companies claim to be committed to free speech for all their users and do not see themselves as arbiters of appropriate or inappropriate content. Making these judgments places social media companies in a role they should not, and do not want to, be in. Yet, with the power these platforms wield, social media companies must find ways to prepare for this role and prevent escalation of tensions in a crisis.
