Science & Technology, Society, Social Media March 2022-

March 14, 2024  //  Science & Technology

Social media, society and technology 2021-November 2022

Rest of World: 2022’s best stories on global tech (that we wish we’d written)
Articles from around the world which we loved, obsessed over, and still can’t stop thinking about

26 January
3 AI predictions for 2023 and beyond, according to an AI expert
Michael Schmidt, Chief Technology Officer, DataRobot
The field of artificial intelligence (AI) has seen huge growth in recent years.
Companies seeking to harness AI must overcome key societal concerns.
Key predictions outline how to achieve value from responsible AI growth.
(WEF) The technology and its benefits are no longer a great unknown to the majority; instead, many have seen firsthand the ability AI has to work quickly and efficiently in solving many of society’s most pressing challenges. We’ve seen it play a role in the record speed at which COVID-19 vaccines were delivered, help hospitals identify and treat their most at-risk patients, and more broadly, vastly reduce the number of human errors in data.
As we look to the year ahead, we think heightened societal awareness of AI, increased regulatory pressure, growing momentum of investments in the space, and AI’s continuing boost to employee productivity may come to a head. Practical and applied AI concerns will become paramount to enable continued value from AI growth.

14 March
US Lawmakers See TikTok as China’s Tool, Even as It Distances Itself From Beijing
A new bill threatens the app’s survival and casts a spotlight on the quandary that many private Chinese companies have found themselves in.
(The Diplomat) If some U.S. lawmakers have their way, the United States and China could end up with something in common: TikTok might not be available in either country.
The House on Wednesday approved a bill requiring the Beijing-based company ByteDance to sell its subsidiary, TikTok, or face a nationwide ban. It’s unclear if the bill will ever become law, but it reflects lawmakers’ fears that the social media platform could expose Americans to Beijing’s malign influences and data security risks.
But while U.S. lawmakers associate TikTok with China, the company, headquartered outside China, has strategically kept its distance from its homeland.
Since its inception, the TikTok platform has been intended for non-Chinese markets and is unavailable in mainland China. It pulled out of Hong Kong in 2020 when Beijing imposed a national security law on the territory to curtail speech. As data security concerns started to rise in the United States, TikTok sought to reassure lawmakers that data gathered on U.S. users stays in the country and is inaccessible to ByteDance employees in Beijing.
TikTok’s parent company is following the same playbook as many other Chinese companies with global ambitions: To win customers and trust in the United States and other Western countries, they are playing down their Chinese roots and connections. Some have insisted they be called “global companies” instead of “Chinese companies.”
See also: The TikTok Ban Is Mired in a Stalemate in US Congress
The idea of a TikTok ban may seem to have bipartisan support, but that belies the complicated political calculations going on behind the scenes. (May 2023)

28 February
Google chief admits ‘biased’ AI tool’s photo diversity offended users
Sundar Pichai addresses backlash after Gemini software created images of historical figures in variety of ethnicities and genders
Google’s chief executive has described some responses by the company’s Gemini artificial intelligence model as “biased” and “completely unacceptable” after it produced results including portrayals of German second world war soldiers as people of colour.
Sundar Pichai told employees in a memo that images and texts generated by its latest AI tool had caused offence.
Social media users have posted numerous examples of Gemini’s image generator depicting historical figures – including popes, the founding fathers of the US and Vikings – in a variety of ethnicities and genders. Last week, Google paused Gemini’s ability to create images of people.
The future of AI video is here, super weird flaws and all
(WaPo) …Sora, a new tool from OpenAI that can create lifelike, minute-long videos from simple text prompts. When the company unveiled it on Feb. 15, experts hailed it as a major moment in the development of artificial intelligence. Google and Meta also have unveiled new AI video research in recent months. The race is on toward an era when anyone can almost instantly create realistic-looking videos without sophisticated CGI tools or expertise.
Disinformation researchers are unnerved by the prospect. Last year, fake AI photos of former president Donald Trump running from police went viral, and New Hampshire primary voters were targeted this January with fake, AI-generated audio of President Biden telling them not to vote. It’s not hard to imagine lifelike fake videos erupting on social media to further erode public trust in political leaders, institutions and the media.
For now, Sora is open only to testers and select filmmakers; OpenAI declined to say when Sora will be available to the general public. “We’re announcing this technology to show the world what’s on the horizon,” said Tim Brooks, a research scientist at OpenAI who co-leads the Sora project.
OpenAI has a partnership with Shutterstock to use its videos to train AI. But because Sora is also trained on videos taken from the public web, owners of other videos could raise legal challenges alleging copyright infringement. AI companies have argued that using publicly available online images, text and video amounts to “fair use” and is legal under copyright law. But authors, artists and news organizations have sued OpenAI and others, saying they never gave permission or received payment for their work to be used this way.
Mitch Joel: From AI Angels To Data Demons – Did Google’s Gemini Cross The Line?
Generative AI is generating (very) concerning content.
Welcome to the new world.
It’s the intricate dance between innovation and responsibility.
Google’s re-introduction of Gemini (formerly known as Bard, and their response to OpenAI’s ChatGPT) made headlines this week when its ability to generate images met with immediate controversy.
These are the pitfalls that will always accompany AI advancements.
Gemini’s generation of historically inaccurate, racially diverse images of historical figures, including people of color dressed as Nazis, sparked widespread offense and forced Google to quickly shut that feature down.
Google’s CEO, Sundar Pichai, swiftly acknowledged the mishap, emphasizing a commitment to rectify the inaccuracies and biases — a move that speaks volumes about the iterative nature of AI development and the critical need for real-world testing.
Welcome to the double-edged sword of AI innovation.
Google’s rapid deployment of generative AI capabilities within Gemini serves as a potent reminder of the risks inherent in pushing technological boundaries without fully considering societal impacts.
So the discussion is less about how this happened and much more about cultural and ethical considerations in AI training data and algorithm design.
Sure, we know the importance of diversity and inclusivity within AI development teams to mitigate inherent biases.
But these teams and algorithms may only be as strong as the data they’re fed.

10 February
AI and misinformation: what’s ahead for social media as the US election looms?
Innovation is outpacing our ability to handle misinformation, experts say. That makes falsehoods easy to weaponize
(The Guardian) As the United States’ fractured political system prepares for a tense election, social media companies may not be prepared for an onslaught of viral rumors and lies that could disrupt the voting process – an ongoing feature of elections in the misinformation age.
A handful of major issues face these tech companies at a time when trust in elections, and in the information people find on social media, is low. The potential for politicians and their supporters to weaponize social media to spread misinformation, meanwhile, is high.


14 December
Disinformation campaigns, including those using AI deepfakes, are creating risks for corporations
During the summer of 2020, Wayfair Inc. found itself in the throes of a crisis.
Conspiracy theorists linked to QAnon, seemingly emboldened by the chaos and confusion of the pandemic, tried to tarnish the reputation of the online furniture and home goods retailer. The trolls used Twitter, Instagram and Reddit to spread false information that Wayfair was involved in child sex trafficking through the sale of industrial-grade cabinets.
The company refuted the allegations, but the lies continued to circulate online, underscoring how easy it is for malicious actors to create reputational risks for companies in the digital era.
Wayfair experienced what is known as a disinformation campaign – a deliberate attempt to disseminate false information to inflict harm.

3 November
In a Worldwide War of Words, Russia, China and Iran Back Hamas
Officials and researchers say the deluge of online propaganda and disinformation is larger than anything seen before.
(NYT) The conflict between Israel and Hamas is fast becoming a world war online.
Iran, Russia and, to a lesser degree, China have used state media and the world’s major social networking platforms to support Hamas and undercut Israel, while denigrating Israel’s principal ally, the United States.
Iran’s proxies in Lebanon, Syria and Iraq have also joined the fight online, along with extremist groups, like Al Qaeda and the Islamic State, that were previously at odds with Hamas.
The deluge of online propaganda and disinformation is larger than anything seen before, according to government officials and independent researchers — a reflection of the world’s geopolitical division.
“It is being seen by millions, hundreds of millions of people around the world,” said Rafi Mendelsohn, vice president at Cyabra, a social media intelligence company in Tel Aviv, “and it’s impacting the war in a way that is probably just as effective as any other tactic on the ground.” Cyabra has documented at least 40,000 bots or inauthentic accounts online since Hamas attacked Israel from Gaza on Oct. 7.

6 September
What Mark Zuckerberg Doesn’t Understand About Old People
(NYT) … Facebook has been struggling to hang on to young users for more than a decade; usage by people over 25 has steadily grown over that time, and along with YouTube, Facebook has become the internet’s most popular social network among people over 50. This wouldn’t seem terrible for a company that makes money from advertising, as Facebook does. After all, older people are the future of business: According to a recent analysis by AARP, people over 50 now account for more than half of the world’s consumer spending, and their share is projected to grow to 60 percent by 2050.
So is Zuckerberg rejoicing that he owns the preferred online destination of the planet’s wealthiest and fastest-growing consumer demographic, tomorrow’s whales of consumerist desire?
He is not. Instead he seems embarrassed by it. Documents leaked by a whistle-blower in 2021 showed Facebook product managers obsessed with reversing the app’s unpopularity with teens and young adults.

19 July
Meta’s Threads platform might not be revolutionary, but it poses a challenge to Twitter
Today in The Conversation Canada, Jordan Richard Schoenherr at Concordia University writes that while Threads isn’t revolutionary, it may be well-positioned to take advantage of the changes at Twitter. “A major selling point for Threads is that it wants to avoid the divisive politics that have made social media a caustic, polarized environment,” Schoenherr writes. But it remains to be seen how Threads manages to sustain interest in the platform.
The July 5 launch of Threads, Instagram’s new social media platform, has met with considerable interest. Meta CEO Mark Zuckerberg was quick to report that over 100 million users downloaded the app by the end of its first weekend.
The apparent success of Threads stands in stark contrast to other recent social media apps such as Spill, Bluesky, Mastodon and others.
Although Threads has been called the fastest growing app in history, it remains to be seen whether interest will be sustained over the long run.
Threads’ success is by no means assured. The app doesn’t present a radical departure from Twitter’s formula, doesn’t have access to the European market due to privacy concerns, and faces a potential lawsuit from Twitter, which has also introduced revenue sharing for verified users.

23 May
Surgeon General Warns That Social Media May Harm Children and Adolescents
The report by Dr. Vivek Murthy cited a “profound risk of harm” to adolescent mental health and urged families to set limits and governments to set tougher standards for use.
The nation’s top health official issued an extraordinary public warning on Tuesday about the risks of social media to young people, urging a push to fully understand the possible “harm to the mental health and well-being of children and adolescents.”
In a 19-page advisory, the United States surgeon general, Dr. Vivek Murthy, noted that the effects of social media on adolescent mental health were not fully understood, and that social media can be beneficial to some users. Nonetheless, he wrote, “There are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.”

11 May
TikTok shows why social media companies need more regulation
They pose a threat to national security, Americans’ privacy, and children’s health
Sanjay Patnaik, Director – Center on Regulation and Markets Bernard L. Schwartz Chair in Economic Policy Development Fellow – Economic Studies and Robert E. Litan, Nonresident Senior Fellow – Economic Studies, Center on Regulation and Markets
(Brookings) There has been increasing political awareness regarding the national security issues posed by TikTok, the popular social media app owned by Chinese company ByteDance. Lawmakers and the public are right to be concerned—numerous data points are now available, from ByteDance’s connections to the Chinese Communist Party to its potential for social manipulation to its gathering of American personal data. Some efforts have been made to limit the potential risks, for example storing TikTok data on U.S. (rather than Chinese) servers. However, more could be done. Additional measures should also be considered by lawmakers, including the forced sale of ByteDance’s U.S. operations or even a complete nationwide ban on TikTok.
TikTok’s national security threat is only one component of the myriad challenges facing regulators looking to protect citizens utilizing social media platforms.

8 May
Four explanations for how BlackBerry blew it
Yes, they got killed by the iPhone. But it wasn’t just the iPhone.
You could ask how a Canadian company that once held the world in its hands (or at least the world’s data on its servers) crashed so brutally. But a better question might be how a tech startup based in Waterloo, Ontario, got so big in the first place.
In some respects, the answers to both questions are the same.
They’re explored in the 2015 book Losing the Signal: The Untold Story Behind the Extraordinary Rise and Spectacular Fall of BlackBerry, which [Sean] Silcoff co-authored with his former Globe colleague Jacquie McNish. It’s been loosely adapted into a comedic feature film hitting theatres this Friday, starring Jay Baruchel and Glenn Howerton as BlackBerry co-founders Mike Lazaridis and Jim Balsillie, respectively.
On this week’s CANADALAND, Jesse Brown talks to Silcoff and, drawing from his book, attempts to distill the various explanations for how everything went so wrong:

21 April
Elon Musk’s Twitter drops government-funded media labels
The move comes after several media companies including NPR and the Canadian Broadcasting Corp. announced they would be leaving Twitter.
(NBC) Twitter has removed labels describing global media organizations as government-funded or state-affiliated, a move that comes after the Elon Musk-owned platform started stripping blue verification checkmarks from accounts that don’t pay a monthly fee.
Among those no longer labeled was NPR in the U.S., which announced last week that it would stop using Twitter after its main account was designated state-affiliated media, a term also used to identify media outlets controlled or heavily influenced by authoritarian governments, such as Russia and China.
Twitter later changed the label to “government-funded media,” but NPR — which relies on the government for a tiny fraction of its funding — said it was still misleading.
Canadian Broadcasting Corp. and Swedish public radio made similar decisions to quit tweeting. CBC’s government-funded label vanished Friday, along with the state-affiliated tags on media accounts including Sputnik and RT in Russia and Xinhua in China.

11 April
Will AI-generated images create a new crisis for fact-checkers? Experts are not so sure
Gretel Kahn
(Reuters Institute) AI image generators like DALL-E and Midjourney are popular and easy to use. Anyone can create new images through text prompts. Both applications are getting a lot of attention. DALL-E claims more than 3 million users. Midjourney has not published numbers, but they recently halted free trials citing a massive influx of new users.
While the most popular uses of generative AI so far are for satire and entertainment purposes, the sophistication of their technology is growing fast. A number of prominent researchers, technologists and public figures have signed an open letter asking for a moratorium of at least six months on the training and research of AI systems more powerful than GPT-4, a large language model created by US company OpenAI. “Should we let machines flood our information channels with propaganda and untruth?” they ask.
I spoke to several journalists, experts, and fact-checkers to assess the dangers posed by visual generative AI. When seeing is no longer believing, what are the implications this technology has on misinformation? How will this impact journalists and fact-checkers who debunk hoaxes? Will our information channels be flooded with “propaganda and untruth”?

1 March
People Over Robots
The Global Economy Needs Immigration Before Automation
By Lant Pritchett
(Foreign Affairs March/April 2023) As seismic as it may seem, technological change is not a natural force but the work of human beings. Of course, technology has radically improved human lives: no one wants to live without electricity, flush toilets, or (in Utah) central heating. In other cases, however, it is new policies, and not new technologies, that societies need most. …
There is no global scarcity of people who would like to be long-haul truck drivers in the United States, where the median wage for such work is $23 per hour. In the developing world, truck drivers make around $4 per hour. Yet firms cannot recruit workers from abroad even at the higher wage because of restrictions on immigration, so business leaders in the United States are impelled to choose machines over people and eradicate jobs through the use of technology. But if they could recruit globally, they would have less incentive to destroy those jobs and replace people with machines. The implacable fact of national borders steers businesses toward investing in technology that does not respond to global scarcities—and that no one really needs.
What is true for truck driving is also true for many other industries in the rich industrial world that require nonprofessional workers in specific work environments.
16 February
Elon Musk reinvents Twitter for the benefit of a power user: Himself
Musk has rolled out major changes — many of which have targeted his own experience, but not that of millions of regular users of the site
(WaPo) At one point, Musk was so hellbent on eliminating spam and bots on the site, for example, that Twitter — which uses phone numbers to verify the authenticity of accounts — banned entire country codes from its system, according to the people with knowledge of the matter.
That created a problem: One of the countries affected, Ukraine, is in the midst of a prolonged war with Russia, where social media has played a key strategic role. Ukrainians suddenly found they could not post on Twitter.
And since Musk took over as Twitter CEO, the list of major changes includes several with a direct benefit to Musk, former employees said, with little potential upside for the rest of the users.

13 February
Twitter’s plan to charge for crucial tool prompts outcry
The API paywall is Musk’s latest attempt to squeeze revenue out of Twitter, which is on the hook for about $1 billion in yearly interest payments from the billionaire’s acquisition, completed in October.
(AP) In the aftermath of the devastating earthquake in Turkey and Syria, thousands of volunteer software developers have been using a crucial Twitter tool to comb the platform for calls for help — including from people trapped in collapsed buildings — and connect people with rescue organizations.
They could soon lose access unless they pay Twitter a monthly fee of at least $100 — prohibitive for many volunteers and nonprofits on shoestring budgets.
Nonprofits, researchers and others need the tool, known as the API, or Application Programming Interface, to analyze Twitter data because the sheer amount of information makes it impossible for a human to go through by hand.
The loss of free API access means an added challenge for the thousands of developers in Turkey and beyond who are working around the clock to harness Twitter’s unique, open ecosystem for disaster relief.
“For Turkish coders working with Twitter API for disaster monitoring purposes, this is particularly worrying — and I’d imagine it is similarly worrying for others around the world that are using Twitter data to monitor emergencies and politically contested events,” said Akin Unver, a professor of international relations at Ozyegin University in Istanbul.
It’s not just disaster relief groups that are concerned. Academic and non-governmental researchers for years have used Twitter to study the spread of misinformation and hate speech or research public health or how people behave online.
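The workflow described above — pulling matching tweets through the recent-search API, then scanning them for distress signals — can be sketched roughly as follows. This is a minimal illustration: the endpoint path and field names follow Twitter’s v2 recent-search API, but the bearer token, query string, and keyword list (including “enkaz”, Turkish for rubble) are placeholders, not values taken from any real relief project.

```python
import json
import urllib.parse
import urllib.request

# Twitter API v2 recent-search endpoint; the token, query, and
# keyword list used below are illustrative placeholders.
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_request(bearer_token, query, max_results=100):
    """Assemble the URL and auth header for one recent-search call."""
    params = urllib.parse.urlencode({
        "query": query,
        "max_results": max_results,
        "tweet.fields": "created_at,geo",
    })
    headers = {"Authorization": f"Bearer {bearer_token}"}
    return f"{SEARCH_URL}?{params}", headers

def fetch_recent(url, headers):
    """Perform the HTTP call and return the list of tweet objects."""
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", [])

def find_help_calls(tweets, keywords=("trapped", "rescue", "enkaz")):
    """Keep only tweets whose text contains a distress keyword."""
    return [t for t in tweets
            if any(k in t.get("text", "").lower() for k in keywords)]

# Example usage (requires a valid bearer token):
#   url, headers = build_request("YOUR_BEARER_TOKEN",
#                                "(enkaz OR trapped) -is:retweet")
#   for tweet in find_help_calls(fetch_recent(url, headers)):
#       print(tweet.get("created_at"), tweet["text"])
```

Even a script this small depends on free API access for its search calls, which is why a $100-a-month paywall puts volunteer monitoring efforts out of reach.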

9 February
The Economist: Is Google’s era of dominance coming to an end?
How will the rise of artificial intelligence (AI) reshape the tech industry? Until recently you might have used Google to search for an answer to that question. But now you have another option: to ask an AI-powered chatbot, which lets you gather information from the internet through typed conversations. ChatGPT, the leading example, can write essays, explain complex concepts, answer trivia questions and suggest menus or holiday destinations. By the end of January, two months after its launch, it was being used by more than 100m people, making it the fastest-growing consumer application in history. Our cover this week in most of the world examines the potential impact of chatbots on the lucrative business of internet search, and whether they might pose a threat to Google’s dominance. Microsoft, which has just integrated ChatGPT into its search engine, Bing, certainly hopes so. Could this be a Schumpeterian moment in which incumbents are toppled and rivals seize the initiative? The answer depends on moral choices, monetisation and monopoly economics. But a hugely valuable prize—to become the new front door to the internet—may be up for grabs.

7 February
Google is scrambling to catch up to Bing, of all things
(Vox) Google bookended Microsoft’s big AI search announcement with underwhelming AI news of its own…
Google also said it was integrating its AI tools into its search results “soon.” It already seems to have made one misstep here, with Bard inserting a factual error into a demo response to a question about the James Webb Space Telescope.

2 February
Information Overload Helps Fake News Spread, and Social Media Knows It
Understanding how algorithm manipulators exploit our cognitive vulnerabilities empowers us to fight back
By Filippo Menczer, Thomas Hills
(Scientific American) Modern technologies are amplifying…biases in harmful ways, however. Search engines direct…to sites that inflame…suspicions, and social media connects…with like-minded people, feeding…fears. Making matters worse, bots—automated social media accounts that impersonate humans—enable misguided or malevolent actors to take advantage of…vulnerabilities.
Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes have become so cheap and easy that the information marketplace is inundated. Unable to process all this material, we let our cognitive biases decide what we should pay attention to. These mental shortcuts influence which information we search for, comprehend, remember and repeat to a harmful extent. (1 December 2020)
Why the Past 10 Years of American Life Have Been Uniquely Stupid
It’s not just a phase.
(The Atlantic) In their early incarnations, platforms such as Myspace and Facebook were relatively harmless. …early social media can be seen as just another step in the long progression of technological improvements—from the Postal Service through the telephone to email and texting—that helped people achieve the eternal goal of maintaining their social ties.
But gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will. (April 2022)

2 February
Diane Francis: Drone Age 2023
2023 will mark the beginning of the logistical revolution involving drones. Air traffic control regulators in many countries are devising systems that will make it possible for drones to deliver groceries, prescriptions, and mail as well as to replace trucks, taxis, and cargo containers safely — without chaos and collisions. Already, drones are being used in selected areas to deliver payloads to retail customers that weigh a few pounds. The next step is to delineate aerial flightpaths above existing roads for drones of all sizes, including those capable of delivering tons of freight or passengers. As the skies are regulated, a real-time air traffic control system will usher in a future filled with airborne traffic.
…once regulatory templates and drone highways are created, adoption will be rapid and dramatic. That was certainly the case with new logistical innovations such as Uber or e-scooters. Eventually, traditional airlines, freight, and railways will be transformed, and sky ports on roofs will dot our cities. Uber will launch a fleet of flying taxis — and plans to take cars off the road and keep costs low — by “batching” passengers. People will be picked up and ride-share in vehicles to a sky port for departure, then fly and ride-share from the sky port to their work destinations. The process will be reversed at the end of the workday.

28 January
Myriad streaming services are causing subscription overload
The floodgates have been opened to endless new sites you forget you’ve subscribed to
In a poll, nearly half of respondents say they can’t keep track of where or how they signed up for their subscriptions — or often what they pay. The rest probably don’t even know they still have subscriptions because it’s usually almost impossible to unsubscribe, writes columnist Josh Freed.
…When you look up the company online, it’s the name of that “free trial” channel that has now cost you $24 for one episode you didn’t like. Which is exactly what many specialty channels count on, because nobody actually subscribes to them — they make all their money from forgotten free trials.
At least you’ve spotted it and can cancel ASAP. Except for another problem: Ordering any subscription online is child’s play, usually just pressing one button under giant letters saying “SUBSCRIBE FREE!!!”
But cancelling that subscription requires a college course in “Over-subscription Management.” There’s never a simple option to be seen on a channel’s or a newspaper’s main website.

20 January
These were the biggest AI developments in 2022. Now we must decide how to use them
In 2022, we were presented with several stunning developments in artificial intelligence (AI). Some believe that these advances push the limits of what we have now (narrow AI) towards the holy grail of artificial general intelligence (a machine that can mimic the thinking and problem-solving capacities of humans but faster and more accurately).
Among the many developments in 2022, four breakthroughs are of note and will be significant in 2023 and beyond, both within the discussions on responsible design, development and use of AI, and in the transformative power they have for our societies.
First came DALL-E, the AI that can create pictures from language prompts.
Next there was ChatGPT, DALL-E’s literate and coder “sibling”. Whilst the former creates new images, the latter creates text and code.
Furthermore, the AI development company DeepMind created an algorithm which codes very well. The system, AlphaCode, can beat 72% of human coders in average competitions and recently solved about 30% of the coding problems in a highly complex coding competition against humans.
If all the above are extraordinary, and they are, the arrival of Gato is the icing on the cake. Gato, which is described as a generalist agent by inventors DeepMind, is an important development because whereas currently powerful algorithms do one or two things exceedingly well, Gato can do many.

30 January
Unlike with academics and reporters, you can’t check when ChatGPT’s telling the truth
By Blayne Haggart, Associate Professor of Political Science, Brock University
Being able to verify how information is produced is important, especially for academics and journalists.

24 January
The Tech-Layoff ‘Contagion’
Tens of thousands of people have been laid off from large tech and media companies in the past 12 months. The reasons for this are not obvious.
By Isabel Fattal
(The Atlantic) Our staff writers Annie Lowrey and Derek Thompson, who both recently published articles on the tech layoffs, offer several explanations for the trend. The first and most obvious is the Federal Reserve’s effort to ease inflation by raising interest rates sharply over the past year. …
Reporting in November on the tech industry’s apparent collapse, Derek used an entertaining and useful metaphor: The industry is having a midlife crisis. And that means once the crisis is over, a new era will begin.

23 January
What Microsoft gets from betting billions on the maker of ChatGPT
(Vox) The reported $10 billion investment in OpenAI will keep the hottest AI company on Microsoft’s Azure cloud platform.
This is Microsoft’s third investment in the company, and cements Microsoft’s partnership with one of the most exciting companies making one of the most exciting technologies today: generative AI. It also shows that Microsoft is committed to making the initiative a key part of its business, as it looks to the future of technology and its place in it. And you can likely expect to see OpenAI’s services in your everyday life as companies you use integrate it into their own offerings.

6 January
A Skeptical Take on the A.I. Revolution
The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?

5 January
Control Your Tech Before It Controls You
Pascal Bornet, Forbes Councils Member
In today’s world, it is increasingly challenging for business leaders to stay focused and in control of their work. As technology advances, the risks evolve with it. For example, how often have you looked up a piece of information (like someone’s name) on social media, only to end up scrolling from post to post for hours? You (and your teams) can’t afford this, as it distracts you from reaching your business goals. This article is about giving you the power of being “indistractable.”
Our Relationship With Technology
…our smartphones…are new to us: We don’t come from a culture with centuries of wisdom about using them responsibly (the iPhone was launched as recently as 2007). All we know is that they make it easier and more comfortable for us to socialize with others. But do we realize the risks?
The statistics show how much time we spend on our devices and how dependent we are on them. According to Nielsen Research, in 2018, the time Americans spent using computers and smartphones, watching TV and playing video games added up to over 11 hours per day, a number that has likely gone up since the pandemic.
…internet-era tech is…insidious because it’s evolving so rapidly and because it’s so tailored to its users. Tech companies harvest consumer data, including engagement metrics that show what attracts and keeps people’s interest and what doesn’t. Leveraging this, they keep people hooked and generate revenue from the ads users see on their screens.
… People spend about five hours per day on their devices. Always-on technology can distract you and erode your ability to focus at work. You might pick up your device to look up something specific but end up losing hours to mindless scrolling, or being pulled off task by notifications and recommendations. This lost productivity can severely impact your career: smartphone use interrupts your workflow and prevents you from sustaining concentration.


We Haven’t Seen the Worst of Fake News
Deepfakes still might be poised to corrupt the basic ways we process reality—or what’s left of it.
(The Atlantic) The field of artificial intelligence has advanced rapidly since the 2018 deepfake panic, and synthetic media is once again the center of attention. The technology buzzword of 2022 is generative AI: models that seem to display humanlike creativity, turning text prompts into astounding images or commanding English at the level of a mediocre undergraduate. These and other advances have experts concerned that a deepfake apocalypse is still very much on the horizon. Fake video and audio might once again be poised to corrupt the most basic ways in which people process reality—or what’s left of it.

21 December
Elon Musk, ‘Chief Twit’
By Kenneth Li
(Reuters) What Twitter lacks in size, revenue or ambition, it has made up for in influence and impact. It has always punched above its weight and remains the preferred social media megaphone for world and industry leaders, revolutionaries and the media. …
What started as a narrative about the battle for survival of an aging digital business has turned into a global referendum on free speech and content moderation as journalism advocacy groups and officials from France, Germany, Britain and the European Union condemned the suspensions.
Volker Turk, the United Nations’ high commissioner for human rights, wrote last week: “Twitter has a responsibility to respect human rights: @elonmusk should commit to making decisions based on publicly-available policies that respect rights, including free speech. Nothing less.”
Time to Close Down the Elon Musk Circus
The press has been falling for the Twitter owner’s antics for too long.
Jack Shafer, Politico’s senior media writer.
In addition to being the world’s second richest person, Elon Musk is now the greatest press manipulator since Donald Trump inhabited the White House. Daily, often hourly, frequently minute-by-minute, Musk intercepts the news cycle and rides it like a clown on a barrel to the astonishment of all. Should he fall, he always gets back on and rides some more as the press corps records and transmits his every gyration.
Musk’s barrel-riding talents have been on conspicuous view since he bid for Twitter earlier this year, and especially so since he bought it last month. But he’s always been a champ at calling attention to himself, concocting promises and predictions about making his Tesla cars capable of self-driving, about an imminent manned landing on Mars by his SpaceX company, about the humanoid robots he’s allegedly building, and many other similarly unfulfilled pledges.

18 December
After backlash, Elon Musk is staking his leadership on a Twitter poll
After a new policy prompted backlash, Twitter CEO Elon Musk said future policies would be determined by polls
On Sunday night, Elon Musk apologized and launched a poll asking whether he should step down as head of Twitter, after the company introduced a new policy that would suspend accounts linking to certain other platforms, a move that ignited massive backlash from individuals including some of Musk’s own supporters.

16 December
EU warns Elon Musk of sanctions after Twitter suspends accounts of several journalists and Mastodon
Reporters for The New York Times, Washington Post, CNN and Voice of America were among those whose accounts were taken down. The official account for Mastodon, a decentralised social network billed as an alternative to Twitter, was also banned.
Twitter’s suspension of journalists draws global backlash
(Al Jazeera) Twitter’s unprecedented suspension of at least five journalists over claims they revealed the real-time location of owner Elon Musk has drawn swift backlash from government officials, advocacy groups and journalism organisations across the globe.
Officials from France, Germany, the United Kingdom, the United Nations and the European Union condemned the suspensions, with some saying the platform was jeopardising press freedom.
The United Nations is “very disturbed” by the arbitrary suspension of journalists on Twitter, spokesman Stephane Dujarric said on Friday, adding that media voices should not be silenced on a platform professing to give space for free speech.

ChatGPT Has a Devastating Sense of Humor
(NYT) ChatGPT makes an irresistible first impression. It’s got a devastating sense of humor, a stunning capacity for dead-on mimicry, and it can rhyme like nobody’s business. Then there is its overwhelming reasonableness. When ChatGPT fails the Turing test, it’s usually because it refuses to offer its own opinion on just about anything. When was the last time real people on the internet declined to tell you what they really think?

13-14 December
Senate votes to ban TikTok use on government devices
The Senate on Wednesday unanimously approved legislation that would ban the use of TikTok on government phones and devices as part of the push to combat security concerns related to the Chinese-owned social media company.
The “No TikTok on Government Devices Act,” introduced by Sen. Josh Hawley (R-Mo.), was passed via unanimous consent late Wednesday, meaning that no member objected to the bill. The proposal would “prohibit certain individuals from downloading or using TikTok on any device issued by the United States or a government corporation.”
As GOPers ban TikTok, they’re not keeping that same energy with Twitter
There seems to be an obvious reason Republicans are banning TikTok from government devices but not Twitter, which carries many of the same security risks.
(MSNBC) Several Republican governors recently have taken steps to ban the use of TikTok on government devices, saying they’re doing so out of concern for national security. They cited concerns about undue influence by China’s government over TikTok’s parent company, ByteDance.
What isn’t clear, though, is how these security issues are materially different from issues on social platforms Republicans tend to love these days — like Twitter and Facebook.

20-24 November
Elon Musk plans to reinstate nearly all previously banned Twitter accounts — to the alarm of activists and online trust and safety experts.
After posting a Twitter poll asking, “Should Twitter offer a general amnesty to suspended accounts, provided that they have not broken the law or engaged in egregious spam?” in which 72.4 percent of the respondents voted yes, Musk declared, “Amnesty begins next week.”
Elon Musk is unilaterally reinstating banned Twitter accounts, despite assuring civil rights groups and advertisers that he wouldn’t
Musk has reinstated at least 11 previously banned right-wing accounts, including those of Donald Trump, Project Veritas, and Marjorie Taylor Greene
I Studied Trump’s Twitter Use for Six Years. Prepare for the Worst.
By Brian L. Ott, professor of communication at Missouri State University and a co-author of “The Twitter Presidency: Donald J. Trump and the Politics of White Rage.”
As someone who has been studying Mr. Trump’s Twitter use since before he was elected president, I believe that his return would mean the heightened spread of both misinformation and disinformation, the proliferation of degrading and dehumanizing discourse, the further mainstreaming of hate speech and the erosion of democratic norms and institutions. But there is something else: Mr. Trump’s return to Twitter could escalate the likelihood of political violence.

17 November
Why Everything in Tech Seems to Be Collapsing at Once
The industry is having a midlife crisis.
By Derek Thompson
(The Atlantic) We’ve mostly passed through the browser era, the social-media era, and the smartphone-app-economy era. But in the past few months, the explosion of artificial-intelligence programs suggests that something quite spectacular and possibly a little terrifying is on the horizon. Ten years from now, looking back on the 2022 tech recession, we may say that this moment was a paroxysm of scandals and layoffs between two discrete movements.

16 November
How to Prepare for Life After Twitter
Brian X. Chen, NYT lead consumer technology writer
Don’t delete your account just yet. Elon Musk’s takeover can teach us valuable lessons about our relationship with social networks.
Sheer chaos has surrounded Elon Musk’s takeover of Twitter over the past few weeks. More than half of Twitter’s employees have been fired or have resigned. The verification system no longer means much. And some users have reported problems with security features. So if you have an account on the social network, what do you do?
Unfortunately, there is no simple answer. But this continuing spectacle presents an opportunity for us to learn how to have healthier relationships with social platforms so we are not dependent on any one of them.
…those who have already left Twitter quickly realized there was no real alternative. Apps like Mastodon, the open-source site that involves posting on a social feed similar to Twitter’s timeline, are tricky for most people to set up. Reddit is more siloed by topics. LinkedIn is work-focused, Pinterest is centered on hobbies, TikTok is video-centric and Meta’s Facebook — well, let’s just say it has its own problems.
… This tumultuous situation with Twitter, according to social media consultants and security experts I interviewed, can serve as a template with valuable lessons for everyone, including casual tweeters and celebrities, on how to safely navigate any social network.
The first lesson is to always have an exit strategy — a plan for what to do with your data and contacts — in case things go awry. Lesson 2 is to avoid over-investing time and energy on any one social media site; hedge your bets by posting on multiple platforms that serve your needs.

20 March
Nine Canadian Space Companies Create The Space Canada Association
Canada’s leading space innovators today announced the formation of Space Canada, a new national industry association that will offer a united voice for Canada’s space sector and take it to new heights. Brian Gallant, the former premier of New Brunswick, is the organization’s founding CEO.
“It is such an honor to lead Space Canada…” said Brian Gallant. “Investments in space create high-quality STEM jobs of the future and play a significant role in addressing economic, societal and planetary challenges like climate change. Space technology monitors our land ecosystems and coastlines, supports disaster relief and protects our oceans and forests. Moreover, space can close the digital divide, particularly in our Northern, remote and rural communities, and enhance our national security.”
