AI, Chatbots, Society & Technology August 2025-


AI, Chatbots, Society & Technology July 2024-July 2025
Gary Marcus on AI
Overview: AI Strategy for the Federal Public Service 2025-2027

The first International Report on AI Safety, led by Yoshua Bengio

Stephen Hawking:
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Ray Kurzweil:
“Our intuition about the future is linear. The reality of information technology is exponential.”
Eliezer Yudkowsky:
“By far the greatest danger of AI is that people conclude too early that they understand it.”

The Brookings AI Equity Lab
The AI Equity Lab is housed in the Center for Technology Innovation (CTI) at Brookings and is focused on advancing inclusive, ethical, nondiscriminatory, and democratized artificial intelligence (AI) models and systems throughout the United States and the Global South, including the African Union, India, Southeast Asia, the Caribbean, and Latin America.
In particular, the AI Equity Lab is focused on some of the most consequential areas of AI, whose design implications and autonomous decisions contribute to online biases and can erode the quality of life for people and their communities, including in criminal justice, education, health care, hiring and employment, housing, and voting rights.
5 September 2024
Council of Europe opens first ever global treaty on AI for signature
The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and the United Kingdom, as well as by Israel, the United States of America and the European Union.
The AI Convention, the first international artificial intelligence treaty, opened for signing Thursday [5 September]. It is sponsored by the US, Britain and the EU and comes after months of negotiations between 57 countries. The Convention will address both the risks and responsible use of AI, especially pertaining to human rights and data privacy. Critics, however, worry that the treaty is too broad and includes caveats that may limit its enforceability. For instance, the Convention allows several exemptions for AI technology used for national security purposes and in the private sector.
Generative AI is not the panacea we’ve been promised
Eric Siegel for Big Think+
Is generative AI the viral sensation we’ve been promised? Headlines are selling it as a panacea, but 30-year AI industry vet Eric Siegel says that’s mostly hype. It may be impressive and introduce efficiencies, but it won’t run the world as we’ve been promised.
Predictive AI, Siegel argues, often holds more transformative potential than generative AI, including LLMs. While generative AI cannot generally be blindly trusted, predictive AI has the potential to operate autonomously for some applications, across a wide range of industries.
Here’s why Siegel thinks more potential lies with predictive modeling for many organizations, and why we’re not going to replicate general human intelligence in machines anytime soon. – Aug 2024
Gary Marcus: Meta pirated at least 101 of my books and articles, and tens of millions of others
And they knew perfectly well what they were doing
The Unbelievable Scale of AI’s Pirated-Books Problem
Meta pirated millions of books to train its AI. Search through them here.
By Alex Reisner – 20 March 2025

6 November
AI Agents Threaten Free Societies
Christina Lioma and Sine N. Just
Accountability for one’s actions is a bedrock principle of any society built on the rule of law. Yet while we understand human autonomy and the responsibilities that come with it, the workings of machine autonomy lie beyond our comprehension, making AI agents an obvious risk to democratic governance.
(Project Syndicate) As AI tools have entered more areas of our professional and personal lives, praise for their potential has been accompanied by concerns about their built-in biases, the inequalities they perpetuate, and the vast amounts of energy and water they consume. But now, an even more harmful development is underway: as AI agents are deployed to solve tasks autonomously, they will introduce many new risks, not least to our fragile democracies.
Although AI-generated misinformation is already a huge problem, we have failed to comprehend, let alone control, this rapidly evolving technology. Part of the problem (more so in some parts of the world than in others) is that the companies pushing AI agents have taken pains to divert citizens’ and regulators’ attention from potential harms. Advocates of safer, ethical technologies need to help the public come to terms with what AI agents are and how they operate. Only then can we hold fruitful discussions about how humans can assert some degree of control over them.
AI agents’ capabilities have already advanced to the point that they can “reason,” write, speak, and otherwise appear human – achieving what Microsoft AI’s Mustafa Suleyman calls “seemingly conscious AI.” While these developments do not imply human consciousness in the usual sense of the word, they do herald the deployment of models that can act autonomously. If current trends continue, the next generation of AI agents will not only be able to perform tasks across a wide variety of domains; they will do so independently, with no humans “in the loop.”
27 October
New AI-generated information weapons pose problems
Marie Lamensch
(CORIM blog) In 2025, the world faces a wide array of conflicts and threats. Geopolitical instability—from wars in Ukraine and the Middle East to tensions in the Sahel and the Indo-Pacific—unfolds against a backdrop of intensifying great-power competition, including in cyberspace. China, the U.S., Europe, and Russia are vying for influence across regions and technologies, with artificial intelligence and emerging tech now central to this contest. Synthetic video and AI-generated content are now embedded in the digital landscape, shaping opinion and influencing political and social outcomes.
Cognitive warfare and persuasive technologies
Foreign Information Manipulation and Interference (FIMI) is a strategic form of cognitive warfare targeting entire societies’ perceptions, trust, and decision-making without crossing the threshold of armed conflict. Enabled by social media, encrypted platforms, and AI, these operations have become central to modern conflicts, eroding trust and deepening divisions. Autocratic states in particular weaponize the information space and exploit the openness of democracies, blending state-controlled media with covert networks designed to appear organic. Their operations spread across major platforms—X, Facebook, Telegram, YouTube, TikTok—using articles, videos, memes, AI-generated content, and “information laundering”.
Generative AI, in particular, is a “disinformation supercharger.” By producing realistic synthetic content, including deepfakes, it enables influence operations at scale. Recent examples include a Chinese campaign using a deepfake of Philippine President Ferdinand Marcos Jr to undermine the president and his policies, or Russian efforts deploying spoof websites impersonating legitimate Western media to influence a critical parliamentary election in Moldova and weaken the pro-EU ruling party.
The rise of “persuasive technologies”—including neurotechnology and ambient systems—further accelerates these threats as such tools interact with the human mind and body in increasingly intimate ways, enabling the large-scale manipulation of cognition.
Authoritarian regimes are the most active, with both state and non-state actors engaged in manipulation campaigns, making them difficult to counter.
Russia uses FIMI to destabilize democracies, erode trust, and amplify divisions at home and abroad. Its campaigns stretch from Europe to Africa and Latin America, where Kremlin-backed media such as RT and local amplifiers recycle anti-Western and neo-colonial tropes to justify its invasion of Ukraine and present Moscow as a defender of tradition and stability.

20 October
Our Last Hope Before The AI Bubble Detonates: Taming LLMs
By Eric Siegel
(Forbes) To know that we’re in an AI bubble, you don’t need OpenAI chair Bret Taylor or Databricks CEO Ali Ghodsi to admit it, as they have. Nor do you need to analyze the telltale economics of inflated valuations, underwhelming revenues and circular financing.
Instead, just examine the outlandish claim that’s been driving the hype: We’re nearing artificial general intelligence, computers that would amount to “artificial humans,” capable of almost everything humans can do.
But there’s still hope: AI could realize some of its overzealous promise of great autonomy with the introduction of a new reliability layer that tames large language models. By boosting AI’s realized value, this would be the best way to soften the AI bubble’s burst. Here’s how it works.

1 October
New data show no AI jobs apocalypse—for now
Molly Kinder, Martha Gimbel, Joshua Kendall, and Maddie Lee
(Brookings) Every day brings new breakthroughs in artificial intelligence—and new fears about the technology’s potential to trigger mass unemployment. CEOs predict white collar “bloodbaths.” Headlines warn of widespread job losses. With public anxiety growing, it can feel like the economy is already hemorrhaging jobs to AI. But what if, at least for now, the data are telling a different story?
To find out, we measured how the labor market has changed since ChatGPT’s launch in November 2022. Specifically, we analyzed the change in the occupational mix across the labor market over the past 33 months. If generative AI technologies such as ChatGPT were automating jobs at scale, we would expect to see fewer workers employed in jobs at greatest risk of automation.
Our data found the opposite. In a new report from the Budget Lab at Yale, we share our findings of a labor market characterized broadly by stability, rather than disruption, since ChatGPT’s release. Despite fears of an imminent AI jobs apocalypse, the overall labor market shows more continuity than immediate collapse. The percentage of workers in jobs with high, medium, and low AI “exposure” has remained remarkably steady over time. (Jobs that are highly “exposed” to generative AI technologies are those with the highest share of tasks for which ChatGPT can save significant time.)
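For readers curious how such an analysis works mechanically, it boils down to tracking employment shares by AI-exposure tier over time. Here is a minimal sketch in Python; the tiers and every number in it are invented for illustration and are not the Budget Lab’s data:

```python
# Sketch: track the share of employment in each AI-exposure tier over time.
# All numbers are invented; the real analysis joins occupation-level
# employment counts to an AI-exposure classification.
from collections import defaultdict

# (month, exposure tier, employment in millions) -- illustrative only
employment = [
    ("2022-11", "high", 32.1), ("2022-11", "medium", 41.0), ("2022-11", "low", 26.9),
    ("2025-07", "high", 31.8), ("2025-07", "medium", 41.3), ("2025-07", "low", 26.9),
]

totals = defaultdict(float)
for month, _tier, jobs in employment:
    totals[month] += jobs

for month, tier, jobs in employment:
    print(f"{month} {tier:>6}: {100 * jobs / totals[month]:.1f}% of employment")

# If generative AI were automating exposed jobs at scale, the "high" share
# would shrink over time; the report finds these shares roughly flat.
```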

25-26 September
AI minister says Canada must harness emerging technology at home
Digital sovereignty has been catapulted to the top of Evan Solomon’s to-do list, Raisa Patel writes.
(TorStar) Artificial Intelligence and Digital Innovation Minister Evan Solomon speaks at the All In AI conference in Montreal on Sept. 25, 2025.
… Entrepreneurs, angel investors, public servants and a staffer from Conservative Leader Pierre Poilievre’s office roamed the floor.
“AI is kind of our brand. It’s kind of our thing. Like, I know we don’t like to trumpet things, but modern-day AI took shape in this country,” Solomon said in his opening address at the ALL IN summit on Wednesday.
And for Solomon, the place where Canada must focus next is … Canada.
“For our government, for our country, ‘all in’ means building digital sovereignty,” Solomon said. “Building digital sovereignty — the most pressing policy (and) democratic issue of our time.”
It’s not surprising that digital sovereignty — giving a country complete control over its data, systems, workforce and infrastructure — has been catapulted to the top of Solomon’s to-do list.
It’s the perfect intersection of Prime Minister Mark Carney’s pledge to transform Canada at a time when geopolitical ties are unravelling, and his enthusiasm for scaling up AI and integrating it into untold facets of Canadians’ lives.
“Whoever controls (our data), or whoever uses it, whoever governs it, will determine our collective prosperity and our security and sometimes our values,” Solomon told the crowd.
If there is a dark side to the protectionist nature of this emerging AI race, it has yet to be fully understood.
The Montreal AI Ethics Institute points to some downsides. There could be drains on the planet and its resources. Countries duplicating efforts in their own silos might be a costly undertaking for an approach that could weaken global collaboration and produce redundancies. Sovereignty could inflame mistrust between nations, and make it harder to impose ethical guidelines across borders.
KYLE MATTHEWS has a warning for Artificial Intelligence Minister EVAN SOLOMON.
(Politico Canada Playbook) As Solomon touts the promise of an AI-driven economic boom — and drafts policy to make it happen — Matthews cautions him not to lose sight of the dangers: disinformation, deepfakes and the broader security risks that threaten democracy.
— Safety first: Matthews, the co-founder and executive director of the Montreal Institute for Global Security, told Playbook that while he has not met the minister, he has questions he hopes the government will prioritize:
“How can we build up our capacity with AI companies to help counter these online threats, fighting back against deep fakes or personalized attacks online? How can we build on this and build some capacity?”
Matthews said the government should also look at the questions “through a security and democratic resilience lens.”
— Wait a minute: As Canada moves into the global AI marketplace, Matthews says the government must be wary of authoritarian countries attacking private sector innovators, “to steal our economic secrets, steal our AI algorithms and other intellectual property.”
— American absenteeism: Matthews says the U.S. State Department is cutting back on funding it uses to track and counter disinformation and online security threats. He is calling on Canada and other NATO allies to fill the void, as countries like China and Russia seize on the U.S. retreat.
“We’ve kind of been laggards,” says Matthews. “We haven’t developed the capacity, and now we’re at a very vulnerable spot where these authoritarian states are extremely happy the U.S. has pulled out.”

David Einhorn Sounds Warning on the AI Spending Splurge
(Bloomberg) Hedge fund manager David Einhorn cautioned that the unprecedented amount of spending on artificial intelligence infrastructure may destroy vast amounts of capital, even if the technology itself proves transformative. … Einhorn drew a sharp line between the long-term importance of AI and the immediate economics of funding it. He said many projects will be built, but investors may not see the payoffs they anticipate.

22 September
Chatbait Is Taking Over the Internet
How chatbots keep you talking
By Lila Shroff
(The Atlantic) … As OpenAI has grown up, its chatbot seems to have transformed into an over-caffeinated project manager, responding to messages with oddly specific questions and unsolicited proposals. Occasionally, this tendency is genuinely helpful, such as when I’m asking ChatGPT for dinner ideas and it proactively offers to draft a grocery list. But often, it feels like a gimmick to trap users in conversation. Sometimes, the bot even offers to perform tasks it’s incapable of. ChatGPT recently volunteered to make me a sleepy bedtime playlist. “Would you like me to put this into a ready-to-use playlist link for you on Spotify?” it asked. When I agreed, the chatbot demurred: “I can’t generate a live Spotify link.”
OpenAI and its peers have plenty to gain from keeping users hooked. People’s conversations with chatbots serve as valuable training data for future models. And the more time someone spends talking to a bot, the more personal data they are likely to reveal, which AI companies can, in turn, use to create more compelling responses.
Just as clickbait persuades people to open links they might have otherwise ignored, chatbait pushes conversations to places where they might not have otherwise gone. For the most part, chatbait is simply annoying. But at the extreme, it might be dangerous. Reporting has shown people descending into delusional or depressive spirals after prolonged conversations with chatbots. In April, a 16-year-old boy died by suicide after having spent months discussing ending his life with ChatGPT. …
Chatbait might only just be getting started. As competition grows and the pressure to prove profitability mounts, AI companies have the incentive to do whatever they need to keep people using their product. Clickbait has flourished on social-media feeds, and in some cases—consider Meta AI or X’s Grok—chatbots are being built by the very same companies that power the social web. Forget the infinite scroll. We’re headed toward the infinite conversation.

17 September
A new AI model can forecast a person’s risk of diseases across their life
Delphi-2M can predict which of more than 1,000 conditions a person might face next
(The Economist) Much of the art of medicine involves working out, through detailed questioning and physical examination, which disease a given patient has contracted. Far harder, but no less desirable, would be identifying which diseases a patient might develop in the future. This is what the team behind a new artificial-intelligence (AI) model, details of which were published in Nature on September 17th, claims to do.
Though the model, named Delphi-2M, is not yet ready for deployment in hospitals, its creators hope it could one day allow doctors to predict if their patients are likely to get one of more than 1,000 different conditions, including Alzheimer’s disease, cancer and heart attacks, which all affect many millions every year. In addition to helping flag patients who are at high risk, it might also help health authorities allocate budgets for disease areas that may need extra funds in the future.
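The excerpt does not explain the mechanism, but the Nature paper reportedly frames Delphi-2M as a generative model over sequences of coded health events, predicting a patient’s next diagnosis much as a language model predicts the next word. The toy sketch below illustrates only that framing; the counts-based “model,” the codes, and the trajectories are all invented and bear no relation to Delphi-2M’s actual architecture:

```python
# Toy sketch of next-event prediction over a coded medical history.
# Illustrates the interface only; not Delphi-2M's architecture.
from collections import Counter, defaultdict

histories = [  # invented patient trajectories of diagnosis codes
    ["E11", "I10", "I25", "I21"],  # diabetes -> hypertension -> CHD -> heart attack
    ["E11", "I10", "I25"],
    ["J45", "J44"],                # asthma -> COPD
]

follows = defaultdict(Counter)  # which condition tends to follow which
for seq in histories:
    for prev, nxt in zip(seq, seq[1:]):
        follows[prev][nxt] += 1

def predict_next(history):
    """Rank candidate next conditions given the most recent diagnosis."""
    counts = follows[history[-1]]
    total = sum(counts.values())
    return [(code, n / total) for code, n in counts.most_common()]

print(predict_next(["E11", "I10"]))  # -> [('I25', 1.0)]
```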

5 September (UPDATE)
AI Impact Awards 2025: Every Cure Aims to ‘Teach Old Drugs New Tricks’
(Newsweek) … Thousands of lifesaving therapies may already exist, hidden in plain sight on pharmacy shelves.
Today, Every Cure is working to systematically uncover and validate those hidden opportunities using artificial intelligence. Its proprietary platform, MATRIX (Therapeutic Repurposing in extended uses), was designed to assess and rank roughly 75 million drug-disease combinations, prioritizing high-potential therapies with speed and scale that would be unimaginable without AI.
The AI model recommends the drug-disease combinations that are most likely to work biologically and reduce suffering for a significant number of people. Then, Every Cure can pursue that combination with a low-cost trial or study.
On a mission to save and improve lives by repurposing drugs
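As a rough illustration of the ranking problem MATRIX tackles, one can imagine scoring every drug-disease pair and keeping only the strongest candidates. In the sketch below, the scoring function is a random placeholder standing in for a predictive model; Every Cure’s actual model, features, and data are not described in the excerpt:

```python
# Sketch: rank drug-disease pairs by a predicted repurposing score and
# keep only the strongest candidates. The scorer is a stand-in.
import heapq
import random

random.seed(0)
drugs = [f"drug_{i}" for i in range(1000)]
diseases = [f"disease_{j}" for j in range(100)]  # real platform: ~75M pairs

def predicted_score(drug, disease):
    """Placeholder for a model estimating biological plausibility and impact."""
    return random.random()

top10 = heapq.nlargest(
    10, ((predicted_score(d, z), d, z) for d in drugs for z in diseases)
)
for score, drug, disease in top10:
    print(f"{drug} -> {disease}: {score:.3f}")
```

Streaming pairs through a bounded heap, as above, avoids ever materializing all 75 million combinations at once.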

4 September
Group launches report on authoritarian states weaponizing AI (YouTube)
At a press conference on Parliament Hill in Ottawa, the Montreal Institute for Global Security (MIGS) and the Konrad-Adenauer-Stiftung Canada (KAS Canada) discuss a new joint research report entitled “Wired for War: How Authoritarian States Weaponize AI against the West.” Taking part in the news conference are Bernd Althusmann from Adenauer Stiftung Canada as well as Kyle Matthews, Chris Beall, and Elizabeth Anderson from MIGS.

3 September
I’m a High Schooler. AI Is Demolishing My Education.
The end of critical thinking in the classroom
By Ashanty Rosario
(The Atlantic) Desperate to address AI, schools across the U.S. are investing in detection tools and screen-monitoring software to curb cheating. Some of these tools have been used in my school: Teachers rely on plagiarism checkers and exam-proctoring software. Still, these systems aren’t foolproof, and many students have begun to bypass these measures. Students use AI “humanizer” tools, which rephrase text to remove “robotic undertones,” as one such program puts it, or they manually edit the AI’s output themselves to simplify language or adjust the chatbot’s sentence structure. During in-class exams, screens may be locked or recording technology may be employed, but students have ways around these, too—sneaking phones in, for example. Based on what I’ve observed, preventative measures can only go so far.
12 August
The AI Takeover of Education Is Just Getting Started
Was your kid’s report card written by a chatbot?
(The Atlantic) Rising seniors are the last class of students who remember high school before ChatGPT. But only just barely: OpenAI’s chatbot was released months into their freshman year. Ever since then, writing essays hasn’t required, well, writing. By the time these students graduate next spring, they will have completed almost four full years of AI high school. …
Gone already are the days when using AI to write an essay meant copying and pasting its response verbatim. To evade plagiarism detectors, kids now stitch together output from multiple AI models, or ask chatbots to introduce typos to make the writing appear more human. The original ChatGPT allowed only text prompts. Now students can upload images (“Please do these physics problems for me”) and entire documents (“How should I improve my essay based on this rubric?”). Not all of it is cheating. Kids are using AI for exam prep, generating personalized study guides and practice tests, and to get feedback before submitting assignments. Still, if you are a parent of a high schooler who thinks your child isn’t using a chatbot for homework assistance—be it sanctioned or illicit—think again.

28 August
The AI cheating panic is missing the point
Gen Z knows using a chatbot to write a whole essay is wrong. But what are they supposed to use it for?
(WaPo) Let’s get the obvious out of the way: Gen Z knows they shouldn’t use ChatGPT to flat-out cheat, even if some of them do it anyway. ChatGPT knows this, too — and OpenAI rolled out a study mode partly to address concerns about its misuse. But the obsession with this topic is distracting from a more pressing question: What should students be using AI to do?
In listening sessions, one-on-one conversations and surveys with young adults, Zoomers describe a complex relationship with AI: They use it daily, but they’re uneasy about its rise. Far from being enthusiastic early adopters, more than half of Gen Z adults said in a recent Gallup survey that AI makes them feel anxious. (The poll was conducted in collaboration with the Walton Family Foundation, which also supports my own research.)
Right now, the AI educational landscape looks like the Wild West. Last spring, 53 percent of K-12 students told Gallup that their school does not have a clear AI policy in place. The number is even higher outside metro areas, with 67 percent of students reporting a lack of clear guidelines — which risks further exacerbating digital divides between urban and rural communities.
23 August (rebroadcast 28 August)
AI in science — the good, the bad and the ugly; Science is being transformed by the AI revolution
(CBC Quirks and Quarks) The stunning advances in artificial intelligence that we see with internet AI apps are just the tip of the iceberg when it comes to science. Researchers from almost every field are experimenting with this powerful new tool to diagnose disease, understand climate change, develop strategies for conservation and discover new kinds of materials. And AI is on the threshold of being able to make discoveries all by itself. Will it put scientists out of a job?

27 August
We tested which AI gave the best answers without making stuff up. One beat ChatGPT.
Librarians helped us quiz AI bots with tough trivia, recent events questions and more. Some answers were impressive — others were worse than an old-fashioned Google search.
(WaPo) Lots of artificial intelligence tools claim they can answer any question. Except sometimes they are hilariously, or even dangerously, wrong. So which AI is most likely to give you a correct answer?
To find out, I enlisted some professional help: librarians. We set up a competition between nine AI search tools, asking each AI to answer 30 tough questions. Then the librarians judged the AI answers — and whether an old-fashioned Google web search might have been sufficient.

26 August
Students Hate Them. Universities Need Them. The Only Real Solution to the A.I. Cheating Crisis.
(NYT) Blue books and viva voce testing will live side by side with modern innovations like active learning and authentic assessment. But a return to a more conversational, extemporaneous style will make higher education more interpersonal, more improvised and more idiosyncratic, restoring a sense of community to our institutions.

20 August
The best-case scenario for AI in schools (video)
BBC Special Correspondent Katty Kay and Khan Academy founder and author Sal Khan discuss his optimistic case for how increased use of artificial intelligence could benefit students.
Related: Sal Khan unleashes the power of AI to transform education
Smart use of artificial intelligence in education can empower learners of all backgrounds and ability levels.
(Standtogether) Artificial intelligence (AI) has sparked concerns and cultural fears regarding its potential negative impacts. Chief among those concerns is the fear that AI will inhibit learning by making things easier for students and preventing them from developing and mastering important skills such as writing, problem solving, and critical thinking.
In a recent TED Talk [How AI Could Save (Not Destroy) Education YouTube], Sal Khan, founder of Khan Academy and Stand Together Trust partner, challenged these anxieties and presented a compelling argument for the transformative role of AI in education. In particular, Khan sees how AI can not only enhance human potential but also accelerate access to individualized education for students worldwide.
27 November 2024
How AI Will Impact the Future of Teaching—a Conversation With Sal Khan
The founder of Khan Academy and Khanmigo believes AI can deliver the personalized instruction students need, while freeing up teachers to do what they do best.
(Edutopia) … Khan’s latest project, dubbed Khanmigo and launched in 2023, provokes some of the same fears. Pairing generative AI with a user-friendly interface, the application, which is being piloted by over 600,000 students and teachers in the U.S., promises to deliver a personalized tutor to every classroom, allowing students to plug in and receive instruction on subjects ranging from elementary math to essay writing. Instead of simply providing answers to their questions, Khan says, new AI bots like Khanmigo are trained to serve as “thoughtful” mentors, prodding students with questions, giving them encouragement, and delivering feedback on their mistakes as they work to develop their own understanding. …

18 August
AI Is a Mass-Delusion Event
Three years in, one of AI’s enduring impacts is to make people feel like they’re losing it.
By Charlie Warzel
(The Atlantic) Right now, there are competing theories as to whether AI is having a meaningful effect on employment. But real and perceived impact are different things. A recent Quinnipiac poll found that, “when it comes to their day-to-day life,” 44 percent of surveyed Americans believe that AI will do more harm than good. The survey found that Americans believe the technology will cause job loss—but many workers appeared confident in the security of their own job. Many people simply don’t know what conclusions to draw about AI, but it is impossible not to be thinking about it. …
Even if you personally don’t believe in the hype, you are living in an economy that has reoriented itself around AI. A recent report from The Wall Street Journal estimates that Big Tech’s spending on IT infrastructure in 2025 is “acting as a sort of private-sector stimulus program,” with the “Magnificent Seven” tech companies—Meta, Alphabet, Microsoft, Amazon, Apple, Nvidia, and Tesla—spending more than $100 billion on capital expenditures in recent months. The flip side of such consolidated investment in one tech sector is a giant economic vulnerability that could lead to a financial crisis.
This is the AI era in a nutshell. Squint one way, and you can portray it as the saving grace of the world economy. Look at it more closely, and it’s a ticking time bomb lodged in the global financial system. The conversation is always polarized. …
Bots are everywhere, and they have produced profoundly strange and meaningful effects on digital life. Sometimes they’re racist. Many are sycophants. Other times, they summon demons. Google’s AI summaries are cratering traffic and rewiring the web. In schools, ChatGPT hasn’t just killed the student essay; it seems to be threatening some of the basic building blocks of human cognition. Some research has argued that chatbots are homogenizing the way people speak. In any case, they appear to have inverted the promise of the internet as an endless archive of information one can navigate for themselves. “Do your own research” has, in short order, become “Get one canonical answer.”
Sometimes this is helpful: A bot artfully summarizes a complex PDF. They are, by most accounts, truly helpful coding tools. Kids use them to build helpful study guides. They’re good at saving you time by churning out anemic emails. Also, a health-care chatbot made up fake body parts. The FDA has introduced a generative-AI tool to help fast-track drug and medical-device approvals—but the tool keeps making up fake studies. To scan the AI headlines is a daily exercise in trying to determine the cost that society is paying for these perceived productivity benefits. For example, with a new Google Gemini–enabled smartwatch, you can ask the bot to “tell my spouse I’m 15 minutes late and send it in a jokey tone” instead of communicating yourself. This is followed by news of a study suggesting that ChatGPT power users might be accumulating a “cognitive debt” from using the tool.

11 August
Ian Bremmer asks Can Democracy Survive AI?
While the internet and telecommunications diffused political power, the next wave of technological innovation could have the opposite effect. If current trends in AI development and deployment continue, the openness that long gave democracies their edge might become the cause of their undoing.
(Project Syndicate) Digital technology was supposed to disperse power. Early internet visionaries hoped that the revolution they were unleashing would empower individuals to free themselves from ignorance, poverty, and tyranny. And for a while, at least, it did. But today, ever-smarter algorithms increasingly predict and shape our every choice, enabling unprecedentedly effective forms of centralized, unaccountable surveillance and control. That means the coming AI revolution may render closed political systems more stable than open ones. In an age of rapid change, transparency, pluralism, checks and balances, and other key democratic features could prove to be liabilities. Could the openness that long gave democracies their edge become the cause of their undoing?

Interwoven frontiers: Energy, AI, and US-China competition
R. David Edelman
(Brookings) The AI demand driver
AI systems in their present form are notorious energy consumers, a demand burden potentially borne by the U.S. and Chinese grids that may strain long-standing projections of energy demand. While AI by no means accounts for the bulk of growing energy demand globally, its significant needs, rapid emergence, and mass diffusion provide a window into a new and importantly disruptive dynamic to clean energy plans around the globe.
Prior to the development of AI systems at scale, a relatively stable dynamic existed between the increasing need for energy-intensive data centers and the efficiency of the machines within those data centers. As a result, in the decade before modern AI systems came of age (roughly 2005-2016), the energy being consumed by U.S. data centers was relatively flat, as efficiency counterbalanced the growing number of data centers. However, as the world’s most data-intensive and data-center-operating firms like Amazon, Alphabet’s Google, and Meta’s Facebook began to seize the business benefits of advanced machine learning—the underlying science that would drive “generative” tools like ChatGPT in recent years—the necessary shift to particularly energy-intensive hardware led energy consumption to more than double in the five years that followed (2017-2023).
The demand spikes caused by AI take many forms. Training advanced AI systems creates needs among single, large power consumers that may rapidly outstrip any ability to serve them, let alone with the clean energy they increasingly seek. While estimates vary wildly, parsing available research and companies’ own statements suggests that training (which is to say, building for consumer use) the current, cutting-edge models available today is estimated to have drawn tens of megawatts of power per model. The “scaling laws” that many AI companies are using to project future demand and acquire energy capacity suggest a multi-gigawatt annual need in just a few years to develop and sustain next-generation systems.
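To see why per-model power draws translate into striking energy projections, a back-of-the-envelope calculation helps. Every input below is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope: what "tens of megawatts per model" implies in energy.
# All inputs are illustrative assumptions, not figures from the article.
avg_power_mw = 30     # assumed average power draw of a training cluster (MW)
training_days = 90    # assumed length of a single training run

energy_mwh = avg_power_mw * 24 * training_days  # megawatt-hours consumed
print(f"One run: {energy_mwh:,.0f} MWh ({energy_mwh / 1000:.1f} GWh)")

# At roughly 10.8 MWh of electricity per US household per year, that is
# on the order of 6,000 household-years for a single training run.
print(f"~{energy_mwh / 10.8:,.0f} household-years of electricity")
```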

10 August
Nvidia, AMD agree to pay U.S. government 15% of AI chip sales to China
Trump says he told Nvidia chief that “I want you to pay us as a country something.” Some experts say the highly unusual arrangement may be unconstitutional.
Under the financial agreement, the companies will give the U.S. government a portion of their sales as a prerequisite to obtaining export licenses for China
(WaPo) In April, the Trump administration halted H20 chip sales to China, citing security risks, but announced a reversal of the decision in July shortly ahead of another round of trade talks with Beijing. The H20 had been Nvidia’s last AI chip allowed to be sold to China after the Biden administration clamped down on such sales starting in 2022.
President Donald Trump dismissed the H20 as “an old chip” that doesn’t advance China’s capabilities, though “it still has a market” there. In a White House news conference on Monday, he said he had had a conversation with Nvidia CEO Jensen Huang, whom he called “a great guy,” about allowing the chip’s export. “I said, if I’m going to do that, I want you to pay us as a country something because I’m giving you a release.”

7 August
Tailored psychological warfare: a deepfake video of Hong Kong activists
A deepfake video fabricating an online conversation between prominent Hong Kong activists has become the first known exercise in the next level of AI-enabled influence operations. Presumably concocted by the Chinese government or a group that serves it, the video heralds a more tailored and comprehensive approach of psychological warfare.
As China integrates deepfake technology into its influence-operations playbook, psychological operations could soon use the technology as a tool of emotional and cognitive manipulation.
The deepfake video purported to show a leaked secret conversation between exiled Hong Kong activists discussing their concerns over the possibility of being extradited from Britain to China as a result of Britain’s plan to reinstate an extradition deal with Hong Kong. The video was first published by the Facebook page Yellowbrainclown 黃腦膠戰 on 26 July. Within two hours of the original post appearing, 22 other accounts shared the video to 17 Facebook groups.

The New ChatGPT Resets the AI Race
With GPT-5, OpenAI is making its strongest effort yet to hook users.
By Matteo Wong
(The Atlantic) GPT-5 achieves state-of-the-art performance on a number of AI benchmarks, according to OpenAI’s internal tests, but it is far from a clean sweep: On a few tests, competing products such as Google Gemini, Anthropic’s Claude, and xAI’s Grok outperform, or are just barely below the level of, OpenAI’s new top model. The GPT-5 announcement video and launch page also contained a number of errors—incorrect labels, numbers and colors that made no sense, and missing entries on charts—that made the program’s precise abilities, and the trustworthiness of OpenAI’s reporting, hard to discern (and led some observers to joke that perhaps GPT-5 itself had made, or hallucinated, the graphics). Yet that may not matter. OpenAI’s animating theme for GPT-5 is user experience, not “intelligence”: Its new model is intuitive, fast, and efficient; adapts to human preferences and intentions; and is easy to personalize. Before it is more intelligent, GPT-5 is more usable—and more likely to attract and retain users.
