AI, Chatbots, Society & Technology November 2023-June 2024

June 25, 2024

AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate
GZERO AI launches October 31st

2023: The year we played with artificial intelligence — and weren’t sure what to do about it
(AP) Artificial intelligence went mainstream in 2023 — it was a long time coming yet has a long way to go for the technology to match people’s science fiction fantasies of human-like machines.
Catalyzing a year of AI fanfare was ChatGPT. The chatbot gave the world a glimpse of recent advances in computer science even if not everyone figured out quite how it works or what to do with it. (14 December 2023)
Race to AI: the origins of artificial intelligence, from Turing to ChatGPT
(The Guardian) Today’s poem-writing AI has ancestry in punch-card machines, trundling robots and godlike gaming engines (28 October 2023)

Is AI’s “intelligence” an illusion?
Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus breaks down the recent advances, and inherent risks, of generative AI.
AI-powered, large language model tools like the text-to-text generator ChatGPT or the text-to-image generator Midjourney can do magical things like write a college term paper in Klingon or instantly create nine images of a slice of bread ascending to heaven.
But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth, often presenting inaccurate or plainly false information as facts. As generative AI becomes more widespread, it will undoubtedly change the way we live, in both good ways and bad. (11 September)
Rise of the AI psychbots
(Politico) The story of psychologist Martin Seligman and his AI counterpart “Ask Martin” forces us to think hard about some questions that current tech policy is ill-suited to handle, such as what part of ourselves we “own,” how the law diverges from our own instinctive sense of justice — and also, what digital version of us may survive into the future, whether or not we want it to. (2 January 2024)
The Path to AI Arms Control
America and China Must Work Together to Avert Catastrophe
By Henry A. Kissinger and Graham Allison (13 October)
AI’s Pugwash Moment
Anne-Marie Slaughter and Fadi Chehadé
Leading scientists, technologists, philosophers, ethicists, and humanitarians from every continent must come together to secure a broad agreement on a framework for governing AI that can win support at the local, national, and global levels.
(Project Syndicate) Unlike the original Pugwash Movement, the AI version would not have to devise a framework from scratch. Scores of initiatives to govern and guide AI development and applications are already underway. Examples include the Blueprint for an AI Bill of Rights in the United States, the Ethics Guidelines for Trustworthy AI in the European Union, the OECD’s AI Principles, and UNESCO’s Recommendation on the Ethics of Artificial Intelligence. Instead, the new Pugwash Movement would focus largely on connecting relevant actors, aligning on necessary measures, and ensuring that they are implemented broadly. Institutions will be vital to this effort. (24 July)
The politics of AI: ChatGPT and political bias
(Brookings) The release of OpenAI’s ChatGPT in late 2022 made a splash in the tech world and beyond. A December 2022 Harvard Business Review article termed it a “tipping point for AI,” calling it “genuinely useful for a wide range of tasks, from creating software to generating business ideas to writing a wedding toast.” Within two months after its launch, ChatGPT had more than 100 million monthly active users—reaching that growth milestone much more quickly than TikTok and Instagram. (8 May)

25 June
World Economic Forum Identifies Top 10 Emerging Technologies to Address Global Challenges
(WEF) AI-powered scientific discovery, carbon-capturing microbes, and elastocalorics are among the 10 listed technologies.
Top 10 emerging technologies focus on applications in health, communication, infrastructure and sustainability.
This year’s Top 10 Emerging Technologies report identifies breakthroughs impacting societies and economies within 3-5 years.
The World Economic Forum announces today the publication of its annual Top 10 Emerging Technologies Report featuring technologies with the greatest potential to make a positive impact in the world in the next three to five years.
Top 10 Emerging Technologies of 2024
The Top 10 Emerging Technologies report is a vital source of strategic intelligence. First published in 2011, it draws on insights from scientists, researchers and futurists to identify 10 technologies poised to significantly influence societies and economies. These emerging technologies are disruptive, attractive to investors and researchers, and expected to achieve considerable scale within five years. This edition expands its analysis by involving over 300 experts from the Forum’s Global Future Councils and a global network comprising over 2,000 chief editors worldwide from top institutions, through Frontiers, a leading publisher of academic research.

20 June
They call it ‘AI washing’
SEC’s AI Crackdown Signals Trickle of Cases Will Turn to Flood
Agency has filed three enforcement cases since March
Securities lawyers see similarity to early crypto scrutiny
(Bloomberg) A spate of recent US enforcement actions is likely just the beginning of a crackdown on companies overhyping artificial intelligence to investors.
It’s AI’s turn. Since March, the US Securities and Exchange Commission has accused three companies of misrepresenting how they use machine learning and other tools—so-called AI washing. The moves follow multiple warnings from SEC Chair Gary Gensler and the regulator’s top enforcement attorney over misstatements around artificial intelligence. While Gensler has referred to AI as the “most transformative technology of this generation,” he has also said it could spark a financial meltdown. Even before the recent SEC cases, the agency had proposed new restrictions for brokerages and advisers using AI. Lawyers contend the enforcement actions brought so far around AI washing are similar to those involving statements some companies made about Covid treatments and ESG.

30 May
All Eyes on Rafah: The post that’s been shared by more than 47m people
An AI-generated image depicting tent camps for displaced Palestinians and a slogan that reads All Eyes on Rafah is sweeping social media.
The image and the slogan went viral after an Israeli air strike and resulting fire at a camp for displaced Palestinians in the southern Gaza city of Rafah earlier this week. The deadly incident led to people posting clips of Richard Peeperkorn, a representative of the World Health Organization in the occupied Palestinian territories, speaking in February.
He told journalists at the time that “All eyes are on Rafah”, warning against Israeli forces attacking the city. … in the last two days, the AI-generated image featuring the slogan has proliferated on social media sites, with more than 47 million shares according to an Instagram count on Thursday afternoon.
… Experts who spoke to the BBC say there are a number of factors that explain why the All Eyes on Rafah message has gone viral in such a short amount of time.
Among them are the AI-generated nature of the image, the simplicity of the slogan, the ease with which Instagram users can share the post in just a couple of clicks, and its uptake by celebrities.
But according to Anastasia Kavada, who runs an MA course on media, campaigning and social change at the University of Westminster, the most important factor is the timing and political context of the post.

16-21 May
New work on AI – Brookings presents 3 new reports
Regulating general-purpose AI: Areas of convergence and divergence across the EU and the US
Benjamin Cedric Larsen and Sabrina Küspert
The European Parliament has acknowledged that the speed of technological progress around general-purpose AI models is faster and more unpredictable than anticipated by policymakers. At the end of 2023, EU lawmakers reached political agreement on the EU AI Act, a pioneering legislative framework on AI, which introduces binding rules for general-purpose AI models and a centralised governance structure at the EU level through a new European AI Office.
The most important question when designing AI
Enhancing collective intelligence through AI
When designing AI systems, one useful proxy for the interests of people and planet is collective intelligence. For biological life, all intelligence is collective intelligence (CI). Consider humans: The intelligence of our bodily functions emerges from the teamwork of cells; our cognitive intelligence emerges through cooperation among neurons. Similarly, our social intelligence—from spoken language to the creation of modern (and maybe one day sustainable) societies—has emerged from the collaborative efforts of families, teams, communities, and now vast digital networks whose intelligence has surpassed the sum of their parts.
If CI is the underlying logic of human intelligence and perhaps even the sustainability of life itself, then one of the most important questions we can ask when designing AI applications today is: How can AI enhance CI?
How AI can inclusively transform agri-food systems in Africa
AI and other automation technologies are presenting game-changing opportunities for the continent’s smallholder farmers, particularly when delivered through low-tech delivery channels, in-person intermediary networks, and through partnerships with value chain stakeholders to subsidize costs.
A report by Genesis Analytics provides a sneak peek into this future. Data from sensors, satellites, and drones is enabling optimal use of land based on specific crop suitability. Automated systems, including irrigation, ensure efficient resource utilization. AI-enabled advisory services provide farmers with timely, tailored advice to boost yields and manage pests, reducing crop failure and spoilage and bolstering food security. More accurate farming minimizes costs and environmental impact by using resources efficiently. Traceability tools reduce certification costs, broadening market access. AI-driven risk analysis facilitates access to crucial financial services like credit and insurance. The report identifies the types of solutions with existing pockets of adoption impacting smallholder farmers in Africa.
Can AI inclusively advance agri-food systems?
In late 2022, ChatGPT made it clear that AI is transforming our world. But what does this revolution mean for agri-food systems in low- and middle-income countries? The Centre of Digital Excellence at Genesis partnered with the Bill and Melinda Gates Foundation and the US Agency for International Development to investigate. This report unpacks why and how AI and automation are used by small-scale agricultural producers, what the risks and opportunities are, and makes recommendations for steering AI innovation toward more inclusive outcomes.

21 May
The Big AI Risk Not Enough People Are Seeing
Beware technology that makes us less human.
By Tyler Austin Harper
As AI is built into an ever-expanding roster of products and services, covering dating, essay writing, and music and recipe recommendations, we need to be able to make granular, rational decisions about which uses of artificial intelligence expand our basic human capabilities, and which cultivate incompetence and incapacity under the guise of empowerment. Disabling algorithms are disabling precisely because they leave us less capable of, and more anxious about, carrying out essential human behaviors.
(The Atlantic) “Our focus with AI is to help create more healthy and equitable relationships.” Whitney Wolfe Herd, the founder and executive chair of the dating app Bumble, leans in toward her Bloomberg Live interviewer. “How can we actually teach you how to date?”
… What Herd provides here is much more than a darkly whimsical peek into a dystopian future of online dating. It’s a window into a future in which people require layer upon layer of algorithmic mediation between them in order to carry out the most basic of human interactions: those involving romance, sex, friendship, comfort, food. Implicit in Herd’s proclamation—that her app will “teach you how to date”—is the assumption that AI will soon understand proper human behavior in ways that human beings do not. Despite Herd’s insistence that such a service would empower us, what she’s actually describing is the replacement of human courtship rituals: Your digital proxy will go on innumerable dates for you, so you don’t have to practice anything so pesky as flirting and socializing.
OpenAI Just Gave Away the Entire Game
The Scarlett Johansson debacle is a microcosm of AI’s raw deal: It’s happening, and you can’t stop it.
By Charlie Warzel
On its own, this seems to be yet another example of a tech company blowing past ethical concerns and operating with impunity. But the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners.

29 April
AI is coming for the professional class. Expect outrage — and fear.
By Megan McArdle
(WaPo) …as artificial intelligence starts coming for our jobs, I wonder how well the professional class will take its own medicine. Will we gracefully transition to lower-skilled service work, as we urged manufacturing workers to do? Or will we fight like hell to retain what we have, for our children as well as ourselves?
For I suspect AI is coming for a lot of professional class jobs, despite how many people I hear say a machine can never do what they do.
We’re accustomed to thinking of automation as primarily displacing the working class, but as economist Daron Acemoglu wrote in 2002, “the idea that technological advances favor more skilled workers is a 20th-century phenomenon”; in the 19th century, steam-driven machines replaced a lot of skilled artisans, and AI currently looks to be pointed in a similar direction. If you work with words and symbols, AI can already do a surprising amount of what you can do — and it is improving with terrifying speed. …

5 April
Facebook parent Meta overhauls rules on deepfakes, other altered media
(Globe & Mail) Facebook owner Meta announced major changes to its policies on digitally created and altered media on Friday, ahead of U.S. elections poised to test its ability to police deceptive content generated by new artificial-intelligence technologies.
The social-media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on its platforms, expanding a policy that previously addressed only a narrow slice of doctored videos, vice-president of content policy Monika Bickert said in a blog post.
Ms. Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether the content was created using AI or other tools.
The new approach will shift the company’s treatment of manipulated content. It will move from one focused on removing a limited set of posts toward one that keeps the content up while providing viewers with information about how it was made.
Meta previously announced a scheme to detect images made using other companies’ generative AI tools using invisible markers built into the files, but did not give a start date at the time.
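The “invisible markers” mentioned here are provenance metadata of the kind standardized by C2PA and the IPTC, whose DigitalSourceType value “trainedAlgorithmicMedia” labels AI-generated media. As a rough illustration only (not Meta’s actual detection pipeline, and easily defeated since metadata can be stripped), a naive scan for that tag in a file’s embedded metadata might look like this Python sketch:

```python
# Naive provenance check: scan a file's bytes for the IPTC DigitalSourceType
# value that labels AI-generated media. Illustrative only -- real C2PA
# verification validates cryptographically signed manifests, and metadata
# can be stripped or forged, so the absence of the tag proves nothing.
import sys

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType for synthetic media

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-media tag."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for name in sys.argv[1:]:
        verdict = "labeled AI-generated" if looks_ai_labeled(name) else "no AI label found"
        print(f"{name}: {verdict}")
```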

2 April
That robot sounds just like you
(GZERO media) A group of OpenAI clients is reportedly testing a new tool called Voice Engine, which can mimic a person’s voice based on a 15-second recording, according to the New York Times. And from there it can translate the voice into any language.
The report outlined a series of potential abuses: spreading disinformation, allowing criminals to impersonate people online or over phone calls, or even breaking voice-based authenticators used by banks.
…the real danger lies in the absence of other indicators that the audio is fake. With every other AI-generated media, there are clues for the discerning viewer or reader. AI text can feel clumsily written, hyper-organized, and chronically unsure of itself, often refusing to give real recommendations. AI images often have a cartoonish or sci-fi sheen, depending on their maker, and are notorious for getting human features wrong: extra teeth, extra fingers, and ears without lobes. AI video, still relatively primitive, is infinitely glitchy.
It’s conceivable that each of these applications for generative AI improves to a point where they’re indistinguishable from the real thing, but for now, AI voices are the only iteration that feels like it could become utterly undetectable without proper safeguards. And even if OpenAI, often the first to market, is responsible, that doesn’t mean all actors will be.

29 March
OpenAI Unveils A.I. Technology That Recreates Human Voices
(NYT) The start-up is sharing the technology, Voice Engine, with a small group of early testers as it tries to understand the potential dangers.

The US and UK have struck the world’s first bilateral agreement on AI safety, agreeing to cooperate on testing and risk-assessing artificial intelligence.
US and UK announce formal partnership on artificial intelligence safety
Countries sign memorandum to develop advanced AI model testing amid growing safety concerns
Since the release of ChatGPT in November 2022, generative AI – which can create text, photos and videos in response to open-ended prompts – has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans.
Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.

28 March
AI already uses as much energy as a small country. It’s only the beginning.
Brian Calvert, environmental journalist
The energy needed to support data storage is expected to double by 2026. You can do something to stop it.
(Vox) In January, the International Energy Agency (IEA) issued its forecast for global energy use over the next two years. Included for the first time were projections for electricity consumption associated with data centers, cryptocurrency, and artificial intelligence.
The IEA estimates that, added together, this usage represented almost 2 percent of global energy demand in 2022 — and that demand for these uses could double by 2026, which would make it roughly equal to the amount of electricity used by the entire country of Japan.
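Taking only the figures quoted above, a doubling of that demand between 2022 and 2026 implies roughly 19 percent compound annual growth. A back-of-envelope check (the 2 percent share and the doubling horizon are the article’s numbers; the rest is derived):

```python
# Back-of-envelope arithmetic on the IEA projection quoted above:
# demand from data centers, crypto, and AI doubles between 2022 and 2026.
share_2022 = 0.02   # ~2% of global electricity demand in 2022 (article's figure)
years = 4           # 2022 -> 2026
multiplier = 2.0    # "could double by 2026"

cagr = multiplier ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # 18.9% per year
# Share relative to 2022's total demand; global demand itself also grows.
print(f"2026 demand as share of 2022 total: {share_2022 * multiplier:.1%}")  # 4.0%
```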
We live in the digital age, where many of the processes that guide our lives are hidden from us inside computer code. We are watched by machines behind the scenes that bill us when we cross toll bridges, guide us across the internet, and deliver us music we didn’t even know we wanted. All of this takes material to build and run — plastics, metals, wiring, water — and all of that comes with costs. Those costs require trade-offs.

18 March
Industry Minister François-Philippe Champagne says the federal government’s efforts to regulate artificial intelligence are the envy of other G7 countries.
(iPolitics) After wrapping up three days of meetings with his G7 counterparts in Italy, Champagne said it’s clear Canada is a world leader in regulating machine learning and other emerging technologies.
“I wish you had been in the room,” he told reporters last Friday, “because everyone is praising [Bill] C-27.”
Introduced in the House of Commons in June 2022, C-27, also known as the Digital Charter Implementation Act, seeks to update Canadian privacy law and establish a new legislative framework for regulating AI. Among other things, it features new requirements governing the design and development of AI in an effort to maintain data security and limit harmful practices.
The bill passed second reading last April with the support of the NDP and Bloc and was referred to the industry committee, where it still resides.
Recently, the feds have come under fire from business groups, Indigenous organizations, and other stakeholders for failing to adequately consult on the bill, though Champagne has said the bill must progress through Parliament expeditiously to ensure AI doesn’t go unregulated for years to come.

14 March
How to Spot AI Fakes (For Now)
Artificial intelligence can generate images, texts, human voices, and videos. Can we avoid being duped?
Jonathan Jarry M.Sc.
(McGill Office for Science and Society) Technology has improved dramatically in the last couple of years to allow for the wholesale creation of images, texts, audio, and video that appear to come from humans but are actually made by computers. Meanwhile, our species is already at a profound disadvantage when it comes to media literacy: most teenagers can’t tell the difference between a fact and an opinion in a text they read, and nearly half of all Canadians are functionally illiterate. We have the makings of a massive problem on our hands.
Even if the fakes are not that good, their mere existence facilitates the manufacture of doubt. A snippet of audio or a smartphone snapshot makes you look bad? It’s clearly a fake! (Just ask Donald Trump, who is already using this defence.)
… Human hands are already difficult to draw in three-dimensional space, and AI still generates hands with too many fingers and with fingers that blend into one another in weird ways. AI can generate realistic hands, but it’s still not a sure thing.

12 March
Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity
(GZERO media) Highlighting AI’s unparalleled capacity to make autonomous decisions and generate original content, Harari underscores the rapid pace at which humans are ceding control over both power and stories to machines. “AI is the first technology in history that can take power away from us,” Harari tells Bremmer.
The discussion also touches on AI’s impact on democracy and personal relationships, with Harari emphasizing AI’s infiltration into our conversations and its burgeoning ability to simulate intimacy. This, he warns, could “destroy trust between people and destroy the ability to have a conversation,” thereby unraveling the fabric of democracy itself. Harari chillingly refers to this potential outcome as “a social weapon of mass destruction.” And it’s scaring dictators as much as democratic leaders. “Dictators,” Harari reminds us, “they have problems too.”

5 March
AI will upset democracies, dictatorships, and elections
(GZERO media) There’s no mistaking it: Artificial intelligence is here, and it’s already playing a major role in elections around the globe. In a year with national elections in 64 countries, the world’s governments are seeing the immediate impact of this nascent technology in real time. …
“Politicians have to win the AI race before they win the election,” says Xiaomeng Lu, director of geo-technology at the Eurasia Group. Some of that work is defensive: Taiwan reportedly used AI tools to debunk disinformation campaigns coming from China ahead of its election in January.
Of course, AI isn’t just a factor in elections but in activism and pro-democracy movements as well. That means autocrats worldwide have to watch their digital backs.
In a recent GZERO panel conversation at the Munich Security Conference, former National Security Council official Fiona Hill said that there are innovative ways for the technology to be used in protest movements, and that we need to consider how these technologies can be used for good by legitimate opposition leaders.
… With regulation lagging far behind the spread of cheap, high-quality generative AI, look for voluntary commitments from AI firms to predate the passage of effective regulation. In February, a group of 20 leading tech companies — including Amazon, Google, Meta, and Microsoft — pledged to combat election-related misinformation. … The companies promised to conduct risk assessments for their models; develop watermarking, detection, and labeling systems; and educate the public about AI.

28 February
Google chief admits ‘biased’ AI tool’s photo diversity offended users
Sundar Pichai addresses backlash after Gemini software created images of historical figures in variety of ethnicities and genders
Google’s chief executive has described some responses by the company’s Gemini artificial intelligence model as “biased” and “completely unacceptable” after it produced results including portrayals of German second world war soldiers as people of colour.
Sundar Pichai told employees in a memo that images and texts generated by its latest AI tool had caused offence.
Social media users have posted numerous examples of Gemini’s image generator depicting historical figures – including popes, the founding fathers of the US and Vikings – in a variety of ethnicities and genders. Last week, Google paused Gemini’s ability to create images of people.

The future of AI video is here, super weird flaws and all
(WaPo) …Sora, a new tool from OpenAI that can create lifelike, minute-long videos from simple text prompts. When the company unveiled it on Feb. 15, experts hailed it as a major moment in the development of artificial intelligence. Google and Meta also have unveiled new AI video research in recent months. The race is on toward an era when anyone can almost instantly create realistic-looking videos without sophisticated CGI tools or expertise.

2 February
Ian Explains: How will AI impact the workplace?
(GZERO media) Ian Bremmer looks at the history of human anxiety about being replaced by machines and the impact this new AI era will have on today’s workers. Will AI be the productivity booster CEOs hope for, or the job-killer employees fear? Experts are torn. Goldman Sachs predicts a $7 trillion increase in global GDP over the next decade from advances in AI, but the International Monetary Fund estimates that AI will negatively impact 40% of all jobs globally in the same time frame.
Human capital has been the powerhouse of economic growth for most of history, but the unprecedented pace of advances in AI is stirring up excitement and deep anxieties about not only how we work but if we’ll work at all.

20 January
AI’s impact on jobs could lead to global unrest, warns AI expert Marietje Schaake
The 2024 World Economic Forum in Davos was dominated by conversations about AI and its potential as well as possible pitfalls for society. GZERO’s Tony Maciulis spoke to former European Union parliamentarian Marietje Schaake about the current regulatory landscape, a recent report from the International Monetary Fund (IMF) saying as many as 40% of jobs globally could be lost or impacted by AI, and how that might give rise to unrest as we head into a critical year of elections.

18-19 January
Different views: Altman’s optimism vs. IMF’s caution
(GZERO media) Much of the buzz in Davos this year has been around artificial intelligence and the attendance of precocious talents like OpenAI’s Sam Altman, who has helped pioneer the biggest technological breakthrough since the personal computer. The World Economic Forum’s Chief Economists Outlook suggested near unanimity in the belief that productivity gains from AI will become economically significant in the next five years in high-income economies. And Altman himself has said he is motivated to “create tech-driven prosperity.”

Preventing Big AI
As generative artificial intelligence is applied in a rapidly growing number of industries, a slew of recent lawsuits, summits, legislation, and regulatory actions have bolstered efforts to establish guardrails for the technology. While some of the challenges AI poses might prove relatively straightforward to solve, others will require creative thinking – and strong political will.
(Project Syndicate) … A broader risk, points out the University of Chicago’s Eric Posner, is that AI is “likely to reinforce Big Tech’s dominance of the economy.” In fact, given “collusion and coordination among a handful of players,” a “future of economic concentration and corporate political power that dwarfs anything that came before” is “all but inevitable.”
Already, notes Diane Coyle of the University of Cambridge, Big Tech’s “dominant players” are “deploying [AI] models to reinforce their position.” Meanwhile, most policymakers and other decision-makers lack any AI expertise, so “policy responses to specific issues are likely to remain inadequate, heavily influenced by lobbying, or highly contested.” In this context, ensuring that powerful new AI technologies “serve everyone” will thus require a policy approach based on principles like interoperability.
Ian Ayres of Yale, Aaron Edlin of the University of California, Berkeley, and Nobel laureate Robert J. Shiller highlight a related problem: the AI revolution will “almost surely lead to an increase in income disparities,” as “those who make and own the inventions” amass “immense wealth,” largely by “economizing on labor costs.” Since regulation “cannot eliminate these risks without precluding…AI’s potential benefits,” including “dramatic increases in productivity,” inequality insurance is essential.
Beyond economics, explains Giulio Boccaletti of the Euro-Mediterranean Center on Climate Change, the power of a few private actors over AI development and applications has important implications for scientific research, including climate science. With “the means of research,” such as computational infrastructure, “firmly in private hands, policymakers will need to be vigilant to ensure that these new tools provide public goods, rather than just private benefits.”
But, according to Carme Artigas, James Manyika, Ian Bremmer, and Marietje Schaake – all members of the Executive Committee of the UN High-level Advisory Body on Artificial Intelligence – national-level efforts will not be enough. “The unique challenges that AI poses demand a coordinated global approach to governance,” and only the United Nations “has the inclusive legitimacy needed to organize such a response.”
The Davos elite embraced AI in 2023. Now they fear it.
(WaPo) Heads of state, billionaires and CEOs appear aligned in their anxieties, as they warn that the burgeoning technology might supercharge misinformation, displace jobs and deepen the economic gap between wealthy and poor nations.
In contrast to far-off fears of the technology ending humanity, a spotlight is on concrete hazards borne out last year by a flood of AI-generated fakes and the automation of jobs in copywriting and customer service. The debate has taken on new urgency amid global efforts to regulate the swiftly evolving technology.
The event opened Tuesday with Swiss President Viola Amherd calling for “global governance of AI,” raising concerns the technology might supercharge disinformation as a throng of countries head to the polls. At a sleek cafe Microsoft set up across the street, CEO Satya Nadella sought to assuage concerns the AI revolution would leave the world’s poorest behind, following the release of an International Monetary Fund report* this week that found the technology is likely to worsen inequality and stoke social tensions. And Irish Prime Minister Leo Varadkar said he was concerned about the rise of deepfake videos and audio, as AI-generated videos of him peddling cryptocurrency circulate on the internet.
* Kristalina Georgieva: AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity.
AI will affect almost 40 percent of jobs around the world, replacing some and complementing others. We need a careful balance of policies to tap its potential

17 January
Ian Bremmer writes from Davos
Global politics in 2024: It’s not all doom and gloom
… The wild card, more than ever, is technology – specifically, artificial intelligence. AI will disrupt our economies, societies, and geopolitics in ways we can’t yet predict, but it will also become the most powerful human development tool the world has ever seen, helping people live longer, healthier, and more productive lives than at any time in history. And this will happen much sooner than you think. With AI capabilities doubling roughly every six months, three times faster than Moore’s law, the upsides will start materializing more dramatically as new applications find their way into every major corporation across every economic sector. And as hundreds of millions of people begin to upskill themselves in their jobs, AI will become a copilot before it takes over their jobs. This will create a new globalization, one exponentially faster and more transformative than the globalization unleashed by free global trade and investment in recent decades.
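Bremmer’s “doubling roughly every six months, three times faster than Moore’s law” is easy to make concrete: an 18-month doubling time is three times slower, and over a decade the gap compounds to four orders of magnitude. A quick check, treating both doubling times as the stylized assumptions they are:

```python
# Compound growth implied by "capabilities double every six months,
# three times faster than Moore's law" (i.e., an 18-month doubling).
# Both doubling times are stylized assumptions, not measurements.
months = 120  # one decade

ai_multiple = 2 ** (months / 6)       # six-month doubling: 2^20
moore_multiple = 2 ** (months / 18)   # eighteen-month doubling: ~2^6.7

print(f"AI capability multiple over a decade: {ai_multiple:,.0f}x")    # 1,048,576x
print(f"Moore's-law multiple over a decade:   {moore_multiple:,.0f}x") # ~102x
print(f"Gap between the two: {ai_multiple / moore_multiple:,.0f}x")    # ~10,000x
```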
The flip side is that the technology is also advancing far faster than the ability to govern it, and a technopolar world for artificial intelligence – i.e., tech companies rather than governments are in control of AI development and deployment – means crisis response and reaction will come only after things break … and then, it might be too late. Let’s just hope in 2024 those things aren’t that big.

16 January
Generative AI for economic research: Use cases and implications for economists
Anton Korinek
Editor’s note: This paper was originally published in the Journal of Economic Literature in December 2023.
(Brookings) This article describes use cases of modern generative AI to interested economic researchers based on the author’s exploration of the space. The main emphasis is on [Large language models (LLMs)], which are the type of generative AI that is currently most useful for research. I have categorized their use cases into six areas: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions for how to take advantage of each of these capabilities and demonstrate them using specific examples. Moreover, I classify the capabilities of the most commonly used LLMs from experimental to highly useful to provide an overview. My hope is that this paper will be a useful guide both for researchers starting to use generative AI and for expert users who are interested in new use cases beyond what they already have experience with to take advantage of the rapidly growing capabilities of LLMs. The online resources associated with this paper are available at the journal website and will provide semi-annual updates on the capabilities and use cases of the most advanced generative AI tools for economic research. In addition, they offer a guide on “How do I start?” as well as a page with “Useful Resources on Generative AI for Economists.”
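For readers who want a concrete starting point, the “ideation and feedback” workflow Korinek describes reduces in practice to a short API call. A minimal sketch using OpenAI’s Python client (the model name, prompt, and draft text are placeholder assumptions, not recommendations from the paper):

```python
# Minimal sketch of the "ideation and feedback" use case: send a draft
# abstract to an LLM and ask for referee-style comments.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment;
# the model name and draft below are placeholders, not the paper's choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_abstract = (
    "We study the effect of minimum wage increases on teen employment "
    "using county-level panel data from 2000-2019."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a constructive referee for an economics journal."},
        {"role": "user",
         "content": f"Give three specific criticisms of this abstract:\n{draft_abstract}"},
    ],
)
print(response.choices[0].message.content)
```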

15 January
At Davos, conflict, climate change and AI get top billing as leaders converge for elite meeting
WHITHER AI?
(AP) In a testament to how technology has taken a large and growing slice of attention in Davos, the theme of artificial intelligence “as a driving force for the economy and society” will get about 30 separate sessions.
The dizzying emergence of OpenAI’s ChatGPT over a year ago and rivals since then have elevated the power, promise and portent of artificial intelligence into greater public view. OpenAI chief Sam Altman will be in Davos along with top executives from Microsoft, which helped bankroll his company’s rise.
AI in education, transparency about the technology, its ethics and impact on creativity are all part of the menu — and the Davos Promenade is swimming in advertisements and displays pointing to the new technology.
Forum organizers warned last week that the threat posed by misinformation generated by AI, such as through the creation of synthetic content, is the world’s greatest short-term threat.

3 January
‘Where does the bot end and human begin?’: what the legendary @Horse_ebooks can teach us about AI
Kari Paul
By reusing and repurposing existing writing into viral fragments on Twitter, the account functioned like today’s chatbots. The Guardian spoke to Jacob Bakkila, the human behind the account
More than a decade before an AI-powered chatbot could do your homework, help you make dinner or pass the bar exam, there was @Horse_ebooks. The primitive predecessor to today’s chatbot renaissance began as a Twitter account in 2010, tweeting automated excerpts from ebooks that, decontextualized, took on unexpected and strangely poetic meanings.

2 January
We asked top AI chatbots for their predictions for 2024… and it produced some VERY alarming results
DailyMail.com asked Google’s Bard and Amazon-backed Claude
The predictions include rising tensions with China and election hacking
AI systems might start reasoning by themselves
Claude.ai predicted the first AI models would begin to show signs of AGI – ‘artificial general intelligence.’
AGI is a theoretical intelligent agent able to complete any intellectual task a human can – and the arrival of AGI is forecast to cause huge changes to human society.
‘Groups like DeepMind, OpenAI, Google Brain, and Anthropic are pushing towards this goal of AGI. While we likely won’t fully crack general intelligence by 2024, we might see demos of systems that start displaying more expansive reasoning, creativity, and decision-making abilities.’
Claude.ai said that problems around AI could include systems that behave unpredictably and job automation outpacing workers’ ability to adapt. …
Bard predicted that 2024 could see biotechnology breakthroughs that ‘upgrade’ the human race.
The AI suggested that these could include breakthroughs in ‘Brain-Computer Interfaces,’ where human brains connect to computers.
Elon Musk’s Neuralink is set to test such technology in volunteers in the coming year.
Analysis by Foresight Factory suggested this year that more than a third of consumers would be happy to have such a chip implanted to connect more easily to computer systems.
Bard said: ‘Advances in biocompatible materials and robotic engineering could lead to bionic limbs that restore near-natural motor function or even surpass human limitations in strength and dexterity. Exoskeletons could augment physical capabilities for heavy lifting, military applications, or assisting the elderly.
Ian Bremmer: Hold us accountable: Our biggest calls for 2023
… 3. Weapons of Mass Disruption
Here’s where I think we were furthest ahead of the curve. A year ago, very, very few political leaders were actively thinking about the disruptive power of artificial intelligence. Now, the hopes and fears are front and center in every region of the world – but especially for decision-makers in America, China, and Europe. The UN is on the case now too.
We learned this year that new AI tools represent a unique technological breakthrough with implications for every sector of the economy. They’re already driving a new phase of globalization. But they’re also creating serious risks because AI will enable disinformation on a massive scale, fuel public mistrust in governing institutions, and empower demagogues and autocrats in both politics and the private sector.

2023

12 December
AI is forcing teachers to confront an existential question
AI is forcing educators to rethink plagiarism guidelines, grading and even lesson plans. But above all, it is demanding that they decide what education is really about — that teachers ask, in short, “What are we here for, anyway?”
ChatGPT has become to generative AI what Kleenex is to tissues. This most mentioned of tools, however, might be the least of teachers’ worries. Boutique services geared toward composing college essays…abound.
In the spring, after students came back to campus eager to enlist robots in their essay-writing, Watkins and his colleagues created the Mississippi AI Institute….
The hope is that the institute’s work can eventually be used by campuses across the country. For now, a two-day program in early June at Ole Miss may be the only one of its kind to pay teachers a stipend to educate themselves on AI: how students are probably using it today, how they could be using it better, and what all of that means for their brains.

8 December
E.U. Agrees on Landmark Artificial Intelligence Rules
The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence.
European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.
Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

The future of the world is intelligent: Insights from the World Economic Forum’s AI Governance Summit
(Brookings) Last month, the World Economic Forum convened over 200 world leaders, technology experts, and academics for the AI Governance Summit.
The Summit raised multiple important issues, including the challenge of coordinating on AI policy given the fast pace of technological development and the need to balance the benefits and risks of generative AI.
Discussions at the Summit also emphasized that prioritizing responsible AI deployment is an imperative for corporations and that national AI strategies will play an important role in balancing the risks and benefits of this technology.

AI’s ‘Fog of War’
How can institutions protect Americans against a technology no one fully understands?
By Damon Beres
Earlier this year, The Atlantic published a story by Gary Marcus, a well-known AI expert who has agitated for the technology to be regulated, both in his Substack newsletter and before the Senate. (Marcus, a cognitive scientist and an entrepreneur, has founded AI companies himself and has explored launching another.) Marcus argued that “this is a moment of immense peril,” and that we are teetering toward an “information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots.”
I was interested in following up with Marcus given recent events. In the past six weeks, we’ve seen an executive order from the Biden administration focused on AI oversight; chaos at the influential company OpenAI; and this Wednesday, the release of Gemini, a GPT competitor from Google. What we have not seen, yet, is total catastrophe of the sort Marcus and others have warned about. Perhaps it looms on the horizon—some experts have fretted over the destructive role AI might play in the 2024 election, while others believe we are close to developing advanced AI models that could acquire “unexpected and dangerous capabilities,” as my colleague Karen Hao has described. But perhaps fears of existential risk have become their own kind of AI hype, understandable yet unlikely to materialize.
Read our conversation: “No Idea What’s Going On”

3 December
Ego, Fear and Money: How the A.I. Fuse Was Lit
The people who were most afraid of the risks of artificial intelligence decided they should be the ones to build it. Then distrust fueled a spiraling competition.
(NYT) The question of whether artificial intelligence will elevate the world or destroy it — or at least inflict grave damage — has framed an ongoing debate among Silicon Valley founders, chatbot users, academics, legislators and regulators about whether the technology should be controlled or set free.
That debate has pitted some of the world’s richest men against one another: Mr. Musk, Mr. Page, Mark Zuckerberg of Meta, the tech investor Peter Thiel, Satya Nadella of Microsoft and Sam Altman of OpenAI. All have fought for a piece of the business — which one day could be worth trillions of dollars — and the power to shape it.

31 October
Everybody wants to regulate AI
US President Joe Biden on Monday [30 October] signed an expansive executive order about artificial intelligence, ordering a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training — under the guise of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan called the “Hiroshima AI Process.” It also came mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run Nov. 1-2 at Bletchley Park. …
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”

28 November
EU AI regulation efforts hit a snag
(GZERO AI) In May, the European Parliament approved the legislation, but the three bodies of the European legislature are still in the middle of hammering out the final text. The makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on.
Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct” for AI developers building so-called foundation models.

Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead
(AP) Pictures from the Israel-Hamas war have vividly and painfully illustrated AI’s potential as a propaganda tool, used to create lifelike images of carnage. Since the war began last month, digitally altered images spread on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.
While most of the false claims circulating online about the war didn’t require AI to create and came from more conventional sources, technological advances are coming with increasing frequency and little oversight. That’s made the potential of AI to become another form of weapon starkly apparent, and offered a glimpse of what’s to come during future conflicts, elections and other big events.

26 November
Dr. Gary Marcus
Hinton vs LeCun vs Ng vs Tegmark vs O
Three top ML researchers, a leading physicist, and a former French Minister, at each other’s throats
If last weekend’s OpenAI drama was top-billing, the can’t-look-away undercard was four heavyweights, Hinton, LeCun, Ng, and Tegmark slugging it out on X, briefly joined by former French Minister Cedric O, now working at a French LLM startup, Mistral, and arguably having pulled a 180 with respect to regulation.
The main issues were two: whether deep learning understands anything, and how we should regulate AI. Some spicy samples…

25 November
Artificial Intelligence: Canada’s future of everything
Artificial Intelligence is on the brink of revolutionizing virtually every facet of human existence and Canada is on the leading edge, from healthcare and education to airlines and entertainment. For The New Reality, Mike Drolet explores some of the critical risks and the need for guardrails. And we take viewers inside how AI is improving our daily lives in ways that often remain undetectable – and certainly unimaginable just a few years ago.
Geoffrey Hinton, the so-called godfather of AI, spent this year sounding the alarm about the existential threat the technology poses.
In May 2023, he appeared in an article on the front page of The New York Times, announcing he had quit his job at Google to speak freely about the harm he believes AI will cause humanity.
If Hinton is having a come-to-Jesus moment, he might be too late. Over 100 million people use ChatGPT, a form of AI using technology he invented. That’s on top of the way AI is already interwoven into practically everything we do online.
And while Toronto-based Hinton is one of the Canadian minds leading this industry — one which is growing exponentially — the circle of AI innovators remains small.
… Canada’s AI pioneering dates back to the 1970s, when researchers formed the world’s first national AI association. The Canadian Artificial Intelligence Association (CAIAC) formerly known as the Canadian Society for the Computational Studies of Intelligence, held its first “official” meeting in 1973.
Its own mission statement says the CAIAC aims to “foster excellence and leadership in research, development and education in Canada’s artificial intelligence community by facilitating the exchange of knowledge through various media and venues.”

18-25 November
OpenAI’s new board aims to ‘bring in more grown-ups,’ says Forbes Senior Editor
OpenAI’s board is getting a makeover and expansion under the terms of Sam Altman’s reinstatement as the AI firm’s CEO. Former Salesforce Co-CEO Bret Taylor (CRM) and former US Treasury Secretary Larry Summers will now hold board seats, while experts speculate whether Microsoft (MSFT) — which owns a 49% stake in OpenAI — could push for its own seat at the table.
Forbes Senior Editor Alex Konrad highlights who else these figures could bring onto OpenAI’s board of directors, believing this board won’t be “the exact board a year or two from now.”
“The prevailing narrative is more of that OpenAI is going to get back to what it was doing and that this will be hopefully a blip or a distraction from the mission they were on,” Konrad tells Yahoo Finance.

Sam Altman and the OpenAI power struggle, explained (CBC via YouTube)
Sam Altman is back in charge as CEO of OpenAI after being ousted by the company’s board. Andrew Chang explains why the man famous for bringing ChatGPT to the world was fired, then rehired — and what it could mean for the future of one of the world’s most powerful AI innovators.

David Brooks: The Fight for the Soul of A.I.
The literal safety of the world is wrapped up in the question: Will a newly unleashed Altman preserve the fruitful contradiction, or will he succumb to the pressures of go-go-go?
As it evolved, OpenAI turned into what you might call a fruitful contradiction: a for-profit company overseen by a nonprofit board with a corporate culture somewhere in between.
Many of the people at the company seem simultaneously motivated by the scientist’s desire to discover, the capitalist’s desire to ship product and the do-gooder’s desire to do this all safely.
The events of the past week — Sam Altman’s firing, all the drama, his rehiring — revolve around one central question: Is this fruitful contradiction sustainable?
A.I. is a field that has brilliant people painting wildly diverging but also persuasive portraits of where this is going. The venture capital investor Marc Andreessen emphasizes that it is going to change the world vastly for the better. The cognitive scientist Gary Marcus depicts an equally persuasive scenario about how all this could go wrong.

Sam Altman’s back. Here’s who’s on the new OpenAI board and who’s out
After several days of crisis and tumult, Sam Altman has returned as the CEO of OpenAI. Three new board members have replaced the previous leadership that ousted Altman.
OpenAI’s new board doesn’t appear to be fully built. Negotiations are reportedly underway to install representation from Microsoft or other major investors.
After all the hue and cry, as of early Wednesday morning it seems that Sam Altman Is Reinstated as OpenAI’s Chief Executive
(NYT) Sam Altman was reinstated late Tuesday as OpenAI’s chief executive, the company said, successfully reversing his ouster by the company’s board last week after a campaign waged by his allies, employees and investors.

Sam Altman is still trying to return as OpenAI CEO
(The Verge) Altman’s move to Microsoft isn’t a done deal, and Ilya Sutskever’s flip to supporting Altman means two board members need to change their minds.
Sam Altman’s surprise move to Microsoft after his shock firing at OpenAI isn’t a done deal. He and co-founder Greg Brockman are still willing to return to OpenAI if the remaining board members who fired him step aside, multiple sources tell The Verge.

Microsoft Hires Sam Altman Hours After OpenAI Rejects His Return
(NYT) The announcement capped a tumultuous weekend for OpenAI, after Mr. Altman made a push to reclaim his job as C.E.O. of the artificial intelligence company.
The departure of Mr. Altman, 38, also drew attention to a rift in the A.I. community between people who believe A.I. is the most important new technology since web browsers and others who worry that moving too fast to develop it could be dangerous. [Director Ilya] Sutskever, in particular, was worried that Mr. Altman was too focused on building OpenAI’s business while not paying enough attention to the dangers of A.I.
Threat of OpenAI Staff Exodus Leaves Its Future Uncertain
With more than 700 of OpenAI’s nearly 800 staff members saying they might head to Microsoft, prospects for the A.I. start-up aren’t rosy. The industry could experience second-order effects, too.
Sam Altman ‘was working on new venture’ before sacking from OpenAI
(The Guardian) To add to the confusion over the future of one of the world’s most potentially valuable technology firms, a report by the Verge on Saturday night claimed that the OpenAI board was in discussions with Sam Altman to return as CEO, just a day after he was ousted.
OpenAI board in discussions with Sam Altman to return as CEO
Altman was suddenly fired on Friday, sending the hottest startup in tech into an ongoing crisis.

13 November
Why the Godfather of A.I. Fears What He’s Built
Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.
“There’s a very general subgoal that helps with almost all goals: get more control,” Hinton said of A.I.s. “The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.”
Geoffrey Hinton: The Man Who Taught Machines to Learn
AI is not just a technological leap but a societal leapfrog.
The story of Geoffrey Hinton, often dubbed the ‘Godfather of AI’, isn’t just a tale of technological advancements; it’s a saga that intertwines human brilliance with the unpredictability of machine intelligence. As a pioneer in the field of artificial intelligence, Hinton’s journey from conceptualizing neural networks to acknowledging the fears associated with AI’s rapid progression is both fascinating and instructive. His story serves as a beacon for AI developers, offering essential insights into the relationship between human cognition and artificial learning.
The future of AI is not just in codes and algorithms but also in the ethical considerations it demands.
1-2 November
AI Safety Summit 2023
The summit will bring together international governments, leading AI companies, civil society groups and experts in research. It aims to:
consider the risks of AI, especially at the frontier of development
discuss how they can be mitigated through internationally coordinated action
Countries at a UK summit pledge to tackle AI’s potentially ‘catastrophic’ risks
(AP) Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.
The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.
Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work towards “shared agreement and responsibility” about AI risks, and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.
Rishi Sunak’s first-ever UK AI Safety Summit: What to expect
Just in time for the 2023 AI Summit comes the launch of the new weekly GZERO AI newsletter with this introduction.
“There is no more disruptive or more remarkable technology than AI, but let’s face it, it is incredibly hard to keep up with the latest developments. Even more importantly, it’s almost impossible to understand what the latest AI innovations actually mean. How will AI affect your job? What do you need to know? Who will regulate it? How will it disrupt work, the economy, politics, war?”

1 November
Toward international cooperation on AI governance—the US executive order on AI
(Brookings) On October 30, the White House released a detailed and comprehensive executive order on AI (EOAI)—the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The EOAI mobilizes the federal government to develop guidelines and principles, and compile reports on AI use and its development. The EOAI, along with the AI Bill of Rights, the Voluntary AI Commitments, and work on AI standards, sums to an increasingly coherent and comprehensive approach to AI governance. U.S. leadership on AI governance is critical, particularly given the role of the U.S. as a leading developer of and investor in AI, including more recently foundation models such as GPT-4. However, international cooperation on AI governance is also needed to make domestic AI governance efforts more effective, including by facilitating the exchange of AI governance experiences that can inform approaches to domestic AI governance; addressing the externalities and extraterritorial impacts of domestic AI governance that can otherwise stifle innovation and reduce opportunities for uptake and use of AI; and finding ways to broaden access globally to the computing power and data that are essential for building and training AI models.
