AI, Chatbots, Society & Technology: November 2023

November 28, 2023  //  Canada, Science & Technology

AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate
GZERO AI launches October 31st

Is AI’s “intelligence” an illusion?
Is ChatGPT all it’s cracked up to be? Will truth survive the evolution of artificial intelligence?
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus breaks down the recent advances – and inherent risks – of generative AI.
AI-powered generative tools like the text-to-text large language model ChatGPT or the text-to-image generator Midjourney can do magical things like write a college term paper in Klingon or instantly create nine images of a slice of bread ascending to heaven.
But there’s still a lot they can’t do: namely, they have a pretty hard time with the concept of truth, often presenting inaccurate or plainly false information as facts. As generative AI becomes more widespread, it will undoubtedly change the way we live, in both good ways and bad. (11 September)
Everybody wants to regulate AI
US President Joe Biden on Monday [30 October] signed an expansive executive order on artificial intelligence, ordering a bevy of government agencies to set new rules and standards for developers with regard to safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training — in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan developed under the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run Nov. 1-2 at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and French President Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”

28 November
EU AI regulation efforts hit a snag
(GZERO AI) In May, the European Parliament approved its draft of the AI Act, but the three bodies of the European legislature are still in the middle of hammering out the final text. Under the proposed law, the makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on.
Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct” for AI developers building so-called foundation models.

Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead
(AP) Pictures from the Israel-Hamas war have vividly and painfully illustrated AI’s potential as a propaganda tool, used to create lifelike images of carnage. Since the war began last month, digitally altered images spread on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.
While most of the false claims circulating online about the war didn’t require AI to create and came from more conventional sources, technological advances are coming with increasing frequency and little oversight. That’s made the potential of AI to become another form of weapon starkly apparent, and offered a glimpse of what’s to come during future conflicts, elections and other big events.

25 November
Artificial Intelligence: Canada’s future of everything
Artificial Intelligence is on the brink of revolutionizing virtually every facet of human existence, and Canada is on the leading edge, from healthcare and education to airlines and entertainment. For The New Reality, Mike Drolet explores some of the critical risks and the need for guardrails. The program also takes viewers inside how AI is improving our daily lives in ways that often go unnoticed – and that were certainly unimaginable just a few years ago.
Geoffrey Hinton, the so-called godfather of AI, sounded the alarm this year about the existential threat posed by AI.
In May 2023, he appeared in an article on the front page of The New York Times, announcing he had quit his job at Google to speak freely about the harm he believes AI will cause humanity.
If Hinton is having a come-to-Jesus moment, he might be too late. Over 100 million people use ChatGPT, a form of AI built on technology he helped invent. That’s on top of the way AI is already interwoven into practically everything we do online.
And while Toronto-based Hinton is one of the Canadian minds leading this industry — one which is growing exponentially — the circle of AI innovators remains small.
… Canada’s AI pioneering dates back to the 1970s, when researchers formed the world’s first national AI association. The Canadian Artificial Intelligence Association (CAIAC), formerly known as the Canadian Society for the Computational Studies of Intelligence, held its first “official” meeting in 1973.
Its own mission statement says the CAIAC aims to “foster excellence and leadership in research, development and education in Canada’s artificial intelligence community by facilitating the exchange of knowledge through various media and venues.”

18-25 November
OpenAI’s new board aims to ‘bring in more grown-ups,’ says Forbes Senior Editor
OpenAI’s board is getting a makeover and expansion as per terms of Sam Altman’s reinstatement as the AI firm’s CEO. Former Salesforce Co-CEO Bret Taylor (CRM) and former US Treasury Secretary Larry Summers will now hold board seats, while experts speculate whether Microsoft (MSFT) — which owns a 49% stake in OpenAI — could push for its own seat at the table.
Forbes Senior Editor Alex Konrad highlights who else these figures could bring onto OpenAI’s board of directors, believing this board won’t be “the exact board a year or two from now.”
“The prevailing narrative is more of that OpenAI is going to get back to what it was doing and that this will be hopefully a blip or a distraction from the mission they were on,” Konrad tells Yahoo Finance.

Sam Altman and the OpenAI power struggle, explained (CBC via YouTube)
Sam Altman is back in charge as CEO of OpenAI after being ousted by the company’s board. Andrew Chang explains why the man famous for bringing ChatGPT to the world was fired, then rehired — and what it could mean for the future of one of the world’s most powerful AI innovators.

David Brooks: The Fight for the Soul of A.I.
As it evolved, OpenAI turned into what you might call a fruitful contradiction: a for-profit company overseen by a nonprofit board with a corporate culture somewhere in between.
Many of the people at the company seem simultaneously motivated by the scientist’s desire to discover, the capitalist’s desire to ship product and the do-gooder’s desire to do this all safely.
The events of the past week — Sam Altman’s firing, all the drama, his rehiring — revolve around one central question: Is this fruitful contradiction sustainable?
A.I. is a field that has brilliant people painting wildly diverging but also persuasive portraits of where this is going. The venture capital investor Marc Andreessen emphasizes that it is going to change the world vastly for the better. The cognitive scientist Gary Marcus depicts an equally persuasive scenario about how all this could go wrong.
… The literal safety of the world is wrapped up in the question: Will a newly unleashed Altman preserve the fruitful contradiction, or will he succumb to the pressures of go-go-go?

Sam Altman’s back. Here’s who’s on the new OpenAI board and who’s out
After several days of crisis and tumult, Sam Altman has returned as the CEO of OpenAI. Three new board members have replaced the previous leadership that ousted Altman.
OpenAI’s new board doesn’t appear to be fully built. Negotiations are reportedly underway to install representation from Microsoft or other major investors.
After all the hue and cry, as of early Wednesday morning it seems settled: Sam Altman is reinstated as OpenAI’s chief executive.
(NYT) Sam Altman was reinstated late Tuesday as OpenAI’s chief executive, the company said, successfully reversing his ouster by the company’s board last week after a campaign waged by his allies, employees and investors.

Sam Altman is still trying to return as OpenAI CEO
(The Verge) Altman’s move to Microsoft isn’t a done deal, and Ilya Sutskever’s flip to supporting Altman means two board members need to change their minds.
Sam Altman’s surprise move to Microsoft after his shock firing at OpenAI isn’t a done deal. He and co-founder Greg Brockman are still willing to return to OpenAI if the remaining board members who fired him step aside, multiple sources tell The Verge.

Microsoft Hires Sam Altman Hours After OpenAI Rejects His Return
(NYT) The announcement capped a tumultuous weekend for OpenAI, after Mr. Altman made a push to reclaim his job as C.E.O. of the artificial intelligence company.
The departure of Mr. Altman, 38, also drew attention to a rift in the A.I. community between people who believe A.I. is the most important new technology since web browsers and others who worry that moving too fast to develop it could be dangerous. [Director Ilya] Sutskever, in particular, was worried that Mr. Altman was too focused on building OpenAI’s business while not paying enough attention to the dangers of A.I.

Threat of OpenAI Staff Exodus Leaves Its Future Uncertain
With more than 700 of OpenAI’s nearly 800 staff members saying they might head to Microsoft, prospects for the A.I. start-up aren’t rosy. The industry could experience second-order effects, too.

Sam Altman ‘was working on new venture’ before sacking from OpenAI
(The Guardian) To add to the confusion over the future of one of the world’s most potentially valuable technology firms, a report by the Verge on Saturday night claimed that the OpenAI board was in discussions with Sam Altman to return as CEO, just a day after he was ousted.
OpenAI board in discussions with Sam Altman to return as CEO
Altman was suddenly fired on Friday, sending the hottest startup in tech into an ongoing crisis.

13 November
Why the Godfather of A.I. Fears What He’s Built
Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.
“There’s a very general subgoal that helps with almost all goals: get more control,” Hinton said of A.I.s. “The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.”
Geoffrey Hinton: The Man Who Taught Machines to Learn
AI is not just a technological leap but a societal one.
The story of Geoffrey Hinton, often dubbed the ‘Godfather of AI’, isn’t just a tale of technological advancements; it’s a saga that intertwines human brilliance with the unpredictability of machine intelligence. As a pioneer in the field of artificial intelligence, Hinton’s journey from conceptualizing neural networks to acknowledging the fears associated with AI’s rapid progression is both fascinating and instructive. His story serves as a beacon for AI developers, offering essential insights into the relationship between human cognition and artificial learning.
The future of AI is not just in code and algorithms but also in the ethical considerations it demands.

1-2 November
AI Safety Summit 2023
The summit will bring together international governments, leading AI companies, civil society groups and experts in research. It aims to consider the risks of AI, especially at the frontier of development, and to discuss how they can be mitigated through internationally coordinated action.
Countries at a UK summit pledge to tackle AI’s potentially ‘catastrophic’ risks
(AP) Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.
The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.
Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work towards “shared agreement and responsibility” about AI risks and to hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.
Rishi Sunak’s first-ever UK AI Safety Summit: What to expect
Just in time for the 2023 AI Safety Summit comes the launch of the new weekly GZERO AI newsletter, with this introduction.
“There is no more disruptive or more remarkable technology than AI, but let’s face it, it is incredibly hard to keep up with the latest developments. Even more importantly, it’s almost impossible to understand what the latest AI innovations actually mean. How will AI affect your job? What do you need to know? Who will regulate it? How will it disrupt work, the economy, politics, war?”

1 November
Toward international cooperation on AI governance—the US executive order on AI
(Brookings) On October 30, the White House released a detailed and comprehensive executive order on AI (EOAI)—the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The EOAI mobilizes the federal government to develop guidelines and principles and to compile reports on AI use and development. The EOAI, along with the AI Bill of Rights, the Voluntary AI Commitments, and work on AI standards, adds up to an increasingly coherent and comprehensive approach to AI governance. U.S. leadership on AI governance is critical, particularly given the role of the U.S. as a leading developer of and investor in AI, including, more recently, foundation models such as GPT-4.

However, international cooperation on AI governance is also needed to make domestic AI governance efforts more effective: by facilitating the exchange of AI governance experiences that can inform approaches to domestic AI governance; by addressing the externalities and extraterritorial impacts of domestic AI governance that can otherwise stifle innovation and reduce opportunities for the uptake and use of AI; and by finding ways to broaden access globally to the computing power and data that are essential for building and training AI models.
