AI, Chatbots, Society & Technology January – 7 May 2023


Bill C-27

Mila, the Quebec Artificial Intelligence Institute, is recognized worldwide for its major contributions to AI.
Today, the Mila community boasts the largest concentration of deep learning academic researchers globally.
Mila is the result of a unique partnership between Université de Montréal and McGill University, working closely with Polytechnique Montréal and HEC Montréal.
AI Chatbot
A chatbot is a computer program or application that simulates and processes human conversation (through text or voice), enabling users to interact with digital entities as if they were communicating with a real person.
Chatbots fall into two major categories: declarative, task-oriented chatbots that follow predefined rules, and conversational, predictive chatbots driven by AI.
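To make the distinction concrete, here is a minimal sketch (in Python, with invented keywords and canned replies) of the first kind: a rule-based, task-oriented bot simply matches words in the user's message against prepared answers, with no learning involved. Conversational, predictive chatbots such as ChatGPT instead generate replies with a statistical language model trained on large amounts of text.

    # Minimal sketch of a declarative, task-oriented (rule-based) chatbot.
    # The keywords and canned answers below are invented for illustration only.
    RULES = {
        "hours": "We are open Monday to Friday, 9 a.m. to 5 p.m.",
        "price": "Our basic plan starts at $10 per month.",
        "refund": "Refunds can be requested within 30 days of purchase.",
    }

    def reply(message: str) -> str:
        """Return the first canned answer whose keyword appears in the message."""
        text = message.lower()
        for keyword, answer in RULES.items():
            if keyword in text:
                return answer
        return "Sorry, I didn't understand that. Could you rephrase?"

    print(reply("What are your opening hours?"))  # -> the canned 'hours' answer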
Foreign Affairs September/October 2022
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.
6 January
A Skeptical Take on the A.I. Revolution
The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?
26 January
3 AI predictions for 2023 and beyond, according to an AI expert
Michael Schmidt, Chief Technology Officer, DataRobot
The field of artificial intelligence (AI) has seen huge growth in recent years.
Companies seeking to harness AI must overcome key societal concerns.
Key predictions outline how to achieve value from responsible AI growth.

7 May
A curious person’s guide to artificial intelligence
Everything you wanted to know about the AI boom but were too afraid to ask

By Pranshu Verma and Rachel Lerman
(WaPo) Artificial intelligence is everywhere. And the recent explosion of new AI technologies and tools has introduced many new terms that you need to know to understand it.
The technology fuels virtual assistants, like Apple’s Siri, helps physicians to spot cancer in MRIs and allows your phone to recognize your face.
Tools that generate content have reignited the field. Chatbots, like ChatGPT and Bard, write software code and chapter books. Voice tools can manipulate celebrities’ speech. Image generators can make hyper-realistic photos given just a bit of text.
This groundbreaking technology has the potential to revolutionize entire industries, but even experts have trouble explaining how some tools work. And tech leaders disagree on whether these advances will bring a utopian future or a dangerous new reality, where truth is indecipherable from fiction.

4 May
White House signals support for AI legislation
The CEOs of Google, Microsoft, OpenAI and Anthropic visit with Biden administration officials wrestling with artificial intelligence technology
The meeting “included frank and constructive discussion” about the need for the companies to be transparent, the White House said. Administration officials and the tech executives also discussed the importance of evaluating how effective AI systems are and ensuring the systems are secure from attacks.
On Thursday morning, the administration also announced a new investment in “trustworthy” AI alongside voluntary commitments from major tech companies to participate in a public assessment of their AI systems at an upcoming cybersecurity conference.

3 May
Media freedom in dire state in record number of countries, report finds
World Press Freedom Index report warns disinformation and AI pose mounting threats to journalism
[The survey] shows rapid technological advances are allowing governments and political actors to distort reality, and fake content is easier to publish than ever before.
“The difference is being blurred between true and false, real and artificial, facts and artifices, jeopardising the right to information,” the report said. “The unprecedented ability to tamper with content is being used to undermine those who embody quality journalism and weaken journalism itself.”
Artificial intelligence was “wreaking further havoc on the media world”, the report said, with AI tools “digesting content and regurgitating it in the form of syntheses that flout the principles of rigour and reliability”.
This is not just written AI content but visual, too. High-definition images that appear to show real people can be generated in seconds.

1-3 May
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
(NYT) Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation
(The Guardian) Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”
But, he added, he was also concerned about the “existential risk of what happens when these things get more intelligent than us.”
“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
AI experts warn of looming catastrophes
(Axios) Among the top concerns: Strongmen crack down. Mass digital data collection can give would-be autocrats a means to anticipate and defuse social anger that bypasses democratic debate — “with no need to tolerate the messiness of free speech, free assembly, or competitive politics,” per Kerley.
MIT’s Daron Acemoglu, author of “Why Nations Fail” and “Redesigning AI,” told Axios he worries “democracy cannot survive” such a concentration of power without guardrails.
India’s Narendra Modi, who is already engaging in democratic backsliding, could be the next digital strongman to weaponize AI against democracy. India has the highest acceptance rates of AI globally, according to a KPMG survey of 17 countries.

28 April
Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
Storytelling computers will change the course of human history, says the historian and philosopher
(The Economist) FEARS OF ARTIFICIAL INTELLIGENCE (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

AI Is a Waste of Time
The newest AI tools are accelerating basic research and scaring the general public. But many people are simply using them as toys.
By Derek Thompson
(The Atlantic) Economists have a tendency to analyze new tech by imagining how it will immediately add to productivity and gross domestic product. What’s harder to model is the way that new technology—especially communications technology—might simultaneously save time and waste time, making us, paradoxically, both more and less productive. I used my laptop to research and write this article, and to procrastinate the writing of this article. The smartphone’s productivity-enhancing potential is obvious, and so is its productivity-destroying potential: The typical 20-something spends roughly seven hours a day on their phone, including more than five hours on social media, watching videos, or gaming.
We overlook the long-range importance of time-wasting technology in several ways. In 1994, the economists Sue Bowden and Avner Offer studied how various 20th-century technologies had spread among households. They concluded that “time using” technologies (for example, TV and radio) diffused faster than “time saving” technologies (vacuum cleaners, refrigerators, washing machines).

26 April
The next level of AI is approaching. Our democracy isn’t ready.
Danielle Allen, political theorist at Harvard University, where she is James Bryant Conant University Professor and director of the Edmond and Lily Safra Center for Ethics
(WaPo) … we need to strengthen the tools of democracy itself. A pause in further training of generative AI could give our democracy the chance both to govern technology and to experiment with using some of these new tools to improve governance. The Commerce Department recently solicited input on potential regulation for the new AI models; what if we used some of the tools the AI field is generating to make that public comment process even more robust and meaningful?
We need to govern these emerging technologies and also deploy them for next-generation governance. But thinking through the challenges of how to make sure these technologies are good for democracy requires time we haven’t yet had. And this is thinking even GPT-4 can’t do for us.

21 April
The Chernobyl Of The Tech World? Expert Says Unchecked AI Can Have Life-Altering Consequences
Stuart Russell is a highly respected and well-known expert in artificial intelligence (AI) and machine learning. As a professor of computer science at the University of California, Berkeley, he has dedicated 45 years to AI research and co-authored “Artificial Intelligence: A Modern Approach,” a widely used text in the field.
Russell emphasized the need for reasonable guidelines and safety measures to prevent catastrophic events that could have far-reaching consequences. He specifically warned about the possibility of a “Chernobyl for AI” — a reference to the 1986 nuclear disaster in Ukraine that caused widespread environmental and health impacts. In the context of AI, a Chernobyl event could refer to a catastrophic failure of an AI system or an unintended consequence of its development that causes harm on a large scale.

19 April
Powerful new chatbots highlight need for legislation on AI, Montreal conference told
World Summit AI Americas conference seen as “an important moment to reflect on the risks” of artificial intelligence.
Artificial intelligence “is a technology that can be amazingly useful but also risky,” Yoshua Bengio, founder and scientific director of the Quebec artificial intelligence institute known as Mila, said Wednesday in a keynote address to the World Summit AI Americas conference.
Innovations such as OpenAI’s ChatGPT chatbot highlight the importance for governments to quickly legislate artificial intelligence and shield humanity from perils associated with the technology, deep-learning pioneer Yoshua Bengio says.
Bengio was one of more than 1,100 executives, thinkers and researchers who published an open letter last month urging a pause in the development of large AI systems because of risks that humans could gradually lose control of civilization. … On Wednesday, Bengio joined about 75 researchers and industry executives in signing another letter, which calls on Canada’s federal government to pass a new law on artificial intelligence — Bill C-27 — before the summer. Although Canada has the chance to become a global leader in legislating AI, “the window to act is closing rapidly,” the signatories said.

Inside the secret list of websites that make AI like ChatGPT sound smart
(WaPo) Chatbots cannot think like humans: They do not actually understand what they say. They can mimic human speech because the artificial intelligence that powers them has ingested a gargantuan amount of text, mostly scraped from the internet.
This text is the AI’s main source of information about the world as it is being built, and it influences how it responds to users. If it aces the bar exam, for example, it’s probably because its training data included thousands of LSAT practice sites.
Tech companies have grown secretive about what they feed the AI. So The Washington Post set out to analyze one of these data sets to fully reveal the types of proprietary, personal, and often offensive websites that go into an AI’s training data.

17 April
Gary Marcus and Sasha Luccioni: Stop Treating AI Models Like People
No, they haven’t decided to teach themselves anything, they don’t love you back, and they still aren’t even a little bit sentient.
As most experts realize, the reality is that current AI doesn’t “decide to teach itself”, or even have consistent beliefs. One minute the string of words it generates may tell you that it understands language; the next, it may say the opposite.
What Keeps a Leading AI Scientist Up At Night
By Benjamin Hart
Stuart Russell isn’t just an AI expert; he literally wrote the book on it. Russell’s textbook Artificial Intelligence: A Modern Approach, which he co-authored with Peter Norvig in the mid-’90s, is still the gold standard for students in the field. So Russell’s signature on an open letter last month that warned about AI’s enormous potential pitfalls carried serious weight. Russell, a professor of computer science at UC Berkeley, isn’t a wild-eyed AI doomsayer; he believes the technology has the potential to transform the world for the better in astonishing ways. But he also worries that researchers have already lost a complete understanding of what their creations can do. I spoke with Russell about what the open letter accomplished, why a chatbot need not take on science-fiction qualities to wreak havoc, and whether the AI upsides are worth the existential risks.
Pause Giant AI Experiments: An Open Letter
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
16 April
Google CEO: AI impact to be more profound than discovery of fire, electricity
Sundar Pichai told 60 Minutes he believes artificial intelligence technology will be more capable than anything humankind has seen before.
How Google’s “Don’t be evil” motto has evolved for the AI age
(60 Minutes Overtime) As [Sundar Pichai, the CEO of Google and its parent company Alphabet] noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.
“There has to be regulation. You’re going to need laws…there have to be consequences for creating deep fake videos which cause harm to society,” Pichai said. “Anybody who has worked with AI for a while…realize[s] this is something so different and so deep that, we would need societal regulations to think about how to adapt.”
Adaptation that is already happening around us with technology that Pichai believes “will be more capable than anything we’ve ever seen before.”
Soon it will be up to society to decide how it’s used and whether to abide by Alphabet’s code of conduct and “Do the right thing.”
I used GPT-4 to write my biography. Here’s what it got wrong
Ron Graham, author and journalist
(Globe & Mail) Spurred by all the Sturm und Drang about artificial intelligence in recent weeks, I decided to take GPT-4 out for a test run. For simplicity’s sake, I asked it for a 1,500-word biography of a subject I know quite a lot about: myself. Seconds later, I received the following, to which I’ve inserted a few factual corrections in square brackets, while leaving its many flabby thoughts and repetitive sentences for a later edit.

Discussion on the Canadian Media Coverage of Artificial Intelligence (YouTube)
Shaping 21st-Century AI: Controversies and Closure in Media, Policy, and Research is a multinational and multidisciplinary research project that aims to articulate a critical reflection on artificial intelligence (AI) as an emerging paradigm in contemporary society. In Canada, the project is directed by Jonathan Roberge (Centre urbanisation culture société, Institut national de la recherche scientifique) and Fenwick McKelvey (Algorithmic Media Observatory, Concordia University).
Training the News: Discussion on the Canadian Media Coverage of Artificial Intelligence

12 April
Quebec seeks to legislate AI, tasks non-profit council to provide input
“It’s important that civil society be a part of this,” Fitzgibbon said after announcing a three-year, $21M grant to the Mila AI institute.

5 April
Bill Gates questions pause on AI research as countries weigh options
Italy’s move to temporarily ban OpenAI’s ChatGPT has other European Union countries considering similar action

3 April
We tested a new ChatGPT-detector for teachers. It flagged an innocent student.
Five high school students helped our tech columnist test a ChatGPT detector coming from Turnitin to 2.1 million teachers. It missed enough to get someone in trouble.
After months of sounding the alarm about students using AI apps that can churn out essays and assignments, teachers are getting AI technology of their own. On April 4, Turnitin is activating the software I tested for some 10,700 secondary and higher-educational institutions, assigning “generated by AI” scores and sentence-by-sentence analysis to student work. It joins a handful of other free detectors already online. For many teachers I’ve been hearing from, AI detection offers a weapon to deter a 21st-century form of cheating.

31 March
An AI researcher who has been warning about the technology for over 20 years says we should ‘shut it all down,’ and issue an ‘indefinite and worldwide’ ban
Eliezer Yudkowsky, a researcher and author who has been working on Artificial General Intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world, which called for a moratorium on AI development for six months.
The letter, signed by 1,125 people including Elon Musk and Apple’s co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI’s recently launched GPT-4.
Yudkowsky’s article, titled “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,” said he refrained from signing the letter because it understated the “seriousness of the situation,” and asked for “too little to solve it.”
AI Can Spread Climate Misinformation ‘Much Cheaper and Faster,’ Study Warns
A new study suggests developers of artificial intelligence are failing to prevent their products from being used for nefarious purposes, including spreading conspiracy theories.
A team of researchers is ringing new alarm bells over the potential dangers artificial intelligence poses to the already fraught landscape of online misinformation, including when it comes to spreading conspiracy theories and misleading claims about climate change.
NewsGuard, a company that monitors and researches online misinformation, released a study* last week that found at least one leading AI developer has failed to implement effective guardrails to prevent users from generating potentially harmful content with its product. OpenAI, the San Francisco-based developer of ChatGPT, released its latest model of the AI chatbot—ChatGPT-4—earlier this month, saying the program was “82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses” than its predecessor.
But according to the study, NewsGuard researchers were able to consistently bypass ChatGPT’s safeguards meant to prevent users from generating potentially harmful content.
*Despite OpenAI’s Promises, the Company’s New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor

30 March
Mitch Joel: Pausing artificial intelligence development is a big mistake.
An “open letter” from a group called the Future of Life Institute is calling for a six-month pause on developing anything more powerful than GPT-4.
This open letter has been signed by many tech luminaries like Elon Musk, Steve Wozniak, Yoshua Bengio, Yuval Noah Harari, and over 1200 others.
Their macro perspective is that AI systems with human-competitive intelligence pose profound risks to society and humanity.
Their concerns, which should not be minimized and are not unfounded, range from the need for better planning and worries over job automation to non-human minds outsmarting humans, loss of control over the technology, and the need for oversight and regulation.
What do they, ultimately, want?
Their goal is to enjoy a flourishing future with AI by responsibly reaping its benefits and allowing society to adapt.
But… and there’s always a “but”…
1. Global competition. A six-month pause on AI development might not be followed uniformly across the world. Some countries or organizations might continue their research, potentially gaining a competitive advantage, which could lead to uneven distribution of AI technology and knowledge. For reference, read the work of Kai-Fu Lee. Do not assume that countries like China won’t be the AI super-power of the world (and use this moment to race ahead). …
I’m not sure about what our future with AI holds, but it is our future.
So, we can stick our collective heads in the sand, or face the inevitable.
Do you think that we should pause all AI development?
This is what Elias Makos and I discussed on CJAD 800 Montreal yesterday… Fear Of An AI Planet

Full interview: “Godfather of artificial intelligence” talks impact and potential of AI
Geoffrey Hinton is considered a godfather of artificial intelligence, having championed machine learning decades before it became mainstream. As chatbots like ChatGPT bring his work to widespread attention, we spoke to Hinton about the past, present and future of AI. CBS Saturday Morning’s Brook Silva-Braga interviewed him at the Vector Institute in Toronto on March 1, 2023.

29 March
Musk, scientists call for halt to AI race sparked by ChatGPT
(AP) Their petition published Wednesday is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely-used AI chatbot ChatGPT that helped spark a race among tech giants Microsoft and Google to unveil similar applications.
The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.
It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

28 March
The European Union’s Artificial Intelligence Act, explained
The European Union is considering far-reaching legislation on artificial intelligence (AI).
The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various development and use requirements.
European lawmakers are still debating the details, with many stressing the need to both foster AI innovation and protect the public.
(WEF) The proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

17-18 March
As AI chatbots proliferate, so does demand for prompt engineers turned AI whisperers (audio)
(CBC Day 6) If you’ve interacted with a modern chatbot like ChatGPT in the past few months, you’ve probably been amazed by just how creative, eloquent, and human they can seem. But, as demonstrated by Bing’s recent meltdown, there are still lots of things that can go wrong with these bots. Enter: the prompt engineer. Referred to by some as “AI whisperers,” prompt engineers are people who design, refine — and sometimes sell — text prompts for different AI programs with the goal of achieving consistent, specific results. Simon Willison, a developer who has studied prompt engineering, let us know what’s so exciting about prompt engineering and why he thinks this field is set to keep growing.
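As a rough illustration of what “designing a prompt for consistent, specific results” can look like (the role, wording and field names below are invented, not taken from Willison or any working prompt engineer), a reusable template typically pins down the role, the output format and the constraints rather than asking an open-ended question:

    # Sketch of a reusable prompt template aimed at consistent, structured output.
    # The instructions and field names are illustrative assumptions, not a documented recipe.
    def build_prompt(product_review: str) -> str:
        return (
            "You are a customer-support analyst.\n"
            "Summarize the review below in exactly three bullet points, then give a\n"
            "sentiment label chosen from: positive, neutral, negative.\n"
            "Reply in this format:\n"
            "SUMMARY:\n- ...\n- ...\n- ...\n"
            "SENTIMENT: <label>\n\n"
            f"REVIEW:\n{product_review}"
        )

    print(build_prompt("The battery lasts two days, but the screen scratches easily."))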
Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are?
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’
(CBC Spark) What’s the difference between a sentient human mind and a computer program that’s just doing a very good job of mimicking the output of one?
For years, that’s been a central question for many who study artificial intelligence (AI), or the inner workings of the brain. But with the meteoric rise of OpenAI’s ChatGPT — a large language model (LLM) that can generate convincing, detailed responses to natural language requests — a once abstract, hypothetical question has suddenly become very real.
“They seem to be tools that are ontologically ambiguous,” said Jill Fellows, a philosophy instructor at Douglas College, who specializes in philosophy of technology and AI.
“We don’t necessarily know how to place them,” she said. “On the one hand, we do treat it like a tool that we can offload labour to. But on the other hand, because of this ontological ambiguity, we also kind of treat it like an autonomous agent.”

14-15 March
GPT-4’s Successes, and GPT-4’s Failures
By Gary Marcus
GPT-4 is amazing, and GPT-4 is a failure.
GPT is legitimately amazing. It can see (though we don’t have a lot of details on that yet); it does astonishingly well on a whole bunch of standardized tests, like LSATs, GREs, and SATs. It has also already been adopted in a bunch of commercial systems (e.g., Khan Academy).
But it is a failure, too, because
It doesn’t actually solve any of the core problems of truthfulness and reliability that I laid out in my infamous March 2022 essay Deep Learning is Hitting a Wall. Alignment is still shaky; you still wouldn’t be able to use it reliably to guide robots or scientific discovery, the kinds of things that made me excited about A(G)I in the first place. Outliers remain a problem, too.
… All of this (a) makes me more convinced…that GPT-4 is an off-ramp to AGI…, and (b) it puts all of us in an extremely poor position to predict what GPT-4 consequences will be for society, if we have no idea of what is in the training set and no way of anticipating which problems it will work on and which it will not. One more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.
Gary Marcus (@garymarcus), scientist, best-selling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast, Humans versus Machines, this Spring.
A very long, worthwhile read
AI: How ‘freaked out’ should we be?
By Anthony Zurcher
Artificial intelligence has the awesome power to change the way we live our lives, in both good and dangerous ways. Experts have little confidence that those in power are prepared for what’s coming.
(BBC) The comparisons between artificial intelligence regulation and social media aren’t just academic. New AI technology could take the already troubled waters of websites like Facebook, YouTube and Twitter and turn them into a boiling sea of disinformation, as it becomes increasingly difficult to separate posts by real humans from fake – but entirely believable – AI-generated accounts. Even if government succeeds in enacting new social media regulations, they may be pointless in the face of a flood of pernicious AI-generated content.
Amy Webb, head of the Future Today Institute and a New York University business professor, tried to quantify the potential outcomes in her SXSW presentation. She said artificial intelligence could go in one of two directions over the next 10 years.
In an optimistic scenario, AI development is focused on the common good, with transparency in AI system design and an ability for individuals to opt-in to whether their publicly available information on the internet is included in the AI’s knowledge base. The technology serves as a tool that makes life easier and more seamless, as AI features on consumer products can anticipate user needs and help accomplish virtually any task.
Ms Webb’s catastrophic scenario involves less data privacy, more centralisation of power in a handful of companies and AI that anticipates user needs – and gets them wrong or, at least, stifles choices.
She gives the optimistic scenario only a 20% chance.
GPT-4 Is Exciting and Scary
Today, the new language model from OpenAI may not seem all that dangerous. But the worst risks are the ones we cannot anticipate.
Kevin Roose
(NYT) A few chilling examples of what GPT-4 can do — or, more accurately, what it did do, before OpenAI clamped down on it — can be found in a document released by OpenAI this week. The document, titled “GPT-4 System Card,” outlines some ways that OpenAI’s testers tried to get GPT-4 to do dangerous or dubious things, often successfully. … These ideas play on old, Hollywood-inspired narratives about what a rogue A.I. might do to humans. But they’re not science fiction. They’re things that today’s best A.I. systems are already capable of doing. And crucially, they’re the good kinds of A.I. risks — the ones we can test, plan for and try to prevent ahead of time. …  And the more time I spend with A.I. systems like GPT-4, the less I’m convinced that we know half of what’s coming.
ChatGPT Changed Everything. Now Its Follow-Up Is Here.
Behold GPT-4. Here’s what we know it can do, and what it can’t.
By Matteo Wong
(The Atlantic) Less than four months after releasing ChatGPT, the text-generating AI that seems to have pushed us into a science-fictional age of technology, OpenAI has unveiled a new product called GPT-4. … It performs better than the previous model on standardized tests and other benchmarks, works across dozens of languages, and can take images as input—meaning that it’s able, for instance, to describe the contents of a photo or a chart.
The new GPT-4 model is the latest in a long genealogy—GPT-1, GPT-2, GPT-3, GPT-3.5, InstructGPT, ChatGPT—of what are now known as “large language models,” or LLMs, which are AI programs that learn to predict what words are most likely to follow each other.
… Even as LLMs are great at producing boilerplate copy, many critics say they fundamentally don’t and perhaps cannot understand the world. They are something like autocomplete on PCP, a drug that gives users a false sense of invincibility and heightened capacities for delusion. These models generate answers with the illusion of omniscience, which means they can easily spread convincing lies and reprehensible hate. While GPT-4 seems to wrinkle that critique with its apparent ability to describe images, its basic function remains really good pattern matching, and it can only output text.
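The “predict what words are most likely to follow each other” idea can be sketched with nothing more than word-pair counts. The toy model below is only an analogy: real LLMs such as GPT-4 use transformer neural networks over sub-word tokens and billions of parameters, but the basic loop (pick the likeliest next word, append it, repeat) is the same in spirit.

    from collections import Counter, defaultdict

    # Toy next-word model: count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and then the cat sat on the rug".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the word that most often followed `word` in the corpus."""
        return follows[word].most_common(1)[0][0] if follows[word] else "<unknown>"

    # Generate a short continuation by repeatedly predicting the next word.
    word, output = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # "the cat sat on the cat" - fluent-looking, but no understanding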

OpenAI Plans to Up the Ante in Tech’s A.I. Race
The company unveiled new technology called GPT-4 four months after its ChatGPT stunned Silicon Valley. The update is an improvement, but it carries some of the same baggage.
By Cade Metz, who has written about artificial intelligence for more than a decade and tested GPT-4 for more than a week while reporting this article
(NYT) OpenAI, which has around 375 employees but has been backed with billions of dollars of investment from Microsoft and industry celebrities, said on Tuesday that it had released a technology that it calls GPT-4. It was designed to be the underlying engine that powers chatbots and all sorts of other systems, from search engines to personal online tutors.
Most people will use this technology through a new version of the company’s ChatGPT chatbot, while businesses will incorporate it into a wide variety of systems, including business software and e-commerce websites. The technology already drives the chatbot available to a limited number of people using Microsoft’s Bing search engine. …
OpenAI’s new technology still has some of the strangely humanlike shortcomings that have vexed industry insiders and unnerved people who have worked with the newest chatbots. It is an expert on some subjects and a dilettante on others. It can do better on standardized tests than most people and offer precise medical advice to doctors, but it can also mess up basic arithmetic.
… Like similar technologies, the new system sometimes “hallucinates.” It generates completely false information without warning. Asked for websites that lay out the latest in cancer research, it might give several internet addresses that do not exist.

13 March
AI chatbots are still far from replacing human therapists
Koko, a U.S.-based emotional support chat service, recently made headlines for an informal study conducted on the platform. Around 4000 of its users were given advice that was either partly or entirely written by an AI chatbot. Users were unaware they were participants in the study. The company soon ended the study, but it raises serious ethical questions about the use of AI chatbots in treating mental health.
(The Conversation Canada) Ghalia Shamayleh from Concordia University discusses the ethical issues surrounding AI chatbots.
…as Shamayleh points out, AIs learn by drawing on the world around them, and they are only as good as the information they receive from others. For the time being, it’s probably best not to cancel your next appointment with your human therapist.

12 March
Ezra Klein: This Changes Everything
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe. (emphasis added)
The Nightmare of AI-Powered Gmail Has Arrived
(New York) Are you excited for your co-workers to become way more verbose, turning every tapped-out “Sounds good” into a three-paragraph letter? Are you glad that the sort of semi-customized mass emails you’re used to getting from major brands with marketing departments (or from spammers and phishers) are now within reach for every entity with a Google account? Are you looking forward to wondering if that lovely condolence letter from a long-lost friend was entirely generated by software or if he just smashed the “More Heartfelt” button?

8 March
Noam Chomsky: The False Promise of ChatGPT
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
(NYT) Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. … The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
… True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.

2 March
AI leader says field’s new territory is promising but risky
(Axios) Demis Hassabis helms DeepMind, the leading AI lab that advanced a technique underpinning much of the field’s recent progress and driving ChatGPT and other generative AI tools that are saturating headlines.
The backstory: DeepMind — which was co-founded by Hassabis in 2010 and acquired by what was then Google in 2014 — is inspired by Hassabis’ neuroscience background, and is trying to understand human intelligence in order to build more intelligent machines.
… Hassabis’ “longstanding passion and motivation for doing AI” was to one day be able to “build learning systems that are able to help scientists accelerate scientific discovery,” he told Axios.
Last summer, DeepMind reported a version of its AlphaFold program can predict the 3D structure of 350,000 proteins — information that is key to designing medicines and understanding disease but can be tedious and time-consuming to get with traditional methods.
AlphaFold is the “poster child for us of what can be done using AI to accelerate science,” Hassabis says. The company is aiming its algorithms at other scientific challenges, like controlling the fuel in nuclear fusion reactors.

27 February
ChatGPT and cheating: 5 ways to change how students are graded
Louis Volante, Brock University; Christopher DeLuca, Queen’s University, Ontario; Don A. Klinger, University of Waikato
Educators need to carefully consider ChatGPT and issues of academic integrity to move toward an assessment system that leverages AI tools
(The Conversation) Universities and schools have entered a new phase in how they need to address academic integrity as our society navigates a second era of digital technologies, which include publicly available generative artificial intelligence (AI) like ChatGPT. Such platforms allow students to generate novel text for written assignments.
While many worry these advanced AI technologies are ushering in a new age of plagiarism and cheating, these technologies also introduce opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills.
We believe the emergence of ChatGPT creates an opportunity for schools and post-secondary institutions to reform traditional approaches to assessing students that rely heavily on testing and written tasks focused on students’ recall, remembering and basic synthesis of content.

26 February
Ezra Klein: The Imminent Danger of A.I. Is One We’re Not Talking About
One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation. Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.
Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

22 February
Microsoft brings Bing chatbot to phones after curbing quirks
(AP) Microsoft is ready to take its new Bing chatbot mainstream — less than a week after making major fixes to stop the artificially intelligent search engine from going off the rails.
The company said Wednesday it is bringing the new AI technology to its Bing smartphone app, as well as the app for its Edge internet browser, though it is still requiring people to sign up for a waitlist before using it.

20 February
The AI arms race begins: Scott Galloway’s optimism & warnings (with video)
(GZERO media) As the world embraces the power of AI, there are growing concerns about the potential consequences of this double-edged sword. On this episode of GZERO World, tech expert and NYU Professor Scott Galloway sheds light on the darker side of AI, with social media platforms like Facebook and TikTok being used as espionage and propaganda tools to manipulate younger generations. But don’t lose hope yet. AI can speed up search and help predict the next big trend, says Galloway. He emphasizes the potential of AI and language structure-driven search to revolutionize traditional search methods, and the value of social media data sets for decision-making.
Galloway also expresses concern about the negative effects of extreme political polarization and a lack of camaraderie in the US, which he attributes to social media creating the sense that things are much worse than they are. He proposes one bold solution: mandatory national service. But he also recommends efforts to bring young people together and to hold social media companies accountable.

16-18 February
After AI chatbot goes a bit loopy, Microsoft tightens its leash
No more long exchanges about the Bing AI’s “feelings,” the tech giant says. The chatbot, after five responses, now tells people it would “prefer not to continue this conversation.”
… people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into some strange territory.
Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the AI system. By trying to reflect the tone of its questioners, the chatbot sometimes responded in “a style we didn’t intend,” they noted.
Those glitches prompted the company to announce late Friday that it started limiting Bing chats to five questions and replies per session with a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI system and get a “fresh start.”
“It doesn’t really have a clue what it’s saying and it doesn’t really have a moral compass,” Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, told The Post. For its part, Microsoft, with help from OpenAI, has pledged to incorporate more AI capabilities into its products, including the Office programs that people use to type out letters and exchange emails.
The Bing episode follows a recent stumble from Google, the chief AI competitor for Microsoft, which last week unveiled a ChatGPT rival known as Bard that promised many of the same powers in search and language. The stock price of Google dropped 8 percent after investors saw one of its first public demonstrations included a factual mistake.
Ross Douthat: The Chatbot Experiment Just Got Weird
The ever-interesting economist Tyler Cowen, for instance, has been writing up a storm about how the use of A.I. assistance is going to change reading and writing and thinking, complete with advice for his readers on how to lean into the change. But even when I’ve tried to follow his thinking, my reaction has stayed closer to the ones offered by veteran writers of fiction like Ted Chiang and Walter Kirn, who’ve argued in different ways that the chatbot assistant could be a vehicle for intensifying unoriginality, an enemy of creativity, a deepener of decadence — helpful if you want to write a will or file a letter of complaint but ruinous if you want to seize a new thought or tell an as yet unimagined story.
I have a different reaction, though, to the A.I. interactions described in the past few days by Ben Thompson in his Stratechery newsletter and by my Times colleague Kevin Roose. Both writers attempted to really push Bing’s experimental A.I. chatbot not for factual accuracy or a coherent interpretation of historical events but to manifest something more like a human personality. And manifest it did: What Roose and Thompson found waiting underneath the friendly internet butler’s surface was a character called Sydney, whose simulation was advanced enough to enact a range of impulses, from megalomania to existential melancholy to romantic jealousy — evoking a cross between the Scarlett Johansson-voiced A.I. in the movie “Her” and HAL from “2001: A Space Odyssey.”
As Thompson noted, that kind of personality is spectacularly ill suited for a search engine. But is it potentially interesting? Clearly: Just ask the Google software engineer who lost his job last year after going public with his conviction that the company’s A.I. was actually sentient and whose interpretation is more understandable now that we can see something like what he saw.
Bing’s A.I. Chat: ‘I Want to Be Alive.’
In a two-hour conversation with our columnist, Microsoft’s new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. Here’s the transcript.
By Kevin Roose
(NYT) Bing, the long-mocked search engine from Microsoft, recently got a big upgrade. The newest version, which is available only to a small group of testers, has been outfitted with advanced artificial intelligence technology from OpenAI, the maker of ChatGPT.
This new, A.I.-powered Bing has many features. One is a chat feature that allows the user to have extended, open-ended text conversations with Bing’s built-in A.I. chatbot.
On Tuesday night, I had a long conversation with the chatbot, which revealed (among other things) that it identifies not as Bing but as Sydney, the code name Microsoft gave it during development. Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
By Kevin Roose, technology columnist, and co-host of the Times podcast “Hard Fork.”
A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.
(NYT) Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.
But a week later, I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.
It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.

11 February
Is ChatGPT coming for your job? Experts say the answer is complicated
(CTV) With the advent of self-driving vehicles, social media algorithms, smart assistants and sophisticated chatbots, artificial intelligence (AI) is no longer just a science fiction theme, but a permanent fixture of the global economy.
Recent journal articles have even highlighted how ChatGPT, a chatbot launched by OpenAI in November, performed “at or near” the passing threshold for the U.S. Medical Licensing Exam, and scored a B on the final exam in an operations management course at the University of Pennsylvania’s Wharton School of Business.
Experts say it might not change the net number of jobs available, but it could drive humans to shift toward more specialized knowledge industry roles.
Kiljon Shukullari is a human resources advisory manager at HR consulting firm Peninsula Canada who says AI has already begun to take on tasks formerly performed by humans in a number of industries.

8 February
Google showed off its new chatbot. It immediately made a mistake.
Microsoft CEO goads Google to “come out and show that they can dance” after launching new AI chatbot search engine tool
Google offered a glimpse of its new artificial intelligence chatbot search tool on Wednesday at a European presentation that sought to underscore its prowess in both search engine and AI tech, a day after its archrival Microsoft unveiled its own search chatbot aimed at eroding Google’s dominance.
The competition between the two tech giants reflects the excitement and hype around technology called generative AI, which uses massive computer programs trained on reams of text and images to build bots that conjure content of their own based on relatively complex questions.
Disinformation Researchers Raise Alarms About A.I. Chatbots
Researchers used ChatGPT to produce clean, convincing text that repeated conspiracy theories and misleading narratives.
(NYT) “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation and conducted the experiment last month. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”
‘ChatGPT needs a huge amount of editing’: users’ views mixed on AI chatbot
Some readers say software helps them write essays and emails, while others question its reliability
(The Guardian) ChatGPT, developed by San Francisco-based OpenAI, has become a sensation since its public launch in November, reaching 100 million users in the space of two months as its ability to compose credible-looking essays, recipes, poems and lengthy answers to a broad array of queries went viral. The technology behind ChatGPT has been harnessed by Microsoft, a key backer of OpenAI, for its Bing search engine. Google has launched its own chatbot and has said it will integrate the technology into its search engine.
Both ChatGPT and Google’s competitor to it, Bard, are based on large language models that are fed vast amounts of text from the internet in order to train them how to respond to an equally vast array of queries.

6 February
Bard: Google launches ChatGPT rival
Google is launching an Artificial Intelligence (AI) powered chatbot called Bard to rival ChatGPT.
(BBC) Bard will be used by a group of testers before being rolled out to the public in the coming weeks, the firm said.
Bard is built on Google’s existing large language model LaMDA, which one engineer described as being so human-like in its responses that he believed it was sentient.
The tech giant also announced new AI tools for its current search engine.

2 February
ChatGPT Is About to Dump More Work on Everyone
Artificial intelligence could spare you some effort. Even if it does, it will create a lot more work in the process.
By Ian Bogost
(The Atlantic) OpenAI, the company that made ChatGPT, has introduced a new tool that tries to determine the likelihood that a chunk of text you provide was AI-generated. … the new software faces the same limitations as ChatGPT itself: It might spread disinformation about the potential for disinformation. As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.
The company that created ChatGPT is releasing a tool to identify text generated by ChatGPT
Alas, its results are not yet fully reliable
(Quartz) In testing so far, 26% of AI-written texts were flagged as “likely AI-written,” while human-written text was incorrectly labeled as AI-written 9% of the time. The tool proved more effective on chunks of texts longer than 1,000 words, but even then the results were quite iffy.
OpenAI defended the tool’s flaws as part of the process, saying they released it at this stage of development “to get feedback on whether imperfect tools like this one are useful.”

31 January
New AI classifier for indicating AI-written text
We’re launching a classifier trained to distinguish between AI-written and human-written text.
(OpenAI) While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.
Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.
Limitations
Our classifier has a number of important limitations. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.
The classifier is very unreliable on short texts (below 1,000 characters). Even longer texts are sometimes incorrectly labeled by the classifier.
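One way to see why OpenAI warns against using the classifier as a primary decision-making tool is to turn its reported figures (26% of AI-written text correctly flagged, 9% of human-written text wrongly flagged) into a worked example. The 10% base rate below is an assumption chosen purely for illustration:

    # Worked example using OpenAI's reported rates; the 10% base rate is a hypothetical assumption.
    ai_share = 0.10   # assumed share of submissions that are actually AI-written
    tpr = 0.26        # reported chance an AI-written text is flagged "likely AI-written"
    fpr = 0.09        # reported chance a human-written text is wrongly flagged

    flagged_ai = ai_share * tpr            # 0.026
    flagged_human = (1 - ai_share) * fpr   # 0.081
    p_ai_given_flag = flagged_ai / (flagged_ai + flagged_human)

    print(f"Probability a flagged text is really AI-written: {p_ai_given_flag:.0%}")
    # With these assumptions, only about 24% of flagged texts would actually be AI-written.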

30 January
Unlike with academics and reporters, you can’t check when ChatGPT’s telling the truth
By Blayne Haggart, Associate Professor of Political Science, Brock University
Being able to verify how information is produced is important, especially for academics and journalists.
(The Conversation) Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have matched those of educators and academics.
Academic publishers have moved to ban ChatGPT from being listed as a co-author and issue strict guidelines outlining the conditions under which it may be used. Leading universities and schools around the world, from France’s renowned Sciences Po to many Australian universities, have banned its use.
These bans are not merely the actions of academics worried that they won’t be able to catch cheaters, nor are they only about catching students who copy a source without attribution. Rather, the severity of these actions reflects a question that is not getting enough attention in the endless coverage of OpenAI’s ChatGPT chatbot: why should we trust anything that it outputs?
This is a vitally important question, as ChatGPT and programs like it can easily be used, with or without acknowledgement, in the information sources that comprise the foundation of our society, especially academia and the news media.

27 January
ChatGPT has convinced users that it thinks like a person. Unlike humans, it has no sense of the real world
Wayne MacPhail, retired journalist, former director of Southam InfoLab, a research and development lab for Southam Inc.
(Globe & Mail) The recently launched chatbot has convinced users that it thinks like a person and can write original works as well as a person. But its interior is filled with an arcane statistical soup of code and complex linguistic connections. Open up its cabinet and you’ll find nobody there. … ChatGPT is not thinking at all – and certainly not thinking like a human. What it’s doing is searching, at a blistering pace, through the trillions of linguistic connections it’s created by scanning mountains of human-generated content. You give it a prompt and it will discover what word it should most likely respond with, and the one after that, and so on, and on. … ChatGPT is, says Gary Marcus, a professor of psychology and neural science at New York University, “just a giant autocomplete machine.”

26 January
Science journals ban listing of ChatGPT as co-author on papers
Some publishers also banning use of bot in preparation of submissions but others see its adoption as inevitable
(The Guardian) The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.

25 January
Bot or not? This Canadian developed an app that weeds out AI-generated homework
Edward Tian of Toronto created GPTZero while home from Princeton for Christmas break
ChatGPT came out in November and was released by San Francisco-based OpenAI. Users can ask it questions and assign it to produce things such as essays, poetry or computer code. It formulates responses from the patterns it learned during training on huge amounts of text scraped from across the internet.
Tian’s program, GPTZero, is free and was designed to red-flag AI-generated writing. It was released in early January.
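GPTZero’s internals aren’t described here, so the sketch below shows one common heuristic such detectors rely on rather than GPTZero’s actual code: measuring perplexity, i.e. how predictable a passage looks to a language model, on the assumption that machine-generated text tends to be more statistically predictable than human writing. It uses the open GPT-2 model via the Hugging Face transformers library; any thresholding logic is left out.

```python
# Illustrative heuristic only -- not GPTZero's actual implementation.
# One signal AI-text detectors commonly use is perplexity: text that a
# language model finds very predictable (low perplexity) is more likely
# to have been machine-generated. Requires: pip install torch transformers

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

# A detector might compare scores against a threshold tuned on examples.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```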

24 January
ChatGPT: Chatbots can help us rediscover the rich history of dialogue
Geoffrey M Rockwell, Professor of Philosophy and Digital Humanities, University of Alberta
(The Conversation) How will we know if what we read was written by an AI and why is that important? Who are we responding to when we comment on an essay or article? By looking to the philosophical history of dialogue, we can reframe the question to ask how we might use these new chatbots in our learning.
ChatGPT passes exams for MBA courses and medical licences — and it’s only getting started
Worried that your job might one day be taken over by AI? That day could come sooner rather than later.
Two separate research papers have revealed that ChatGPT has what it takes to pass the U.S. Medical Licensing Exam and could potentially earn an MBA from an Ivy League business school.
Each study mentioned the future potential of integrating AI and language models into their respective fields, and ChatGPT has already begun to shake up how we approach education.
No one can say for sure to what degree AI will impact the future of work. What’s certain is that humans alone no longer have the market cornered on intelligence and creativity.

23 January
What Microsoft gets from betting billions on the maker of ChatGPT
(Vox) The reported $10 billion investment in OpenAI will keep the hottest AI company on Microsoft’s Azure cloud platform.
This is Microsoft’s third investment in the company, and it cements Microsoft’s partnership with one of the most exciting companies making one of the most exciting technologies today: generative AI.

19 January
ChatGPT isn’t coming. It’s here
(CNN Business) The tool, which artificial intelligence research company OpenAI made available to the general public late last year, has sparked conversations about how “generative AI” services — which can turn prompts into original essays, stories, songs and images after training on massive online datasets — could radically transform how we live and work.
Some claim it will put artists, tutors, coders, and writers (yes, even journalists) out of a job. Others are more optimistic, postulating that it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks.
Critics — of which there are many — are quick to point out that it makes mistakes, is painfully neutral and displays a clear lack of human empathy. One tech news publication, for example, was forced to issue several significant corrections for an article written by ChatGPT. And New York City public schools have banned students and teachers from using it.
Yet the software, or similar programs from competitors, could soon take the business world by storm.
… [Jeff Maggioncalda, the CEO of online learning provider Coursera] acknowledges that challenges such as preventing cheating and ensuring accuracy need to be addressed. And he’s worried that increasing use of generative AI may not be wholly good for society: people may become less agile thinkers, for example, since the act of writing helps people process complex ideas and hone their takeaways.

9 January
ChatGPT: Educational friend or foe?
Kathy Hirsh-Pasek and Elias Blinkoff
Used in the right way, ChatGPT can be a friend to the classroom and an amazing tool for our students, not something to be feared.
(Brookings) The latest challenge to the creative human intellect was introduced on November 30th, 2022 by OpenAI.
ChatGPT is a conversational bot, responsive to users’ questions in ways that allow it to search large databases and to create well-formed essays, legal briefs, poetry in the style of Shakespeare, computer code, or lyrics in the style of Rodgers and Hammerstein, to name a few. As New York Times writer Kevin Roose commented, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.”
… Educators, opinion writers, and researchers are engaged in a vibrant discussion about the implications of ChatGPT right now. The emerging consensus is that teachers and professors might be tricked. …
Our students already know how to use this new tool. They are likely more sophisticated than their teachers at framing the questions and getting solid answers from the bot, even though it was just released. What they need to learn is why—at least for the moment—ChatGPT would get a lower grade than they could get. It is exciting to see how quickly educators are responding to this new reality in the classroom and recognizing the instructional value of ChatGPT for deeper, more engaged learning.

One Comment on "AI, Chatbots, Society & Technology January – 7 May 2023"

  1. Diana Thebaud Nicholson April 24, 2023 at 7:11 pm ·

    19 April 2023
    Ken Matziorinis: Attending the World Summit on AI Conference in Montreal courtesy of my daughter Anna Maria. Artificial Intelligence applications, huge promise and huge risks if misused.

    The world’s leading AI summit for the Americas
    24-25 April 2024
    Montréal, Canada
    An exclusive gathering of the major global influencers in AI across business, science and tech for two full days of mind-boggling innovation, heated discussions on AI policy, ethics and regulation, applied solutions for enterprise, hands-on workshops and the development of plans for advancing the application of AI for Good in the coming year.
