AI, Chatbots, Society & Technology January 2023-
Written by Diana Thebaud Nicholson // March 31, 2023 // Science & Technology
AI Chatbot
A chatbot is a computer program or application that simulates and processes human conversation (either through text or voice), enabling users to interact with digital systems as if they were communicating with a real person.
Chatbots fall into two major categories.
An AI chatbot is an artificial intelligence (AI) program that can simulate a conversation (spoken or text-based) with a user in natural language through messaging systems, websites, mobile apps, or over the phone via interactive voice response (IVR). AI chatbots continue to learn automatically after an initial training period guided by a bot developer.
Rules-based chatbots are also referred to as decision-tree bots. As the name suggests, they follow pre-designed rules, often built using a graphical user interface in which a chatbot builder designs conversation paths as a decision tree.
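To make the distinction concrete, here is a minimal Python sketch of a rules-based (decision-tree) bot; the menu of topics and the wording of the prompts are invented for illustration. Every reply is chosen by walking a fixed tree of pre-designed rules, with no learning involved, which is exactly what separates this category from an AI chatbot.

```python
# Minimal sketch of a rules-based (decision-tree) chatbot.
# The tree, prompts and options below are invented for illustration.

DECISION_TREE = {
    "start": {
        "prompt": "Hi! Do you need help with 'billing' or 'shipping'?",
        "options": {"billing": "billing", "shipping": "shipping"},
    },
    "billing": {
        "prompt": "Would you like to 'view invoice' or 'update card'?",
        "options": {"view invoice": "end", "update card": "end"},
    },
    "shipping": {
        "prompt": "Is your question about 'tracking' or 'returns'?",
        "options": {"tracking": "end", "returns": "end"},
    },
    "end": {"prompt": "Thanks! A summary has been sent to your email.", "options": {}},
}

def run_bot():
    node = "start"
    while True:
        step = DECISION_TREE[node]
        print("BOT:", step["prompt"])
        if not step["options"]:          # leaf node: conversation is over
            break
        answer = input("YOU: ").strip().lower()
        # Unrecognized input simply repeats the same node -- the bot
        # never improvises, it only follows pre-designed paths.
        node = step["options"].get(answer, node)

if __name__ == "__main__":
    run_bot()
```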
Foreign Affairs September/October 2022
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.
6 January
A Skeptical Take on the A.I. Revolution
The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?
26 January
3 AI predictions for 2023 and beyond, according to an AI expert
Michael Schmidt, Chief Technology Officer, DataRobot
The field of artificial intelligence (AI) has seen huge growth in recent years.
Companies seeking to harness AI must overcome key societal concerns.
Key predictions outline how to achieve value from responsible AI growth.
An AI researcher who has been warning about the technology for over 20 years says we should ‘shut it all down’ and issue an ‘indefinite and worldwide’ ban
Eliezer Yudkowsky, a researcher and author who has been working on Artificial General Intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world, which called for a moratorium on AI development for six months.
The letter, signed by 1,125 people including Elon Musk and Apple’s co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI’s recently launched GPT-4.
Yudkowsky’s article, titled “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,” said he refrained from signing the letter because it understated the “seriousness of the situation,” and asked for “too little to solve it.”
29 March
Musk, scientists call for halt to AI race sparked by ChatGPT
(AP) Their petition published Wednesday is a response to San Francisco startup OpenAI’s recent release of GPT-4, a more advanced successor to its widely-used AI chatbot ChatGPT that helped spark a race among tech giants Microsoft and Google to unveil similar applications.
The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.
It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
17-18 March
As AI chatbots proliferate, so does demand for prompt engineers turned AI whisperers (audio)
(CBC Day 6) If you’ve interacted with a modern chatbot like ChatGPT in the past few months, you’ve probably been amazed by just how creative, eloquent, and human they can seem. But, as demonstrated by Bing’s recent meltdown, there are still lots of things that can go wrong with these bots. Enter: the prompt engineer. Referred to by some as “AI whisperers,” prompt engineers are people who design, refine — and sometimes sell — text prompts for different AI programs with the goal of achieving consistent, specific results. Simon Willison, a developer who has studied prompt engineering, let us know what’s so exciting about prompt engineering and why he thinks this field is set to keep growing.
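As a rough illustration of what prompt engineers actually produce, here is a minimal Python sketch; the template wording and the ask_model placeholder are invented and do not correspond to any particular product’s API. The idea is that wrapping a raw question in a fixed template that spells out role, format, and constraints tends to yield more consistent, specific answers.

```python
# Minimal sketch of prompt engineering: wrap a raw question in a fixed
# template that pins down role, format and constraints. The template text
# and the ask_model() stub are illustrative only.

PROMPT_TEMPLATE = """You are a concise research assistant.
Answer the question below in exactly three bullet points.
If you are not sure of a fact, say "uncertain" instead of guessing.

Question: {question}
"""

def build_prompt(question: str) -> str:
    """Return the engineered prompt for a raw user question."""
    return PROMPT_TEMPLATE.format(question=question.strip())

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call whatever chatbot or API
    # the prompt engineer is targeting.
    return "(model response would appear here)"

if __name__ == "__main__":
    prompt = build_prompt("What is a large language model?")
    print(prompt)
    print(ask_model(prompt))
```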
Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are?
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’
(CBC Spark) What’s the difference between a sentient human mind and a computer program that’s just doing a very good job of mimicking the output of one?
For years, that’s been a central question for many who study artificial intelligence (AI), or the inner workings of the brain. But with the meteoric rise of OpenAI’s ChatGPT — a large language model (LLM) that can generate convincing, detailed responses to natural language requests — a once abstract, hypothetical question has suddenly become very real.
“They seem to be tools that are ontologically ambiguous,” said Jill Fellows, a philosophy instructor at Douglas College, who specializes in philosophy of technology and AI.
“We don’t necessarily know how to place them,” she said. “On the one hand, we do treat it like a tool that we can offload labour to. But on the other hand, because of this ontological ambiguity, we also kind of treat it like an autonomous agent.”
14-15 March
GPT-4’s Successes, and GPT-4’s Failures
By Gary Marcus
GPT-4 is amazing, and GPT-4 is a failure.
GPT-4 is legitimately amazing. It can see (though we don’t have a lot of details on that yet); it does astonishingly well on a whole bunch of standardized tests, like LSATs, GREs, and SATs. It has also already been adopted in a bunch of commercial systems (e.g., Khan Academy).
But it is a failure, too, because
It doesn’t actually solve any of the core problems of truthfulness and reliability that I laid out in my infamous March 2022 essay Deep Learning is Hitting a Wall. Alignment is still shaky; you still wouldn’t be able to use it reliably to guide robots or scientific discovery, the kinds of things that made me excited about A(G)I in the first place. Outliers remain a problem, too.
… All of this (a) makes me more convinced…that GPT-4 is an off-ramp to AGI…, and (b) it puts all of us in an extremely poor position to predict what GPT-4 consequences will be for society, if we have no idea of what is in the training set and no way of anticipating which problems it will work on and which it will not. One more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.
Gary Marcus (@garymarcus), scientist, best-selling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. Watch for his new podcast, Humans versus Machines, this Spring.
A very long, worthwhile read
AI: How ‘freaked out’ should we be?
By Anthony Zurcher
Artificial intelligence has the awesome power to change the way we live our lives, in both good and dangerous ways. Experts have little confidence that those in power are prepared for what’s coming.
(BBC) The comparisons between artificial intelligence regulation and social media aren’t just academic. New AI technology could take the already troubled waters of websites like Facebook, YouTube and Twitter and turn them into a boiling sea of disinformation, as it becomes increasingly difficult to separate posts by real humans from fake – but entirely believable – AI-generated accounts. Even if government succeeds in enacting new social media regulations, they may be pointless in the face of a flood of pernicious AI-generated content.
Amy Webb, head of the Future Today Institute and a New York University business professor, tried to quantify the potential outcomes in her SXSW presentation. She said artificial intelligence could go in one of two directions over the next 10 years.
In an optimistic scenario, AI development is focused on the common good, with transparency in AI system design and an ability for individuals to opt-in to whether their publicly available information on the internet is included in the AI’s knowledge base. The technology serves as a tool that makes life easier and more seamless, as AI features on consumer products can anticipate user needs and help accomplish virtually any task.
Ms Webb’s catastrophic scenario involves less data privacy, more centralisation of power in a handful of companies and AI that anticipates user needs – and gets them wrong or, at least, stifles choices.
She gives the optimistic scenario only a 20% chance.
GPT-4 Is Exciting and Scary
Today, the new language model from OpenAI may not seem all that dangerous. But the worst risks are the ones we cannot anticipate.
Kevin Roose
(NYT) A few chilling examples of what GPT-4 can do — or, more accurately, what it did do, before OpenAI clamped down on it — can be found in a document released by OpenAI this week. The document, titled “GPT-4 System Card,” outlines some ways that OpenAI’s testers tried to get GPT-4 to do dangerous or dubious things, often successfully. … These ideas play on old, Hollywood-inspired narratives about what a rogue A.I. might do to humans. But they’re not science fiction. They’re things that today’s best A.I. systems are already capable of doing. And crucially, they’re the good kinds of A.I. risks — the ones we can test, plan for and try to prevent ahead of time. … And the more time I spend with A.I. systems like GPT-4, the less I’m convinced that we know half of what’s coming.
ChatGPT Changed Everything. Now Its Follow-Up Is Here.
Behold GPT-4. Here’s what we know it can do, and what it can’t.
By Matteo Wong
(The Atlantic) Less than four months after releasing ChatGPT, the text-generating AI that seems to have pushed us into a science-fictional age of technology, OpenAI has unveiled a new product called GPT-4. … It performs better than the previous model on standardized tests and other benchmarks, works across dozens of languages, and can take images as input—meaning that it’s able, for instance, to describe the contents of a photo or a chart.
The new GPT-4 model is the latest in a long genealogy—GPT-1, GPT-2, GPT-3, GPT-3.5, InstructGPT, ChatGPT—of what are now known as “large language models,” or LLMs, which are AI programs that learn to predict what words are most likely to follow each other.
… Even as LLMs are great at producing boilerplate copy, many critics say they fundamentally don’t and perhaps cannot understand the world. They are something like autocomplete on PCP, a drug that gives users a false sense of invincibility and heightened capacities for delusion. These models generate answers with the illusion of omniscience, which means they can easily spread convincing lies and reprehensible hate. While GPT-4 seems to wrinkle that critique with its apparent ability to describe images, its basic function remains really good pattern matching, and it can only output text.
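That “predict the next word” idea can be illustrated with a toy Python sketch; the three-sentence corpus below is invented. Counting which word follows which in some text yields the probabilities a vastly simplified language model would use; real LLMs learn these patterns with neural networks over enormous corpora and long contexts, but the objective is the same.

```python
# Toy illustration of "predicting what word is most likely to follow":
# count word pairs (bigrams) in a tiny, invented corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
words = corpus.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word_probabilities(word: str) -> dict:
    """Estimate P(next word | word) from the bigram counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.4, 'mat': 0.2, 'dog': 0.2, 'rug': 0.2}
```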
OpenAI Plans to Up the Ante in Tech’s A.I. Race
The company unveiled new technology called GPT-4 four months after its ChatGPT stunned Silicon Valley. The update is an improvement, but it carries some of the same baggage.
By Cade Metz, who has written about artificial intelligence for more than a decade and tested GPT-4 for more than a week while reporting this article
(NYT) OpenAI, which has around 375 employees but has been backed with billions of dollars of investment from Microsoft and industry celebrities, said on Tuesday that it had released a technology that it calls GPT-4. It was designed to be the underlying engine that powers chatbots and all sorts of other systems, from search engines to personal online tutors.
Most people will use this technology through a new version of the company’s ChatGPT chatbot, while businesses will incorporate it into a wide variety of systems, including business software and e-commerce websites. The technology already drives the chatbot available to a limited number of people using Microsoft’s Bing search engine. …
OpenAI’s new technology still has some of the strangely humanlike shortcomings that have vexed industry insiders and unnerved people who have worked with the newest chatbots. It is an expert on some subjects and a dilettante on others. It can do better on standardized tests than most people and offer precise medical advice to doctors, but it can also mess up basic arithmetic.
… Like similar technologies, the new system sometimes “hallucinates.” It generates completely false information without warning. Asked for websites that lay out the latest in cancer research, it might give several internet addresses that do not exist.
13 March
AI chatbots are still far from replacing human therapists
Koko, a U.S.-based emotional support chat service, recently made headlines for an informal study conducted on the platform. Around 4000 of its users were given advice that was either partly or entirely written by an AI chatbot. Users were unaware they were participants in the study. The company soon ended the study, but it raises serious ethical questions about the use of AI chatbots in treating mental health.
(The Conversation Canada) Ghalia Shamayleh from Concordia University discusses the ethical issues surrounding AI chatbots.
…as Shamayleh points out, AIs learn by drawing on the world around them, and they are only as good as the information they receive from others. For the time being, it’s probably best not to cancel your next appointment with your human therapist.
12 March
Ezra Klein: This Changes Everything
“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe. (emphasis added)
The Nightmare of AI-Powered Gmail Has Arrived
(New York) Are you excited for your co-workers to become way more verbose, turning every tapped-out “Sounds good” into a three-paragraph letter? Are you glad that the sort of semi-customized mass emails you’re used to getting from major brands with marketing departments (or from spammers and phishers) are now within reach for every entity with a Google account? Are you looking forward to wondering if that lovely condolence letter from a long-lost friend was entirely generated by software or if he just smashed the “More Heartfelt” button?
8 March
Noam Chomsky: The False Promise of ChatGPT
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
(NYT) Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. … The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
… True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
2 March
AI leader says field’s new territory is promising but risky
(Axios) Demis Hassabis helms DeepMind, the leading AI lab that advanced a technique underpinning much of the field’s recent progress and driving ChatGPT and other generative AI tools that are saturating headlines.
The backstory: DeepMind — which was co-founded by Hassabis in 2010 and acquired by what was then Google in 2014 — is inspired by Hassabis’ neuroscience background, and is trying to understand human intelligence in order to build more intelligent machines.
… Hassabis’ “longstanding passion and motivation for doing AI” was to one day be able to “build learning systems that are able to help scientists accelerate scientific discovery,” he told Axios.
Last summer, DeepMind reported that a version of its AlphaFold program can predict the 3D structure of 350,000 proteins — information that is key to designing medicines and understanding disease but can be tedious and time-consuming to get with traditional methods.
AlphaFold is the “poster child for us of what can be done using AI to accelerate science,” Hassabis says. The company is aiming its algorithms at other scientific challenges, like controlling the fuel in nuclear fusion reactors.
27 February
ChatGPT and cheating: 5 ways to change how students are graded
Louis Volante, Brock University; Christopher DeLuca, Queen’s University, Ontario; Don A. Klinger, University of Waikato
Educators need to carefully consider ChatGPT and issues of academic integrity to move toward an assessment system that leverages AI tools
(The Conversation) Universities and schools have entered a new phase in how they need to address academic integrity as our society navigates a second era of digital technologies, which include publicly available generative artificial intelligence (AI) like ChatGPT. Such platforms allow students to generate novel text for written assignments.
While many worry these advanced AI technologies are ushering in a new age of plagiarism and cheating, these technologies also introduce opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills.
We believe the emergence of ChatGPT creates an opportunity for schools and post-secondary institutions to reform traditional approaches to assessing students that rely heavily on testing and written tasks focused on students’ recall, remembering and basic synthesis of content.
26 February
Ezra Klein: The Imminent Danger of A.I. Is One We’re Not Talking About
One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I. There is a wisdom to that, but wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation. Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell? I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.
Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”
22 February
Microsoft brings Bing chatbot to phones after curbing quirks
(AP) Microsoft is ready to take its new Bing chatbot mainstream — less than a week after making major fixes to stop the artificially intelligent search engine from going off the rails.
The company said Wednesday it is bringing the new AI technology to its Bing smartphone app, as well as the app for its Edge internet browser, though it is still requiring people to sign up for a waitlist before using it.
20 February
The AI arms race begins: Scott Galloway’s optimism & warnings (with video)
(GZERO media) As the world embraces the power of AI, there are growing concerns about the potential consequences of this double-edged sword. On this episode of GZERO World, tech expert and NYU Professor Scott Galloway sheds light on the darker side of AI, with social media platforms like Facebook and TikTok being used as espionage and propaganda tools to manipulate younger generations. But don’t lose hope yet. AI can speed up search and help predict the next big trend, says Galloway. He emphasizes the potential of AI and language structure-driven search to revolutionize traditional search methods, and the value of social media data sets for decision-making.
Galloway also expresses concern about the negative effects of extreme political polarization and a lack of camaraderie in the US, which he attributes to social media creating the sense that things are much worse than they are. He proposes one bold solution: mandatory national service. But he also recommends efforts to bring young people together and to hold social media companies accountable.
16-18 February
After AI chatbot goes a bit loopy, Microsoft tightens its leash
No more long exchanges about the Bing AI’s “feelings,” the tech giant says. The chatbot, after five responses, now tells people it would “prefer not to continue this conversation.”
… people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into some strange territory.
Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the AI system. By trying to reflect the tone of its questioners, the chatbot sometimes responded in “a style we didn’t intend,” they noted.
Those glitches prompted the company to announce late Friday that it started limiting Bing chats to five questions and replies per session with a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI system and get a “fresh start.”
“It doesn’t really have a clue what it’s saying and it doesn’t really have a moral compass,” Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, told The Post. For its part, Microsoft, with help from OpenAI, has pledged to incorporate more AI capabilities into its products, including the Office programs that people use to type out letters and exchange emails.
The Bing episode follows a recent stumble from Google, the chief AI competitor for Microsoft, which last week unveiled a ChatGPT rival known as Bard that promised many of the same powers in search and language. The stock price of Google dropped 8 percent after investors saw one of its first public demonstrations included a factual mistake.
Ross Douthat: The Chatbot Experiment Just Got Weird
The ever-interesting economist Tyler Cowen, for instance, has been writing up a storm about how the use of A.I. assistance is going to change reading and writing and thinking, complete with advice for his readers on how to lean into the change. But even when I’ve tried to follow his thinking, my reaction has stayed closer to the ones offered by veteran writers of fiction like Ted Chiang and Walter Kirn, who’ve argued in different ways that the chatbot assistant could be a vehicle for intensifying unoriginality, an enemy of creativity, a deepener of decadence — helpful if you want to write a will or file a letter of complaint but ruinous if you want to seize a new thought or tell an as yet unimagined story.
I have a different reaction, though, to the A.I. interactions described in the past few days by Ben Thompson in his Stratechery newsletter and by my Times colleague Kevin Roose. Both writers attempted to really push Bing’s experimental A.I. chatbot not for factual accuracy or a coherent interpretation of historical events but to manifest something more like a human personality. And manifest it did: What Roose and Thompson found waiting underneath the friendly internet butler’s surface was a character called Sydney, whose simulation was advanced enough to enact a range of impulses, from megalomania to existential melancholy to romantic jealousy — evoking a cross between the Scarlett Johansson-voiced A.I. in the movie “Her” and HAL from “2001: A Space Odyssey.”
As Thompson noted, that kind of personality is spectacularly ill suited for a search engine. But is it potentially interesting? Clearly: Just ask the Google software engineer who lost his job last year after going public with his conviction that the company’s A.I. was actually sentient and whose interpretation is more understandable now that we can see something like what he saw.
Bing’s A.I. Chat: ‘I Want to Be Alive.’
In a two-hour conversation with our columnist, Microsoft’s new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. Here’s the transcript.
By Kevin Roose
(NYT) Bing, the long-mocked search engine from Microsoft, recently got a big upgrade. The newest version, which is available only to a small group of testers, has been outfitted with advanced artificial intelligence technology from OpenAI, the maker of ChatGPT.
This new, A.I.-powered Bing has many features. One is a chat feature that allows the user to have extended, open-ended text conversations with Bing’s built-in A.I. chatbot.
On Tuesday night, I had a long conversation with the chatbot, which revealed (among other things) that it identifies not as Bing but as Sydney, the code name Microsoft gave it during development. Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
By Kevin Roose, technology columnist, and co-host of the Times podcast “Hard Fork.”
A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.
(NYT) Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.
But a week later, I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.
It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.
11 February
Is ChatGPT coming for your job? Experts say the answer is complicated
(CTV) With the advent of self-driving vehicles, social media algorithms, smart assistants and sophisticated chatbots, artificial intelligence (AI) is no longer just a science fiction theme, but a permanent fixture of the global economy.
Recent journal articles have even highlighted how ChatGPT, a chatbot launched by OpenAI in November, performed “at or near” the passing threshold for the U.S. Medical Licensing Exam, and scored a B on the final exam in an operations management course at the University of Pennsylvania’s Wharton School of Business.
Experts say it might not change the net number of jobs available, but it could drive humans to shift toward more specialized knowledge industry roles.
Kiljon Shukullari is a human resources advisory manager at HR consulting firm Peninsula Canada who says AI has already begun to take on tasks formerly performed by humans in a number of industries.
8 February
Google showed off its new chatbot. It immediately made a mistake.
Microsoft CEO goads Google to “come out and show that they can dance” after launching new AI chatbot search engine tool
Google offered a glimpse of its new artificial intelligence chatbot search tool on Wednesday at a European presentation that sought to underscore its prowess in both search engine and AI tech, a day after its archrival Microsoft unveiled its own search chatbot aimed at eroding Google’s dominance.
The competition between the two tech giants reflects the excitement and hype around technology called generative AI, which uses massive computer programs trained on reams of text and images to build bots that conjure content of their own based on relatively complex questions.
Disinformation Researchers Raise Alarms About A.I. Chatbots
Researchers used ChatGPT to produce clean, convincing text that repeated conspiracy theories and misleading narratives.
(NYT) “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation and conducted the experiment last month. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having A.I. agents contributing to disinformation.”
‘ChatGPT needs a huge amount of editing’: users’ views mixed on AI chatbot
Some readers say software helps them write essays and emails, while others question its reliability
(The Guardian) ChatGPT, developed by San Francisco-based OpenAI, has become a sensation since its public launch in November, reaching 100 million users in the space of two months as its ability to compose credible-looking essays, recipes, poems and lengthy answers to a broad array of queries went viral. The technology behind ChatGPT has been harnessed by Microsoft, a key backer of OpenAI, for its Bing search engine. Google has launched its own chatbot and has said it will integrate the technology into its search engine.
Both ChatGPT and Google’s competitor to it, Bard, are based on large language models that are fed vast amounts of text from the internet in order to train them how to respond to an equally vast array of queries.
6 February
Bard: Google launches ChatGPT rival
Google is launching an Artificial Intelligence (AI) powered chatbot called Bard to rival ChatGPT.
(BBC) Bard will be used by a group of testers before being rolled out to the public in the coming weeks, the firm said.
Bard is built on Google’s existing large language model LaMDA, which one engineer described as being so human-like in its responses that he believed it was sentient.
The tech giant also announced new AI tools for its current search engine.
2 February
ChatGPT Is About to Dump More Work on Everyone
Artificial intelligence could spare you some effort. Even if it does, it will create a lot more work in the process.
By Ian Bogost
(The Atlantic) OpenAI, the company that made ChatGPT, has introduced a new tool that tries to determine the likelihood that a chunk of text you provide was AI-generated. … the new software faces the same limitations as ChatGPT itself: It might spread disinformation about the potential for disinformation. As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.
The company that created ChatGPT is releasing a tool to identify text generated by ChatGPT
Alas, its results are not fully reliable as of yet
(Quartz) In testing so far, 26% of AI-written texts were flagged as “likely AI-written,” while human-written text was incorrectly labeled as AI-written 9% of the time. The tool proved more effective on chunks of texts longer than 1,000 words, but even then the results were quite iffy.
OpenAI defended the tool’s flaws as part of the process, saying they released it at this stage of development “to get feedback on whether imperfect tools like this one are useful.”
31 January
New AI classifier for indicating AI-written text
We’re launching a classifier trained to distinguish between AI-written and human-written text.
(OpenAI) While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.
Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.
Limitations
Our classifier has a number of important limitations. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.
The classifier is very unreliable on short texts (below 1,000 characters). Even longer texts are sometimes incorrectly labeled by the classifier.
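To see what those figures imply in practice, here is a short worked sketch in Python using the reported rates (26% true positives, 9% false positives); the mix of AI-written and human-written submissions is an assumption chosen purely for illustration.

```python
# Worked example of what a 26% true-positive rate and 9% false-positive
# rate mean in practice. The mix of AI vs. human texts is an assumption.

TPR = 0.26   # AI-written texts correctly flagged "likely AI-written"
FPR = 0.09   # human-written texts incorrectly flagged "likely AI-written"

def flag_counts(n_ai: int, n_human: int):
    flagged_ai = TPR * n_ai          # AI texts that get flagged
    flagged_human = FPR * n_human    # human texts wrongly flagged
    precision = flagged_ai / (flagged_ai + flagged_human)
    return flagged_ai, flagged_human, precision

# Suppose 100 AI-written and 900 human-written submissions (assumption):
ai_hits, human_hits, precision = flag_counts(100, 900)
print(ai_hits, human_hits, round(precision, 2))
# 26 AI texts flagged, 81 human texts wrongly flagged: only ~24% of
# flagged texts are actually AI-written, and 74 of the 100 AI texts slip by.
```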
30 January
Unlike with academics and reporters, you can’t check when ChatGPT’s telling the truth
By Blayne Haggart, Associate Professor of Political Science, Brock University
Being able to verify how information is produced is important, especially for academics and journalists.
(The Conversation) Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have matched those of educators and academics.
Academic publishers have moved to ban ChatGPT from being listed as a co-author and issue strict guidelines outlining the conditions under which it may be used. Leading universities and schools around the world, from France’s renowned Sciences Po to many Australian universities, have banned its use.
These bans are not merely the actions of academics who are worried they won’t be able to catch cheaters. This is not just about catching students who copied a source without attribution. Rather, the severity of these actions reflects a question, one that is not getting enough attention in the endless coverage of OpenAI’s ChatGPT chatbot: Why should we trust anything that it outputs?
This is a vitally important question, as ChatGPT and programs like it can easily be used, with or without acknowledgement, in the information sources that comprise the foundation of our society, especially academia and the news media.
27 January
ChatGPT has convinced users that it thinks like a person. Unlike humans, it has no sense of the real world
Wayne MacPhail, retired journalist, former director of Southam InfoLab, a research and development lab for Southam Inc.
(Globe & Mail) The recently launched chatbot has convinced users that it thinks like a person and can write original works as well as a person. But its interior is filled with an arcane statistical soup of code and complex linguistic connections. Open up its cabinet and you’ll find nobody there. … ChatGPT is not thinking at all – and certainly not thinking like a human. What it’s doing is searching, at a blistering pace, through the trillions of linguistic connections it’s created by scanning mountains of human-generated content. You give it a prompt and it will discover what word it should most likely respond with, and the one after that, and so on, and on. … ChatGPT is, says Gary Marcus, a professor of psychology and neural science at New York University, “just a giant autocomplete machine.”
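Marcus’s “giant autocomplete machine” description can be made concrete with a toy Python sketch; the tiny word-pair probability table is invented. Given a prompt, the program simply keeps appending the most likely next word, one word after another.

```python
# Toy "giant autocomplete machine": repeatedly append the most likely
# next word. The probability table below is invented for illustration.

NEXT_WORD = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def autocomplete(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if not choices:                      # no known continuation: stop
            break
        # Pick the single most likely next word (greedy "autocomplete").
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(autocomplete("the"))   # -> "the cat sat quietly"
```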
26 January
Science journals ban listing of ChatGPT as co-author on papers
Some publishers also banning use of bot in preparation of submissions but others see its adoption as inevitable
(The Guardian) The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.
25 January
Bot or not? This Canadian developed an app that weeds out AI-generated homework
Edward Tian of Toronto created GPTZero while home from Princeton for Christmas break
ChatGPT came out in November and was released by San Francisco-based OpenAI. Users can ask it questions and assign it to produce things such as essays, poetry or computer code. It then draws on patterns learned from text across the internet to formulate a response.
Tian’s program, GPTZero, is free and was designed to red flag AI-generated writing. It was released in early January.
24 January
ChatGPT: Chatbots can help us rediscover the rich history of dialogue
Geoffrey M Rockwell, Professor of Philosophy and Digital Humanities, University of Alberta
(The Conversation) How will we know if what we read was written by an AI and why is that important? Who are we responding to when we comment on an essay or article? By looking to the philosophical history of dialogue, we can reframe the question to ask how we might use these new chatbots in our learning.
ChatGPT passes exams for MBA courses and medical licences — and it’s only getting started
Worried that your job might one day be taken over by AI? That day could come sooner rather than later.
Two separate research papers have revealed that ChatGPT has what it takes to pass the U.S. Medical Licensing Exam and could potentially earn an MBA from an Ivy League business school.
Each study mentioned the future potential of integrating AI and language models into their respective fields, and ChatGPT has already begun to shake up how we approach education.
No one can say for sure to what degree AI will impact the future of work. What’s certain is that humans alone no longer have the market cornered on intelligence and creativity.
23 January
What Microsoft gets from betting billions on the maker of ChatGPT
(Vox) The reported $10 billion investment in OpenAI will keep the hottest AI company on Microsoft’s Azure cloud platform.
This is Microsoft’s third investment in the company, and cements Microsoft’s partnership with one of the most exciting companies making one of the most exciting technologies today: generative AI.
19 January
ChatGPT isn’t coming. It’s here
(CNN Business) The tool, which artificial intelligence research company OpenAI made available to the general public late last year, has sparked conversations about how “generative AI” services — which can turn prompts into original essays, stories, songs and images after training on massive online datasets — could radically transform how we live and work.
Some claim it will put artists, tutors, coders, and writers (yes, even journalists) out of a job. Others are more optimistic, postulating that it will allow employees to tackle to-do lists with greater efficiency or focus on higher-level tasks.
Critics — of which there are many — are quick to point out that it makes mistakes, is painfully neutral and displays a clear lack of human empathy. One tech news publication, for example, was forced to issue several significant corrections for an article written by ChatGPT. And New York City public schools have banned students and teachers from using it.
Yet the software, or similar programs from competitors, could soon take the business world by storm.
… [Jeff Maggioncalda, the CEO of online learning provider Coursera] acknowledges that challenges such as preventing cheating and ensuring accuracy need to be addressed. And he’s worried that increasing use of generative AI may not be wholly good for society — people may become less agile thinkers, for example, since the act of writing can be helpful to process complex ideas and hone takeaways.
9 January
ChatGPT: Educational friend or foe?
Kathy Hirsh-Pasek and Elias Blinkoff
Used in the right way, ChatGPT can be a friend to the classroom and an amazing tool for our students, not something to be feared.
(Brookings) The latest challenge to the creative human intellect was introduced on November 30th, 2022 by OpenAI.
ChatGPT is a conversational bot that responds to users’ questions in ways that allow it to search large databases and to create well-formed essays, legal briefs, poetry in the style of Shakespeare, computer code, or lyrics in the style of Rodgers and Hammerstein, to name a few. As New York Times writer Kevin Roose commented, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.”
… Educators, opinion writers, and researchers are engaged in a vibrant discussion about the implications of ChatGPT right now. The emerging consensus is that teachers and professors might be tricked. …
Our students already know how to use this new tool. They are likely more sophisticated than their teachers at framing the questions and getting solid answers from the bot, even though it was just released. What they need to learn is why—at least for the moment—ChatGPT would get a lower grade than they could get. It is exciting to see how quickly educators are responding to this new reality in the classroom and recognizing the instructional value of ChatGPT for deeper, more engaged learning.