AI, Chatbots, Society & Technology July 2024-

September 26, 2024  //  AI, Artificial Intelligence, Science & Technology

5 September
Council of Europe opens first ever global treaty on AI for signature
The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and the United Kingdom, as well as by Israel, the United States of America and the European Union.
The AI Convention, the first international artificial intelligence treaty, opened for signing Thursday [5 September]. It is sponsored by the US, Britain and the EU and comes after months of negotiations between 57 countries. The Convention will address both the risks and responsible use of AI, especially pertaining to human rights and data privacy. Critics, however, worry that the treaty is too broad and includes caveats that may limit its enforceability. For instance, the Convention allows several exemptions for AI technology used for national security purposes and in the private sector.

26 September
Artist sues after US rejects copyright for AI-generated image
Who owns that AI copyright?
Not the artist: Jason M. Allen asked a Colorado federal court to reverse a U.S. Copyright Office decision that rejected protection for an award-winning image he created with AI. Investors want to know more about power needs for AI and advanced computing to determine whether they should continue considering the sector as sustainable. The U.S. fined a political consultant $6 million over fake robocalls that mimicked President Biden’s voice. And OpenAI is moving away from a non-profit structure, a shift that could have implications for how the company manages AI risks.

19 September
The UN unveils plan for AI
(GZERO media) Overnight, and after months of deliberation, a United Nations advisory body studying artificial intelligence released its final report. Aptly called “Governing AI for Humanity,” it is a set of findings and policy recommendations for the international organization, updating the group’s interim report from December 2023.
“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” the report’s authors wrote. “The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”
The HLAB-AI report asks the UN to begin working on a “globally inclusive” system for AI governance, calls on governments and stakeholders to develop AI in a way that protects human rights, and makes seven recommendations:
An international scientific panel on AI: A new group of volunteer experts would issue an annual report on AI risks and opportunities. They’d also contribute regular research on how AI could help achieve the UN’s Sustainable Development Goals, or SDGs.
Policy dialogue on AI governance: A twice-yearly policy dialogue with governments and stakeholders on best practices for AI governance. It’d have an emphasis on “international interoperability” of AI governance.
AI standards exchange: This effort would develop common definitions and standards for evaluating AI systems. It’d create a new process for identifying gaps in these definitions and how to write them, as well.
Capacity development network: A network of new development centers that would provide researchers and social entrepreneurs with expertise, training data, and computing. It’d also develop online educational resources for university students and a fellowship program for individuals to spend time in academic institutions and tech companies.
Global fund for AI: A new fund that would collect donations from public and private groups and disburse money to “put a floor under the AI divide,” focused on countries with fewer resources to fund AI.
Global AI data framework: An initiative to set common standards and best practices governing AI training data and its provenance. It’d hold a repository of data sets and models to help achieve the SDGs.
AI office within the Secretariat: This new office would see through the proposals in this report and advise the Secretary-General on all matters relating to AI.

15 September
The Real AI Threat Starts When the Polls Close
The seeds of an AI election backlash were sown even before this election. The process started in the late 2010s, when fears of a deepfake apocalypse first took hold, or perhaps even earlier, when Americans finally noticed the rapid spread of mis- and disinformation on social media. But if AI actually becomes a postelection scapegoat, it likely won’t be because the technology single-handedly determined the results.
By Matteo Wong
(The Atlantic) Over the past several months, multiple polls have shown that large swaths of Americans fear that AI will be used to sway the election. In a survey conducted in April by researchers at Elon University, 78 percent of participants said they believed AI would be used to affect the presidential election by running fake social-media accounts, generating misinformation, or persuading people not to vote. More than half thought all three were at least somewhat likely to happen. Research conducted by academics in March found that half of Americans think AI will make elections worse.
There are, to be clear, very real reasons to worry that generative AI could influence voters, as I have written: Chatbots regularly assert incorrect but believable claims with confidence, and AI-generated photos and videos can be challenging to immediately detect. The technology could be used to manipulate people’s beliefs, impersonate candidates, or spread disenfranchising false information about how to vote.
… Politicians and public figures have begun to invoke AI-generated disinformation, legitimately and not, as a way to brush off criticism, disparage opponents, and stoke the culture wars.

13 September
Revolution, interrupted
One of the promises of machine learning was better drugs, faster. A decade in, AI has failed to live up to the hype
Joe Castaldo, Sean Silcoff
(Globe & Mail) AI was supposed to revolutionize drug discovery. It hasn’t. Not yet, anyway. Machine learning promised to speed up a lengthy and fraught process, and achieve breakthroughs beyond the capabilities of the human mind. But there are no drugs solely designed by AI on the market today, and companies that have used AI to assist with development have suffered setbacks.
… True expertise in drug discovery is also scarce. Relatively few people have toiled away in labs and navigated the regulatory process to have a new drug approved. “They know exactly where the difficulties lie, and what to look out for,” said Clarissa Desjardins, founder of Congruence Therapeutics in Montreal, another Amplitude-backed drug developer, and a veteran biotech entrepreneur.
Experts in AI are typically not experts in pharmaceuticals either.
… Companies that have overcome these issues and brought drugs developed with the assistance of AI into clinical trials are grappling with another sobering reality: AI does not guarantee success.

Microsoft’s Hypocrisy on AI
Can artificial intelligence really enrich fossil-fuel companies and fight climate change at the same time? The tech giant says yes.
By Karen Hao
(The Atlantic) …as Microsoft attempts to buoy its reputation as an AI leader in climate innovation, the company is also selling its AI to fossil-fuel companies. Hundreds of pages of internal documents I’ve obtained, plus interviews I’ve conducted over the past year with 15 current and former employees and executives, show that the tech giant has sought to market the technology to companies such as ExxonMobil and Chevron as a powerful tool for finding and developing new oil and gas reserves and maximizing their production—all while publicly committing to dramatically reduce emissions.
… Microsoft[’s] … latest environmental report, released this May, shows a 29 percent increase in emissions since 2020—a change that has been driven in no small part by recent AI development, as the company explains in the report.
… The root issue is Microsoft’s unflagging support of fossil-fuel extraction. In March 2021, for example, Microsoft expanded its partnership with Schlumberger, an oil-technology company, to develop and launch an AI-enhanced service on Microsoft’s Azure platform. Azure provides cloud computing to a variety of organizations, but this product was tailor-made for the oil and gas industries, to assist in the production of fossil fuels, among other uses.

12 September
Sam Altman tells Oprah he wants the government to safety test AI like it does aircraft and medicine
(Quartz) OpenAI chief Sam Altman called for more testing of artificial intelligence in an interview with Oprah for her new television special, “AI and the Future of Us,” which aired Thursday evening.
Altman, who co-founded and heads the AI startup behind popular generative AI chatbot ChatGPT, said the next step is for the government to start carrying out safety tests of the rapidly advancing new technology.

OpenAI’s new ChatGPT that can ‘think’ is here
The company released the first of its AI models that can “reason” through more complex tasks and problems in science, coding, and math
The new model series, called OpenAI o1, is “designed to spend more time thinking before they respond,” the company said. The models, the first of which are available in preview in ChatGPT and through the company’s API, can “reason” through more complex tasks and problems in science, coding, and math than earlier OpenAI models. The company also announced OpenAI o1-mini, a smaller and cheaper version of the new model, which can help developers with coding tasks.
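For developers, the preview models are reachable through OpenAI’s existing chat-completions interface. A minimal sketch, assuming OpenAI’s standard Python client and the “o1-preview” identifier used in the announcement (the preview reportedly accepted only user and assistant messages, with no system role):

```python
# Minimal sketch: calling the o1 preview through OpenAI's Python client (v1+).
# Assumes an OPENAI_API_KEY environment variable; the "o1-preview" name is the
# identifier from OpenAI's preview announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the smaller, cheaper variant
    messages=[
        # The preview reportedly accepted only user/assistant messages,
        # with no system role.
        {"role": "user", "content": "How many prime numbers are there below 50?"}
    ],
)

print(response.choices[0].message.content)
```

The extra “thinking” happens server-side (billed as reasoning tokens), so responses tend to be slower and costlier than with earlier models.
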
AI has an energy crisis. Sam Altman, Jensen Huang, and others went to the White House about it
Executives from major tech and AI companies reportedly met with power and utility companies at the White House
The tech executives, White House officials, and American energy companies reportedly discussed how the public and private sectors can work together on infrastructure to sustainably support AI’s intense energy consumption. Discussions also reportedly covered data center capacity and semiconductor manufacturing. Tech giants’ emissions are climbing as they race to build more advanced — and increasingly power-hungry — AI tools. At risk are the climate goals laid out several years ago by companies including Google and Microsoft.
Nvidia’s highly anticipated Blackwell AI chip, for example, consumes 1,200 watts of electricity — almost enough to power an average home in the U.S.
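As a rough sanity check on that comparison (assuming the EIA’s estimate of roughly 10,500 kWh of electricity used per average U.S. household per year):

\[
\frac{10{,}500\ \text{kWh/yr}}{8{,}760\ \text{h/yr}} \approx 1.2\ \text{kW} = 1{,}200\ \text{W}
\]

So one Blackwell chip at full load draws about as much power as an average American home does, averaged around the clock.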

9 September
Has AI hacked the operating system of human civilisation? Yuval Noah Harari sounds a warning
Just as artificial intelligence (AI) models are trained on vast data sets to learn and predict, Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow have trained us to expect disruptive ideas from bestselling historian Yuval Noah Harari.
His latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI, is a sweeping exploration of the history and future of human networks. Harari draws on a wide range of historical and contemporary examples to illustrate how information has shaped, and continues to shape, human societies.

Apple’s iPhone 16 event kicks off ‘a historic week’ for the tech giant
Apple Intelligence and its latest slate of iPhones are taking center stage on Monday
Apple’s artificial intelligence-capable iPhone 16 lineup will set off a new rush for consumer AI and spark a new era of growth for the tech giant, according to analysts at Wedbush.
The integration of AI into Apple’s popular devices will allow users to customize their home screens even more, get better answers from its voice assistant, and even access ChatGPT directly from their device, among other new features.
Given the popularity of Apple’s iPhone, Wedbush estimates that roughly 20% of consumers worldwide will access and interact with generative AI apps through Apple devices over the coming years, starting with the iPhone 16.

5 September
US, Britain, EU to sign first international AI treaty
AI Convention was adopted in May
Convention covers human rights aspects of AI
It was negotiated by 57 countries
(Reuters) – The first legally binding international AI treaty will be open for signing on Thursday [5 September] by the countries that negotiated it, including European Union members, the United States and Britain, the Council of Europe human rights organisation said.
The AI Convention, which has been in the works for years and was adopted in May after discussions between 57 countries, addresses the risks AI may pose, while promoting responsible innovation.

Why and how should we regulate the use of AI in health care?
Matt Kasman and Ross A. Hammond
(Brookings) Key Takeaways:
There is substantial interest in the application of AI tools to health care by some of the largest players in the tech industry.
If and when large-scale application of AI tools to health care takes place, it could have substantial implications for Americans’ health as well as the U.S. economy.
The next presidential administration and Congress should preemptively identify a regulatory framework that can guard against potential negative consequences of the widespread use of AI tools in health care.

4 September
Authoritarian Countries’ AI Advantage
Angela Huyue Zhang
Analysts often attribute the rapid development of AI technologies in countries like the United Arab Emirates and China to state support and cheap energy. But another important driver is their authoritarian governance model, which enables AI companies to train their models on vast amounts of personal data.
(Project Syndicate) Last year, the United Arab Emirates made global headlines with the release of Falcon, its open-source large language model (LLM). Remarkably, by several key metrics, Falcon managed to outperform or measure up to the LLMs of tech giants like Meta (Facebook) and Alphabet (Google).

3 September
The Feds vs. California: Inside the twin efforts to regulate AI in the US
(GZERO media) Silicon Valley is home to the world’s most influential artificial intelligence companies. But there’s currently a split approach between the Golden State and Washington, DC, over how to regulate this emerging technology.
The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order commanded federal agencies and departments to begin writing rules and explore how they can incorporate AI to improve their current work. The administration also signed onto the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.
But perhaps its biggest win came on Aug. 29 when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Altman’s insistence that regulation should happen at the national level implied a rebuke of how California seeks to regulate the company and its tech.
Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”

29 August
The good, the not-so-good, and the ugly of the UN’s blueprint for AI
Cameron F. Kerry
A leaked report from the United Nations’ High-Level Advisory Body on AI indicates a desire for increasing UN involvement in international AI governance functions.
Rapidly expanding networks on AI policy, safety, and development have produced unprecedented levels of international cooperation around AI.
Rather than forming a superstructure over these efforts, the UN should focus on promoting AI access and capacity-building while leveraging the agility and flexibility of the emerging networks of global governance initiatives.
(Brookings) In the “AI summer” of recent years, centers of artificial intelligence (AI) policymaking have blossomed around the globe as governments, international organizations, and other groups seek to realize the technology’s promise while identifying and mitigating its accompanying risks. Since Canada became the first country to announce a national AI strategy in 2017 and then led G7 adoption of a “common vision for the future of artificial intelligence” in 2018, at least 70 countries have developed AI strategies, almost every multilateral organization also has adopted a policy statement on AI, and the Council of Europe identifies some 450 AI governance initiatives from a wide variety of stakeholders. This worldwide flurry reflects how much generative AI models and the explosive uptake of ChatGPT have captured mainstream attention.
Now, the United Nations (UN) aims to impose order on this expanding landscape. Secretary-General António Guterres—a prominent voice in calling for a global body to govern perceived existential risks of emerging foundational AI models—initiated a global digital compact to be finalized alongside this September’s UN General Assembly.

15-16 August
Noxious images spread after Elon Musk launches AI tool with few guardrails
By using his platform to favor Donald Trump and launch a loosely controlled AI chatbot, Grok, Musk has dragged the company into uncharted territory ahead of a contentious election.
(WaPo) A flurry of provocative artificial intelligence-generated content has spread across Elon Musk’s social media platform X…. The images stem from new tools on the site that allow users to quickly create photorealistic visuals using a built-in chatbot called Grok, which Musk touted in a post this week as the “most fun AI in the world!” Unlike rival AI image generators, X’s technology appears to have few guardrails to limit the production of offensive or misleading depictions of real people, trademarked characters or violence, according to user comments and tests by The Washington Post.

Robert Reich: How to stop Musk
Lies on Elon Musk’s X have instigated some of the worst racial riots in Britain’s history. Musk recently posted a comment to his hundreds of millions of followers claiming “Civil war is inevitable” in the U.K. and asserting that the British criminal justice system treats Muslims more leniently than far-right activists.
European Union commissioner Thierry Breton sent Musk an open letter reminding him of EU laws against amplifying harmful content “that promotes hatred, disorder, incitement to violence, or certain instances of disinformation” and warning that the EU “will be extremely vigilant” about protecting “EU citizens from serious harm.”

9 August
Elizabeth Warren is coming for Elon Musk — and calling out his corporate ‘entanglements’
His leadership of two AI-focused companies — Tesla and xAI — has raised investors’ concerns
(Quartz) The outreach from Warren comes after a wave of concern over Musk’s handling of his portfolio, which includes aerospace firm SpaceX and artificial intelligence startup xAI. He also leads brain chip startup Neuralink, social media company X Corp., and tunneling venture The Boring Co.
Warren, like other critics of Musk, said she is concerned with Musk’s launch of xAI as a separate venture, even as he continues to bill Tesla as an AI company and plots its future based on advancements in that industry, not electric vehicles. Much of Tesla’s future value is expected to be derived from self-driving vehicles, humanoid robots, and the “Dojo” supercomputer. Musk has also threatened to build future AI projects outside of Tesla if he doesn’t get more control over the company.
