AI, Chatbots, Society & Technology July 2024-
Written by Diana Thebaud Nicholson // January 8, 2025 // AI Artificial Intelligence, Science & Technology
The Brookings AI Equity Lab
The AI Equity Lab is housed in the Center for Technology Innovation (CTI) at Brookings and is focused on advancing inclusive, ethical, nondiscriminatory, and democratized artificial intelligence (AI) models and systems throughout the United States and the Global South, including the African Union, India, Southeast Asia, the Caribbean, and Latin America.
In particular, the AI Equity Lab focuses on some of the most consequential areas of AI, where design choices and autonomous decisions contribute to online biases and can erode the quality of life for people and their communities, including criminal justice, education, health care, hiring and employment, housing, and voting rights.
5 September 2024
Council of Europe opens first ever global treaty on AI for signature
The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and the United Kingdom, as well as by Israel, the United States of America and the European Union.
The AI Convention, the first international artificial intelligence treaty, opened for signing Thursday [5 September]. It is sponsored by the US, Britain and the EU and comes after months of negotiations between 57 countries. The Convention will address both the risks and responsible use of AI, especially pertaining to human rights and data privacy. Critics, however, worry that the treaty is too broad and includes caveats that may limit its enforceability. For instance, the Convention allows several exemptions for AI technology used for national security purposes and in the private sector.
8 January
Biden to Further Limit Nvidia AI Chip Exports in Final Push
Gulf states and Southeast Asian countries would face new caps
Move would expand semiconductor restrictions to most of world
(Bloomberg) President Joe Biden’s administration plans one additional round of restrictions on the export of artificial intelligence chips from the likes of Nvidia Corp. just days before leaving office, a final push in his effort to keep advanced technologies out of the hands of China and Russia.
The US wants to curb the sale of AI chips used in data centers on both a country and company basis, with the goal of concentrating AI development in friendly nations and getting businesses around the world to align with American standards, according to people familiar with the matter.
The result would be an expansion of semiconductor trade restrictions to most of the world — an attempt to control the spread of AI technology at a time of soaring demand. The regulations, which could be issued as soon as Friday, would create three tiers of chip curbs, said the people, who asked not to be identified because the discussions are private.
5-6 January
Sam Altman on ChatGPT’s First Two Years, Elon Musk and AI Under Trump
2024
29 November
The World Needs a Pro-Human AI Agenda
Daron Acemoglu
Judging by the current paradigm in the technology industry, we cannot rule out the worst of all possible worlds: none of the transformative potential of AI, but all of the labor displacement, misinformation, and manipulation. But it’s not too late to change course.
(Project Syndicate) These are uncertain and confusing times. Not only are we contending with pandemics, climate change, societal aging in major economies, and rising geopolitical tensions, but artificial intelligence is poised to change the world as we know it. What remains to be seen is how quickly things will change and for whose benefit.
If you listen to industry insiders or technology reporters at leading newspapers, you might think artificial general intelligence (AGI) – AI technologies that can perform any human cognitive task – is just around the corner. Accordingly, there is much debate about whether these amazing capabilities will make us prosperous beyond our wildest dreams (with less hyperbolic observers estimating more than 1-2% faster GDP growth), or instead bring about the end of human civilization, with superintelligent AI models becoming our masters.
But if you look at what is going on in the real economy, you will not find any break with the past so far. There is no evidence yet of AI delivering revolutionary productivity benefits. Contrary to what many technologists promised, we still need radiologists (more than before, in fact), journalists, paralegals, accountants, office workers, and human drivers. As I noted recently, we should not expect much more than about 5% of what humans do to be replaced by AI over the next decade. It will take significantly longer for AI models to acquire the judgment, multi-dimensional reasoning abilities, and social skills necessary for most jobs, and for AI and computer vision technologies to advance to the point where they can be combined with robots to perform high-precision physical tasks (such as manufacturing and construction).
Canadian media companies sue OpenAI in case potentially worth billions
Litigants say AI company used their articles to train its popular ChatGPT software without authorization
(The Guardian) Canada’s major news organizations have sued tech firm OpenAI for potentially billions of dollars, alleging the company is “strip-mining journalism” and unjustly enriching itself by using news articles to train its popular ChatGPT software.
The suit, filed on Friday in Ontario’s superior court of justice, calls for punitive damages, a share of profits made by OpenAI from using the news organizations’ articles, and an injunction barring the San Francisco-based company from using any of the news articles in the future.
“These artificial intelligence companies cannibalize proprietary content and are free-riding on the backs of news publishers who invest real money to employ real journalists who produce real stories for real people,” said Paul Deegan, president of News Media Canada.
“They are strip-mining journalism while substantially, unjustly and unlawfully enriching themselves to the detriment of publishers.”
The litigants include the Globe and Mail, the Canadian Press, the CBC, the Toronto Star, Metroland Media and Postmedia. They want up to C$20,000 in damages for each article used by OpenAI, suggesting a victory in court could be worth billions.
29 October
Can the US defense industry harness AI power while mitigating risks?
(GZERO media) Just days before the 2024 presidential election…Biden outlined his case for military-industrial cooperation on AI to get the most out of the emerging technology.
The new National Security Memorandum (NSM) outlines new ways to accelerate the safe, secure, and responsible use of AI across US defense agencies and the intelligence community.
The NSM, released Oct. 24, is a follow-up to Biden’s October 2023 executive order on AI. It directs federal agencies to “act decisively” in adopting AI while safeguarding against potential risks. The memo names the AI Safety Institute, housed within the Commerce Department, as the primary point of contact between the government and the private sector’s AI developers. It requires new testing protocols, creates an AI National Security Coordination Group to coordinate policies across agencies, and encourages cooperation with international organizations like the UN and G7.
“Many countries — especially military powers — have accepted that AI will play a role in military affairs and national security,” said Owen Daniels, associate director of analysis at Georgetown University’s Center for Security and Emerging Technology. “That AI will be used in future operations is both inevitable and generally accepted today, which wasn’t the case even a few years ago.” Daniels says AI is already being used for command and control, intelligence analysis, and targeting.
24 October
Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence
23 October
Big Tech is going all in on nuclear power as sustainability concerns around AI grow
(Yahoo!Finance) Artificial Intelligence has driven shares of tech companies like Microsoft (MSFT), Amazon (AMZN), Nvidia (NVDA), and Google (GOOG, GOOGL) to new highs this year. But the technology, which companies promise will revolutionize our lives, is driving something else just as high as stock prices: energy consumption.
AI data centers use huge amounts of power and could increase energy demand by as much as 20% over the next decade, according to a Department of Energy spokesperson. Pair that with the continued growth of the broader cloud computing market, and you’ve got an energy squeeze.
But Big Tech has also set ambitious sustainability goals focused on the use of low-carbon and zero-carbon sources to reduce its impact on climate change. While renewable energy sources like solar and wind are certainly part of that equation, tech companies need uninterruptible power sources. And for that, they’re leaning into nuclear power.
Tech giants aren’t just planning to hook into existing plants, either. They’re working with energy companies to bring mothballed facilities like Pennsylvania’s Three Mile Island back online and looking to build small modular reactors (SMRs) that take up less space than traditional plants and, the hope is, are cheaper to construct.
17 October
The AI Boom Has an Expiration Date
Tech executives are setting deadlines for the arrival of superintelligence. They might regret it.
By Matteo Wong
(The Atlantic) … Perhaps this new and newly bullish wave of forecasts doesn’t actually imply a surge of confidence but just the opposite. These grand pronouncements are being made at the same time that a flurry of industry news has been clarifying AI’s historically immense energy and capital requirements. Generative-AI models are far larger and more complex than traditional software, and the corresponding data centers require land, very expensive computer chips, and huge amounts of power to build, run, and cool. Right now, there simply isn’t enough electricity available, and data-center power demands are already straining grids around the world. Anticipating further growth, old fossil-fuel plants are staying online for longer; in the past month alone, Microsoft, Google, and Amazon have all signed contracts to purchase electricity from or support the building of nuclear power plants.
… Absent a solid, self-sustaining business model, all that the generative-AI industry has to run on is faith. Both costs and expectations are so high that no product or amount of revenue, in the near term, can sustain them—but raising the stakes could. Promises of superintelligence help justify further, unprecedented spending.
… All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.
8 October
The Prize in Physics goes to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto, “for foundational discoveries and inventions that enable machine learning with artificial neural networks”.
Geoffrey Hinton, who warned of AI’s dangers, co-wins Nobel Prize in Physics
(Globe & Mail) Beginning in the 1980s, Dr. Hopfield and Dr. Hinton made separate and crucial contributions to the development of computer algorithms known as neural networks. Based on the way neurons connect and interact in the brain, the networks can adjust themselves in response to their own performance, gradually optimizing their ability to solve various kinds of calculations.
While their progress came slowly at first, neural networks have since proved to be a watershed for machine learning. Aided by blindingly fast computer processors and vast repositories of training data that were not available decades ago, systems based on neural networks first became powerful and then transformative.
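To make the core idea concrete, a network that adjusts itself in response to its own performance, here is a deliberately tiny illustrative sketch in Python. It is not the Hopfield network or Hinton’s Boltzmann machine, just a single sigmoid neuron trained by gradient descent on the logical OR function, but it shows the basic feedback loop (predict, measure the error, nudge the weights) that modern neural networks still rely on.

```python
# Illustrative sketch: a single sigmoid neuron learning the OR function
# by adjusting its weights in response to its own prediction error.
# Didactic example only, not the laureates' actual models.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 1.0, 1.0, 1.0])                           # targets (OR)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias
lr = 1.0                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    pred = sigmoid(X @ w + b)          # forward pass: current performance
    error = pred - y                   # how wrong the network is
    # gradient of mean squared error with respect to weights and bias
    grad_w = X.T @ (error * pred * (1 - pred)) / len(y)
    grad_b = np.mean(error * pred * (1 - pred))
    w -= lr * grad_w                   # adjust weights to reduce the error
    b -= lr * grad_b

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 1, 1]
```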
“I want to emphasize that AI is going to do tremendous good,” Dr. Hinton told The Globe and Mail after learning of his Nobel win. “In areas like health care, it’s going to be amazing. That’s why its development is never going to be stopped. The real question is, can we keep it safe?”
12 June
For Geoffrey Hinton, the godfather of AI, machines are closer to humans than we think
Geoffrey Hinton left his position at Google last year so he could speak more freely about artificial intelligence, a field he helped to pioneer and is increasingly critical of.
26 September
Artist sues after US rejects copyright for AI-generated image
Who owns that AI copyright?
Not the artist: Jason M. Allen asked a Colorado federal court to reverse a U.S. Copyright Office decision that rejected protection for an award-winning image he created with AI. Investors want to know more about power needs for AI and advanced computing to determine whether they should continue considering the sector as sustainable. The U.S. fined a political consultant $6 million over fake robocalls that mimicked President Biden’s voice. And OpenAI is moving away from a non-profit structure, a move that could have implications for how the company manages AI risks.
19 September
The UN unveils plan for AI
(GZERO media) Overnight, and after months of deliberation, a United Nations advisory body studying artificial intelligence released its final report. Aptly called “Governing AI for Humanity,” it is a set of findings and policy recommendations for the international organization and an update to the group’s interim report from December 2023.
“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” the report’s authors wrote. “The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”
The HLAB-AI report asks the UN to begin working on a “globally inclusive” system for AI governance, calls on governments and stakeholders to develop AI in a way that protects human rights, and makes seven recommendations:
An international scientific panel on AI: A new group of volunteer experts would issue an annual report on AI risks and opportunities. They’d also contribute regular research on how AI could help achieve the UN’s Sustainable Development Goals, or SDGs.
Policy dialogue on AI governance: A twice-yearly policy dialogue with governments and stakeholders on best practices for AI governance. It’d have an emphasis on “international interoperability” of AI governance.
AI standards exchange: This effort would develop common definitions and standards for evaluating AI systems. It’d create a new process for identifying gaps in these definitions and how to write them, as well.
Capacity development network: A network of new development centers that will provide researchers and social entrepreneurs with expertise, training data, and computing. It’d also develop online educational resources for university students and a fellowship program for individuals to spend time in academic institutions and tech companies.
Global fund for AI: A new fund that would collect donations from public and private groups and disburse money to “put a floor under the AI divide,” focused on countries with fewer resources to fund AI.
Global AI data framework: An initiative to set common standards and best practices governing AI training data and its provenance. It’d hold a repository of data sets and models to help achieve the SDGs.
AI office within the Secretariat: This new office would see through the proposals in this report and advise the Secretary-General on all matters relating to AI.
15 September
The Real AI Threat Starts When the Polls Close
The seeds of an AI election backlash were sown even before this election. The process started in the late 2010s, when fears about the influence of a deepfake apocalypse began, or perhaps even earlier, when Americans finally noticed the rapid spread of mis- and disinformation on social media. But if AI actually becomes a postelection scapegoat, it likely won’t be because the technology singlehandedly determined the results.
By Matteo Wong
(The Atlantic) Over the past several months, multiple polls have shown that large swaths of Americans fear that AI will be used to sway the election. In a survey conducted in April by researchers at Elon University, 78 percent of participants said they believed AI would be used to affect the presidential election by running fake social-media accounts, generating misinformation, or persuading people not to vote. More than half thought all three were at least somewhat likely to happen. Research conducted by academics in March found that half of Americans think AI will make elections worse.
There are, to be clear, very real reasons to worry that generative AI could influence voters, as I have written: Chatbots regularly assert incorrect but believable claims with confidence, and AI-generated photos and videos can be challenging to immediately detect. The technology could be used to manipulate people’s beliefs, impersonate candidates, or spread disenfranchising false information about how to vote.
… Politicians and public figures have begun to invoke AI-generated disinformation, legitimately and not, as a way to brush off criticism, disparage opponents, and stoke the culture wars.
13 September
Revolution, interrupted
One of the promises of machine learning was better drugs, faster. A decade in, AI has failed to live up to the hype
Joe Castaldo, Sean Silcoff
(Globe & Mail) AI was supposed to revolutionize drug discovery. It hasn’t. Not yet, anyway. Machine learning promised to speed up a lengthy and fraught process, and achieve breakthroughs beyond the capabilities of the human mind. But there are no drugs solely designed by AI on the market today, and companies that have used AI to assist with development have suffered setbacks.
… True expertise in drug discovery is also scarce. Relatively few people have toiled away in labs and navigated the regulatory process to have a new drug approved. “They know exactly where the difficulties lie, and what to look out for,” said Clarissa Desjardins, founder of Congruence Therapeutics in Montreal, another Amplitude-backed drug developer, and a veteran biotech entrepreneur.
Experts in AI are typically not experts in pharmaceuticals either.
… Companies that have overcome these issues and brought drugs developed with the assistance of AI into clinical trials are grappling with another sobering reality: AI does not guarantee success.
Microsoft’s Hypocrisy on AI
Can artificial intelligence really enrich fossil-fuel companies and fight climate change at the same time? The tech giant says yes.
By Karen Hao
(The Atlantic) …as Microsoft attempts to buoy its reputation as an AI leader in climate innovation, the company is also selling its AI to fossil-fuel companies. Hundreds of pages of internal documents I’ve obtained, plus interviews I’ve conducted over the past year with 15 current and former employees and executives, show that the tech giant has sought to market the technology to companies such as ExxonMobil and Chevron as a powerful tool for finding and developing new oil and gas reserves and maximizing their production—all while publicly committing to dramatically reduce emissions.
… Microsoft’s latest environmental report, released this May, shows a 29 percent increase in emissions since 2020—a change that has been driven in no small part by recent AI development, as the company explains in the report.
… The root issue is Microsoft’s unflagging support of fossil-fuel extraction. In March 2021, for example, Microsoft expanded its partnership with Schlumberger, an oil-technology company, to develop and launch an AI-enhanced service on Microsoft’s Azure platform. Azure provides cloud computing to a variety of organizations, but this product was tailor-made for the oil and gas industries, to assist in the production of fossil fuels, among other uses.
12 September
Sam Altman tells Oprah he wants the government to safety test AI like it does aircraft and medicine
(Quartz) OpenAI chief Sam Altman called for more testing of artificial intelligence in an interview with Oprah for her new television special, “AI and the Future of Us,” which aired Thursday evening.
Altman, who co-founded and heads the AI startup behind popular generative AI chatbot ChatGPT, said the next step is for the government to start carrying out safety tests of the rapidly advancing new technology.
OpenAI’s new ChatGPT that can ‘think’ is here
The company released the first of its AI models that can “reason” through more complex tasks and problems in science, coding, and math
The new model series, called OpenAI o1, is “designed to spend more time thinking before they respond,” the company said. The models, the first of which are available in preview in ChatGPT and through the company’s API, can “reason” through more complex tasks and problems in science, coding, and math than earlier OpenAI models. The company also announced OpenAI o1-mini, a smaller and cheaper version of the new model, which can help developers with coding tasks.
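For developers, the practical upshot is a new model name requested through the existing chat interface. The snippet below is a rough usage sketch, assuming the standard OpenAI Python SDK and the preview identifiers mentioned at launch (“o1-preview” and “o1-mini”); exact model names and availability depend on one’s API access.

```python
# Hypothetical usage sketch: requesting the o1 preview model through
# OpenAI's standard Python SDK. Model names and access may differ by account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # reasoning model; "o1-mini" is the smaller, cheaper variant
    messages=[
        {"role": "user", "content": "How many prime numbers are there below 50?"}
    ],
)

print(response.choices[0].message.content)
```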
AI has an energy crisis. Sam Altman, Jensen Huang, and others went to the White House about it
Executives from major tech and AI companies reportedly met with power and utility companies at the White House
The tech executives, White House officials, and American energy companies reportedly discussed how the public and private sectors can work together on infrastructure to sustainably support AI’s intense energy consumption. Discussions also reportedly covered data center capacity and semiconductor manufacturing. Tech giants’ emissions are climbing as they race to build more advanced — and increasingly power-hungry — AI tools. At risk are the climate goals laid out several years ago by companies including Google and Microsoft.
Nvidia’s highly anticipated Blackwell AI chip, for example, consumes 1,200 watts of electricity — almost enough to power an average home in the U.S.
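A quick back-of-the-envelope check supports that comparison. Assuming an average U.S. household uses roughly 10,800 kWh of electricity per year (an illustrative figure, not taken from the article), its average continuous draw works out to about 1,230 watts, so a single 1,200-watt chip running flat out is indeed in the same range:

```python
# Back-of-the-envelope comparison: one Blackwell chip vs. an average U.S. home.
# The ~10,800 kWh/year household figure is an assumption for illustration.
chip_watts = 1200                   # reported Blackwell power draw
home_kwh_per_year = 10_800          # assumed average US household usage
hours_per_year = 365 * 24

home_avg_watts = home_kwh_per_year * 1000 / hours_per_year
print(f"Average household draw: {home_avg_watts:.0f} W")              # ~1,233 W
print(f"One chip vs. one home:  {chip_watts / home_avg_watts:.2f}")   # ~0.97
```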
9 September
Has AI hacked the operating system of human civilisation? Yuval Noah Harari sounds a warning
Just as artificial intelligence (AI) models are trained on vast data sets to learn and predict, Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow have trained us to expect disruptive ideas from bestselling historian Yuval Noah Harari.
His latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI, is a sweeping exploration of the history and future of human networks. Harari draws on a wide range of historical and contemporary examples to illustrate how information has shaped, and continues to shape, human societies.
Apple’s iPhone 16 event kicks off ‘a historic week’ for the tech giant
Apple Intelligence and its latest slate of iPhones are taking center stage on Monday
Apple’s artificial intelligence-capable iPhone 16 lineup will set off a new rush for consumer AI, and spark a new era of growth for the tech giant, according to analysts at Wedbush.
The integration of AI into Apple’s popular devices will allow users to customize their home screens even more, get better answers from its voice assistant, and even access ChatGPT directly from their device, among other new features.
Given the popularity of Apple’s iPhone, Wedbush estimates that roughly 20% of consumers worldwide will access and interact with generative AI apps through Apple devices over the coming years, starting with the iPhone 16.
5 September
US, Britain, EU to sign first international AI treaty
AI Convention was adopted in May
Convention covers human rights aspects of AI
It was negotiated by 57 countries
(Reuters) – The first legally binding international AI treaty will be open for signing on Thursday [5 September] by the countries that negotiated it, including European Union members, the United States and Britain, the Council of Europe human rights organisation said.
The AI Convention, which has been in the works for years and was adopted in May after discussions between 57 countries, addresses the risks AI may pose, while promoting responsible innovation.
Why and how should we regulate the use of AI in health care?
Matt Kasman and Ross A. Hammond
(Brookings) Key Takeaways:
There is substantial interest in the application of AI tools to health care by some of the largest players in the tech industry.
If and when large-scale application of AI tools to health care takes place, it could have substantial implications for Americans’ health as well as the U.S. economy.
The next presidential administration and Congress should preemptively identify a regulatory framework that can guard against potential negative consequences of the widespread use of AI tools in health care.
4 September
Authoritarian Countries’ AI Advantage
Angela Huyue Zhang
Analysts often attribute the rapid development of AI technologies in countries like the United Arab Emirates and China to state support and cheap energy. But another important driver is their authoritarian governance model, which enables AI companies to train their models on vast amounts of personal data.
(Project Syndicate) Last year, the United Arab Emirates made global headlines with the release of Falcon, its open-source large language model (LLM). Remarkably, by several key metrics, Falcon managed to outperform or measure up against the LLMs of tech giants like Meta (Facebook) and Alphabet (Google).
3 September
The Feds vs. California: Inside the twin efforts to regulate AI in the US
(GZERO media) Silicon Valley is home to the world’s most influential artificial intelligence companies. But there’s currently a split approach between the Golden State and Washington, DC, over how to regulate this emerging technology.
The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order commanded federal agencies and departments to begin writing rules and explore how they can incorporate AI to improve their current work. The administration also signed onto the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.
But perhaps its biggest win came on Aug. 29 when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Altman’s clarification that regulation should happen at the national level implied an additional rebuke of how California seeks to regulate the company and its tech.
Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”
29 August
The good, the not-so-good, and the ugly of the UN’s blueprint for AI
Cameron F. Kerry
A leaked report from the United Nations’ High-Level Advisory Body on AI indicates a desire for increasing UN involvement in international AI governance functions.
Rapidly expanding networks on AI policy, safety, and development have produced unprecedented levels of international cooperation around AI.
Rather than forming a superstructure over these efforts, the UN should focus on promoting AI access and capacity-building while leveraging the agility and flexibility of the emerging networks of global governance initiatives.
(Brookings) In the “AI summer” of recent years, centers of artificial intelligence (AI) policymaking have blossomed around the globe as governments, international organizations, and other groups seek to realize the technology’s promise while identifying and mitigating its accompanying risks. Since Canada became the first country to announce a national AI strategy in 2017 and then led G7 adoption of a “common vision for the future of artificial intelligence” in 2018, at least 70 countries have developed AI strategies, almost every multilateral organization also has adopted a policy statement on AI, and the Council of Europe identifies some 450 AI governance initiatives from a wide variety of stakeholders. This worldwide flurry reflects how much generative AI models and the explosive uptake of ChatGPT have captured mainstream attention.
Now, the United Nations (UN) aims to impose order on this expanding landscape. Secretary-General António Guterres—a prominent voice in calling for a global body to govern perceived existential risks of emerging foundational AI models—initiated a global digital compact to be finalized alongside this September’s UN General Assembly.
15-16 August
Noxious images spread after Elon Musk launches AI tool with few guardrails
By using his platform to favor Donald Trump and launch a loosely controlled AI chatbot, Grok, Musk has dragged the company into uncharted territory ahead of a contentious election.
(WaPo) A flurry of provocative artificial intelligence-generated content has spread across Elon Musk’s social media platform X…. The images stem from new tools on the site that allow users to quickly create photorealistic visuals using a built-in chatbot called Grok, which Musk touted in a post this week as the “most fun AI in the world!” Unlike rival AI image generators, X’s technology appears to have few guardrails to limit the production of offensive or misleading depictions of real people, trademarked characters or violence, according to user comments and tests by The Washington Post.
Robert Reich: How to stop Musk
Lies on Elon Musk’s X have instigated some of the worst racial riots in Britain’s history. Musk recently posted a comment to his hundreds of millions of followers claiming “Civil war is inevitable” in the U.K., and asserted that the British criminal justice system treats Muslims more leniently than far-right activists.
European Union commissioner Thierry Breton sent Musk an open letter reminding him of EU laws against amplifying harmful content “that promotes hatred, disorder, incitement to violence, or certain instances of disinformation” and warning that the EU “will be extremely vigilant” about protecting “EU citizens from serious harm.”
9 August
Elizabeth Warren is coming for Elon Musk — and calling out his corporate ‘entanglements’
His leadership of two AI-focused companies — Tesla and xAI — has raised investors’ concerns
(Quartz) The outreach from Warren comes after a wave of concern over Musk’s handling of his portfolio, which includes aerospace firm SpaceX and artificial intelligence startup xAI. He also leads brain chip startup Neuralink, social media company X Corp., and tunneling venture The Boring Co.
Warren, like other critics of Musk, said she is concerned with Musk’s launch of xAI as a separate venture, even as he continues to bill Tesla as an AI company and plots its future based on advancements in that industry, not electric vehicles. Much of Tesla’s future value is expected to be derived from self-driving vehicles, humanoid robots, and the “Dojo” supercomputer. Musk has also threatened to build future AI projects outside of Tesla if he doesn’t get more control over the company.