Bankman-Fried: A Warrior Against Artificial Intelligence?

Many like Bankman-Fried who embrace “effective altruism” believe AI could eventually destroy the world or damage humanity.

By Cade Metz

In April, a San Francisco artificial intelligence lab called Anthropic raised $580 million for research involving “AI safety.”

Few in Silicon Valley had heard of the one-year-old lab, which is building AI systems that generate language. But the amount of money promised to the tiny company dwarfed what venture capitalists were investing in other AI startups, including those stocked with some of the most experienced researchers in the field.

The funding round was led by Sam Bankman-Fried, the founder and CEO of FTX, the cryptocurrency exchange that filed for bankruptcy in November. After FTX’s sudden collapse, a leaked balance sheet showed that Bankman-Fried and his colleagues had fed at least $500 million into Anthropic.

Their investment was part of a quiet and quixotic effort to explore and mitigate the dangers of artificial intelligence, which many in Bankman-Fried’s circle believed could eventually destroy the world or damage humanity. Over the past two years, the 30-year-old entrepreneur and his FTX colleagues funneled more than $530 million — through either grants or investments — into more than 70 AI-related companies, academic labs, think tanks, independent projects and individual researchers to address concerns over the technology, according to a tally by The New York Times.

Now some of these organizations and individuals are unsure whether they can continue to spend that money, said four people close to the AI efforts who were not authorized to speak publicly. They said they were worried that Bankman-Fried’s fall could cast doubt over their research and undermine their reputations. And some of the AI startups and organizations may eventually find themselves embroiled in FTX’s bankruptcy proceedings, with their grants potentially clawed back in court, they said.

The concerns in the AI world are an unexpected fallout from FTX’s disintegration, showing how far the ripple effects of the crypto exchange’s collapse and Bankman-Fried’s vaporizing fortune have traveled.

“Some might be surprised by the connection between these two emerging fields of technology,” Andrew Burt, a lawyer and visiting fellow at Yale Law School who specializes in the risks of artificial intelligence, said of AI and crypto. “But under the surface, there are direct links between the two.”

Bankman-Fried, who faces investigations into FTX’s collapse and who spoke at the Times’ DealBook conference Wednesday, declined to comment. Anthropic declined to comment on his investment in the company.

Stems from “effective altruism”

Bankman-Fried’s attempts to influence AI stem from his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the impact of their giving for the long term. Effective altruists are often concerned with what they call catastrophic risks, such as pandemics, bioweapons and nuclear war.

Their interest in artificial intelligence is particularly acute. Many effective altruists believe that increasingly powerful AI can do good for the world but worry that it can cause serious harm if it is not built in a safe way. While AI experts agree that any doomsday scenario is a long way off — if it happens at all — effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies and governments should prepare for it.

Over the last decade, many effective altruists have worked inside top AI research labs, including DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Elon Musk and others. They helped create a research field called AI safety, which aims to explore how AI systems might be used to do harm or might unexpectedly malfunction on their own.

Effective altruists have helped drive similar research at Washington think tanks that shape policy. Georgetown University’s Center for Security and Emerging Technology, which studies the impact of AI and other emerging technologies on national security, was largely funded by Open Philanthropy, an effective altruist giving organization backed by a Facebook co-founder, Dustin Moskovitz. Effective altruists also work as researchers inside these think tanks.

Bankman-Fried has been a part of the effective altruist movement since 2014. Embracing an approach called earning to give, he told the Times in April that he had deliberately chosen a lucrative career so he could give away much larger amounts of money.

In February, he and several of his FTX colleagues announced the Future Fund, which would support “ambitious projects in order to improve humanity’s long-term prospects.” The fund was led partly by Will MacAskill, a founder of the Center for Effective Altruism, as well as other key figures in the movement.

The Future Fund promised $160 million in grants to a wide range of projects by the beginning of September, including in research involving pandemic preparedness and economic growth. About $30 million was earmarked for donations to an array of organizations and individuals exploring ideas related to AI.

Among the Future Fund’s AI-related grants was $2 million to a little-known company, Lightcone Infrastructure. Lightcone runs the online discussion site LessWrong, which in the mid-2000s began exploring the possibility that AI would one day destroy humanity.

Bankman-Fried and his colleagues also funded several other efforts that were working to mitigate the long-term risks of AI, including $1.25 million to the Alignment Research Center, an organization that aims to align future AI systems with human interests so that the technology does not go rogue. They also gave $1.5 million for similar research at Cornell University.

The Future Fund also donated nearly $6 million to three projects involving “large language models,” an increasingly powerful breed of AI that can write tweets, emails and blog posts and even generate computer programs. The grants were intended to help mitigate how the technology might be used to spread disinformation and to reduce unexpected and unwanted behavior from these systems.

After FTX filed for bankruptcy, MacAskill and others who ran the Future Fund resigned from the project, citing “fundamental questions about the legitimacy and integrity of the business operations” behind it. MacAskill did not respond to a request for comment.

Beyond the Future Fund’s grants, Bankman-Fried and his colleagues directly invested in startups with the $500 million financing of Anthropic. The company was founded in 2021 by a group that included a contingent of effective altruists who had left OpenAI. It is working to make AI safer by developing its own language models, which can cost tens of millions of dollars to build.

Some organizations and individuals have already received their funds from Bankman-Fried and his colleagues. Others got only a portion of what was promised to them. Some are unsure whether the grants will have to be returned to FTX’s creditors, said the four people with knowledge of the organizations.

Charities are vulnerable to clawbacks when donors go bankrupt, said Jason Lilien, a partner at the law firm Loeb & Loeb who specializes in charities. Companies that receive venture investments from bankrupt companies may be in a somewhat stronger position than charities, but they are also vulnerable to clawback claims, he said.

Dewey Murdick, the director of the Center for Security and Emerging Technology, the Georgetown think tank that is backed by Open Philanthropy, said effective altruists had contributed to important research involving AI.

“Because they have increased funding, it has increased attention on these issues,” he said, citing how there is more discussion over how AI systems can be designed with safety in mind.

But Oren Etzioni of the Allen Institute for Artificial Intelligence, a Seattle AI lab, said that the views of the effective altruist community were sometimes extreme and that they often made today’s technologies seem more powerful or more dangerous than they really were.

He said the Future Fund had offered him money this year for research that would help predict the arrival and risks of “artificial general intelligence,” a machine that can do anything the human brain can do. But that idea is not something that can be reliably predicted, Etzioni said, because scientists do not yet know how to build it.

“These are smart, sincere people committing dollars into a highly speculative enterprise,” he said.

c.2022 The New York Times Company. This article originally appeared in The New York Times.
