
Gary Marcus is happy to help regulate AI for US government: ‘I’m interested’


IBM’s Chief Privacy & Trust Officer Christina Montgomery, New York University Professor Emeritus Gary Marcus and OpenAI’s CEO Samuel Altman testify before the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law hearing on artificial intelligence in Washington, DC
Image Credits: Jack Gruber / USA TODAY

On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM’s chief privacy and trust officer, as all three testified before the Senate Judiciary Committee for over three hours. The senators were largely focused on Altman because he runs one of the most powerful companies on the planet at the moment, and because Altman has repeatedly asked them to help regulate his work. (Most CEOs beg Congress to leave their industry alone.)

Though Marcus has been known in academic circles for some time, his star has been on the rise lately thanks to his newsletter (“The Road to AI We Can Trust”), a podcast (“Humans vs. Machines”) and his relatable unease around the unchecked rise of AI. In addition to this week’s hearing, for example, he has this month appeared on Bloomberg television and been featured in The New York Times Sunday Magazine and Wired, among other places.

Because this week’s hearing seemed truly historic in some ways — Senator Josh Hawley said of generative AI that “we could be looking at one of the most significant technological innovations in human history,” while Senator John Kennedy was so charmed by Altman that he asked Altman to pick his own regulators — we wanted to talk with Marcus, too, to discuss the experience and see what he knows about what happens next. Our chat below has been edited for length.

Are you still in Washington? 

I am still in Washington. I’m meeting with lawmakers and their staff and various other interesting people and trying to see if we can turn the kinds of things that I talked about into reality.

You’ve taught at NYU. You’ve co-founded a couple of AI companies, including one with famed roboticist Rodney Brooks. I interviewed Brooks on stage back in 2017, and he said then that he didn’t think Elon Musk really understood AI and that Musk was wrong about AI being an existential threat. 

I think Rod and I share skepticism about whether current AI is anything like artificial general intelligence. There are several issues you have to take apart. One is: Are we close to AGI? The other is: How dangerous is the current AI we have? I don’t think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity. It’s not going to annihilate all humans. But it’s a pretty serious risk.

Not so long ago, you were debating Yann LeCun, Meta’s chief AI scientist. I’m not sure what that flap was about — the true significance of deep learning neural networks?

So LeCun and I have actually debated many things for many years. We had a public debate that David Chalmers, the philosopher, moderated in 2017. I’ve been trying to get [LeCun] to have another real debate ever since and he won’t do it. He prefers to subtweet me on Twitter and stuff like that, which I don’t think is the most adult way of having conversations, but because he is an important figure, I do respond.

One thing that I think we disagree about [currently] is, LeCun thinks it’s fine to use these [large language models] and that there’s no possible harm here. I think he’s extremely wrong about that. There are potential threats to democracy, ranging from misinformation that is deliberately produced by bad actors, to accidental misinformation — like the law professor who was accused of sexual harassment even though he didn’t commit it — [to the ability to] subtly shape people’s political beliefs based on training data that the public doesn’t even know anything about. It’s like social media, but even more insidious. You can also use these tools to manipulate other people and probably trick them into doing anything you want. You can scale them massively. There are definitely risks here.

You said something interesting about Sam Altman on Tuesday, pointing out to the senators that he hadn’t told them what his worst fear is, which you called “germane,” and redirecting them back to him for an answer. What he still didn’t mention is anything having to do with autonomous weapons, which I talked with him about a few years ago as a top concern. I thought it was interesting that weapons didn’t come up.

We covered a bunch of ground, but there are lots of things we didn’t get to, including enforcement, which is really important, and national security and autonomous weapons and things like that. There will be several more of [these].

Was there any talk of open source versus closed systems?

It hardly came up. It’s obviously a really complicated and interesting question. It’s really not clear what the right answer is. You want people to do independent science. Maybe you want to have some kind of licensing around things that are going to be deployed at very large scale and carry particular risks, including security risks. It’s not clear that we want every bad actor to get access to arbitrarily powerful tools. So there are arguments for and there are arguments against, and probably the right answer is going to include allowing a fair degree of open source but also having some limitations on what can be done and how it can be deployed.

Any specific thoughts about Meta’s strategy of letting its language model out into the world for people to tinker with?

I don’t think it’s great that [Meta’s AI technology] LLaMA is out there to be honest. I think that was a little bit careless. And, you know, that literally is one of the genies that is out of the bottle. There was no legal infrastructure in place; they didn’t consult anybody about what they were doing, as far as I know. Maybe they did, but the decision process with that or, say, Bing, is basically just: a company decides we’re going to do this.

But some of the things that companies decide might carry harm, whether in the near future or in the long term. So I think governments and scientists should increasingly have some role in deciding what goes out there [through a kind of] FDA for AI where, if you want to do widespread deployment, first you do a trial. You talk about the cost benefits. You do another trial. And eventually, if we’re confident that the benefits outweigh the risks, [you do the] release at large scale. But right now, any company at any time can decide to deploy something to 100 million customers and have that done without any kind of governmental or scientific supervision. You have to have some system where some impartial authorities can go in.

Where would these impartial authorities come from? Isn’t everyone who knows anything about how these things work already working for a company?

I’m not. [Canadian computer scientist] Yoshua Bengio is not. There are lots of scientists who aren’t working for these companies. It is a real worry, how to get enough of those auditors and how to give them incentive to do it. But there are 100,000 computer scientists with some facet of expertise here. Not all of them are working for Google or Microsoft on contract.

Would you want to play a role in this AI agency?

I’m interested. I feel that whatever we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I would like to share and try to get us to a good place.

What did it feel like sitting before the Senate Judiciary Committee? And do you think you’ll be invited back?

I wouldn’t be shocked if I was invited back but I have no idea. I was really profoundly moved by it and I was really profoundly moved to be in that room. It’s a little bit smaller than on television, I suppose. But it felt like everybody was there to try to do the best they could for the U.S. — for humanity. Everybody knew the weight of the moment and by all accounts, the senators brought their best game. We knew that we were there for a reason and we gave it our best shot.
